Sample records for coda normalization method

  1. Attenuation Tomography of Northern California and the Yellow Sea/Korean Peninsula from Coda-Source Normalized and Direct Lg Amplitudes

    DTIC Science & Technology

    2008-09-01

    method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements ... Ford et al., 2008), we compared 1-D methods to measure QLg and attempted to assess the error associated with the results. The assessment showed the ... reverse two-station (RTS), source-pair/receiver-pair (SPRP), and the new coda-source normalization (CS) methods to measure Q of the regional phase, Lg

  2. Attenuation Tomography of Northern California and the Yellow Sea / Korean Peninsula from Coda-source Normalized and Direct Lg Amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Dreger, D S; Phillips, W S

    2008-07-16

    Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimated from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods. For example, in the San Francisco Bay Area, which has high attenuation relative to the rest of the region, Q is overestimated by ~30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with estimated true source magnitude. The coda-source term better represents the source spectra than the estimated magnitude does, which could be the cause of the scatter. The similarity in the source terms between the coda-source and MDAC-linked methods shows that the latter method may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent. The path Q{sup -1} values are very similar between the coda-source and amplitude ratio methods except for small differences in the Da-xin-anling Mountains, in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to site effects as the cause for the difference.

  3. Frequency dependent Qα and Qβ in the Umbria-Marche (Italy) region using a quadratic approximation of the coda-normalization method

    NASA Astrophysics Data System (ADS)

    de Lorenzo, Salvatore; Bianco, Francesca; Del Pezzo, Edoardo

    2013-06-01

    The coda normalization method is one of the most widely used methods for inferring the attenuation parameters Qα and Qβ. Since the geometrical spreading exponent γ is an unknown model parameter in this method, most studies assume a fixed γ, generally equal to 1. However, γ and Q can also be jointly inferred from the non-linear inversion of coda-normalized logarithms of amplitudes, although the trade-off between γ and Q can give rise to unreasonable values of these parameters. To minimize this trade-off, an inversion method based on a parabolic expression of the coda-normalization equation has been developed. The method has been applied to the waveforms recorded during the 1997 Umbria-Marche seismic crisis. The Akaike criterion has been used to compare results of the parabolic model with those of the linear model, corresponding to γ = 1. A small deviation from spherical geometrical spreading has been inferred, but it is accompanied by a significant variation of the Qα and Qβ values. For almost all the considered stations, Qα was inferred to be smaller than Qβ, confirming that seismic attenuation in the Umbria-Marche region is controlled by crustal pore fluids.
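
    For reference, the linear variant that the parabolic model generalizes (γ fixed at 1) reduces to a straight-line fit of coda-normalized log amplitudes against hypocentral distance, ln[A_S(f, r) r^γ / A_C(f)] = const - πfr/(Q v_S). A minimal sketch with synthetic data (velocity, frequency, and amplitudes are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    f, v_s, gamma = 6.0, 3.5, 1.0     # centre frequency (Hz), S velocity (km/s), spreading
    Q_true = 400.0

    r = np.linspace(30.0, 140.0, 12)  # hypocentral distances (km)
    # Synthetic coda-normalized log amplitudes y = ln(A_S * r**gamma / A_C):
    y = 2.0 - np.pi * f * r / (Q_true * v_s) + rng.normal(0.0, 0.05, r.size)

    slope, intercept = np.polyfit(r, y, 1)    # slope = -pi*f/(Q*v_s)
    Q_est = -np.pi * f / (slope * v_s)
    print(f"Q({f:.0f} Hz) ~ {Q_est:.0f}")     # recovers ~Q_true
    ```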

  4. Evaluating the Coda Phase Delay Method for Determining Temperature Ratios in Windy Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur

    2017-07-01

    We evaluate the acoustic coda phase delay method for estimating changes in atmospheric phenomena in realistic environments. Previous studies verifying the method took place in an environment with negligible wind. The equation for effective sound speed, upon which the method is based, shows that the influence of wind is equal to the square of temperature. Under normal conditions, wind is significant and therefore cannot be ignored. Results from this study confirm the previous statement. The acoustic coda phase delay method breaks down in non-ideal environments, namely those where wind speed and direction vary across small distances. We suggest that future studies make use of gradiometry to better understand the effect of wind on the acoustic coda and subsequent phase delays.

  5. Attenuation of Lg waves in the New Madrid seismic zone of the central United States using the coda normalization method

    NASA Astrophysics Data System (ADS)

    Nazemi, Nima; Pezeshk, Shahram; Sedaghati, Farhad

    2017-08-01

    Unique properties of coda waves are employed to evaluate the frequency-dependent quality factor of Lg waves using the coda normalization method in the New Madrid seismic zone of the central United States. Instrument and site responses are eliminated and source functions are isolated to construct the inversion problem. For this purpose, we used 121 seismograms from 37 events with moment magnitudes, M, ranging from 2.5 to 5.2 and hypocentral distances from 120 to 440 km recorded by 11 broadband stations. A singular value decomposition (SVD) algorithm is used to extract Q values from the data, while the geometrical spreading exponent is assumed to be constant. Inversion results are then fitted with a power-law equation from 3 to 12 Hz to derive the frequency-dependent quality factor function. The final results of the analysis are Q_Lg^V(f) = (410 ± 38) f^(0.49 ± 0.05) for the vertical component and Q_Lg^H(f) = (390 ± 26) f^(0.56 ± 0.04) for the horizontal component, where the term after the ± sign represents one standard error. For stations within the Mississippi embayment with an average sediment depth of 1 km around the Memphis metropolitan area, estimation of the quality factor using the coda normalization method is not well constrained at low frequencies (f < 3 Hz). Several factors may contribute to this issue, such as low-frequency surface wave contamination, site effects, or even a change in the coda wave scattering regime, any of which can exacerbate the scatter of the data.
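
    The final power-law step can be reproduced by least-squares regression in log-log space, including the quoted standard errors. A minimal sketch with synthetic per-frequency Q estimates (frequencies and values are illustrative, not the study's inversion output):

    ```python
    import numpy as np

    # Per-frequency Q estimates, e.g. from an SVD inversion (synthetic here)
    freqs = np.array([3.0, 4.0, 6.0, 8.0, 10.0, 12.0])
    Q_obs = np.array([700.0, 810.0, 990.0, 1140.0, 1270.0, 1390.0])

    # Fit log10 Q = log10 Q0 + n log10 f and recover parameter uncertainties
    coef, cov = np.polyfit(np.log10(freqs), np.log10(Q_obs), 1, cov=True)
    n, logQ0 = coef
    n_err, logQ0_err = np.sqrt(np.diag(cov))
    Q0 = 10.0 ** logQ0
    Q0_err = Q0 * np.log(10.0) * logQ0_err   # propagate to linear units
    print(f"Q(f) = ({Q0:.0f} ± {Q0_err:.0f}) f^({n:.2f} ± {n_err:.2f})")
    ```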

  6. The Spanish of Ponce, Puerto Rico: A Phonetic, Phonological, and Intonational Analysis

    ERIC Educational Resources Information Center

    Luna, Kenneth Vladimir

    2010-01-01

    This study investigates four aspects of Puerto Rican Spanish as represented in the Autonomous Municipality of Ponce: the behavior of coda /[alveolar flap]/, the behavior of /r/, the different realizations of coda /s/, and its intonational phonology. Previous studies on Puerto Rican Spanish report that coda /[alveolar flap]/ is normally realized as…

  7. P and S wave Coda Calibration in Central Asia and South Korea

    NASA Astrophysics Data System (ADS)

    Kim, D.; Mayeda, K.; Gok, R.; Barno, J.; Roman-Nieves, J. I.

    2017-12-01

    Empirically derived coda source spectra provide unbiased, absolute moment magnitude (Mw) estimates for events that are normally too small for accurate long-period waveform modeling. In this study, we obtain coda-derived source spectra using data from Central Asia (Kyrgyzstan networks KN and KR, and Tajikistan network TJ) and South Korea (Korea Meteorological Administration, KMA). We used a recently developed coda calibration module of the Seismic WaveForm Tool (SWFT). Seismic activity during the recording period includes the recent Gyeongju earthquake of Mw = 5.3 and its aftershocks, two nuclear explosions from 2009 and 2013 in North Korea, and a small number of construction- and mining-related explosions. For calibration, we calculated synthetic coda envelopes for both P and S waves based on a simple analytic expression that fits the observed narrowband-filtered envelopes, using the method outlined in Mayeda et al. (2003). To provide an absolute scale for the resulting source spectra, path and site corrections are applied using independent spectral constraints (e.g., Mw and stress drop) from three Kyrgyzstan events and the largest events of the Gyeongju sequence in Central Asia and South Korea, respectively. In spite of major tectonic differences, stable source spectra were obtained in both regions. We validated the resulting spectra by comparing the ratio of raw envelopes and source spectra from calibrated envelopes. Spectral shapes of earthquakes and explosions show different patterns in both regions. We also find that (1) source spectra derived from the S-coda are more robust than those from the P-coda at low frequencies; (2) unlike earthquakes, the source spectra of explosions show a large disagreement between P and S waves; and (3) the 2016 Gyeongju sequence is similar to the 2011 Virginia earthquake sequence in the eastern U.S.
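
    The analytic envelope referred to here is often written, in simplified single-station form, as A_c(t) = W (t - t_d)^-γ exp(-b (t - t_d)) after the direct arrival t_d, with W absorbing source, path, and site terms. A minimal sketch of such a synthetic envelope (parameter values are illustrative, and the distance-dependent terms of Mayeda et al., 2003, are omitted):

    ```python
    import numpy as np

    def coda_envelope(t, t_dir, W, gamma, b):
        """Simplified narrowband coda envelope:
        W * (t - t_dir)**-gamma * exp(-b * (t - t_dir)) after the direct
        arrival. Parameter names and values are illustrative assumptions."""
        dt = np.maximum(t - t_dir, 1e-6)           # avoid the onset singularity
        env = W * dt ** (-gamma) * np.exp(-b * dt)
        env[t <= t_dir] = 0.0                      # nothing before the direct wave
        return env

    t = np.linspace(0.0, 200.0, 2001)              # lapse time (s)
    env = coda_envelope(t, t_dir=35.0, W=100.0, gamma=0.7, b=0.015)
    ```

    In calibration, the shape parameters (γ, b) are fit per frequency band and distance so that synthetic envelopes match observed ones; the fitted amplitude term then carries the information used to build the source spectrum.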

  8. Time patterns of sperm whale codas recorded in the Mediterranean Sea 1985-1996.

    PubMed

    Pavan, G; Hayward, T J; Borsani, J F; Priano, M; Manghi, M; Fossati, C; Gordon, J

    2000-06-01

    A distinctive vocalization of the sperm whale, Physeter macrocephalus (=P. catodon), is the coda: a short click sequence with a distinctive stereotyped time pattern [Watkins and Schevill, J. Acoust. Soc. Am. 62, 1485-1490 (1977)]. Coda repertoires have been found to vary both geographically and with group affiliation [Weilgart and Whitehead, Behav. Ecol. Sociobiol. 40, 277-285 (1997)]. In this work, the click timings and repetition patterns of sperm whale codas recorded in the Mediterranean Sea are characterized statistically, and the context in which the codas occurred is also taken into consideration. A total of 138 codas were recorded in the central Mediterranean in the years 1985-1996 by several research groups using a number of different detection instruments, including stationary and towed hydrophones, sonobuoys, and passive sonars. Nearly all (134) of the recorded codas share the same "3+1" (/// /) click pattern. Coda durations ranged from 456 to 1280 ms, with an average duration of 908 ms and a standard deviation of 176 ms. Most of the codas (a total of 117) belonged to 20 coda series. Each series was produced by an individual, in most cases by a mature male in a small group, and consisted of between 2 and 16 codas, emitted in one or more "bursts" of 1 to 13 codas spaced fairly regularly in time. The mean number of codas in a burst was 3.46, and the standard deviation was 2.65. The time interval ratios within a coda are parameterized by the coda duration and by the first two interclick intervals normalized by coda duration. These three parameters remained highly stable within each coda series, with coefficients of variation within the series averaging less than 5%. The interval ratios varied somewhat across the data sets, but were highly stable over 8 of the 11 data sets, which span 11 years and widely dispersed geographic locations. Somewhat different interval ratios were observed in the other three data sets; in one of these data sets, the variant codas were produced by a young whale. Two sets of presumed sperm whale codas recorded in 1996 had 5- and 6-click patterns; the observation of these new patterns suggests that sperm whale codas in the Mediterranean may have more variation than previously believed.

  9. Regional Body-Wave Attenuation Using a Coda Source Normalization Method: Application to MEDNET Records of Earthquakes in Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Mayeda, K; Malagnini, L

    2007-02-01

    We develop a new methodology to determine apparent attenuation for the regional seismic phases Pn, Pg, Sn, and Lg using coda-derived source spectra. The local-to-regional coda methodology (Mayeda, 1993; Mayeda and Walter, 1996; Mayeda et al., 2003) is a very stable way to obtain source spectra from sparse networks using as few as one station, even if direct waves are clipped. We develop a two-step process to isolate the frequency-dependent Q. First, we correct the observed direct-wave amplitudes for an assumed geometrical spreading. Next, an apparent Q, combining path and site attenuation, is determined from the difference between the spreading-corrected amplitude and the independently determined source spectra derived from the coda methodology. We apply the technique to 50 earthquakes with magnitudes greater than 4.0 in central Italy as recorded by MEDNET broadband stations around the Mediterranean at local-to-regional distances. This is an ideal test region due to its high attenuation, complex propagation, and availability of many moderate-sized earthquakes. We find that a power-law attenuation of the form Q(f) = Q{sub 0}f{sup γ} fits all the phases quite well over the 0.5 to 8 Hz band. At most stations, the measured apparent Q values are quite repeatable from event to event. Finding the attenuation function in this manner guarantees a close match between source spectra inferred from direct waves and from coda techniques. This is important if coda and direct-wave amplitudes are to produce consistent seismic results.
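
    At a single station and frequency, the two-step estimate reduces to: (1) undo the assumed geometrical spreading, (2) attribute the remaining deficit relative to the coda-derived source spectrum to attenuation, Q = -πfr / (v ln(A_corr/S_coda)). A hedged sketch (the spreading model, velocity, and numbers are illustrative assumptions, not the paper's calibration):

    ```python
    import numpy as np

    def apparent_q(f, r_km, amp_obs, src_coda, v=3.5, spreading=1.0):
        """Two-step apparent Q: (1) correct amp_obs for an assumed r**-spreading
        geometrical spreading, (2) map the residual misfit to the coda-derived
        source spectrum src_coda into an apparent Q. Illustrative sketch only."""
        amp_corr = amp_obs * r_km ** spreading                        # step 1
        return -np.pi * f * r_km / (v * np.log(amp_corr / src_coda))  # step 2

    # One station/event pair at 2 Hz (synthetic numbers):
    print(apparent_q(f=2.0, r_km=400.0, amp_obs=1.2e-3, src_coda=2.0))  # ~500
    ```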

  10. Auto Correlation Analysis of Coda Waves from Local Earthquakes for Detecting Temporal Changes in Shallow Subsurface Structures: the 2011 Tohoku-Oki, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Nakahara, Hisashi

    2015-02-01

    For monitoring temporal changes in subsurface structures, I propose to use auto correlation functions of coda waves from local earthquakes recorded at surface receivers, which probably contain more body waves than surface waves. Use of coda waves requires earthquakes, which decreases the time resolution of monitoring. Nonetheless, in regions with high seismicity it may be possible to monitor subsurface structures with sufficient time resolution. Studying the 2011 Tohoku-Oki, Japan, earthquake (Mw 9.0), for which velocity changes have been previously reported, I try to validate the method. KiK-net stations in northern Honshu are used in this analysis. For each moderate earthquake, normalized auto correlation functions of surface records are stacked with respect to time windows in the S-wave coda. Aligning the stacked, normalized auto correlation functions with time, I search for changes in phase arrival times. Phases at lag times of <1 s are studied because the focus is on changes at shallow depths. Temporal variations in the arrival times are measured at the stations using the stretching method. Clear phase delays are found to be associated with the mainshock and to gradually recover with time. The phase delays are about 10% on average, with a maximum of about 50% at some stations. A deconvolution analysis using surface and subsurface records at the same stations is conducted for validation. The results show that the phase delays from the deconvolution analysis are slightly smaller than those from the auto correlation analysis, which implies that the phases on the auto correlations are caused by larger velocity changes at shallower depths. The auto correlation analysis seems to have an uncertainty of about several percent, much larger than that of methods using earthquake doublets and borehole array data, so it may only be applicable to detecting larger changes. In spite of this disadvantage, the analysis is still attractive because it can be applied to many surface records in regions where no boreholes are available.
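
    The stretching method used here can be sketched as a grid search over dilation factors ε applied to a reference trace; under the usual homogeneous-change assumption, dv/v = -ε at the best-fitting stretch. A minimal illustration with a synthetic 2% delay (not the study's data):

    ```python
    import numpy as np

    def stretching_dvv(ref, cur, t, eps_grid):
        """Grid-search stretching: find the dilation eps of the reference coda
        that best matches the current coda (dv/v = -eps). Minimal sketch."""
        best_eps, best_cc = 0.0, -np.inf
        for eps in eps_grid:
            stretched = np.interp(t, t * (1.0 + eps), ref)  # ref(t/(1+eps))
            cc = np.corrcoef(stretched, cur)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return best_eps, best_cc

    t = np.linspace(0.0, 1.0, 1001)                   # 1 s of lag, as in the study
    ref = np.sin(2 * np.pi * 12 * t) * np.exp(-2 * t)
    cur = np.interp(t, t * 1.02, ref)                 # phases delayed by 2% (synthetic)
    eps, cc = stretching_dvv(ref, cur, t, np.linspace(-0.05, 0.05, 201))
    print(f"stretch ~ {eps:.3f} (dv/v ~ {-eps:.3f}), cc = {cc:.3f}")
    ```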

  11. Using seismic coda waves to resolve intrinsic and scattering attenuation

    NASA Astrophysics Data System (ADS)

    Wang, W.; Shearer, P. M.

    2016-12-01

    Seismic attenuation is caused by two factors, scattering and intrinsic absorption. Characterizing scattering and absorbing properties and the power spectrum of crustal heterogeneity is a fundamental problem for informing strong ground motion estimates at high frequencies, where scattering and attenuation effects are critical. Determining the relative amount of attenuation caused by scattering and intrinsic absorption has been a long-standing problem in seismology. The wavetrain following the direct body wave phases is called the coda, which is caused by scattered energy. Many studies have analyzed the coda of local events to constrain crustal and upper-mantle scattering strength and intrinsic attenuation. Here we examine two popular attenuation inversion methods, the Multiple Lapse Time Window Method (MLTWM) and the Coda Qc Method. First, based on our previous work on California attenuation structure, we apply an efficient and accurate method, the Monte Carlo Approach, to synthesize seismic envelope functions. We use this code to generate a series of synthetic data based on several complex and realistic forward models. Although the MLTWM assumes a uniform whole space, we use the MLTWM to invert for both scattering and intrinsic attenuation from the synthetic data to test how accurately it can recover the attenuation models. Results for the coda Qc method depend on choices for the length and starting time of the coda-wave time window. Here we explore the relation between the inversion results for Qc, the windowing parameters, and the intrinsic and scattering Q structure of our synthetic model. These results should help assess the practicality and accuracy of the Multiple Lapse Time Window Method and Coda Qc Method when applied to realistic crustal velocity and attenuation models.

  12. Quantitative ultrasonic coda wave (diffuse field) NDE of carbon-fiber reinforced polymer plates

    NASA Astrophysics Data System (ADS)

    Livings, Richard A.

    The increasing presence and applications of composite materials in aerospace structures precipitates the need for improved Nondestructive Evaluation (NDE) techniques to move from simple damage detection to damage diagnosis and structural prognosis. Structural Health Monitoring (SHM) with advanced ultrasonic (UT) inspection methods can potentially address these issues. Ultrasonic coda wave NDE is one of the advanced methods currently under investigation. Coda wave NDE has been applied to concrete and metallic specimens to assess damage with some success, but currently the method is not fully mature or ready to be applied for SHM. Additionally, the damage diagnosis capabilities and limitations of coda wave NDE applied to fibrous composite materials have not been widely addressed in the literature. The central objective of this work, therefore, is to develop a quantitative foundation for the use of coda wave NDE for the inspection and evaluation of fibrous composite materials. Coda waves are defined as the superposition of late-arriving wave modes that have been scattered or reflected multiple times. This results in long, complex signals where individual wave modes cannot be discriminated. One method of interpreting the changes in such signals caused by the introduction or growth of damage is to isolate and quantify the difference between baseline and damage signals. Several differential signal features are used in this work to quantify changes in the coda waves, which can then be correlated to damage size and growth. Experimental results show that coda wave differential features are effective in detecting drilled through-holes as small as 0.4 mm in a 50x100x6 mm plate and in discriminating between increasing hole diameter and increasing number of holes. The differential features are also shown to have an underlying basis function that depends on the hole volume and can be scaled by a material-dependent coefficient to estimate the feature amplitude and hole size. The fundamental capabilities of the coda wave measurements, such as error, repeatability, and reproducibility, are also examined. Damage detection was found to be repeatable, reproducible, and relatively insensitive to noise. The measurements are found to be sensitive to thermal changes and absorbing boundaries. Several propagation models are also presented and discussed, along with a brief analysis of coda wave signals and spectra.
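
    Such differential features can be realized, for example, as the loss of correlation and the relative energy change between baseline and damaged coda signals; these two choices are common in the coda literature and are assumptions here, not necessarily the thesis's exact feature set:

    ```python
    import numpy as np

    def differential_features(baseline, damaged):
        """Two illustrative differential coda features: decorrelation (1 - cc)
        and relative energy change between baseline and damaged signals."""
        b = baseline - baseline.mean()
        d = damaged - damaged.mean()
        cc = np.dot(b, d) / np.sqrt(np.dot(b, b) * np.dot(d, d))
        denergy = (np.sum(d ** 2) - np.sum(b ** 2)) / np.sum(b ** 2)
        return 1.0 - cc, denergy
    ```

    Tracking scalars like these against hole diameter or hole count is the kind of correlation with damage size and growth described above.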

  13. Coda Wave Analysis in Central-Western North America Using Earthscope Transportable Array Data

    NASA Astrophysics Data System (ADS)

    Escudero, C. R.; Doser, D. I.

    2011-12-01

    We determined seismic wave attenuation in the western and central United States (Washington, Oregon, California, Idaho, Nevada, Montana, Wyoming, Colorado, New Mexico, North Dakota, South Dakota, Nebraska, Kansas, Oklahoma, and Texas) using coda waves. We selected approximately twenty moderate earthquakes (magnitudes between 5.5 and 6.5) located along the Mexican subduction zone, the Gulf of California, southern and northern California, and off the coast of Oregon for the analysis. These events were recorded by the EarthScope Transportable Array (TA) network from 2008 to 2011. In this study we implemented a method, based on the assumption that coda waves are single backscattered waves from randomly distributed heterogeneities, to calculate the coda Q. The frequencies studied lie between 1 and 15 Hz. The scattering attenuation is calculated for frequency bands centered at 1.5, 3, 5, 7.5, 10.5, and 13.5 Hz. In this work, we present coda Q resolution maps along with a correlation analysis between coda Q and seismicity, tectonic setting, and geology. We observed higher attenuation (low coda Q values) in regions of sedimentary cover, and lower attenuation (high coda Q values) in hard-rock regions. Using the 4-6 Hz frequency band, we found the best overall correlation between coda Q and central-western North American bedrock geology.

  14. Full-Waveform Envelope Templates for Low Magnitude Discrimination and Yield Estimation at Local and Regional Distances with Application to the North Korean Nuclear Tests

    NASA Astrophysics Data System (ADS)

    Yoo, S. H.

    2017-12-01

    Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice, seismologists typically analyze long-duration S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts the decay of the late coda. However, seismic monitoring requirements are shifting toward smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To remedy this problem, we have developed a new hybrid method known as full-waveform envelope template matching to improve predicted envelope fits over the entire waveform and account for direct-wave and early-coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and to another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques. More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., L{sub g} and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, the authors found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (M{sub W}) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M{sub W} estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  16. Auto correlation analysis of coda waves from local earthquakes for detecting temporal changes in shallow subsurface structures - The 2011 Tohoku-Oki, Japan, earthquake -

    NASA Astrophysics Data System (ADS)

    Nakahara, H.

    2013-12-01

    For monitoring temporal changes in subsurface structures, I propose to use auto correlation functions of coda waves from local earthquakes recorded at surface receivers, which probably contain more body waves than surface waves. Because the use of coda waves requires earthquakes, the time resolution for monitoring decreases; but in regions with high seismicity it may be possible to monitor subsurface structures with sufficient time resolution. Studying the 2011 Tohoku-Oki (Mw 9.0), Japan, earthquake, for which velocity changes have already been reported by previous studies, I try to validate the method. KiK-net stations in northern Honshu are used in the analysis. For each moderate earthquake, normalized auto correlation functions of surface records are stacked with respect to time windows in the S-wave coda. Aligning the stacked normalized auto correlation functions with time, I search for changes in the arrival times of phases. Phases at lag times of less than 1 s are studied because the focus is on changes at shallow depths. Based on the stretching method, temporal variations in the arrival times are measured at the stations. Clear phase delays are found to be associated with the mainshock and to gradually recover with time. The phase delays are on the order of 10% on average, with a maximum of about 50% at some stations. For validation, a deconvolution analysis using surface and subsurface records at the same stations is conducted. The results show that the phase delays from the deconvolution analysis are slightly smaller than those from the auto correlation analysis, which implies that the phases on the auto correlations are caused by larger velocity changes at shallower depths. The auto correlation analysis seems to have an uncertainty of about several percent, much larger than that of methods using earthquake doublets and borehole array data, so it may only be applicable to detecting larger changes. In spite of this disadvantage, the analysis is still attractive because it can be applied to many surface records in regions where no boreholes are available. Acknowledgements: Seismograms recorded by KiK-net, managed by the National Research Institute for Earth Science and Disaster Prevention (NIED), were used in this study. This study was partially supported by the JST J-RAPID program and JSPS KAKENHI Grant Numbers 24540449 and 23540449.

  17. Comparison of techniques that use the single scattering model to compute the quality factor Q from coda waves

    USGS Publications Warehouse

    Novelo-Casanova, D. A.; Lee, W.H.K.

    1991-01-01

    Using simulated coda waves, the resolution of the single-scattering model to extract coda Q (Qc) and its power-law frequency dependence was tested. The back-scattering model of Aki and Chouet (1975) and the single isotropic-scattering model of Sato (1977) were examined. The results indicate that: (1) the input Qc models are reasonably well approximated by the two methods; (2) almost equal Qc values are recovered when the techniques sample the same coda windows; (3) low-Qc models are well estimated in the frequency domain from the early and late parts of the coda; and (4) models with high Qc values are more accurately extracted from late coda measurements. © 1991 Birkhäuser Verlag.
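
    The Aki and Chouet (1975) back-scattering model tested here predicts A(t, f) = S(f) t^-1 exp(-πft/Qc), so Qc follows from a straight-line fit of ln(A·t) against lapse time t. A minimal synthetic sketch (all values illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    f, Qc_true = 6.0, 300.0
    t = np.linspace(20.0, 60.0, 81)                      # coda window (s)
    A = 5.0 * t ** -1 * np.exp(-np.pi * f * t / Qc_true) \
        * np.exp(rng.normal(0.0, 0.05, t.size))          # synthetic envelope

    slope, _ = np.polyfit(t, np.log(A * t), 1)           # slope = -pi*f/Qc
    print(f"Qc({f:.0f} Hz) ~ {-np.pi * f / slope:.0f}")  # recovers ~Qc_true
    ```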

  18. Attenuation of seismic waves obtained by coda waves analysis in the West Bohemia earthquake swarm region

    NASA Astrophysics Data System (ADS)

    Bachura, Martin; Fischer, Tomas

    2014-05-01

    Seismic waves are attenuated by a number of factors, including geometrical spreading, scattering on heterogeneities, and intrinsic loss due to the anelasticity of the medium. The contribution of the latter two processes can be derived from the tail part of the seismogram, the coda (strictly speaking, the S-wave coda), as these factors influence the shape and amplitudes of the coda. Numerous methods have been developed for estimating attenuation properties from the decay rate of coda amplitudes. Most of them work with the S-wave coda; some are designed for the P-wave coda (only at teleseismic distances) or for whole waveforms. We used methods to estimate the attenuation of coda waves (1/Qc), methods to separate scattering and intrinsic loss (1/Qsc and 1/Qi), and methods to estimate the attenuation of direct P and S waves (1/Qp and 1/Qs). In this study, we analyzed the S-wave coda of local earthquake data recorded in the West Bohemia/Vogtland area. This region is well known for the repeated occurrence of earthquake swarms. We worked with data from the 2011 earthquake swarm, which started in late August and lasted with decreasing intensity for another 4 months. During the first week of the swarm, thousands of events were detected, with maximum magnitude ML = 3.6. A large amount of high-quality data (including continuous datasets and catalogues with an abundance of well-located events) is available thanks to the WEBNET seismic network (13 permanent and 9 temporary stations) monitoring seismic activity in the area. Results of the single-scattering model show seismic attenuation decreasing with frequency, which is in agreement with observations worldwide. We also found a decrease of attenuation with increasing hypocentral distance and increasing lapse time, which we interpret as a decrease of attenuation with depth (coda waves at later lapse times are generated at greater depths; in our case in the upper lithosphere, where attenuation is small). We also noticed a decrease in the frequency dependence of 1/Qc with depth; 1/Qc seems to be frequency independent in the depth range of the upper lithosphere. Lateral changes of 1/Qc were also observed: it decreases in the south-west direction from the Novy Kostel focal zone, where attenuation is highest. Results from more advanced methods that allow separation of scattering and intrinsic loss show that intrinsic loss is the dominant factor attenuating seismic waves in the region. Determination of attenuation due to scattering appears ambiguous because of the small hypocentral distances available for the analysis, for which the effects of scattering in the 1 to 24 Hz frequency range are not significant.

  19. Teleseismic P wave coda from oceanic trench and other bathymetric features

    NASA Astrophysics Data System (ADS)

    Wu, W.; Ni, S.

    2012-12-01

    Teleseismic P waves are essential for studying the rupture processes of great earthquakes, either in the back-projection method or in finite-fault inversion methods involving quantitative waveform modeling. In these studies, P waves are assumed to be direct P waves generated by localized patches of the ruptured fault. However, for some oceanic earthquakes occurring near subduction trenches or mid-ocean ridges, strong signals between P and PP are often observed at teleseismic distances. These P-wave coda signals show strong coherence, and their amplitudes are sometimes comparable to those of the direct P wave, or even higher in some frequency bands. With array analysis, we find that the coda's slowness is very close to that of the direct P wave, suggesting that the coda is generated near the source region. As these earthquakes occur near trenches or mid-ocean ridges, both of which feature rapid variation of bathymetry, the coda waves are very probably generated by surface waves or S waves scattered at the irregular bathymetry. We then use realistic bathymetry data to calculate 3D synthetics, which predict the coda well, confirming that topography/bathymetry is the main source of the coda. The coda waves are strong enough that they may affect imaging of the rupture processes of oceanic earthquakes, so the topography/bathymetry effect should be taken into account. However, these strong coda waves can also be used to locate oceanic earthquakes. The 3D synthetics demonstrate that the coda waves depend on both the specific bathymetry and the location of the earthquake. Given the bathymetry, the earthquake location can be constrained by the coda; for example, the distance between the trench and the earthquake can be determined from the relative arrival times of the P wave and its trench-generated coda. In order to locate earthquakes using the bathymetry, it is indispensable to compute 3D synthetics for all plausible horizontal locations and depths. However, the computation would be very expensive if numerical simulation were used in the whole medium. Considering that the complicated structure is confined to the source region, we apply ray theory to interface the full wavefield from a spectral-element simulation and obtain the teleseismic P waves. With this approach, computational efficiency is greatly improved and the relocation can be completed more efficiently. The relocation accuracy can be as high as 10 km for earthquakes near the trench, providing another, sometimes most favorable, method to locate oceanic earthquakes with ground-truth accuracy.

  20. Improved mb-Ms Discrimination Using mb(P-coda) and MsU with Application to the Six North Korean Nuclear Tests

    NASA Astrophysics Data System (ADS)

    Napoli, V.; Yoo, S. H.; Russell, D. R.

    2017-12-01

    To improve discrimination of small explosions and earthquakes, we developed a new magnitude scale based on the standard Ms:mb discrimination method. In place of 20-second Ms measurements, we developed a unified Rayleigh and Love wave magnitude scale (MsU) that is designed to maximize available information from single stations and then combine magnitude estimates into network averages. Additionally, in place of mb(P) measurements, we developed an mb(P-coda) magnitude scale, as the properties of the coda make sparse-network mb(P-coda) more robust and less variable than network mb(P) estimates. A previous mb:MsU study conducted in 2013 in the Korean Peninsula showed that the use of MsU in place of the standard 20-second Ms leads to increased population separation and reduced scatter. The goals of a combined mb(P-coda):MsU scale are reducing scatter, ensuring applicability at small magnitudes with sparse networks, and improving the overall distribution of the mb:Ms earthquake and explosion populations. To test this method, we are calculating mb(P-coda) and MsU for a catalog of earthquakes located in and near the Korean Peninsula, for the six North Korean nuclear tests (4.1 < mb < 6.3), and for the 3 aftershocks to date that occurred after the sixth test (2.6 < ML < 4.0). Compared to the previous 2013 study, we expect to see greater separation in the populations and less scatter with the inclusion of mb(P-coda) and with the implementation of additional filters for MsU to improve signal-to-noise levels; this includes S-transform filtering for polarization and off-azimuth signal reduction at regional distances. As we expand our database of mb(P-coda):MsU measurements in the Korean Peninsula to determine the earthquake and explosion distributions, this research will address the limitations and potential for discriminating small-magnitude events using sparse networks.

  1. Near-source attenuation of high-frequency body waves beneath the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Pezeshk, Shahram; Sedaghati, Farhad; Nazemi, Nima

    2018-03-01

    Attenuation characteristics in the New Madrid Seismic Zone (NMSZ) are estimated from 157 local seismograph recordings from 46 earthquakes of 2.6 ≤ M ≤ 4.1, with hypocentral distances up to 60 km and focal depths down to 25 km. Digital waveform seismograms were obtained from local earthquakes in the NMSZ recorded by the Center for Earthquake Research and Information (CERI) at the University of Memphis. Using the coda normalization method, we tried to determine Q values and geometrical spreading exponents at 13 center frequencies. The scatter of the data and the trade-off between the geometrical spreading and the quality factor did not allow us to derive both parameters simultaneously from the inversion. Assuming 1/R^1.0 as the geometrical spreading function in the NMSZ, the Q_P and Q_S estimates increase with frequency, from 354 and 426 at 4 Hz to 729 and 1091 at 24 Hz, respectively. Fitting a power-law equation to the Q estimates, we found the attenuation models for P waves and S waves in the frequency range of 4 to 24 Hz to be Q_P(f) = (115.80 ± 1.36) f^(0.495 ± 0.129) and Q_S(f) = (161.34 ± 1.73) f^(0.613 ± 0.067), respectively. We did not consider Q estimates from the coda normalization method for frequencies less than 4 Hz in the regression analysis, since the decay of coda amplitude was not observed in most bandpass-filtered seismograms at these frequencies. Q_S/Q_P > 1 for 4 ≤ f ≤ 24 Hz, as well as strong intrinsic attenuation, suggests that the crust beneath the NMSZ is partially fluid-saturated. Further, high scattering attenuation indicates the presence of a high level of small-scale heterogeneities in the crust of this region.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, T; Mayeda, K; Hofstetter, A

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., L{sub g} and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging from 0.02 to 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (M{sub w}) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M{sub w} estimates to significantly smaller events which could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  3. Monitoring localized cracks on under pressure concrete nuclear containment wall using linear and nonlinear ultrasonic coda wave interferometry

    NASA Astrophysics Data System (ADS)

    Legland, J.-B.; Abraham, O.; Durand, O.; Henault, J.-M.

    2018-04-01

    Civil engineering is constantly demanding new methods for evaluation and non-destructive testing (NDT), particularly to prevent and monitor serious damage to concrete structures. In this work, experimental results are presented on the detection and characterization of cracks using nonlinear coda wave interferometry (NCWI) [1]. This method consists in mixing high-amplitude low-frequency acoustic waves with multiply scattered probe waves (coda) and analyzing their effects by interferometry. Unlike the classic method of coda wave interferometry (CWI), NCWI does not require the recording of a reference coda before damage to the structure. In the framework of the PIA-ENDE project, a 1/3 model of a prestressed concrete containment (EDF VeRCoRs mock-up) is placed under pressure to study the leakage of the structure. During this evaluation protocol, specific areas are monitored by NCWI (over 5 days, corresponding to the pressurization protocol of a nuclear power plant maintenance test). The nonlinear acoustic response due to the high amplitude of the acoustic modulation gives pertinent information about the elastic and dissipative nonlinearities of the concrete. Its effective level is evaluated by two nonlinear observables extracted from the interferometry. The increase of the nonlinearities is in agreement with the creation of a crack with a network of microcracks located at its base; however, a change in the dynamics of the evolution of the nonlinearities may indicate the opening of a through crack. In addition, since reference codas were recorded during the experimental campaign, we used CWI to follow the stress evolution and the gas leak rate of the structure. Both CWI and NCWI results are presented in this paper.

  4. Evaluation of crack status in a meter-size concrete structure using the ultrasonic nonlinear coda wave interferometry.

    PubMed

    Legland, Jean-Baptiste; Zhang, Yuxiang; Abraham, Odile; Durand, Olivier; Tournat, Vincent

    2017-10-01

    The field of civil engineering is in need of new methods of non-destructive testing, especially in order to prevent and monitor the serious deterioration of concrete structures. In this work, experimental results are reported on fault detection and characterization in a meter-scale concrete structure using an ultrasonic nonlinear coda wave interferometry (NCWI) method. This method entails the nonlinear mixing of strong pump waves with multiple scattered probe (coda) waves, along with analysis of the net effect using coda wave interferometry. A controlled damage protocol is implemented on a post-tensioned, meter-scale concrete structure in order to generate cracking within a specific area being monitored by NCWI. The nonlinear acoustic response due to the high amplitude of acoustic modulation yields information on the elastic nonlinearities of concrete, as evaluated by two specific nonlinear observables. The increase in nonlinearity level corresponds to the creation of a crack with a network of microcracks localized at its base. In addition, once the crack closes as a result of post-tensioning, the residual nonlinearities confirm the presence of the closed crack. Last, the benefits and applicability of this NCWI method to the characterization and monitoring of large structures are discussed.

  5. Micro-crack detection in CFRP laminates using coda wave NDE

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Barnard, Dan; Livings, Richard

    2018-04-01

    Coda waves, or the diffuse field, have been touted as an NDE method that does not require the damage to be in the path of the ultrasound. The object is insonified with ultrasound and, instead of catching the first or second arrival, the waves are allowed to bounce multiple times. This aspect is very important in structural health monitoring (SHM), where the potential damage development location is unknown. Researchers have used coda waves in the interrogation of seismic damage and metallic materials. In this work we have applied the technique to composite materials, and present the results herein. The coda wave and acoustic emission signals are recorded simultaneously and corroborated. The development of small incipient damage in the form of micro-cracks, and their detection, is the objective of this work.

  6. Predicting Lg Coda Using Synthetic Seismograms and Media With Stochastic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Tibuleac, I. M.; Stroujkova, A.; Bonner, J. L.; Mayeda, K.

    2005-12-01

    Recent examinations of the characteristics of coda-derived Sn and Lg spectra for yield estimation have shown that the spectral peak of Nevada Test Site (NTS) explosion spectra is depth-of-burial dependent, and that this peak is shifted to higher frequencies for Lop Nor explosions at the same depths. To confidently use coda-based yield formulas, we need to understand and predict coda spectral shape variations with depth, source media, velocity structure, topography, and geological heterogeneity. We present results of a coda modeling study to predict Lg coda. During the initial stages of this research, we have acquired and parameterized a deterministic 6 deg. x 6 deg. velocity and attenuation model centered on the Nevada Test Site. Near-source data are used to constrain density and attenuation profiles for the upper five km. The upper crust velocity profiles are quilted into a background velocity profile at depths greater than five km. The model is parameterized for use in a modified version of the Generalized Fourier Method in two dimensions (GFM2D). We modify this model to include stochastic heterogeneities of varying correlation lengths within the crust. Correlation length, Hurst number and fractional velocity perturbation of the heterogeneities are used to construct different realizations of the random media. We use nuclear explosion and earthquake cluster waveform analysis, as well as well log and geological information to constrain the stochastic parameters for a path between the NTS and the seismic stations near Mina, Nevada. Using multiple runs, we quantify the effects of variations in the stochastic parameters, of heterogeneity location in the crust and attenuation on coda amplitude and spectral characteristics. We calibrate these parameters by matching synthetic earthquake Lg coda envelopes to coda envelopes of local earthquakes with well-defined moments and mechanisms. We generate explosion synthetics for these calibrated deterministic and stochastic models. Secondary effects, including a compensated linear vector dipole source, are superposed on the synthetics in order to adequately characterize the Lg generation. We use this technique to characterize the effects of depth of burial on the coda spectral shapes.
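
    Random media of the kind described here are commonly generated by spectral filtering of white noise. A minimal 1-D sketch, assuming the standard von Karman power spectrum P(k) ∝ (1 + k²a²)^-(H+1/2) with correlation length a and Hurst number H (the function name and parameter values are illustrative):

    ```python
    import numpy as np

    def von_karman_1d(n, dx, a, hurst, eps, seed=0):
        """1-D random velocity-perturbation profile with an approximate
        von Karman spectrum, scaled to RMS fractional fluctuation eps."""
        rng = np.random.default_rng(seed)
        k = 2.0 * np.pi * np.fft.rfftfreq(n, dx)            # wavenumbers
        amp = (1.0 + (k * a) ** 2) ** (-(hurst + 0.5) / 2)  # sqrt of the PSD
        spec = amp * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
        field = np.fft.irfft(spec, n)
        return eps * field / field.std()                    # target RMS level

    dv = von_karman_1d(n=4096, dx=0.1, a=2.0, hurst=0.3, eps=0.03)  # 3% RMS
    ```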

  7. Broadband Evaluation of DPRK Explosions, Collapse Event, and Induced Aftershocks

    NASA Astrophysics Data System (ADS)

    Mayeda, K.; Roman-Nieves, J. I.; Wagner, G.; Jeon, Y. S.

    2017-12-01

    We report on the past 6 declared DPRK nuclear explosions, a collapse event, and recent associated induced shear-dislocation sources using long-period waveform modeling, direct regional phases, and stable P-coda and S-coda spectral ratios. We find that the recent September 3rd, 2017, explosion is well modeled with an MM71 explosion source model at a normal scaled depth, but the previous 5 smaller-yield explosions exhibit much larger relative high-frequency radiation, strongly suggesting they are all overburied by varying amounts. The collapse event that occurred 8 minutes after the September 3rd DPRK explosion shares significant similarities with a number of NTS collapse events for explosions of comparable yield, both in absolute amplitude and in spectral fall-off. A large number of smaller sources have been observed which, from stable coda spectral analysis and waveform modeling, are consistent with shallow shear dislocations likely caused by stress redistribution following the past nuclear explosions. We conclude with testing of a new discriminant that is specific to this region.

  8. Children's Acquisition of English Onset and Coda /l/: Articulatory Evidence

    ERIC Educational Resources Information Center

    Lin, Susan; Demuth, Katherine

    2015-01-01

    Purpose: The goal of this study was to better understand how and when onset /l/ ("leap") and coda /l/ ("peel") are acquired by children by examining both the articulations involved and adults' perceptions of the produced segments. Method: Twenty-five typically developing Australian English-speaking children aged 3;0…

  9. Coda Wave Interferometry Method Applied in Structural Monitoring to Assess Damage Evolution in Masonry and Concrete Structures

    NASA Astrophysics Data System (ADS)

    Masera, D.; Bocca, P.; Grazzini, A.

    2011-07-01

    In this experimental program, the main goal is to monitor the damage evolution in masonry and concrete structures by acoustic emission (AE) signal analysis, applying a well-known seismic method. For this reason, the concept of coda wave interferometry is applied to the AE signals recorded during the tests. Acoustic emission is a very effective non-destructive technique for identifying micro- and macro-defects and their temporal evolution in several materials. This technique permits estimating the velocity of ultrasonic wave propagation and the amount of energy released during fracture propagation, to obtain information on the criticality of the ongoing process. By means of AE monitoring, an experimental analysis of a set of reinforced masonry walls under variable-amplitude loading and strengthened reinforced concrete (RC) beams under monotonic static load has been carried out. In the reinforced masonry walls, cyclic fatigue stress has been applied to accelerate the static creep and to forecast the corresponding creep behaviour of masonry under long-time static loading. During the tests, fracture growth is monitored by coda wave interferometry, which represents a novel approach in structural monitoring based on the relative velocity change of the AE coda signal. In general, the sensitivity of coda waves has been used to estimate velocity changes in fault zones, in volcanoes, in a mining environment, and in ultrasound experiments. This method uses multiply scattered waves, which travelled through the material along numerous paths, to infer tiny temporal changes in the wave velocity. The applied method has the potential to be used as a "damage gauge" for monitoring velocity changes as a sign of damage evolution in masonry and concrete structures.

  10. Children's Acquisition of English Onset and Coda /l/: Articulatory Evidence

    PubMed Central

    Demuth, Katherine

    2015-01-01

    Purpose The goal of this study was to better understand how and when onset /l/ (leap) and coda /l/ (peel) are acquired by children by examining both the articulations involved and adults' perceptions of the produced segments. Method Twenty-five typically developing Australian English–speaking children aged 3;0 (years;months) to 7;11 participated in an elicited imitation task, during which audio, video, and lingual ultrasound images were collected. Transcribers perceptually rated audio, whereas video and ultrasound images were visually examined for the presence of adult-like articulations. Results Data from this study establish that for Australian English–learning children, coda /l/s are acquired later than onset /l/s, and older children produce greater proportions of adultlike /l/s in both onset and coda positions, roughly following established norms for American English–speaking children. However, although perceptibility of coda /l/s was correlated with their articulations, onset /l/s were nearly uniformly perceived as adultlike despite substantial variation in the articulations used to produce them. Conclusions The disparity in the production and perception of children's singleton onset /l/s is linked to both physiological and phonological development. Suggestions are made for future research to tease these factors apart. PMID:25321384

  11. A Resource Management Tool for Public Health Continuity of Operations During Disasters

    PubMed Central

    Turner, Anne M.; Reeder, Blaine; Wallace, James C.

    2014-01-01

    Objective We developed and validated a user-centered information system to support the local planning of public health continuity of operations for the Community Health Services Division, Public Health - Seattle & King County, Washington. Methods The Continuity of Operations Data Analysis (CODA) system was designed as a prototype developed using requirements identified through participatory design. CODA uses open-source software that links personnel contact and licensing information with needed skills and clinic locations for 821 employees at 14 public health clinics in Seattle and King County. Using a web-based interface, CODA can visualize locations of personnel in relationship to clinics to assist clinic managers in allocating public health personnel and resources under dynamic conditions. Results Based on user input, the CODA prototype was designed as a low-cost, user-friendly system to inventory and manage public health resources. In emergency conditions, the system can run on a stand-alone battery-powered laptop computer. A formative evaluation by managers of multiple public health centers confirmed the prototype design’s usefulness. Emergency management administrators also provided positive feedback about the system during a separate demonstration. Conclusions Validation of the CODA information design prototype by public health managers and emergency management administrators demonstrates the potential usefulness of building a resource management system using open-source technologies and participatory design principles. PMID:24618165

  12. SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R E; Mayeda, K; Walter, W R

    2007-07-10

    The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately our knowledge of source scaling at small magnitudes (i.e., m{sub b}more » < {approx}4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies taken from half-way around the world at inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and then in future years we will apply them both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally, by sufficient stations to give good azimuthal coverage, and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects. In contrast, coda waves average radiation from all directions so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct wave methods for the best recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time-functions by frequency division. If an earthquake and EGF event pair are able to produce a clear, time-domain source pulse then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. 
The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than for direct S-waves over 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes, whereas for coda amplitudes the event pairs (Green's function and target events) can be separated by ~25 km without any appreciable degradation. We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
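
    A minimal sketch of the spectral-ratio modeling step, assuming omega-square (Brune-type) source spectra for both the target and EGF events; the "observed" ratio below is synthetic, and the moment ratio, corner frequencies, and noise level are illustrative assumptions rather than values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_ratio(f, moment_ratio, fc_target, fc_egf):
    """Ratio of two omega-square source spectra (target / EGF)."""
    return moment_ratio * (1.0 + (f / fc_egf) ** 2) / (1.0 + (f / fc_target) ** 2)

f = np.logspace(-1, 1.5, 200)                         # 0.1 to ~31.6 Hz
rng = np.random.default_rng(5)
observed = brune_ratio(f, 100.0, 1.2, 8.0)            # stand-in measured ratio
observed *= 1.0 + 0.05 * rng.standard_normal(f.size)  # 5% multiplicative noise

popt, _ = curve_fit(brune_ratio, f, observed, p0=(50.0, 1.0, 10.0))
print(f"moment ratio ~{popt[0]:.1f}, target fc ~{popt[1]:.2f} Hz, "
      f"EGF fc ~{popt[2]:.2f} Hz")
```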

  13. Intrinsic and scattering attenuation of high-frequency S-waves in the central part of the External Dinarides

    NASA Astrophysics Data System (ADS)

    Majstorović, Josipa; Belinić, Tena; Namjesnik, Dalija; Dasović, Iva; Herak, Davorka; Herak, Marijan

    2017-09-01

    The central part of the External Dinarides (CED) is a geologically and tectonically complex region formed in the collision between the Adriatic microplate and the European plate. In this study, the contributions of intrinsic and scattering attenuation (Qi^-1 and Qsc^-1, respectively) to the total S-wave attenuation were calculated for the first time. The multiple lapse-time window analysis (MLTWA) method, based on the assumptions of multiple isotropic scattering in a homogeneous medium with uniformly distributed scatterers, was applied to seismograms of 450 earthquakes recorded at six seismic stations. Selected events have hypocentral distances between 40 and 90 km and local magnitudes between 1.5 and 4.7. The analysis was performed over 11 frequency bands with central frequencies between 1.5 and 16 Hz. Results show that the seismic albedo of the studied area is less than 0.5 and that Qi^-1 > Qsc^-1 at all central frequencies and for all stations. This implies that intrinsic attenuation dominates over scattering attenuation in the whole study area. The calculated total S-wave and expected coda wave attenuation for the CED are in very good agreement with those measured in previous studies using the coda normalization and coda-Q methods. All estimated attenuation factors decrease with increasing frequency. The intrinsic attenuation for the CED is among the highest observed anywhere, which could be due to the highly fractured and fluid-filled carbonates in the upper crust. The scattering and total S-wave attenuation for the CED are close to the average values obtained in other studies performed worldwide. In particular, the good agreement of the frequency dependence of total attenuation in the CED with that in the regions that contributed most strong-motion records to the ground motion prediction equations used in PSHA in Croatia indicates that those equations were well chosen and applicable to this area as far as attenuation properties are concerned.
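
    A minimal sketch of the measurement step behind MLTWA, assuming a synthetic band-passed trace, an assumed hypocentral distance and S velocity, and illustrative lapse-time windows; in the full method these geometrically corrected window energies, collected over many events and distances, are fit with radiative transfer predictions to separate Qi^-1 from Qsc^-1.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 100.0                                     # sampling rate (Hz)
t = np.arange(0.0, 80.0, 1.0 / fs)             # lapse time from origin (s)
r = 60e3                                       # hypocentral distance (m), assumed
rng = np.random.default_rng(7)
trace = np.exp(-t / 15.0) * rng.standard_normal(t.size)  # stand-in seismogram

sos = butter(4, [2.0, 4.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfilt(sos, trace)

t_s = r / 3500.0                               # S arrival time for Vs = 3.5 km/s
for w0, w1 in [(0.0, 15.0), (15.0, 30.0), (30.0, 45.0)]:  # s after S onset
    mask = (t >= t_s + w0) & (t < t_s + w1)
    energy = np.trapz(filtered[mask] ** 2, dx=1.0 / fs)
    print(f"window {w0:4.1f}-{w1:4.1f} s: 4*pi*r^2*E = {4*np.pi*r**2*energy:.3e}")
```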

  14. Spatial variation of crustal coda Q in California

    USGS Publications Warehouse

    Philips, W.S.; Lee, W.H.K.; Newberry, J.T.

    1988-01-01

    Coda wave data from California microearthquakes were studied in order to delineate regional fluctuations of apparent crustal attenuation in the band 1.5 to 24 Hz. Apparent attenuation was estimated using a single backscattering model of coda waves. The coda wave data were restricted to lapse times of ≤30 s following the origin time; this ensures that crustal effects dominate the results, as the backscattered shear waves thought to form the coda would not have had time to penetrate much deeper. Results indicate a strong variation in apparent crustal attenuation at high frequencies between the Franciscan and Salinian regions of central California and the Long Valley area of the Sierra Nevada. Although the coda Q measurements coincide at 1.5 Hz (Qc=100), at 24 Hz there is a factor of four difference between the measurements made in the Franciscan (Qc=525) and Long Valley (Qc=2100), with the Salinian midway between (Qc=900). These are extremely large variations compared to measures of seismic velocities of comparable resolution, demonstrating the exceptional sensitivity of the high frequency coda Q measurement to regional geology. In addition, the frequency trend of the results is opposite to that seen in a compilation of coda Q measurements made worldwide by other authors, which tend to converge at high and diverge at low frequencies. However, the worldwide results generally were obtained without limiting the coda lengths and probably reflect upper mantle rather than crustal properties. Our results match those expected due to scattering in random media represented by Von Karman autocorrelation functions of orders 1/2 to 1/3. The Von Karman medium of order 1/3 corresponding to the Franciscan coda Q measurement contains greater amounts of high wavenumber fluctuations. This indicates relatively large medium fluctuations with wavelengths on the order of 100 m in the highly deformed crust associated with the Franciscan; however, the influence of scattering on the coda Q measurement is currently a matter of controversy. © 1988 Birkhäuser Verlag.
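
    A minimal sketch of a single-backscattering coda-Q estimate in one frequency band, following the standard Aki-and-Chouet-style envelope model A(t) = S * t^-1 * exp(-pi*f*t/Qc), so that ln(A*t) decays linearly in lapse time with slope -pi*f/Qc. The envelope below is synthetic and the numbers are illustrative.

```python
import numpy as np

f0 = 6.0                                   # band center frequency (Hz)
qc_true = 300.0                            # value the fit should recover
t = np.linspace(20.0, 30.0, 500)           # coda window, lapse time <= 30 s
envelope = 5.0 / t * np.exp(-np.pi * f0 * t / qc_true)  # A(t) = S/t * exp(...)

slope, _ = np.polyfit(t, np.log(envelope * t), 1)
print(f"estimated Qc at {f0:.0f} Hz: {-np.pi * f0 / slope:.0f} "
      f"(true {qc_true:.0f})")
```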

  15. Mass, height of burst, and source–receiver distance constraints on the acoustic coda phase delay method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur

    2018-04-23

    Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, heights of burst, and masses, and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.
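
    A minimal sketch of the physics that lets a coda phase delay stand in for a temperature change, assuming dry air with c(T) ~ 331.3*sqrt(1 + T/273.15) m/s, so that dt/t = -dc/c = -dT/(2*(273.15 + T)). The reference temperature and the measured relative delay below are stand-in values; the method itself derives dt/t from windowed phase differences between the two codas.

```python
t0_celsius = 15.0        # reference air temperature (assumed)
dt_over_t = -2.0e-3      # measured relative coda phase delay (stand-in value)

# Dry air: c(T) ~ 331.3*sqrt(1 + T/273.15), so dc/c = 0.5*dT/(273.15 + T);
# with travel time t = L/c, the observed dt/t = -dc/c, hence:
dT = -2.0 * (273.15 + t0_celsius) * dt_over_t
print(f"estimated relative air temperature change: {dT:+.2f} K")
```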

  17. Broadband bearing-time records of three-component seismic array data and their application to the study of local earthquake coda

    NASA Astrophysics Data System (ADS)

    Wagner, Gregory S.; Owens, Thomas J.

    1993-09-01

    High-frequency three-component array data are used to study the P and S coda produced by local earthquakes. The data are displayed as broadband bearing-time records which allow us to examine a complete time history of the propagation directions and arrival times of direct and scattered phases crossing the array. This visualization technique is used to examine the wavefield at two scale lengths using two sub-arrays of sensors. Results suggest that P coda is dominated by P energy propagating sub-parallel to the direct P arrival. The S coda propagates in all directions and appears to be composed predominantly of S and/or surface wave energy. Significantly more S coda appears on the smaller scale length sub-array relative to the larger scale array, suggesting that much of the S coda remains coherent for only very short distances.

  18. Distribution of fine-scale mantle heterogeneity from observations of Pdiff coda

    USGS Publications Warehouse

    Earle, P.S.; Shearer, P.M.

    2001-01-01

    We present stacked record sections of Global Seismic Network data that image the average amplitude and polarization of the high-frequency Pdiff coda and investigate their implications for the depth extent of fine-scale (~10 km) mantle heterogeneity. The extended 1-Hz coda lasts for at least 150 sec and is observed to a distance of 130°. The coda's polarization angle is about the same as the main Pdiff arrival (4.4 sec/deg) and is nearly constant with time. Previous studies show that multiple scattering from heterogeneity restricted to the lowermost mantle generates an extended Pdiff coda with a constant polarization. Here we present an alternative model that satisfies our Pdiff observations. The model consists of single scattering from weak (~1%) fine-scale (~2 km) structures distributed throughout the mantle. Although this model is nonunique, it demonstrates that Pdiff coda observations do not preclude the existence of scattering contributions from the entire mantle.

  19. A surface wave reflector in Southwestern Japan

    NASA Astrophysics Data System (ADS)

    Mak, S.; Koketsu, K.; Miyake, H.; Obara, K.; Sekine, S.

    2009-12-01

    Surface waves at short periods (<35 s) are affected severely by heterogeneities in the crust and the uppermost mantle. When the scale of heterogeneity is sufficiently large, its effect can be studied in a deterministic way using conventional concepts of reflection and refraction. A well-known example is surface wave refraction at a continental margin. We present a case study that investigates the composition of surface wave coda in a deterministic approach. A long duration of surface wave coda with a predominant period of 20 s is observed during various strong earthquakes around Japan. The coda shows an unambiguous propagation direction, implying a deterministic nature. Beamforming and particle motion analysis suggest that the surface wave later arrivals could be explained by Love wave reflections from a point reflector located offshore southeast of Kyushu. The reflector shows a favored azimuth in emitted strength that is seemingly independent of the incidence direction. In addition to beamforming, we use a new regional crustal velocity model to perform a grid-search ray-tracing, under the assumption of a point reflector, to further constrain the location of coda generation. Because strong velocity anomalies exist near the zone of interest, we use a network shortest-path ray-tracing method, instead of analytical methods like shooting and bending, to avoid problems with convergence, shadow zones, and smooth-model assumptions. Two geological features are found to be related to the formation of the coda. The primary one is the intersection between the Kyushu-Palau Ridge and the Nankai Trough offshore southeast of Kyushu (hereafter referred to as "KPR-NT"), which may act as a point reflector. There is a strong Love wave phase velocity anomaly at KPR-NT but not at other parts of the ridge, implying that topography is irrelevant. Rayleigh wave phase velocity does not experience a strong anomaly there, which is consistent with the absence of Rayleigh wave reflections implied by the observed particle motions. The secondary feature is a low phase velocity (<2 km/s for T=20 s) at the accretionary wedge of the Nankai Trough due to the thick sediment. Such a long and narrow low velocity zone, with its southwest tip at KPR-NT, is a potential waveguide to channel waves towards KPR-NT. The longer duration of the deterministic later arrivals relative to the direct arrival is partially explained by multi-pathing due to the waveguide. The surface wave coda is observable for earthquakes whose propagation path does not include the accretionary wedge, implying that the wedge enhances, but is not indispensable to, the formation of the observed coda.

  20. Studies of the seismic coda using an earthquake cluster as a deeply buried seismograph array

    NASA Astrophysics Data System (ADS)

    Spudich, Paul; Bostwick, Todd

    1987-09-01

    Loosely speaking, the principle of Green's function reciprocity means that the source and receiver positions in a seismic experiment can be exchanged without affecting the observed seismograms. Consequently, the seismograms observed at a single observation location o and caused by a cluster of microearthquakes at locations {ei} are identical to the time series that would be measured by an array of stress meters emplaced at positions {ei}, recording waves generated by a source acting at o. By applying array analysis techniques like slant stacking and frequency-wave number analysis to these seismograms, we can determine the directions and velocities of the component waves as they travel in the earthquake focal region rather than at the surface. We have developed a computationally rapid plane-wave decomposition which we have applied to single-station recordings of aftershocks of the 1984 Morgan Hill, California, earthquake. The analysis is applied to data from three seismic stations having considerably different site geologies. One is a relatively hard rock station situated on Franciscan metamorphics, one is within the Calaveras fault zone, and one is on semiconsolidated sand and gravels. We define the early coda to be the part of the coda initiating immediately after the direct S wave and ending at twice the S wave lapse time. The character of the S wave and early coda varies from being impulsive at the first station to highly reverberative at the last. We examine waves in sequential time windows starting at the S wave and continuing through the early part of the coda. At all seismic stations the early coda is dominated by a persistent signal that must be caused by multiple scattering, probably within 2 km of each seismic station. Despite clear station-to-station differences in the character of the early coda, coda Q values measured in the late coda (greater than twice the S lapse time) agree well among stations, implying that the mechanisms causing the varying behavior of the early coda do not control the coda decay rate at the stations we have considered. Coda Q values measured on horizontal components of motion agree within a factor of 2 with those measured on vertical components. We have not been able to determine the composition of the late coda because of a low signal-to-noise ratio. Our analysis technique, however, is quite appropriate for such a task.
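
    A minimal sketch of the plane-wave decomposition idea described above, reduced to a slant stack over trial slownesses for a line array: traces are time-shifted by p*x and stacked, and stack power peaks at the slowness of a coherent plane wave. The array geometry, wavelet, and slowness values are illustrative, not those of the Morgan Hill analysis.

```python
import numpy as np

fs = 200.0
t = np.arange(0.0, 4.0, 1.0 / fs)
x = np.linspace(0.0, 2000.0, 11)              # sensor offsets along a line (m)
p_true = 1.0 / 3000.0                         # true horizontal slowness (s/m)

def wavelet(tau):
    """Ricker-like pulse centered at tau = 1 s."""
    a = (tau - 1.0) * 30.0
    return (1.0 - 2.0 * a ** 2) * np.exp(-a ** 2)

traces = np.array([wavelet(t - p_true * xi) for xi in x])

slownesses = np.linspace(0.0, 1.0 / 1500.0, 201)
power = []
for p in slownesses:
    # shift each trace back by p*x and stack (tau-p slant stack)
    shifted = [np.interp(t + p * xi, t, tr) for xi, tr in zip(x, traces)]
    power.append(np.sum(np.mean(shifted, axis=0) ** 2))

best = slownesses[int(np.argmax(power))]
print(f"best-fit slowness: {best * 1e3:.3f} s/km (true {p_true * 1e3:.3f})")
```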

  1. Are syllabification and resyllabification strategies phonotactically directed in French children with dyslexia? A preliminary report.

    PubMed

    Maïonchi-Pino, Norbert; de Cara, Bruno; Ecalle, Jean; Magnan, Annie

    2012-04-01

    In this study, the authors queried whether French-speaking children with dyslexia were sensitive to consonant sonority and position within syllable boundaries to influence a phonological syllable-based segmentation in silent reading. Participants included 15 French-speaking children with dyslexia, compared with 30 chronological age-matched and reading level-matched controls. Children were tested with an audiovisual recognition task. A target pseudoword (TOLPUDE) was simultaneously presented visually and auditorily and then was compared with a printed test pseudoword that either was identical or differed after the coda deletion (TOPUDE) or the onset deletion (TOLUDE). The intervocalic consonant sequences had either a sonorant coda-sonorant onset (TOR.LADE), sonorant coda-obstruent onset (TOL.PUDE), obstruent coda-sonorant onset (DOT.LIRE), or obstruent coda-obstruent onset (BIC.TADE) sonority profile. All children processed identity better than they processed deletion, especially with the optimal sonorant coda-obstruent onset sonority profile. However, children preserved syllabification (coda deletion; TO.PUDE) rather than resyllabification (onset deletion; TO.LUDE) with intervocalic consonant sequence reductions, especially when sonorant codas were deleted but the optimal intersyllable contact was respected. It was surprising to find that although children with dyslexia generally exhibit phonological and acoustic-phonetic impairments (voicing), they showed sensitivity to the optimal sonority profile and a preference for preserved syllabification. The authors proposed a sonority-modulated explanation to account for phonological syllable-based processing. Educational implications are discussed.

  2. Transitioning the Coda Methodology to Full 2-D for P and S Codas (Postprint)

    DTIC Science & Technology

    2011-12-30

    had great success at local and near-regional distances for simple regions, for crustal S transitioning to Lg coda types, and at longer distances for...coda. This effect was critical for yield estimation work and will be equally critical in other areas of low crustal Q and Lg blockage, such as Iran...for making a change to the methodology is quite simple. First, regions of monitoring interest are rarely tectonically simple, and in fact, most

  3. Active doublet method for measuring small changes in physical properties

    DOEpatents

    Roberts, Peter M.; Fehler, Michael C.; Johnson, Paul A.; Phillips, W. Scott

    1994-01-01

    Small changes in material properties of a work piece are detected by measuring small changes in elastic wave velocity and attenuation within the work piece. Active, repeatable sources generate coda wave responses from the work piece, where the coda wave responses are temporally displaced. By analyzing progressive relative phase and amplitude changes between the coda wave responses as a function of elapsed time, accurate determinations of velocity and attenuation changes are made. Thus, a small change in velocity occurring within a sample region during the time periods between excitation origin times (herein called "doublets") will produce a relative delay that changes with elapsed time over some portion of the scattered waves. This trend of changing delay is easier to detect than an isolated delay based on a single arrival and provides a direct measure of elastic wave velocity changes arising from changed material properties of the work piece.
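
    A minimal sketch of the doublet measurement, assuming two synthetic coda recordings from a repeatable source: the time delay in successive lapse-time windows is measured by cross-correlation, and a delay growing linearly with lapse time gives dv/v = -d(delay)/dt. Sampling rate, window lengths, and the imposed velocity change are illustrative.

```python
import numpy as np
from scipy.signal import correlate

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
coda1 = rng.standard_normal(t.size) * np.exp(-t / 0.5)   # first recording

dv_over_v = -5.0e-3                                      # imposed change
coda2 = np.interp(t * (1.0 + dv_over_v), t, coda1)       # second recording

delays, centers = [], []
for w0 in np.arange(0.2, 1.6, 0.2):                      # 0.2 s windows
    i0, i1 = int(w0 * fs), int((w0 + 0.2) * fs)
    cc = correlate(coda2[i0:i1], coda1[i0:i1], mode="full")
    delays.append((int(np.argmax(cc)) - (i1 - i0 - 1)) / fs)
    centers.append(w0 + 0.1)

slope = np.polyfit(centers, delays, 1)[0]                # d(delay)/dt
print(f"recovered dv/v ~ {-slope:.1e} (imposed {dv_over_v:.0e})")
```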

  4. Site Amplification Characteristics of the Several Seismic Stations at Jeju Island, in Korea, using S-wave Energy, Background Noise, and Coda waves from the East Japan earthquake (Mar. 11th, 2011) Series.

    NASA Astrophysics Data System (ADS)

    Seong-hwa, Y.; Wee, S.; Kim, J.

    2016-12-01

    Observed ground motions are shaped by three main factors: the seismic source, seismic wave attenuation, and site amplification. Site amplification is an important factor that should be considered for more reliable estimates of soil-structure dynamic interaction. Although various estimation methods have been proposed, this study used the method of Castro et al. (1997) for estimating site amplification. This method has recently been extended to background noise, coda waves and S waves. This study applied the Castro et al. (1997) method to three different types of seismic energy: S-wave energy, background noise, and coda waves. More than 200 ground motions (acceleration records) from the East Japan earthquake (March 11th, 2011) series were analysed at seismic stations on Jeju Island (JJU, SGP, HALB, SSP and GOS; Fig. 1), in Korea. The results showed that most of the seismic stations gave similar results among the three types of seismic energy. Each station showed its own site amplification characteristics in low, high and specific resonance frequency ranges. Comparison of this study with other studies can provide much information about the dynamic amplification characteristics of domestic sites and about site classification.

  5. The global short-period wavefield modelled with a Monte Carlo seismic phonon method

    USGS Publications Warehouse

    Shearer, Peter M.; Earle, Paul

    2004-01-01

    At high frequencies (∼1 Hz), much of the seismic energy arriving at teleseismic distances is not found in the main phases (e.g. P, PP, S, etc.) but is contained in the extended coda that follows these arrivals. This coda results from scattering off small-scale velocity and density perturbations within the crust and mantle and contains valuable information regarding the depth dependence and strength of this heterogeneity as well as the relative importance of intrinsic versus scattering attenuation. Most analyses of seismic coda to date have concentrated on S-wave coda generated from lithospheric scattering for events recorded at local and regional distances. Here, we examine the globally averaged vertical-component, 1-Hz wavefield (>10° range) for earthquakes recorded in the IRIS FARM archive from 1990 to 1999. We apply an envelope-function stacking technique to image the average time–distance behavior of the wavefield for both shallow (≤50 km) and deep (≥500 km) earthquakes. Unlike regional records, our images are dominated by P and P coda owing to the large effect of attenuation on PP and S at high frequencies. Modelling our results is complicated by the need to include a variety of ray paths, the likely contributions of multiple scattering and the possible importance of P-to-S and S-to-P scattering. We adopt a stochastic, particle-based approach in which millions of seismic phonons are randomly sprayed from the source and tracked through the Earth. Each phonon represents an energy packet that travels along the appropriate ray path until it is affected by a discontinuity or a scatterer. Discontinuities are modelled by treating the energy normalized reflection and transmission coefficients as probabilities. Scattering probabilities and scattering angles are computed in a similar fashion, assuming random velocity and density perturbations characterized by an exponential autocorrelation function. Intrinsic attenuation is included by reducing the energy contained in each particle as an appropriate function of traveltime. We find that most scattering occurs in the lithosphere and upper mantle, as previous results have indicated, but that some lower-mantle scattering is likely also required. A model with 3 to 4 per cent rms velocity heterogeneity at 4-km scale length in the upper mantle and 0.5 per cent rms velocity heterogeneity at 8-km scale length in the lower mantle (with intrinsic attenuation of Qα = 450 above 200 km depth and Qα = 2500 below 200 km) provides a reasonable fit to both the shallow- and deep-earthquake observations, although many trade-offs exist between the scale length, depth extent and strength of the heterogeneity.
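
    A minimal sketch of the particle-based idea, shrunk to a uniform 2-D medium: energy packets are sprayed from a source, travel at constant speed, scatter isotropically after exponential random path lengths, lose energy to intrinsic attenuation, and arrival times at a receiver ring are histogrammed into a coda envelope. All parameters are illustrative, and none of the Earth structure, probabilistic reflection/transmission, or P-to-S conversions of the full method are included.

```python
import numpy as np

rng = np.random.default_rng(1)
v, mfp, q, f = 8.0, 400.0, 1000.0, 1.0    # km/s, mean free path (km), Q, Hz
r_rec, npkt = 1000.0, 5000                # receiver ring radius (km), packets

times, weights = [], []
for _ in range(npkt):
    pos, t = np.zeros(2), 0.0
    theta = rng.uniform(0.0, 2.0 * np.pi)
    for _ in range(50):                   # cap the number of scatterings
        step = rng.exponential(mfp)
        new = pos + step * np.array([np.cos(theta), np.sin(theta)])
        r0, r1 = np.hypot(*pos), np.hypot(*new)
        if (r0 - r_rec) * (r1 - r_rec) < 0.0:           # leg crosses the ring
            t_hit = t + abs((r_rec - r0) / (r1 - r0)) * step / v
            times.append(t_hit)
            weights.append(np.exp(-2.0 * np.pi * f * t_hit / q))  # intrinsic Q
        pos, t = new, t + step / v
        theta = rng.uniform(0.0, 2.0 * np.pi)           # isotropic scattering

env, edges = np.histogram(times, bins=60, range=(0.0, 600.0), weights=weights)
print(f"direct arrival ~{r_rec / v:.0f} s; "
      f"peak envelope bin at {edges[int(np.argmax(env))]:.0f} s")
```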

  6. 77 FR 65886 - Century Metal Recycling PVT. LTD v. Dacon Logistics, LLC dba CODA Forwarding, Great American...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ..., LLC dba CODA Forwarding, Great American Alliance Insurance Company, Avalon Risk Management, HAPAG... Logistics, LLC dba Coda Forwarding (Dacon); Great American Alliance Insurance Company; Avalon Risk Management; Hapag Lloyd America, Inc. (Hapag Lloyd); and Mitsui OSK Lines (Mitsui), hereinafter ``Respondents...

  7. Differences in coda voicing trigger changes in gestural timing: A test case from the American English diphthong /aɪ/

    PubMed Central

    Dahan, Delphine

    2016-01-01

    We investigate the hypothesis that duration and spectral differences in vowels before voiceless versus voiced codas originate from a single source, namely the reorganization of articulatory gestures relative to one another in time. As a test case, we examine the American English diphthong /aɪ/, in which the acoustic manifestations of the nucleus /a/ and offglide /ɪ/ gestures are relatively easy to identify, and we use the ratio of nucleus-to-offglide duration as an index of the temporal distance between these gestures. Experiment 1 demonstrates that, in production, the ratio is smaller before voiceless codas than before voiced codas; this effect is consistent across speakers as well as changes in speech rate and phrasal position. Experiment 2 demonstrates that, in perception, diphthongs with contextually incongruent ratios delay listeners’ identification of target words containing voiceless codas, even when the other durational and spectral correlates of voicing remain intact. This, we argue, is evidence that listeners are sensitive to the gestural origins of voicing differences. Both sets of results support the idea that the voicing contrast triggers changes in timing: gestures are close to one another in time before voiceless codas, but separated from one another before voiced codas. PMID:26966337

  8. Detonation charge size versus coda magnitude relations in California and Nevada

    USGS Publications Warehouse

    Brocher, T.M.

    2003-01-01

    Magnitude-charge size relations have important uses in forensic seismology and are used in Comprehensive Nuclear-Test-Ban Treaty monitoring. I derive empirical magnitude versus detonation-charge-size relationships for 322 detonations located by permanent seismic networks in California and Nevada. These detonations, used in 41 different seismic refraction or network calibration experiments, ranged in yield (charge size) between 25 and 10^6 kg; coda magnitudes reported for them ranged from 0.5 to 3.9. Almost all represent simultaneous (single-fired) detonations of one or more boreholes. Repeated detonations at the same shotpoint suggest that the reported coda magnitudes are repeatable, on average, to within 0.1 magnitude unit. An empirical linear regression for these 322 detonations yields M = 0.31 + 0.50 log10(weight [kg]). The detonations compiled here demonstrate that the Khalturin et al. (1998) relationship, developed mainly for data from large chemical explosions but which also fits data from nuclear blasts, can be used to estimate the minimum charge size for coda magnitudes between 0.5 and 3.9. Drilling, loading, and shooting logs indicate that the explosive specification, loading method, and effectiveness of tamp are the primary factors determining the efficiency of a detonation. These records indicate that locating a detonation within the water table is neither a necessary nor a sufficient condition for an efficient shot.
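
    A minimal sketch of the regression mechanics behind a relation of the form M = a + b*log10(W[kg]) (a ~ 0.31, b ~ 0.50 for the 322 detonations above), and of inverting it for a charge size. The (W, M) pairs below are synthetic stand-ins generated from the published relation plus noise, used only to illustrate the fit.

```python
import numpy as np

rng = np.random.default_rng(6)
w = np.array([50.0, 200.0, 500.0, 1e3, 5e3, 2e4, 1e5, 1e6])  # charge (kg)
m = 0.31 + 0.50 * np.log10(w) + 0.1 * rng.standard_normal(w.size)

b, a = np.polyfit(np.log10(w), m, 1)      # fit M = a + b*log10(W)
print(f"M = {a:.2f} + {b:.2f} log10(W)")

m_obs = 2.5                               # an observed coda magnitude
w_est = 10.0 ** ((m_obs - a) / b)         # implied charge size
print(f"coda magnitude {m_obs} -> charge size ~{w_est:.0f} kg")
```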

  9. The California Oak Disease and Arthropod (CODA) Database

    Treesearch

    Tedmund J. Swiecki; Elizabeth A. Bernhardt; Richard A. Arnold

    1997-01-01

    The California Oak Disease and Arthropod (CODA) host index database is a compilation of information on agents that colonize or feed on oaks in California. Agents in the database include plant-feeding insects and mites, nematodes, microorganisms, viruses, and abiotic disease agents. CODA contains summarized information on hosts, agents, information sources, and the...

  10. Envelope of coda waves for a double couple source due to non-linear elasticity

    NASA Astrophysics Data System (ADS)

    Calisto, Ignacia; Bataille, Klaus

    2014-10-01

    Non-linear elasticity has recently been considered as a source of scattering, therefore contributing to the coda of seismic waves, in particular for the case of explosive sources. This idea is analysed further here, theoretically solving the expression for the envelope of coda waves generated by a point moment tensor in order to compare with earthquake data. For weak non-linearities, one can consider each point of the non-linear medium as a source of scattering within a homogeneous and linear medium, for which Green's functions can be used to compute the total displacement of scattered waves. These sources of scattering have specific radiation patterns depending on the incident and scattered P or S waves, respectively. In this approach, the coda envelope depends on three scalar parameters related to the specific non-linearity of the medium; however these parameters only change the scale of the coda envelope. The shape of the coda envelope is sensitive to both the source time function and the intrinsic attenuation. We compare simulations using this model with data from earthquakes in Taiwan, with a good fit.

  11. Analysis of intermediate period correlations of coda from deep earthquakes

    NASA Astrophysics Data System (ADS)

    Poli, Piero; Campillo, Michel; de Hoop, Maarten

    2017-11-01

    We aim to assess quantitatively the nature of the signals that appear in coda wave correlations at periods >20 s. These signals contain transient constituents with arrival times corresponding to deep seismic phases. These (body-wave) constituents can be used for imaging. To evaluate this approach, we calculate the autocorrelations of the vertical component seismograms for the Mw 8.4 Sea of Okhotsk earthquake at 400 stations in the Eastern US, using data from 1 h before to 50 h after the earthquake. Using array analysis and mode identification, we discover the dominant role played by high quality factor normal modes in the emergence of strong coherent phases such as ScS-like and P'P'df-like arrivals. We then make use of geometrical quantization to derive the constituent rays associated with particular modes, and gain insight into the ballistic reverberation of the rays that contributes to the emergence of body waves. Our study indicates that the signals measured in the spatially averaged autocorrelations have physical significance, but a direct interpretation of the ScS-like and P'P'df-like phases is not trivial. Indeed, even a single simple measurement of the long period late coda in a limited period band could provide valuable information on deep structure through the temporal information in its autocorrelation, a procedure that could also be useful for planetary exploration.
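
    A minimal sketch of the kind of processing used to extract coherent lags from long coda records: autocorrelate each station's vertical trace and stack over stations. The traces here are random stand-ins, the one-bit normalization is a common simplification rather than the authors' exact processing, and the FFT is zero-padded so the correlation is linear rather than circular.

```python
import numpy as np

fs, n = 1.0, 3600                            # 1 sps long-period data, 1 hour
rng = np.random.default_rng(2)
stack = np.zeros(n)

for _ in range(40):                          # loop over stations
    trace = rng.standard_normal(n)           # stand-in vertical coda record
    trace = np.sign(trace)                   # one-bit amplitude normalization
    spec = np.fft.rfft(trace, 2 * n)         # zero-pad to avoid wrap-around
    ac = np.fft.irfft(spec * np.conj(spec))[:n]
    stack += ac / ac[0]                      # normalize by zero-lag energy

print(f"stacked autocorrelation: {stack.size} lags at {1.0 / fs:.0f} s spacing")
```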

  12. Lateral Variations of Lg Coda Q in Southern Mexico

    NASA Astrophysics Data System (ADS)

    Yamamoto, J.; Quintanar, L.; Herrmann, R. B.; Fuentes, C.

    Broad band digital three-component data recorded at UNM, a GEOSCOPE station, were used to estimate Lg coda Q for 34 medium size (3.9 <= mb <= 6.3) earthquakes with travel paths lying in different geological provinces of southern Mexico, in an effort to establish the possible existence of geological structures acting as wave guides and/or travel paths of low attenuation between the Pacific coast and the Valley of Mexico. The stacked spectral ratio method proposed by Xie and Nuttli (1988) was chosen for computing the coda Q. The ranges of Q0 (Q at 1 Hz) and of the frequency dependence parameter η, averaged over the frequency interval 0.5 to 2 Hz for the regions and the three components considered, are: i) Guerrero region, 173 <= Q0 <= 182 and 0.6 <= η <= 0.7; ii) Oaxaca region, 183 <= Q0 <= 198 and 0.6 <= η <= 0.8; iii) Michoacan-Jalisco region, 187 <= Q0 <= 204 and 0.7 <= η <= 0.8; and iv) eastern portion of the Transmexican Volcanic Belt (TMVB), 313 <= Q0 <= 335 and η = 0.9. The results show a very high coda Q for the TMVB as compared to other regions of southern Mexico. This unexpected result is difficult to reconcile with the geophysical characteristics of the TMVB, e.g., low seismicity, high volcanic activity and high heat flow, typical of a highly attenuating (low Q) region. Visual inspection of seismograms indicates that for earthquakes with seismic waves traveling along the TMVB, the amplitude decay of the Lg coda is anomalously slow compared to other earthquakes in southern Mexico. Thus, it seems that the high Q value found does not entirely reflect the attenuation characteristics of the TMVB but is probably contaminated by a wave-guide effect. This phenomenon produces an enhancement in the time duration of the Lg wave trains travelling along this geological structure. This result is important for establishing the role played by the transmission medium in the extremely long duration of ground motion observed during the September 19, 1985 Michoacan earthquake. The overall spatial distribution of coda Q values indicates that events with focus in the Michoacan-Jalisco and Oaxaca regions yield slightly higher values than those from Guerrero. This feature is more pronounced for the horizontal component of coda Q. A slight dependence of average coda Q^-1 on earthquake focal depth is observed on the horizontal component in the frequency range of approximately 0.2 to 1.0 Hz. Deeper (h > 50 km) events yield lower values of Q^-1 than shallower events. For frequencies higher than 1.0 Hz no clear dependence of Q^-1 on focal depth is observed. However, due to the uncertainties of the estimates, this result is not firmly established.
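
    A minimal sketch of the frequency-dependent model Q(f) = Q0 * f^η that these Q0 and η values parameterize: given band-wise Q estimates, Q0 and η follow from a straight-line fit in log-log space. The band centers and Q values below are illustrative numbers in the range reported here, not measurements from UNM.

```python
import numpy as np

f = np.array([0.5, 0.75, 1.0, 1.5, 2.0])      # band center frequencies (Hz)
q = 185.0 * f ** 0.7                          # stand-in coda-Q estimates

eta, log_q0 = np.polyfit(np.log10(f), np.log10(q), 1)  # log10 Q = log10 Q0 + eta*log10 f
print(f"Q0 = {10 ** log_q0:.0f}, eta = {eta:.2f}")
```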

  13. Estimation of the intrinsic absorption and scattering attenuation in Northeastern Venezuela (Southeastern Caribbean) using coda waves

    USGS Publications Warehouse

    Ugalde, A.; Pujades, L.G.; Canas, J.A.; Villasenor, A.

    1998-01-01

    Northeastern Venezuela has been studied in terms of coda wave attenuation using seismograms from local earthquakes recorded by a temporary short-period seismic network. The studied area has been separated into two subregions in order to investigate lateral variations in the attenuation parameters. Coda Q^-1 (Qc^-1) has been obtained using the single-scattering theory. The contribution of intrinsic absorption (Qi^-1) and scattering (Qs^-1) to total attenuation (Qt^-1) has been estimated by means of a multiple lapse time window method, based on the hypothesis of multiple isotropic scattering with uniform distribution of scatterers. Results show significant spatial variations of attenuation: the estimates for intermediate depth events and for shallow events present major differences. This fact may be related to different tectonic characteristics that may be due to the presence of the Lesser Antilles subduction zone, because the intermediate depth seismic zone may be coincident with the southern continuation of the subducting slab under the arc.

  14. Acquisition of Codas in Spanish as a First Language: The Role of Accuracy, Markedness and Frequency

    ERIC Educational Resources Information Center

    Polo, Nuria

    2018-01-01

    Studies on the acquisition of Spanish as a first language do not agree on the patterns and factors relevant for coda development. In order to shed light on the questions involved, a longitudinal study of coda development in Northern European Spanish was carried out to explore the relationship between accuracy, markedness and frequency. The study…

  15. 2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K M; Malagnini, L; Phillips, W S

    2005-07-13

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications for earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare the performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provide for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution, compared 1-D and 2-D path corrections, and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available it is still preferable to 2-D direct-wave-based measures.

  16. Direct modeling of coda wave interferometry: comparison of numerical and experimental approaches

    NASA Astrophysics Data System (ADS)

    Azzola, Jérôme; Masson, Frédéric; Schmittbuhl, Jean

    2017-04-01

    The sensitivity of coda waves to small changes in the propagation medium is the principle behind coda wave interferometry, a technique that has found a large range of applications over the past years. It exploits the evolution of strongly scattered waves in a limited region of space to estimate slight changes such as the wave velocity of the medium, but also the location of scatterer positions or the stress field. Because of its sensitivity, the method is of great value for detecting fine changes when monitoring geothermal EGS reservoirs. The aim of this work is thus to monitor the impact of different scatterer distributions and of evolving loading conditions using coda wave interferometry, both in the laboratory and numerically by modelling the scattered wavefield. In the laboratory, we analyze the scattering of an acoustic wave through a perforated, loaded plate of DURAL; the localized damage introduced behaves as a source of scattering. Coda wave interferometry is performed by computing correlations of waveforms under different loading conditions, for different scatterer distributions. Numerically, we use SPECFEM2D, a 2D spectral element code (Komatitsch and Vilotte, 1998), to perform 2D simulations of acoustic and elastic seismic wave propagation, which enables a direct comparison with laboratory and field results. An unstructured mesh is used to simulate the propagation of a wavelet in a loaded plate, before and after the introduction of localized damage. The linear elastic deformation of the plate is simulated using Code Aster. The coda wave interferometry is performed similarly to the experimental measurements. The accuracy of the comparison between the numerical and laboratory results depends strongly on the capacity to match the laboratory and numerical simulation conditions. In the laboratory, the comparison is conditioned above all by the ability to illuminate the medium in a way similar to that used in the numerical simulation. In the simulation, the design of the mesh and its numerical dispersion also influence the validity of the comparison and its interpretation. Moreover, the distribution of spectral elements in the mesh and its relative refinement could themselves act as a source of scattering.
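
    A minimal sketch of the stretching technique commonly used to quantify such coda wave interferometry measurements (the study's own processing may differ): the perturbed coda is compared with stretched versions ref(t*(1+eps)) of the reference, and the eps maximizing the correlation coefficient estimates -dv/v. The waveforms and the imposed stretch are synthetic.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)
ref = rng.standard_normal(t.size) * np.exp(-t / 0.3)   # reference coda

eps_true = 2.0e-3                                      # imposed stretch (-dv/v)
cur = np.interp(t * (1.0 + eps_true), t, ref)          # perturbed coda

def corr(a, b):
    """Zero-lag correlation coefficient of two equal-length signals."""
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

eps_grid = np.linspace(-5e-3, 5e-3, 401)
cc = [corr(cur, np.interp(t * (1.0 + e), t, ref)) for e in eps_grid]
best = eps_grid[int(np.argmax(cc))]
print(f"recovered stretch {best:+.2e} (imposed {eps_true:+.2e}); "
      f"dv/v ~ {-best:+.2e}")
```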

  17. A Method to Retrieve the Multi-Receiver Moho Reflection Response from SH-Wave Scattering Coda in the Radiative Transfer Regime

    NASA Astrophysics Data System (ADS)

    Hartstra, I.; Wapenaar, C. P. A.

    2015-12-01

    We discuss a method to retrieve the multi-receiver Moho reflection response by interferometry from SH-wave coda in the 0.5-3 Hz frequency range. An image derived from a reflection response with a well-defined virtual source would provide deterministic impedance contrasts, which can complement transmission tomography. For an accurate retrieval, cross-correlation interferometry requires the coda wave field to sample the imaging target and to illuminate the receiver array isotropically. When these illumination requirements are only partially met or not met at all, the stationary phase cannot be fully captured and artifacts will contaminate the retrieved reflection response. Here we conduct numerical scalar 2D finite difference simulations to investigate the challenging situation in which only shallow crustal earthquake sources illuminate the Moho and the response is recorded by a 2D linear array. We quantify to what extent the prevalence of scatterers in the crust can improve the illumination conditions and thus the retrieval of the Moho reflection. The accuracy of the retrieved reflection is evaluated for two physically different scattering regimes: the Rayleigh and Mie regimes. We use only the earlier part of the scattering coda, because we have found that the later, diffusive part does not significantly improve the retrieval. The density of the spherical scatterers is varied in order to change the scattering mean free path. This characteristic length scale is calculated for each model with the 2D radiative transfer equation, which governs the earlier part of the scattering coda. The experiment is repeated for models of different geological settings derived from existing S-wave tomographies, which vary in Moho depth and reflectivity. The scattering mean free path can be approximated for real data if intrinsic attenuation is known, because the wavenumber-dependent scattering attenuation of the coherent wave amplitude depends on the scattering mean free path. This link makes it possible to determine in which spatial and temporal bandwidth retrieval is optimal for a specific geological setting.

  18. 2-D or not 2-D, that is the question: A Northern California test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K; Malagnini, L; Phillips, W S

    2005-06-06

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications for earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provide for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available it is still preferable to 2-D direct-wave-based measures.

  19. DEVELOPING AND EXPLOITING A UNIQUE SEISMIC DATA SET FROM SOUTH AFRICAN GOLD MINES FOR SOURCE CHARACTERIZATION AND WAVE PROPAGATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Julia, J; Nyblade, A A; Gok, R

    2008-07-08

    In this project, we are developing and exploiting a unique seismic data set to address the characteristics of small seismic events and the associated seismic signals observed at local (< 200 km) and regional (< 2000 km) distances. The dataset is being developed using mining-induced events from 3 deep gold mines in South Africa recorded on in-mine networks (< 1 km) comprised of tens of high-frequency sensors, a network of 4 broadband stations installed as part of this project at the surface around the mines (1-10 km), and a network of existing broadband seismic stations at local/regional distances (50-1000 km) from the mines. After 1 year of seismic monitoring of mine activity (2007), over 10,000 events in the range -3.4 < ML < 4.4 have been catalogued and recorded by the in-mine networks. Events with positive magnitudes are generally well recorded by the surface-mine stations, while magnitudes 3.0 and larger are seen at regional distances (up to ~600 km) in high-pass filtered recordings. We have analyzed in-mine recordings in detail at one of the South African mines (Savuka) to (i) improve on reported hypocentral locations, (ii) verify sensor orientations, and (iii) determine full moment tensor solutions. Hypocentral relocations of all catalogued events have been obtained from P- and S-wave travel times reported by the mine network operator through an automated procedure that selects travel times falling on Wadati lines with slopes in the 0.6-0.7 range; sensor orientations have been verified and, when possible, corrected by correlating P-, SV-, and SH-waveforms obtained from theoretical and empirical (polarization filter) rotation angles; full moment tensor solutions have been obtained by inverting P-, SV-, and SH- spectral amplitudes measured on the theoretically rotated waveforms with visually assigned polarities. The relocation procedure has revealed that origin times often require a negative correction of a few tenths of a second and that hypocentral locations may move by a few hundred meters. The full moment tensor determination has revealed that the most common focal mechanism (47 out of 82 solutions for events in the 0.2 < ML < 4.1 range) consists of a similar percentage of isotropic (implosive) and deviatoric components, with a normal fault-type best double couple. We have also calibrated the regional stations for seismic coda derived source spectra and moment magnitude using the envelope methodology of Mayeda et al. (2003). We tie the coda Mw to independent values from waveform modeling. The resulting coda-based source spectra of shallow mining-related events show significant spectral peaking that is not seen in deeper tectonic earthquakes. This coda peaking may be an independent method of identifying shallow events and is similar to coda peaking previously observed for Nevada explosions, where the frequency of the observed spectral peak correlates with depth of burial (Murphy et al., 2008).

  20. Detecting metastable olivine wedge beneath Japan Sea with deep earthquake coda wave interferometry

    NASA Astrophysics Data System (ADS)

    Shen, Z.; Zhan, Z.

    2017-12-01

    It has been hypothesized for decades that the lower-pressure olivine phase could kinetically persist in the interior of a slab into the transition zone, forming a low-velocity "Metastable Olivine Wedge" (MOW). The MOW, if it exists, would play a critical role in generating deep earthquakes and in parachuting subducted slabs with its buoyancy. However, seismic evidence for the MOW is still controversial, and it has been suggested that the MOW can only be detected using broadband waveforms, given the wavefront-healing effects on travel times. On the other hand, broadband waveforms are often complicated by shallow heterogeneities. Here we propose a new method using source-side interferometry of deep earthquake coda to detect the MOW. In this method, deep earthquakes are turned into virtual sensors with the reciprocity theorem, and the transient strain from one earthquake to the other is estimated by cross-correlating the coda from the deep earthquake pair at the same stations. This approach effectively isolates near-source structure from complicated shallow structures and hence provides finer resolution of deep slab structures. We apply this method to the Japan subduction zone with Hi-Net data, and our preliminary result does not support a large MOW model (100 km thick at 410 km depth) as suggested by several previous studies. Metastable olivine at small scales, or distributed in an incoherent manner in deep slabs, may still be possible.

  1. Coda Calibration Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, Travis; Barno, Justin; Dodge, Doug

    CCT is a Java-based application for calibrating 1-D shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.

  2. 3D Coda Attenuation Tomography of Acoustic Emission Data from Laboratory Samples as a tool for imaging pre-failure deformation mechanisms

    NASA Astrophysics Data System (ADS)

    Vinciguerra, S.; King, T. I.; Benson, P. M.; De Siena, L.

    2017-12-01

    In recent years, 3D and 4D seismic tomography have unraveled medium changes during the seismic cycle or before eruptive events. As our resolving power increases, however, complex structures increasingly affect the images. Being able to interpret and understand these features requires a multi-discipline approach combining different methods, each sensitive to particular properties of the sub-surface. Rock deformation laboratory experiments can relate seismic properties to the evolving medium quantitatively. Here, an array of 1 MHz piezo-electric transducers has recorded high-quality, low-noise acoustic emission (AE) data during triaxial compressional experiments. Samples of Carrara Marble, Darley Dale Sandstone and Westerly Granite were deformed in saturated conditions representative of a depth of about 1 km until brittle failure. Using a time window around sample failure, AE data were filtered between 5 and 75 kHz and processed using a 3D P-coda attenuation-tomography method. Ratios of direct-P to P-coda energies calculated for each source-receiver path were inverted, using the coda normalisation method, for values of Q (the P-wave quality factor). The results show Q variations with respect to an average Q. Q combines the effects of scattering attenuation (Qs) and intrinsic attenuation (Qi), which can be correlated with the sample structure. Qs primarily controls energy dissipation in the presence of acoustic impedance (AI) surfaces and at fracture tips, independently of rock type, while pore fluid effects dissipate energy (Qi). Damaged zones appear as high-Q and low-Q anomalies in unsaturated and saturated samples, respectively. We attribute frequency-dependent high Q to resonance in the presence of AI surfaces. Low-Q areas appear behind AI surfaces and are interpreted as energy shadows. These shadows can affect attenuation tomography imaging at field scale.
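
    A minimal sketch of the coda normalization step named above, in its classic single-station form: dividing the direct-wave amplitude by the coda amplitude at a fixed lapse time cancels source and site terms, and ln(r * A_direct / A_coda) regressed against distance r has slope -pi*f/(Q*v). Spherical geometrical spreading (gamma = 1) is assumed, and the distances and amplitudes below are synthetic.

```python
import numpy as np

f0, v, q_true = 6.0, 3.5, 250.0              # Hz, wave speed (km/s), target Q
rng = np.random.default_rng(4)
r = rng.uniform(20.0, 120.0, 80)             # source-receiver distances (km)

# stand-in coda-normalized amplitudes obeying A_direct/A_coda ~ exp(...)/r
ratio = np.exp(-np.pi * f0 * r / (q_true * v)) / r
ratio *= np.exp(0.1 * rng.standard_normal(r.size))   # lognormal scatter

slope = np.polyfit(r, np.log(r * ratio), 1)[0]       # = -pi*f/(Q*v)
q_est = -np.pi * f0 / (slope * v)
print(f"estimated Q at {f0:.0f} Hz: {q_est:.0f} (true {q_true:.0f})")
```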

  3. Scattered surface wave energy in the seismic coda

    USGS Publications Warehouse

    Zeng, Y.

    2006-01-01

    One of the many important contributions that Aki has made to seismology pertains to the origin of coda waves (Aki, 1969; Aki and Chouet, 1975). In this paper, I revisit Aki's original idea of the role of scattered surface waves in the seismic coda. Based on the radiative transfer theory, I developed a new set of scattered wave energy equations by including scattered surface waves and body-wave to surface-wave scattering conversions. The work extends the studies of Zeng et al. (1991), Zeng (1993) and Sato (1994a) on multiple isotropic scattering, and may shed new light on the interpretation of seismic coda waves. The scattering equations are solved numerically by first discretizing the model on a regular grid and then solving the linear integral equations iteratively. The results show that scattered wave energy is well approximated by body-wave to body-wave scattering at earlier arrival times and short distances. At long distances from the source, scattered surface waves dominate scattered body waves at surface stations. Since surface waves are 2-D propagating waves, their scattered energies should in theory follow a common decay curve. The common decay trends observed in the seismic coda of local earthquake recordings, particularly at long lapse times, suggest that the later seismic coda may be dominated by scattered surface waves. When efficient body-wave to surface-wave conversion mechanisms are present in the shallow crustal layers, such as soft sediment layers, scattered surface waves dominate the seismic coda even at early arrival times for shallow sources and at later arrival times for deeper events.

  4. Evidence for small-scale heterogeneity in Earth's inner core from a global study of PKiKP coda waves

    NASA Astrophysics Data System (ADS)

    Koper, Keith D.; Franks, Jill M.; Dombrovskaya, Marina

    2004-12-01

    Recent seismic observations have provided evidence that the inner core contains strong heterogeneity at a scale-length of tens of kilometers. The corresponding lateral variations in elastic properties could be caused by pockets of partial melt, alignment of iron crystals, or variations in chemistry. However, the relevant seismic observations (precritical PKiKP coda waves) were subtle and were made using historic seismic data. Furthermore, it has been suggested that the seismic data might be explainable by scatterers in the lower mantle or by a complex inner core boundary. To address these issues, we investigate a preexisting global database of precritical PKiKP waveforms at distances of 10°-50°, and a second, newly generated global database of PKiKP waveforms at distances of 50°-90°. We analyze the data using standard array processing techniques and identify PKiKP coda waves based on travel time, ray parameter, amplitude, and coherence. Although it remains unclear whether the scattered energy is being created within the inner core or along its boundary, we find three lines of evidence which support the idea that it is in fact related to the inner core: at smaller distances the decay rate of PKiKP coda is significantly lower than the decay rates of the corresponding PcP and ScP codas; at larger distances, we find examples of emergent, spindle-shaped PKiKP coda waves that exist without the parent PKiKP phase; and at larger distances, we infer a PKiKP coda decay rate similar to that determined from the data at the smaller distances. It is likely that many more PKiKP coda observations can be made with existing data sets, and hence seismologists possess a new, extraordinarily fine probe for inferring inner core structure.

  5. Attenuation characteristics in eastern Himalaya and southern Tibetan Plateau: An understanding of the physical state of the medium

    NASA Astrophysics Data System (ADS)

    Singh, Sagar; Singh, Chandrani; Biswas, Rahul; Mukhopadhyay, Sagarika; Sahu, Himanshu

    2016-08-01

    The attenuation characteristics of the crust in the eastern Himalaya and the southern Tibetan Plateau are investigated using high-quality data recorded by the Himalayan Nepal Tibet Seismic Experiment (HIMNT) during 2001-2003. The present study aims to provide an attenuation model that can address the physical mechanism governing the attenuation characteristics of the underlying medium. We study coda wave attenuation (Qc) under the single isotropic scattering hypothesis, S wave attenuation (Qs) using the coda normalization method, and the intrinsic (Qi^-1) and scattering (Qsc^-1) quality factors using the multiple lapse time window analysis (MLTWA) method, under the assumption of multiple isotropic scattering in a 3-D half-space, within the frequency range 2-12 Hz. All Q values exhibit the frequency-dependent behavior expected for a seismically active area. At all frequencies, intrinsic absorption predominates over scattering attenuation, and the seismic albedo (B0) is found to be lower than 0.5. The discrepancies between the observed and theoretical models can be attributed to the depth-dependent velocity and attenuation structure, as well as to the assumption of a uniform distribution of scatterers. Our results correlate well with the existing geo-tectonic model of the area, which may suggest the possible existence of trapped fluids in the crust or reflect its thermal state. Surprisingly, the underlying cause of high attenuation in the crust of the eastern Himalaya and southern Tibet makes this region distinct from the adjacent western Himalayan segment. The results are comparable with those reported for other regions globally.
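
    Since the coda normalization method is central here, a minimal single-frequency sketch is given below: dividing the geometrical-spreading-corrected direct-S amplitude by the coda level cancels the unknown source and site factors, and Qs follows from a straight-line fit versus distance. All numbers are synthetic; f, v and Qs_true are assumed values.

```python
import numpy as np

# Coda-normalization sketch at one frequency band: the coda level at a
# fixed lapse time carries the same source/site factor as the direct S
# wave but no distance dependence, so the ratio isolates path attenuation:
# ln(A_s * r / A_coda) = const - pi * f * r / (Qs * v).
rng = np.random.default_rng(1)
f, v, Qs_true = 6.0, 3.5, 300.0               # Hz, km/s, assumed true Q

r = rng.uniform(20.0, 120.0, 80)              # hypocentral distances, km
src_site = rng.lognormal(0.0, 0.3, 80)        # unknown source/site factors

A_s = src_site * np.exp(-np.pi * f * r / (Qs_true * v)) / r  # direct S
A_coda = src_site                             # coda level, path-independent

y = np.log(A_s * r / A_coda) + 0.1 * rng.standard_normal(80)
slope = np.polyfit(r, y, 1)[0]
print(f"Qs({f:.0f} Hz) = {-np.pi * f / (slope * v):.0f} (true {Qs_true:.0f})")
```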

  6. Multiple scattering from icequakes at Erebus volcano, Antarctica: Implications for imaging at glaciated volcanoes

    NASA Astrophysics Data System (ADS)

    Chaput, J.; Campillo, M.; Aster, R. C.; Roux, P.; Kyle, P. R.; Knox, H.; Czoski, P.

    2015-02-01

    We examine seismic coda from an unusually dense deployment of over 100 short-period and broadband seismographs in the summit region of Mount Erebus volcano, on a network with an aperture of approximately 5 km. We investigate the energy-partitioning properties of the seismic wavefield generated by thousands of small icequake sources originating on the upper volcano and use them to estimate Green's functions via coda cross-correlation. Emergent coda seismograms suggest that this locale should be particularly amenable to such methods. Using a small-aperture subarray, we find that modal energy partitioning between S- and P-wave energy between ~1 and 4 Hz is reached within just a few seconds of event onset and persists for tens of seconds. Spatially averaged correlograms display clear body and surface waves that span the full aperture of the array. We test for stable bidirectional Green's function recovery and note that good symmetry can be achieved at this site even with a geographically skewed distribution of sources. We estimate scattering and absorption mean free path lengths and find a power-law decrease in mean free path between 1.5 and 3.3 Hz that suggests a quasi-Rayleigh or Rayleigh-Gans scattering situation. Finally, we demonstrate the existence of coherent backscattering (weak localization) for this coda wavefield. The remarkable properties of scattered seismic wavefields in the vicinity of active volcanoes suggest that the abundant small icequake sources may be used for illumination where temporal monitoring of such dynamic structures is concerned.

  7. 2D Variations in Coda Amplitudes in the Middle East

    DOE PAGES

    Pasyanos, Michael E.; Gok, Rengin; Walter, William R.

    2016-08-16

    Here, coda amplitudes have proven to be a stable feature of seismograms, allowing one to reliably measure magnitudes for moderate to large-sized (M≥3) earthquakes over broad regions. Since smaller (M<3) earthquakes are only recorded at higher frequencies where we find larger interstation scatter, amplitude and magnitude estimates for these events are more variable, regional, and path dependent. In this study, we investigate coda amplitude measurements in the Middle East for 2-D variations in attenuation structure.

  9. Monitoring stress changes in a concrete bridge with coda wave interferometry.

    PubMed

    Stähler, Simon Christian; Sens-Schönfelder, Christoph; Niederleithinger, Ernst

    2011-04-01

    Coda wave interferometry is a recent analysis method now widely used in seismology. It exploits the increased sensitivity of multiply scattered elastic waves, with their long travel times, to weak changes in a medium. While its application to structural monitoring has been shown to work under laboratory conditions, its usability on a real structure with known material changes had yet to be proven. This article presents experiments on a concrete bridge during construction. The results show that small velocity perturbations induced by a changing stress state in the structure can be determined even under adverse conditions. Theoretical estimates based on the stress calculations by the structural engineers are in good agreement with the measured velocity variations.
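
    The standard way to extract such velocity perturbations from the coda is the stretching method, sketched below on a purely synthetic coda: a homogeneous velocity change dv/v rescales the coda time axis, and a grid search over stretch factors recovers it. The sampling rate, decay time and size of the perturbation are invented for illustration.

```python
import numpy as np

# Coda-wave-interferometry stretching sketch: find the stretch factor
# that best maps the reference coda onto the perturbed coda; under the
# usual convention that factor equals the relative velocity change dv/v.
rng = np.random.default_rng(2)
fs = 500.0
t = np.arange(0.0, 2.0, 1 / fs)

coda = rng.standard_normal(t.size) * np.exp(-t / 0.7)  # reference coda
dvv_true = -2e-3                                       # 0.2% velocity drop

# a velocity drop delays later arrivals more and more: the perturbed
# record replays the reference slightly slowed down (stretched)
perturbed = np.interp(t * (1 + dvv_true), t, coda)

trials = np.linspace(-5e-3, 5e-3, 401)
cc = [np.corrcoef(np.interp(t * (1 + e), t, coda), perturbed)[0, 1]
      for e in trials]
print(f"dv/v estimate: {trials[np.argmax(cc)]:.2e} (true {dvv_true:.2e})")
```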

  10. Monitoring the englacial fracture state using virtual-reflector seismology

    NASA Astrophysics Data System (ADS)

    Lindner, F.; Weemstra, C.; Walter, F.; Hadziioannou, C.

    2017-12-01

    Fracturing and changes in the englacial macroscopic water content change the elastic bulk properties of ice bodies. Small seismic velocity variations resulting from such changes can be measured using a technique called coda-wave interferometry. Here, coda refers to the later-arriving, multiply scattered waves. Often, this technique is applied to so-called virtual-source responses, which can be obtained using seismic interferometry (a simple cross-correlation process). Compared to other media (e.g., the Earth's crust), however, ice bodies exhibit relatively little scattering. This complicates the application of coda-wave interferometry to the retrieved virtual-source responses. In this work, we therefore investigate the applicability of coda-wave interferometry to virtual-source responses obtained using two alternative seismic interferometric techniques, namely seismic interferometry by multidimensional deconvolution (SI by MDD) and virtual-reflector seismology (VRS). To that end, we use synthetic data, as well as active-source glacier data acquired on Glacier de la Plaine Morte, Switzerland. Both SI by MDD and VRS allow the retrieval of more accurate virtual-source responses. In particular, the dependence of the retrieved virtual-source responses on the illumination pattern is reduced. We find that this results in more accurate glacial phase-velocity estimates. In addition, VRS introduces virtual reflections from a receiver contour (partly) enclosing the medium of interest. By acting as a kind of virtual reverberation, the coda resulting from the application of VRS significantly increases seismic monitoring capabilities, in particular in cases where a naturally scattered coda is not available.

  11. A model of seismic coda arrivals to suppress spurious events.

    NASA Astrophysics Data System (ADS)

    Arora, N.; Russell, S.

    2012-04-01

    We describe a model of coda arrivals which has been added to NET-VISA (Network processing Vertically Integrated Seismic Analysis), our probabilistic generative model of seismic events, their transmission, and their detection on a global seismic network. The scattered energy that follows a seismic phase arrival tends to deceive typical STA/LTA-based arrival-picking software into believing that a real seismic phase has been detected. These coda arrivals, which tend to follow all seismic phases, cause most network-processing software, including NET-VISA, to infer that multiple events have taken place. It is not a simple matter of ignoring closely spaced arrivals, since arrivals from multiple events can indeed overlap. The current practice in NET-VISA of pruning events within a small space-time neighborhood of a larger event works reasonably well, but it may mask real events produced in an aftershock sequence. Our new model allows any seismic arrival, even a coda arrival, to trigger a subsequent coda arrival. The probability of such a triggered arrival depends on the amplitude of the triggering arrival, although real seismic phases are more likely to generate such coda arrivals. Real seismic phases also tend to generate coda arrivals with more strongly correlated parameters, for example azimuth and slowness. However, the SNR (signal-to-noise ratio) of a coda arrival immediately following a phase arrival tends to be lower because of the nature of the SNR calculation. We have calibrated our model on historical statistics of such triggered arrivals, and our inference accounts for them while searching for the best explanation of the seismic events, their association with the arrivals, and the coda arrivals. We have tested our new model on one week of global seismic data spanning March 22, 2009 to March 29, 2009. Our model was trained on two and a half months of data from April 5, 2009 to June 20, 2009. We use the LEB bulletin produced by the IDC (International Data Center) as the ground truth and compute the precision (percentage of reported events which are true) and recall (percentage of true events which are reported). The existing model has a precision of 32.2 and recall of 88.6, which changes to a precision of 50.7 and recall of 88.5 after pruning. The new model has a precision of 56.8 and recall of 86.9 without any pruning, and the corresponding precision-recall curve is dramatically improved. In contrast, the current automated bulletin at the IDC, SEL3, has a precision of 46.2 and recall of 69.7.
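
    For readers unfamiliar with the scoring used here, the sketch below computes precision and recall from event counts; the counts themselves are made up and are not taken from the study.

```python
# Precision/recall as used for bulletin comparison: a reported event is
# "matched" if it corresponds to a ground-truth (e.g., LEB) event.
def precision_recall(n_reported, n_true, n_matched):
    precision = 100.0 * n_matched / n_reported  # % of reported that are true
    recall = 100.0 * n_matched / n_true         # % of true that are reported
    return precision, recall

# illustrative counts only: 500 reported, 300 true, 250 matched
print(precision_recall(500, 300, 250))          # -> (50.0, 83.33...)
```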

  12. Effects of Word Position on the Acoustic Realization of Vietnamese Final Consonants.

    PubMed

    Tran, Thi Thuy Hien; Vallée, Nathalie; Granjon, Lionel

    2018-05-28

    A variety of studies have shown differences between the phonetic features of consonants according to their prosodic and/or syllable (onset vs. coda) positions. However, differences are not always found, and the interactions between the various factors involved are complex and not well understood. Our study compares the acoustic characteristics of coda consonants in Vietnamese, taking into account their position within words. Traditionally described as monosyllabic, Vietnamese is partially polysyllabic at the lexical level. In this language, tautosyllabic consonant sequences are prohibited, and adjacent consonants are only found at syllable boundaries, either within polysyllabic words (CVC.CVC) or across monosyllabic words (CVC#CVC). This study is designed to examine whether or not syllable boundary type (interword vs. intraword) has an effect on the acoustic realization of codas. The results show significant acoustic differences in consonant realizations according to syllable boundary type, suggesting different coarticulation patterns between nuclei and codas. In addition, as Vietnamese voiceless stops are generally unreleased in coda position, with no burst to carry consonantal information, our results show that a vowel's second half contains acoustic cues that aid in discriminating the place of articulation of the following consonant.

  13. Perception of English palatal codas by Korean speakers of English

    NASA Astrophysics Data System (ADS)

    Yeon, Sang-Hee

    2003-04-01

    This study examined the perception of English palatal codas by Korean speakers of English, to determine whether perception problems are the source of production problems. In particular, the study looked first at the possible first-language effect on the perception of English palatal codas. Second, a possible perceptual source of vowel epenthesis after English palatal codas was investigated. In addition, individual factors such as length of residence, TOEFL score, gender and academic status were compared to determine whether they affected the degree of perception accuracy. Eleven adult Korean speakers of English as well as three native speakers of English participated in the study. Three sets of perception tests, involving identification of minimally different English pseudowords or real words, were carried out. The results showed, first, that the Korean speakers perceived the English codas significantly worse than the Americans. Second, the study supported the idea that Koreans perceive an extra /i/ after the final affricates due to final release. Finally, none of the individual factors explained the varying degree of perception accuracy. In particular, TOEFL scores and the perception test scores did not show any statistically significant association.

  14. Identifying individual sperm whales acoustically using self-organizing maps

    NASA Astrophysics Data System (ADS)

    Ioup, Juliette W.; Ioup, George E.

    2005-09-01

    The Littoral Acoustic Demonstration Center (LADC) is a consortium at Stennis Space Center comprising the University of New Orleans, the University of Southern Mississippi, the Naval Research Laboratory, and the University of Louisiana at Lafayette. LADC deployed three Environmental Acoustic Recording System (EARS) buoys in the northern Gulf of Mexico during the summer of 2001 to study ambient noise and marine mammals. Each LADC EARS was an autonomous, self-recording buoy capable of 36 days of continuous recording of a single channel at an 11.7-kHz sampling rate (bandwidth to 5859 Hz). The hydrophone selected for this analysis was approximately 50 m from the bottom, in a water depth of 800 m, on the continental slope off the Mississippi River delta. This paper contains recent analysis results for sperm whale codas recorded during a 3-min period. Results are presented for the identification of individual sperm whales from their codas, using the acoustic properties of the clicks within each coda. The recorded time series, the Fourier transform magnitude, and the wavelet transform coefficients are each used separately with a self-organizing map procedure for 43 codas. All show the codas as coming from four or five individual whales. [Research supported by ONR.]
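
    A compact illustration of the clustering step is sketched below: a small self-organizing map trained on synthetic per-coda feature vectors groups them by source. The feature extraction, map size and training schedule are all assumptions, not details from the study.

```python
import numpy as np

# Minimal self-organizing map (SOM): codebook vectors on a small grid are
# pulled toward training samples with a shrinking neighborhood, so that
# similar codas map to nearby best-matching units (BMUs).
rng = np.random.default_rng(3)

# synthetic stand-in: 43 coda feature vectors drawn around 4 "whales"
centers = rng.standard_normal((4, 32))
X = np.vstack([c + 0.2 * rng.standard_normal((11, 32)) for c in centers])[:43]

gx, gy = 6, 6
W = rng.standard_normal((gx, gy, X.shape[1]))          # codebook vectors
coords = np.dstack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"))

for it in range(2000):
    lr = 0.5 * np.exp(-it / 1000.0)                    # learning rate
    sigma = 0.5 + 3.0 * np.exp(-it / 1000.0)           # neighborhood width
    v = X[rng.integers(len(X))]
    bmu = np.unravel_index(np.argmin(np.linalg.norm(W - v, axis=2)), (gx, gy))
    h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma**2))
    W += lr * h[..., None] * (v - W)                   # pull toward sample

bmus = {np.unravel_index(np.argmin(np.linalg.norm(W - v, axis=2)), (gx, gy))
        for v in X}
print(f"{len(bmus)} occupied map units for 43 codas")
```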

  15. Temporal evolution of the Green's function reconstruction in the seismic coda

    NASA Astrophysics Data System (ADS)

    Clerc, V.; Roux, P.; Campillo, M.

    2013-12-01

    In the presence of multiple scattering, the wavefield evolves towards an equipartitioned state, equivalent to ambient noise. CAMPILLO and PAUL (2003) reconstructed the surface-wave part of the Green's function between three pairs of stations in Mexico. Their data indicate that the time asymmetry between the causal and acausal parts of the Green's function is less pronounced when the correlation is performed on the later windows of the coda. These results on the correlation of diffuse waves provide another perspective on the reconstruction of the Green's function, one that is independent of the source distribution and suggests that, if the observation time is long enough, a single source could be sufficient. The paper by ROUX et al. (2005) provides a theoretical frame for the reconstruction of the Green's function in a homogeneous medium. In a multiple-scattering medium with a single source, scatterers behave as secondary sources according to the Huygens principle. Coda waves belong to the multiple-scattering regime, which can be approximated by diffusion at long lapse times. We express the temporal evolution of the correlation function between two receivers as a function of the secondary sources. We are able to predict the effect of the persistence of the net flux of energy observed by CAMPILLO and PAUL (2003) in numerical simulations. This method is also effective for retrieving the scattering mean free path. We perform a partial reconstruction of the Green's function in a strongly scattering medium in numerical simulations. The prediction of the flux asymmetry allows us to define the parts of the coda that provide the same information as ambient noise cross-correlation.

  16. Wave Propagation in Non-Stationary Statistical Mantle Models at the Global Scale

    NASA Astrophysics Data System (ADS)

    Meschede, M.; Romanowicz, B. A.

    2014-12-01

    We study the effect of statistically distributed heterogeneities that are smaller than the resolution of current tomographic models on seismic waves that propagate through the Earth's mantle at teleseismic distances. Current global tomographic models are missing small-scale structure, as evidenced by the failure of even accurate numerical synthetics to explain the enhanced coda in observed body and surface waveforms. One way to characterize small-scale heterogeneity is to construct random models and confront observed coda waveforms with predictions from these models. Statistical studies of the coda typically rely on models with simplified isotropic and stationary correlation functions in Cartesian geometries. We show how to construct more complex random models for the mantle that can account for arbitrary non-stationary and anisotropic correlation functions as well as for complex geometries. Although this method is computationally heavy, model characteristics such as translational, cylindrical or spherical symmetries can be exploited to reduce the complexity enough to make the method practical. With this approach, we can create 3-D models of the full spherical Earth that are radially anisotropic, i.e. with different horizontal and radial correlation functions, and radially non-stationary, i.e. with radially varying model power and correlation functions. Both of these features are crucial for a statistical description of the mantle, in which structure depends to first order on the spherical geometry of the Earth. We combine different random model realizations of S velocity with current global tomographic models that are robust at long wavelengths (e.g. Meschede and Romanowicz, 2014, GJI submitted), and compute the effects of these hybrid models on the wavefield with a spectral element code (SPECFEM3D_GLOBE). We finally analyze the resulting coda waves for our model selection and compare our computations with observations. Based on these comparisons, we make predictions about the strength of unresolved small-scale structure and extrinsic attenuation.
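
    In Cartesian geometry, the basic building block of such models is spectral filtering of white noise; the sketch below generates a 2-D realization with an anisotropic Gaussian correlation function (distinct horizontal and vertical correlation lengths). The grid size, correlation lengths and 5% rms level are illustrative, and the Gaussian form stands in for whatever correlation function one actually prescribes.

```python
import numpy as np

# Random-medium realization by FFT filtering: shape white noise with the
# square root of the target power spectral density, here a Gaussian PSD
# with different horizontal (ax) and vertical (az) correlation lengths.
rng = np.random.default_rng(4)
nx, nz, dx = 256, 256, 10.0                    # grid points and spacing, km

kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
kz = 2 * np.pi * np.fft.fftfreq(nz, dx)
KX, KZ = np.meshgrid(kx, kz, indexing="ij")

ax, az = 200.0, 40.0                           # correlation lengths, km
P = np.exp(-(KX**2 * ax**2 + KZ**2 * az**2) / 4.0)  # anisotropic Gaussian PSD

noise = rng.standard_normal((nx, nz))
field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(P)))
field *= 0.05 / field.std()                    # 5% rms velocity perturbation
print(field.shape, round(field.std(), 3))
```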

  17. Neutral axis determination of full size concrete structures using coda wave measurements

    NASA Astrophysics Data System (ADS)

    Jiang, Hanwan; Zhan, Hanyu; Zhuang, Chenxu; Jiang, Ruinian

    2018-03-01

    Coda waves, which undergo multiple scattering, are sensitive to weak changes occurring in a medium. In this paper, a typical four-point bending test with varied external loads is conducted on a 30-meter T-beam that was removed from a bridge after 15 years in service, and coda wave signals are collected with a set of source-receiver pairs. The coda waves observed at different loads are then compared to calculate their relative velocity variations, which are used to distinguish the compression and tension zones and to determine the position of the neutral axis. Without any prior knowledge of the concrete beam, the estimated axis position agrees well with the associated strain gage measurements, and the compression and tension zones are identified. The presented work offers significant potential for non-destructive testing and evaluation of full-size concrete structures.

  18. Medium change based image estimation from application of inverse algorithms to coda wave measurements

    NASA Astrophysics Data System (ADS)

    Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian

    2018-03-01

    Perturbations act as extra scatterers and distort coda waveforms; thus coda waves, with their long propagation times and travel paths, are sensitive to micro-defects in strongly heterogeneous media such as concrete. In this paper, we apply varied external loads to a life-size concrete slab containing multiple existing micro-cracks, and several sources and receivers are installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pair measurements. Inversions of the DC results, using a kernel sensitivity model and least-squares algorithms, then estimate the associated scatterer distribution density in three dimensions, leading to images that indicate the positions of the micro-cracks. This work provides an efficient, non-destructive approach to detecting internal defects and damage in large concrete structures.
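
    The measurement underlying this inversion, the waveform decorrelation coefficient, is easy to sketch: it is one minus the maximum normalized cross-correlation between reference and perturbed coda windows. The synthetic signals and lag range below are illustrative only.

```python
import numpy as np

# Decorrelation coefficient DC between two coda windows: DC ~ 0 for an
# unchanged medium and grows toward 1 as new scatterers distort the coda.
rng = np.random.default_rng(5)
ref = rng.standard_normal(4000)                 # reference coda window
pert = ref + 0.3 * rng.standard_normal(4000)    # coda after loading

def decorrelation(u, v, max_lag=50):
    cc = max(np.corrcoef(np.roll(u, lag), v)[0, 1]
             for lag in range(-max_lag, max_lag + 1))
    return 1.0 - cc

print(f"DC = {decorrelation(ref, pert):.3f}")
```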

  19. A resource management tool for public health continuity of operations during disasters.

    PubMed

    Turner, Anne M; Reeder, Blaine; Wallace, James C

    2013-04-01

    We developed and validated a user-centered information system to support local planning of public health continuity of operations for the Community Health Services Division, Public Health - Seattle & King County, Washington. The Continuity of Operations Data Analysis (CODA) system was a prototype developed using requirements identified through participatory design. CODA uses open-source software that links personnel contact and licensing information with needed skills and clinic locations for 821 employees at 14 public health clinics in Seattle and King County. Using a web-based interface, CODA can visualize the locations of personnel in relation to clinics, to assist clinic managers in allocating public health personnel and resources under dynamic conditions. Based on user input, the CODA prototype was designed as a low-cost, user-friendly system to inventory and manage public health resources. In emergency conditions, the system can run on a stand-alone battery-powered laptop computer. A formative evaluation by managers of multiple public health centers confirmed the usefulness of the prototype design. Emergency management administrators also provided positive feedback about the system during a separate demonstration. Validation of the CODA information design prototype by public health managers and emergency management administrators demonstrates the potential usefulness of building a resource management system using open-source technologies and participatory design principles.

  20. Material State Awareness for Composites Part I: Precursor Damage Analysis Using Ultrasonic Guided Coda Wave Interferometry (CWI).

    PubMed

    Patra, Subir; Banerjee, Sourav

    2017-12-16

    Detection of precursor damage, followed by quantification of the degraded material properties, could lead to more accurate progressive failure models for composite materials. However, such information is not readily available. In composite materials, precursor damage (for example, matrix cracking, microcracks, voids, interlaminar pre-delamination cracks joining matrix cracks, fiber micro-buckling, local fiber breakage, and local debonding) is invisible to low-frequency (~100-500 kHz) ultrasonic guided-wave-based online nondestructive evaluation (NDE) and Structural Health Monitoring (SHM) systems. To overcome this barrier, this article proposes an online ultrasonic technique using the coda part of the guided wave signal, which is often neglected. While the first-arrival wave packets containing the fundamental guided Lamb wave modes are unaltered, the coda wave packets carry significant information about the precursor events in the form of predictable phase shifts. A Taylor-series-based modified Coda Wave Interferometry (CWI) technique is proposed to quantify the stretch parameter that compensates for the phase shifts in the coda wave caused by precursor damage in composites. The CWI analysis was performed on five woven composite-fiber-reinforced-laminate specimens, and the precursor events were identified. The precursor damage states were then verified using high-frequency Scanning Acoustic Microscopy (SAM) and optical microscopy imaging.

  1. Survey of Current Clinical and Curriculum Practices of Postgraduate Pediatric Dentistry Programs in Nonintravenous Conscious Sedation in the United States.

    PubMed

    Morin, Aline; Ocanto, Romer; Drukteinis, Lesbia; Hardigan, Patrick C

    2016-10-15

    The purposes of this study were to: (1) describe the sedation protocols of postgraduate pediatric dentistry programs (PPDPs) in the U.S.; (2) evaluate how consistent they were with current American Academy of Pediatric Dentistry sedation guidelines and Commission on Dental Accreditation (CODA) sedation curriculum requirements; (3) identify barriers to and tools for implementing these guidelines; and (4) determine the independent association between PPDPs' adherence to guidelines and the program setting. In February 2015, a 40-item questionnaire was e-mailed to all postgraduate pediatric dentistry program directors (PPDPDs) of CODA-accredited programs in the U.S. (n=74). Data were analyzed using descriptive statistics and Kruskal-Wallis and pairwise Nemenyi tests. Fifty-two PPDPDs responded (70 percent). Since the 2013 change in CODA sedation requirements, only a limited number of PPDPs (36 percent) were found to be noncompliant with CODA standards. PPDPDs trained at hospital-based programs were found to direct programs that were more compliant with CODA sedation standards (P<.05). A major perceived barrier to increasing the number of sedation cases was the lack of a patient pool (37 percent). Further efforts should be made by teaching institutions for programs to be compliant with American Academy of Pediatric Dentistry and Commission on Dental Accreditation sedation standards.

  2. Conduit dynamics and post explosion degassing on Stromboli: A combined UV camera and numerical modeling treatment

    PubMed Central

    McGonigle, A. J. S.; James, M. R.; Tamburello, G.; Aiuppa, A.; Delle Donne, D.; Ripepe, M.

    2016-01-01

    Recent gas flux measurements have shown that Strombolian explosions are often followed by periods of elevated flux, or "gas codas," with durations on the order of a minute. Here we present UV camera data from 200 events recorded at Stromboli volcano to constrain the nature of these codas for the first time, providing estimates for combined explosion-plus-coda SO2 masses of ≈18-225 kg. Numerical simulations of gas slug ascent show that substantial proportions of the initial gas mass can be distributed into a train of "daughter bubbles" released from the base of the slug, which, we suggest, generate the codas on bursting at the surface. This process could also cause transitioning of slugs into cap bubbles, significantly reducing explosivity. This study is the first attempt to combine high-temporal-resolution gas flux data with numerical simulations of conduit gas flow to investigate volcanic degassing dynamics. PMID:27478285

  3. Inter-station coda wavefield studies using a novel icequake database on Erebus volcano

    NASA Astrophysics Data System (ADS)

    Chaput, J. A.; Campillo, M.; Roux, P.; Aster, R. C.

    2013-12-01

    Recent theoretical advances pertaining to the properties of multiply scattered wavefields have yielded a plethora of numerical and controlled-source studies aiming to better understand what information may be derived from these otherwise chaotic signals. In practice, multiply scattered wavefields are difficult to compare to numerically derived models, due to a combination of source paucity/directionality and array density limitations, particularly in passive seismology scenarios. Furthermore, in situations where data quantities are abundant, such as for ambient noise correlations, it remains very difficult to recover pseudo-Green's function symmetry in the ballistic components of the wavefield, let alone in the coda of the correlations. In this study, we use a large network of short-period and broadband instruments on Erebus volcano to show that actual Green's function recovery is indeed possible in some cases. We make use of a large database of small impulsive icequakes distributed randomly on the summit plateau and, using fundamental theoretical properties of equipartitioned wavefields and interstation icequake coda correlations, are able to directly derive notoriously difficult quantities such as the bulk elastic mean free path for the volcano, demonstrations of correlation coda symmetry and its dependence on the number of icequakes used, and a theoretically predicted coherent backscattering amplification factor associated with weak localization. We furthermore show that stable equipartition and H^2/V^2 ratios may be consistently observed for icequake coda, and we perform simple depth inversions of these frequency-dependent quantities to compare with known structures.

  4. Monitoring temporal variations of seismic properties of the crust induced by the 2013 Ruisui earthquake in eastern Taiwan from coda wave interferometry with ambient seismic and strain fields

    NASA Astrophysics Data System (ADS)

    Dai, W. P.; Hung, S. H.; Wu, S. M.; Hsu, Y. J.

    2017-12-01

    Owing to the rapid development of ambient noise seismology, time-lapse variations in the delay time and waveform decorrelation of coda derived from noise cross-correlation functions (NCFs) have proven very effective for monitoring slight changes in the seismic velocity and scattering properties of the crust induced by various loadings, such as an earthquake and the subsequent healing process. In this study, we employ coda wave interferometry to detect the crustal perturbations immediately preceding and following the 2013 Mw 6.2 Ruisui earthquake, which struck the northern segment of the Longitudinal Valley Fault in eastern Taiwan, a seismically very active thrust suture zone separating the Eurasian and Philippine Sea Plates. By comparing the pre- and post-event coda waves extracted from the auto- and cross-correlation functions (ACFs and CCFs) of the ambient seismic and strain fields recorded by seismometers and borehole strainmeters, respectively, in the vicinity of the source region, we present a strong case that both the coseismic velocity reduction and a preceding decorrelation of waveforms are explicitly revealed in the seismic and strain CCFs filtered in the secondary-microseism frequency band of 0.1-0.9 Hz. Such precursory signals, susceptible to the scattering properties of the crust, are more unequivocally identified in the coda retrieved from the strainmeter data, suggesting that the ambient strain field can act as a more sensitive probe for detecting tiny structural perturbations in a critically stressed fault zone at the verge of failure. In addition to the coseismic velocity changes detected in both the seismic and strain NCFs, we find quasi-periodic velocity variations that appear only in the strain-retrieved coda signals, with a predominant cycle of 3-4 months correlating with the groundwater fluctuations observed at Ruisui.

  5. Towards monitoring the englacial fracture state using virtual-reflector seismology

    NASA Astrophysics Data System (ADS)

    Lindner, F.; Weemstra, C.; Walter, F.; Hadziioannou, C.

    2018-04-01

    In seismology, coda wave interferometry (CWI) is an effective tool for monitoring time-lapse changes using later-arriving, multiply scattered coda waves. Typically, CWI relies on an estimate of the medium's impulse response. The latter is retrieved through simple time-averaging of receiver-receiver cross-correlations of the ambient field, i.e. seismic interferometry (SI). In general, the coda is induced by heterogeneities in the Earth. Being comparatively homogeneous, however, ice bodies such as glaciers and ice sheets exhibit little scattering. In addition, the temporal stability of the time-averaged cross-correlations suffers from temporal variations in the distribution and amplitude of the passive seismic sources. Consequently, the application of CWI to ice bodies is currently limited. Nevertheless, fracturing and changes in the englacial macroscopic water content alter the bulk elastic properties of ice bodies, which can be monitored with cryoseismological measurements. To overcome the currently limited applicability of CWI to ice bodies, we therefore introduce virtual-reflector seismology (VRS). VRS relies on a so-called multidimensional deconvolution (MDD) process applied to the time-averaged cross-correlations. The technique results in the retrieval of a medium response that includes virtual reflections from a contour of receivers enclosing the region of interest (i.e., the region to be monitored). The virtual reflections can be interpreted as an artificial coda replacing the (lacking) natural scattered coda. Hence, this artificial coda might be exploited for the purpose of CWI. From an implementation point of view, VRS is similar to SI by MDD, which, as its name suggests, also relies on a multidimensional deconvolution process. SI by MDD, however, does not generate additional virtual reflections. Advantageously, both techniques mitigate spurious coda changes associated with temporal variations in the distribution and amplitude of the passive seismic sources. In this work, we apply SI by MDD and VRS to synthetic and active-source seismic surface-wave data. The active seismic data were acquired on Glacier de la Plaine Morte, Switzerland. We successfully retrieve virtual reflections through the application of VRS to these active seismic data. In application to both synthetic and active seismic data, we show the potential of VRS to monitor time-lapse changes. In addition, we find that SI by MDD allows for a more accurate determination of phase velocity.

  6. Analysis of P and Pdiff Coda Arrivals for Water Reverberations to Evaluate Shallow Slip Extent in Large Megathrust Earthquakes

    NASA Astrophysics Data System (ADS)

    Rhode, A.; Lay, T.

    2017-12-01

    Determining the up-dip rupture extent of large megathrust ruptures is important for understanding their tsunami excitation, the frictional properties of the shallow megathrust, and the potential for separate tsunami-earthquake occurrence. On-land geodetic data have almost no resolution of the up-dip extent of faulting, and teleseismic observations have limited resolution that is strongly influenced by the typically poorly known shallow seismic velocity structure near the toe of the accretionary prism. The increase in ocean depth as slip on the megathrust approaches the trench has a significant influence on the strength and azimuthal distribution of water reverberations in the far-field P wave coda. For broadband P waves from large earthquakes with dominant signal periods of about 10 s, water reverberations generated by shallow fault slip under deep water may persist for over a minute after the direct P phases have passed, giving a clear signal of slip near the trench. As the coda waves can be evaluated quickly following the P signal, recognition of slip extending to the trench, and of the associated enhanced tsunamigenic potential, could be achieved within a few minutes after the P arrival, potentially contributing to rapid tsunami hazard assessment. We examine the broadband P wave coda at distances from 80 to 120° for a large number of recent major and great earthquakes with independently determined slip distributions and known tsunami excitation, to evaluate the prospect of rapidly constraining the up-dip rupture extent of large megathrust earthquakes. Events known to have significant shallow slip, at least locally extending to the trench (e.g., 2015 Illapel, Chile; 2010 Maule; 2010 Mentawai), do have relatively enhanced coda levels at all azimuths, whereas events that did not rupture the shallow megathrust (e.g., 2007 Sumatra, 2014 Iquique, 2003 Hokkaido) do not. Some events with slip models lacking shallow slip show strong coda generation, raising questions about the up-dip resolution of their finite-fault models, and others show strong azimuthal patterns in coda strength that suggest propagation from the slip zone to deep near-trench environments is involved rather than slip near the trench. The various behaviors will be integrated into an assessment of this approach.

  7. Twenty-Four-Month-Olds' Perception of Word-Medial Onsets and Codas

    ERIC Educational Resources Information Center

    Wang, Yuanyuan; Seidl, Amanda

    2016-01-01

    Recent work has shown that children have detailed phonological representations of consonants at both word-initial and word-final edges. Nonetheless, it remains unclear whether onsets and codas are equally represented by young learners since word edges are isomorphic with syllable edges in this work. The current study sought to explore toddler's…

  8. Regionalization and dependence of coda Q on frequency and lapse time in the seismically active Peloritani region (northeastern Sicily, Italy)

    NASA Astrophysics Data System (ADS)

    Giampiccolo, Elisabetta; Tuvè, Tiziana

    2018-05-01

    The Peloritani region is one of the most seismically active regions in Italy; consequently, quantifying the attenuation of the medium plays an important role in seismic risk evaluation. It is also necessary for the prediction of earthquake ground motion and for future seismic source studies. An in-depth analysis is made here of the frequency and lapse-time dependence of the attenuation characteristics of the region, using the coda of local earthquakes. A regionalization is likewise performed in order to investigate the spatial variation of coda Q across the whole region. Finally, our results are interpreted jointly with those obtained from recently published 3-D velocity tomographies for further insight.

  9. Attenuation of coda waves in the Aswan Reservoir area, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, H. H.; Mukhopadhyay, S.; Sharma, J.

    2010-09-01

    The coda attenuation characteristics of the Aswan Reservoir area of Egypt were analyzed using data recorded by a local earthquake network operated around the reservoir. 330 waveforms obtained from 28 earthquakes recorded by a network of 13 stations were used for this analysis. The magnitudes of these earthquakes varied between 1.4 and 2.5, and the maximum epicentral distance and focal depth were 45 km and 16 km, respectively. The single back-scattering method was used to estimate coda Q (Qc). The Q0 values (Qc at 1 Hz) vary between 54 and 100, and the frequency-dependence parameter n varies between 1 and 1.2, for lapse times between 15 s and 60 s. Qc and related parameters at similar lapse times are observed to be similar to those for Koyna, India, where reservoir-induced seismicity is also observed. For both regions these parameters are also similar to those observed for tectonically active regions of the world, although Aswan is located in a moderately active region and Koyna in a tectonically stable region. However, Qc does not increase uniformly with increasing lapse time, as is observed in several parts of the world. Converting lapse time to depth/distance, Qc becomes lower or remains almost constant at around 70-90 km and 120 km. This indicates the presence of more attenuative material at those depth levels or distances compared with their immediate surroundings. It is proposed that this variation indicates the presence of fluid-filled fractures and/or partial melt at some depths/distances from the study area. The Qc values are higher than those obtained by other workers for the Gulf of Suez and the Al Dabbab region of Egypt, at distances greater than 300 km from the study area. The turbidity decreases with depth in the study area.
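
    A minimal single back-scattering fit is sketched below: under the Aki and Chouet model the band-passed coda envelope obeys ln(A(t) * t) = const - pi*f*t/Qc, so Qc follows from the slope of a straight-line fit over the lapse-time window. The synthetic envelope assumes Qc = Q0 * f^n with Q0 and n taken from the ranges quoted above.

```python
import numpy as np

# Single back-scattering coda-Q fit at one frequency: correct the coda
# envelope for the 1/t geometrical term and regress its logarithm on
# lapse time; the slope gives -pi*f/Qc.
rng = np.random.default_rng(6)
f, Q0, n = 6.0, 54.0, 1.1
Qc_true = Q0 * f**n                     # assumed frequency dependence

t = np.linspace(15.0, 60.0, 90)         # lapse-time window, s
A = np.exp(-np.pi * f * t / Qc_true) / t
A *= np.exp(0.05 * rng.standard_normal(t.size))   # measurement scatter

slope = np.polyfit(t, np.log(A * t), 1)[0]
print(f"Qc({f:.0f} Hz) = {-np.pi * f / slope:.0f} (true {Qc_true:.0f})")
```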

  10. The Prosodic Licensing of Coda Consonants in Early Speech: Interactions with Vowel Length

    ERIC Educational Resources Information Center

    Miles, Kelly; Yuen, Ivan; Cox, Felicity; Demuth, Katherine

    2016-01-01

    English has a word-minimality requirement that all open-class lexical items must contain at least two moras of structure, forming a bimoraic foot (Hayes, 1995). Thus, a word with either a long vowel, or a short vowel and a coda consonant, satisfies this requirement. This raises the question of when and how young children might learn this…

  11. Bimodal Bilingual Language Development of Hearing Children of Deaf Parents

    ERIC Educational Resources Information Center

    Hofmann, Kristin; Chilla, Solveig

    2015-01-01

    Adopting a bimodal bilingual language acquisition model, this qualitative case study is the first in Germany to investigate the spoken and sign language development of hearing children of deaf adults (codas). The spoken language competence of six codas within the age range of 3;10 to 6;4 is assessed by a series of standardised tests (SETK 3-5,…

  12. A local earthquake coda magnitude and its relation to duration, moment M_0, and local Richter magnitude M_L

    NASA Technical Reports Server (NTRS)

    Suteau, A. M.; Whitcomb, J. H.

    1977-01-01

    A relationship was found between the seismic moment, M_0, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming that the end of the coda is composed of backscattered surface waves caused by lateral heterogeneity in the shallow crust, following Aki. Using the linear relationship between the logarithm of M_0 and the local Richter magnitude M_L, a relationship between M_L and t was found. This relationship was used to calculate a coda magnitude M_C, which was compared to M_L for Southern California earthquakes that occurred during the period from 1972 to 1975.
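
    Relations of this type make a duration magnitude trivially cheap to compute. The sketch below shows the general form M_C = a + b*log10(t); the coefficients are placeholders, not the values calibrated in the paper.

```python
import numpy as np

# Duration (coda) magnitude of the generic form Mc = a + b*log10(t);
# a and b are hypothetical calibration constants, NOT the paper's values.
a, b = -1.0, 2.2

def coda_magnitude(duration_s):
    return a + b * np.log10(duration_s)

print(f"Mc for a 120 s coda: {coda_magnitude(120.0):.2f}")
```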

  13. Frequency selection for coda wave interferometry in concrete structures.

    PubMed

    Fröjd, Patrik; Ulriksen, Peter

    2017-09-01

    This study contributes to the establishment of frequency recommendations for coda wave interferometry structural health monitoring (SHM) systems in concrete structures. To this end, codas with widely different central frequencies were used to detect boreholes of different diameters in a large concrete floor slab, and to track increasing damage in a small concrete beam subjected to bending loads. SHM results were obtained for damage that can be simulated by drilled holes on the scale of a few millimeters, or by microcracks due to bending. These results suggest that signals in the range of 50-150 kHz are suitable in large concrete structures, where it is necessary to account for the high attenuation of high-frequency signals.

  14. Fine Structure of the Outermost Solid Core from Analysis of PKiKP Coda Waves

    NASA Astrophysics Data System (ADS)

    Krasnoshchekov, D.; Kaazik, P.; Ovtchinnikov, V.

    2006-05-01

    Near-surface heterogeneities in the Earth's inner core have recently been confirmed to exist, and pods of partial melt or variations in seismic anisotropy, due either to the orientation of iron crystals or to changes in strength, have been indicated as possible sources of such peculiarities. At the same time, analysis of the phase reflected from the inner core boundary (PKiKP) suggests a complex character for the reflecting discontinuity, in the form of local thin transition layers producing a mosaic structure of the inner core's surface. Precritical PKiKP waveforms and coda waves provide the necessary seismological constraints to investigate the fine structure of the upper part of the Earth's inner core and its boundary, and rank high among the studies that have detected these features of the solid core. PKiKP coda studies deal with weak amplitudes and subtle effects, which frequently requires using a reference core-related seismic phase and array data processing, as well as eliminating as many factors as possible that bias the resulting estimates (for example, the source-related inaccuracies typical of earthquake analysis). In this work we report new observations of PKiKP coda waves detected on records of a group of Underground Nuclear Explosions (UNEs) carried out in the USSR and recorded at distances from 6 to 95 degrees by stations of the world seismological network. Our dataset benefits from accurate ground-truth information on source parameters (locations, origin times, depths, etc.), requires no accounting for different source radiation patterns, and contains records covering the whole range of precritical reflection, including the so-called transparent zone where the amplitudes of the direct PKiKP phase are negligible. The processed dataset incorporates records of an array of sources consisting of same-magnitude explosions carried out close together at the Semipalatinsk Test Site and recorded by stations located in Eurasia, Africa and North America. We detect PKiKP coda waves on the records of all stations that registered this array. Frequency-wavenumber analysis and stacking of the array data reveal both a scattering mechanism, tracked in the form of a slight dependence of the PKiKP coda's frequency content on epicentral distance, and a reflective mechanism, evidenced by the detection of distinct arrivals of waves reflected from isotropic or anisotropic discontinuities below the inner core boundary. We infer that the PKiKP coda is built by both volumetric scattering and reverberations on reflectors in the upper portion of the inner core. We also find no significant evidence for a constant-depth global isotropic reflector anywhere in the 300 km below the ICB, and attribute the different types of observed PKiKP coda patterns to variability in the properties of the outermost portion of the Earth's inner core, due either to its anisotropy or to local specifics. The research described was made possible in part by grant RUG1-2675-MO-05 of the US Civilian Research & Development Foundation for the Independent States of the Former Soviet Union (CRDF) and President Grant MK-1600.2005.5.

  15. Correlation of Coseismic Velocity and Static Volumetric Strain Changes Induced by the 2010 Mw6.3 Jiasian Earthquake under the Southern Taiwan Orogenic Belt

    NASA Astrophysics Data System (ADS)

    Wu, S. M.; Hung, S. H.

    2015-12-01

    Earthquake-induced temporal changes in the seismic velocity of the Earth's crust have recently been shown to be effectively monitored through the time-lapse shifts of coda waves. A velocity drop during coseismic rupture has been explicitly observed in proximity to the epicenters of large earthquakes with different styles of faulting. The origin of such sudden perturbations in crustal properties is closely related to the damage and/or volumetric strain change controlled by the seismic slip distribution. In this study, we apply a coda wave interferometry method to investigate potential velocity changes in both space and time related to the moderate-sized (Mw6.3) 2010 Jiasian earthquake, which nucleated deep in the crust (~23 km) and ruptured a previously unidentified blind thrust fault, terminating around 10 km depth, near the lithotectonic boundary of the southern Taiwan orogenic belt. To decipher the surface and crustal response to this relatively deep rupture, we first measure relative time-lapse shifts of the coda between different short-term time frames spanning one year, covering the pre- and post-seismic stages, using the moving-window cross-spectral method. Rather than determining temporal velocity variations against a long-term reference stack, we conduct a Bayesian least-squares inversion to obtain optimal estimates by minimizing the inconsistency between the relative time-lapse shifts of individual short-term stacks. The results show a statistically significant velocity reduction immediately after the mainshock, most pronounced for pairs whose interstation paths traverse the hanging-wall block of the ruptured fault. The sensitivity of the surface-wave coda arrivals, mainly at periods of 3-5 s, to shear-wave-speed perturbations is confined within the top 10 km, where the crust mostly experienced extensional strain changes induced by the slip distribution from the finite-fault model. Compared with the coseismic slip distribution from GPS data and finite-fault inversion, the peak ground velocity, and the static volumetric strain field following the earthquake, the velocity decrease observed on the hanging-wall side of the shallow crust is most likely attributable to pervasive dilatational strain changes induced by slip on the underlying blind thrust.
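
    The moving-window cross-spectral measurement named above can be sketched compactly: in each coda window the cross-spectrum phase between the reference and current traces grows linearly with frequency, phi(f) = 2*pi*f*dt, and regressing the window delays dt against lapse time gives dt/t = -dv/v. The traces, window lengths and frequency band below are synthetic choices, not the study's parameters.

```python
import numpy as np

# Moving-window cross-spectral (MWCS) sketch: per-window delay from the
# slope of the cross-spectrum phase, then dv/v from delay versus lapse time.
rng = np.random.default_rng(7)
fs = 100.0
t = np.arange(0.0, 80.0, 1 / fs)
ref = rng.standard_normal(t.size) * np.exp(-t / 30.0)   # reference coda
dvv = -5e-3
cur = np.interp(t * (1 + dvv), t, ref)                  # perturbed coda

win, step = int(10 * fs), int(5 * fs)
centers, delays = [], []
for i0 in range(0, t.size - win, step):
    A = np.fft.rfft(ref[i0:i0 + win])
    B = np.fft.rfft(cur[i0:i0 + win])
    fr = np.fft.rfftfreq(win, 1 / fs)
    sel = (fr > 0.5) & (fr < 5.0)                       # coherent band
    phase = np.unwrap(np.angle(A[sel] * np.conj(B[sel])))
    delays.append(np.polyfit(fr[sel], phase, 1)[0] / (2 * np.pi))
    centers.append(t[i0] + 5.0)                         # window midtime

slope = np.polyfit(centers, delays, 1)[0]               # d(delay)/dt = -dv/v
print(f"dv/v = {-slope:.2e} (true {dvv:.2e})")
```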

  16. Syllabic Strategy as Opposed to Coda Optimization in the Segmentation of Spanish Letter-Strings Using Word Spotting

    ERIC Educational Resources Information Center

    Álvarez, Carlos J.; Taft, Marcus; Hernández-Cabrera, Juan A.

    2017-01-01

    A word-spotting task is used in Spanish to test the way in which polysyllabic letter-strings are parsed in this language. Monosyllabic words (e.g., "bar") embedded at the beginning of a pseudoword were immediately followed by either a coda-forming consonant (e.g., "barto") or a vowel (e.g., "baros"). In the former…

  17. English Language Learners' Nonword Repetition Performance: The Influence of Age, L2 Vocabulary Size, Length of L2 Exposure, and L1 Phonology.

    PubMed

    Duncan, Tamara Sorenson; Paradis, Johanne

    2016-02-01

    This study examined individual differences in English language learners' (ELLs) nonword repetition (NWR) accuracy, focusing on the effects of age, English vocabulary size, length of exposure to English, and first-language (L1) phonology. Participants were 75 typically developing ELLs (mean age 5;8 [years;months]) whose exposure to English began on average at age 4;4. The children spoke either a Chinese language or a South Asian language as an L1 and were given English standardized tests of NWR and receptive vocabulary. Although the majority of ELLs scored within or above the monolingual normal range (71%), 29% scored below it. Mixed logistic regression modeling revealed that a larger English vocabulary, longer English exposure, a South Asian L1, and older age all had significant, positive effects on ELLs' NWR accuracy. Error analyses revealed the following L1 effect: onset consonants were produced more accurately than codas overall, but this effect was stronger for the Chinese group, whose L1s have a more limited coda inventory than English. ELLs' NWR performance is influenced by a number of factors, and consideration of these factors is important in deciding whether monolingual norm referencing is appropriate for ELL children.

  18. Reliability and validity of CODA motion analysis system for measuring cervical range of motion in patients with cervical spondylosis and anterior cervical fusion.

    PubMed

    Gao, Zhongyang; Song, Hui; Ren, Fenggang; Li, Yuhuan; Wang, Dong; He, Xijing

    2017-12-01

    The aim of the present study was to evaluate the reliability of the Cartesian Optoelectronic Dynamic Anthropometer (CODA) motion analysis system in measuring cervical range of motion (ROM) and to verify its construct validity. A total of 26 patients with cervical spondylosis and 22 patients with anterior cervical fusion were enrolled, and the CODA motion analysis system was used to measure three-dimensional cervical ROM. Intra- and inter-rater reliability was assessed by intraclass correlation coefficients (ICCs), the standard error of measurement (SEm), limits of agreement (LOA) and the minimal detectable change (MDC). Independent-samples t-tests were performed to examine the differences in cervical ROM between the cervical spondylosis and anterior cervical fusion patients. The results revealed that in the cervical spondylosis group, the reliability was almost perfect (intra-rater reliability: ICC, 0.87-0.95; LOA, -12.86-13.70; SEm, 2.97-4.58; inter-rater reliability: ICC, 0.84-0.95; LOA, -13.09-13.48; SEm, 3.13-4.32). In the anterior cervical fusion group, the reliability was high (intra-rater reliability: ICC, 0.88-0.97; LOA, -10.65-11.08; SEm, 2.10-3.77; inter-rater reliability: ICC, 0.86-0.96; LOA, -10.91-13.66; SEm, 2.20-4.45). The cervical ROM in the cervical spondylosis group was significantly higher than that in the anterior cervical fusion group in all directions except left rotation. In conclusion, the CODA motion analysis system is highly reliable in measuring cervical ROM, and its construct validity was verified, as the system was sufficiently sensitive to distinguish between the cervical spondylosis and anterior cervical fusion groups based on their ROM.
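
    For reference, the sketch below computes a two-way random-effects, absolute-agreement ICC(2,1) from ANOVA mean squares, the family of reliability index reported above; the ratings matrix is synthetic, and whether the study used this exact ICC form is an assumption.

```python
import numpy as np

# ICC(2,1) from a subjects-by-raters matrix (Shrout & Fleiss):
# ICC = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
rng = np.random.default_rng(8)
n, k = 26, 2
truth = rng.uniform(30, 60, n)                   # per-subject ROM, degrees
Y = truth[:, None] + rng.normal(0, 2.0, (n, k))  # two raters, 2 deg error

grand = Y.mean()
MSR = k * np.sum((Y.mean(1) - grand) ** 2) / (n - 1)   # between subjects
MSC = n * np.sum((Y.mean(0) - grand) ** 2) / (k - 1)   # between raters
MSE = (np.sum((Y - Y.mean(1, keepdims=True) - Y.mean(0) + grand) ** 2)
       / ((n - 1) * (k - 1)))                          # residual

icc21 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
print(f"ICC(2,1) = {icc21:.2f}")
```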

  19. The Impact of Sonority on Onset-Rime and Peak-Coda Lexical Decision and Naming of Lexical Items by Children with Different Spelling Ability

    ERIC Educational Resources Information Center

    Leong, Che Kan

    2008-01-01

    The present study used the lexical decision (making a YES/NO decision) and vocalization (naming) paradigms in two reaction-time experiments to examine the cohesiveness of onset-rime and peak-coda units in the syllable structure of English lexical items. The aim was to study the effect of the sonority hierarchy of liquids, nasals and obstruents on the…

  20. 78 FR 68735 - Reduction or Suspension of Safe Harbor Contributions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... forth in section 401(k)(3), called the actual deferral percentage (ADP) test, or one of the design-based... design-based safe harbor method under which a CODA is treated as satisfying the ADP test if the... the design-based alternatives in section 401(m)(10), 401(m)(11), or 401(m)(12). The ACP test in...

  1. Improvements to a Major Digital Archive of Seismic Waveforms from Nuclear Explosions: Borovoye Seismogram Archive

    DTIC Science & Technology

    2008-09-30

    coda) meet expectations. We are also interpreting absolute amplitudes for those underground nuclear explosions at the Semipalatinsk Test Site (STS)... [Figure residue removed; panel title: Balapan subregion, Semipalatinsk Test Site.]

  2. Seismic Attenuation, Event Discrimination, Magnitude and Yield Estimation, and Capability Analysis

    DTIC Science & Technology

    2011-09-01

    waves are subject to path-dependent variations in amplitudes. We see geographic similarities between the crustal shear-wave attenuation and the results from the coda attenuation. ...either Sn or Lg depending on tectonic region, distance, and frequency. Over the past year, we have made great progress on the calibration of surface... Calibration of coda in the Middle East and other areas is

  3. Multilevel animal societies can emerge from cultural transmission

    PubMed Central

    Cantor, Maurício; Shoemaker, Lauren G.; Cabral, Reniel B.; Flores, César O.; Varga, Melinda; Whitehead, Hal

    2015-01-01

    Multilevel societies, containing hierarchically nested social levels, are remarkable social structures whose origins are unclear. The social relationships of sperm whales are organized in a multilevel society with an upper level composed of clans of individuals communicating using similar patterns of clicks (codas). Using agent-based models informed by an 18-year empirical study, we show that clans are unlikely products of stochastic processes (genetic or cultural drift) but likely originate from cultural transmission via biased social learning of codas. Distinct clusters of individuals with similar acoustic repertoires, mirroring the empirical clans, emerge when whales learn preferentially the most common codas (conformism) from behaviourally similar individuals (homophily). Cultural transmission seems key in the partitioning of sperm whales into sympatric clans. These findings suggest that processes similar to those that generate complex human cultures could not only be at play in non-human societies but also create multilevel social structures in the wild. PMID:26348688
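
    As a deliberately simplified caricature of the mechanism described above (not the authors' agent-based model), the sketch below runs a bounded-confidence dynamic: each agent holds a coda-repertoire vector and repeatedly averages it with those of behaviourally similar agents (homophily), pulling toward the local consensus (conformism). Distinct clusters, the analogue of clans, emerge. All parameters are invented.

```python
import numpy as np

# Bounded-confidence caricature of conformist, homophilous coda learning:
# agents only learn from agents whose repertoires are already similar,
# and adopt the local average; distinct "clans" emerge.
rng = np.random.default_rng(9)
n, dim, radius = 120, 2, 0.25          # agents, repertoire dims, similarity
X = rng.uniform(0, 1, (n, dim))        # initial repertoires

for _ in range(40):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    for i in range(n):
        peers = D[i] < radius          # behaviourally similar agents
        X[i] = X[peers].mean(axis=0)   # conform to the local consensus

clans = np.unique(np.round(X, 2), axis=0)
print(f"{len(clans)} clans emerged from {n} agents")
```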

  4. Seismic Wave Propagation from Underground Chemical Explosions: Sensitivity to Velocity and Thickness of a Weathered Layer

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Ezzedine, S. M.

    2017-12-01

    Recorded motions from underground chemical explosions are complicated by long-duration seismic coda as well as motion in the tangential direction. The inability to distinguish the origins of these complexities as either source or path effects limits effective monitoring of underground chemical explosions. With numerical models, it is possible to conduct rigorous sensitivity analyses for chemical explosive sources and their resulting ground motions under the influence of many attributes, including but not limited to complex velocity structure, topography, and non-linear source characteristics. Previously we found that topography can cause significant scattering in the direct wave but leads to relatively little motion in the coda. Here, we aim to investigate the contribution from the low-velocity weathered layer that exists in the shallow subsurface, apart from and in combination with surface topography. We use SW4, an anelastic, anisotropic, fourth-order finite difference code, to simulate a chemical explosive source in a 1D velocity structure consisting of a single weathered layer over a half space. A range of velocities is used for the upper weathered layer, always lower than that of the underlying granitic layer. We find that for lower weathered-layer velocities, the wave train is highly dispersed and a large percentage of the energy of the entire time series is contained in the coda. The percentage of energy contained in the coda grows with distance from the source but saturates at a certain distance that depends on weathered-layer velocity and thickness. The saturation onset distance increases with decreasing layer thickness and increasing velocity of the upper layer. Measurements of relative coda energy and coda saturation onset distance from real recordings can provide an additional constraint on the properties of the weathered layer at remote sites as well as test sites like the Nevada National Security Site (NNSS). The results of this modeling study will aid in distinguishing source effects from path effects in the recorded motions in experiments such as the Source Physics Experiment (SPE). This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
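
    The relative coda-energy measure can be sketched as the fraction of total signal energy arriving after an assumed coda onset; a minimal illustration with a synthetic trace, using twice the S travel time as the onset (a common rule of thumb, not necessarily the study's definition).

    ```python
    import numpy as np

    # Fraction of seismogram energy arriving after the coda onset (2 * t_s).
    def coda_energy_fraction(trace, dt, t_s):
        t = np.arange(len(trace)) * dt
        energy = trace ** 2
        in_coda = t >= 2.0 * t_s
        return np.trapz(energy[in_coda], t[in_coda]) / np.trapz(energy, t)

    dt, n = 0.01, 4000
    t = np.arange(n) * dt
    rng = np.random.default_rng(0)
    trace = np.exp(-((t - 5.0) / 0.2) ** 2)                # direct arrival at 5 s
    trace += 0.3 * np.exp(-0.1 * t) * rng.standard_normal(n) * (t > 5.0)  # coda
    print(f"coda energy fraction: {coda_energy_fraction(trace, dt, t_s=5.0):.2f}")
    ```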

  5. Attenuation in western Nevada: Preliminary results from earthquake and explosion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hough, S.E.; Anderson, J.G.; Patton, H.J.

    1989-02-01

    We present preliminary results from a study of the attenuation of regional seismic waves at frequencies between 1 and 15 Hz and distances up to 250 km in western Nevada. Following the methods of Anderson and Hough (1984) and Hough et al. (1988), we parameterize the asymptote of the high-frequency acceleration spectrum by the two-parameter model. We relate the model parameters to a two-layer model for Q_i and Q_d, the frequency-independent and frequency-dependent components of the quality factor. We compare our results to previously published Q studies in the Basin and Range and find that our estimate of total Q, Q_t, in the shallow crust is consistent with shear-wave Q at close distances and with previous estimates of coda Q (Singh and Herrmann, 1983) and Lg Q (Chavez and Priestley, 1986), suggesting that both coda Q and Lg Q are insensitive to near-surface contributions to attenuation.
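
    For reference, the two-parameter model of Anderson and Hough (1984) describes the high-frequency acceleration spectrum as A(f) = A0 exp(-pi kappa f) above a corner frequency, so the decay parameter follows from the slope of ln A(f) versus f. A minimal sketch, with an illustrative fitting band and a synthetic spectrum:

    ```python
    import numpy as np

    # Fit the spectral decay parameter kappa from ln A(f) = ln A0 - pi*kappa*f.
    def fit_kappa(freqs, spectrum, f_min=5.0, f_max=15.0):
        band = (freqs >= f_min) & (freqs <= f_max)
        slope, intercept = np.polyfit(freqs[band], np.log(spectrum[band]), 1)
        return -slope / np.pi, np.exp(intercept)

    f = np.linspace(1.0, 20.0, 200)
    spec = 2.0 * np.exp(-np.pi * 0.04 * f)   # synthetic spectrum, kappa = 0.04 s
    kappa, a0 = fit_kappa(f, spec)
    print(f"recovered kappa = {kappa:.3f} s, A0 = {a0:.2f}")
    ```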

  6. Monitoring the self-healing process of biomimetic mortar using coda wave interferometry method

    NASA Astrophysics Data System (ADS)

    Liu, Shukui; Basaran, Zeynep; Zhu, Jinying; Ferron, Raissa

    2014-02-01

    Internal stresses might induce microscopic cracks in concrete, which can provide pathways for the ingress of harmful chemicals and can lead to loss of strength. Recent research in concrete materials suggests that it might be possible to develop a smart cement-based material that is capable of self-healing by leveraging the metabolic activity of microorganisms to provide biomineralization. Limited research on biomineralization in cement-based systems has shown promising results: healing of cracks can occur on the surface of concrete and reduce permeability. This paper presents the results from an investigation of the potential for a cement-based material to repair itself internally through biomineralization. Compressive strength tests and coda wave interferometry (CWI) analyses were conducted on mortar samples that were loaded to 70% of their compressive strength and cured under different conditions. Experimental results indicate that the damaged mortar samples with microorganisms showed significantly higher strength development and a greater increase in ultrasonic wave velocity, compared to samples without microorganisms, at 7 and 28 days.
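
    A minimal sketch of the stretching formulation commonly used in coda wave interferometry: a small homogeneous velocity change rescales coda arrival times by the same relative amount, so a grid search over stretch factors recovers dv/v. All waveforms and parameters below are synthetic; the paper's processing details may differ.

    ```python
    import numpy as np

    # Grid-search the dv/v that best maps the reference coda onto the current one.
    def stretching_dvv(ref, cur, t, trials=np.linspace(-0.01, 0.01, 201)):
        best_eps, best_cc = 0.0, -np.inf
        for eps in trials:
            stretched = np.interp(t, t * (1.0 + eps), ref)
            cc = np.corrcoef(stretched, cur)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return best_eps, best_cc

    t = np.linspace(0.0, 10.0, 2001)
    rng = np.random.default_rng(1)
    ref = np.sin(40.0 * t) * np.exp(-0.3 * t) + 0.05 * rng.standard_normal(t.size)
    cur = np.interp(t, t * (1.0 + 0.003), ref)     # simulate a 0.3% change
    dvv, cc = stretching_dvv(ref, cur, t)
    print(f"estimated dv/v = {100 * dvv:.2f}% (cc = {cc:.3f})")
    ```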

  7. Numerical modeling of nonlinear modulation of coda wave interferometry in a multiple scattering medium with the presence of a localized micro-cracked zone

    NASA Astrophysics Data System (ADS)

    Chen, Guangzhi; Pageot, Damien; Legland, Jean-Baptiste; Abraham, Odile; Chekroun, Mathieu; Tournat, Vincent

    2018-04-01

    The spectral element method is used to perform a parametric sensitivity study of the nonlinear coda wave interferometry (NCWI) method in a homogeneous sample with localized damage [1]. The influence of a strong pump wave on a localized nonlinear damage zone is modeled as modifications to the elastic properties of an effective damage zone (EDZ), depending on the pump wave amplitude. The local changes of the elastic modulus and the attenuation coefficient have been shown to vary linearly with the excitation amplitude of the pump wave, as in the previous experimental studies of Zhang et al. [2]. In this study, the boundary conditions of the cracks, i.e., clapping effects, are taken into account in the modeling of the damaged zone. The EDZ is then modeled with randomly located cracks of random orientations, and new parametric studies are established to model the pump wave influence with two new parameters: the change in crack length and the crack density. The numerical results reported constitute another step towards quantification and forecasting of the nonlinear acoustic response of a cracked material, which proves necessary for quantitative non-destructive evaluation.

  8. Phonetic Encoding of Coda Voicing Contrast under Different Focus Conditions in L1 vs. L2 English.

    PubMed

    Choi, Jiyoun; Kim, Sahyang; Cho, Taehong

    2016-01-01

    This study investigated how coda voicing contrast in English would be phonetically encoded in the temporal vs. spectral dimension of the preceding vowel (in vowel duration vs. F1/F2) by Korean L2 speakers of English, and how their L2 phonetic encoding pattern would be compared to that of native English speakers. Crucially, these questions were explored by taking into account the phonetics-prosody interface, testing effects of prominence by comparing target segments in three focus conditions (phonological focus, lexical focus, and no focus). Results showed that Korean speakers utilized the temporal dimension (vowel duration) to encode coda voicing contrast, but failed to use the spectral dimension (F1/F2), reflecting their native language experience; i.e., with a more sparsely populated vowel space in Korean, they are less sensitive to small changes in the spectral dimension, and hence fine-grained spectral cues in English are not readily accessible. Results also showed that along the temporal dimension, both the L1 and L2 speakers hyperarticulated coda voicing contrast under prominence (when phonologically or lexically focused), but hypoarticulated it in the non-prominent condition. This indicates that low-level phonetic realization and high-order information structure interact in a communicatively efficient way, regardless of the speakers' native language background. The Korean speakers, however, used the temporal phonetic space differently from the way the native speakers did, especially showing less reduction in the no focus condition. This was also attributable to their native language experience; i.e., the Korean speakers' use of the temporal dimension is constrained in a way that is not detrimental to the preservation of coda voicing contrast, given that they failed to add additional cues along the spectral dimension. The results imply that the L2 phonetic system can be more fully illuminated through an investigation of the phonetics-prosody interface in connection with the L2 speakers' native language experience.

  10. Analyses of unusual long-period earthquakes with extended coda recorded at Katmai National Park, Alaska, USA

    USGS Publications Warehouse

    De Angelis, Silvio

    2006-01-01

    A swarm of six long-period (LP) events with slowly decaying coda wave amplitudes and durations up to 120 s, was recorded by seismic stations located in the proximity of Mt. Griggs, a fumarolically active volcano in the Katmai National Park, Alaska, during December 8–21, 2004. Spectral analyses reveal the quasi-monochromatic character of the waveforms, dominated by a 2.5 Hz mode frequently accompanied by a weaker high-frequency onset (6.0–9.0 Hz). Particle motion azimuths and inclination angles show a dominant WNW-ESE direction of polarization for all the signals, and suggest that seismic energy is radiated by a stable source at shallow depth. Damping coefficients between 0.0014 and 0.0063 are estimated by fitting an exponential decay model to the signal's coda; corresponding quality factors range from 78 to 351. The source of the waveforms is modelled as a resonant cavity filled with a fluid/gas mixture.
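
    The quoted damping coefficients and quality factors are mutually consistent under the standard relation Q = 1/(2ζ) for a resonator (1/(2 x 0.0063) is about 79 and 1/(2 x 0.0014) about 357). A minimal sketch of the exponential-decay fit at the 2.5 Hz mode, with synthetic data:

    ```python
    import numpy as np

    # Fit A(t) = A0 * exp(-2*pi*f0*zeta*t) and convert zeta to Q = 1/(2*zeta).
    def damping_and_q(t, envelope, f0):
        slope, _ = np.polyfit(t, np.log(envelope), 1)
        zeta = -slope / (2.0 * np.pi * f0)
        return zeta, 1.0 / (2.0 * zeta)

    f0, zeta_true = 2.5, 0.0063
    t = np.linspace(0.0, 120.0, 3000)
    env = np.exp(-2.0 * np.pi * f0 * zeta_true * t)
    zeta, q = damping_and_q(t, env, f0)
    print(f"zeta = {zeta:.4f}, Q = {q:.0f}")   # ~79, the low end quoted above
    ```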

  11. Seismic Activity at tres Virgenes Volcanic and Geothermal Field

    NASA Astrophysics Data System (ADS)

    Antayhua, Y. T.; Lermo, J.; Quintanar, L.; Campos-Enriquez, J. O.

    2013-05-01

    The Tres Virgenes volcanic and geothermal field lies in the NE portion of the State of Baja California Sur, Mexico, between longitudes -112°20' and -112°40' and latitudes 27°25' and 27°36'. In 2003, the Federal Electricity Commission and the Engineering Institute of the National Autonomous University of Mexico (UNAM) initiated a seismic monitoring program. The seismograph network installed inside and around the geothermal field consisted, at the beginning, of Kinemetrics K2 accelerometers; since 2009 the network has been composed of Guralp CMG-6TD broadband seismometers. The seismic data used in this study cover the period September 2003 - November 2011. We relocated 118 earthquakes with epicenters in the study zone recorded by most of the seismic stations. The events analysed have shallow depths (≤10 km), coda magnitudes Mc≤2.4, and epicentral and hypocentral location errors <2 km. These events concentrate mainly below the Tres Virgenes volcanoes and the geothermal exploitation zone, where there is a system of NW-SE, N-S and W-E extensional faults. We also obtained focal mechanisms for 38 events using the FOCMEC, HASH, and FPFIT methods. The results show normal mechanisms, which correlate with the La Virgen, El Azufre, El Cimarron and Bonfil fault systems, whereas reverse and strike-slip solutions correlate with the Las Viboras fault. Additionally, the Qc value was obtained for the 118 events. This value was calculated using the single back-scattering model, taking coda-wave windows of 5 s length. Seismograms were filtered at 4 frequency bands centered at 2, 4, 8 and 16 Hz, respectively. The estimates of Qc vary from 62 at 2 Hz up to 220 at 16 Hz. The frequency-Qc relationship obtained is Qc = (40±2)f^(0.62±0.02), representing the average attenuation characteristics of seismic waves at the Tres Virgenes volcanic and geothermal field. These values correlate with those observed at other geothermal and volcanic fields.
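
    The frequency law Qc = Q0 f^n can be recovered from such band estimates by linear regression in log-log space. A short sketch using the reported end-member values (62 at 2 Hz, 220 at 16 Hz); the intermediate 4 and 8 Hz values below are filled in from the reported law purely for illustration.

    ```python
    import numpy as np

    # Recover Qc = Q0 * f^n by a straight-line fit in log-log coordinates.
    freqs = np.array([2.0, 4.0, 8.0, 16.0])   # band centre frequencies (Hz)
    qc = np.array([62.0, 95.0, 145.0, 220.0])
    n, log_q0 = np.polyfit(np.log(freqs), np.log(qc), 1)
    print(f"Qc = {np.exp(log_q0):.0f} f^{n:.2f}")   # close to Qc = 40 f^0.62
    ```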

  12. Prevalence of phonological disorders and phonological processes in typical and atypical phonological development.

    PubMed

    Ceron, Marizete Ilha; Gubiani, Marileda Barichello; Oliveira, Camila Rosa de; Gubiani, Marieli Barichello; Keske-Soares, Márcia

    2017-05-08

    To determine the occurrence of phonological disorders by age, gender and school type, and to analyze the phonological processes observed in typical and atypical phonological development across different age groups. The sample consisted of 866 children aged between 3:0 and 8:11 years, recruited from public and private schools in the city of Santa Maria/RS. A phonological evaluation was performed to analyze the operative phonological processes. Overall, 15.26% (n = 132) of the sample presented atypical phonological acquisition (phonological disorders). Phonological impairments were more frequent in public school students across all age groups. Phonological alterations were most frequent between ages 4 and 6, and more prevalent in males than females in all but the youngest age group. The most common phonological processes in typical phonological acquisition were: cluster reduction; nonlateral liquid deletion in coda; nonlateral liquid substitution in onset; semivocalization of lateral liquids in coda; and unstressed syllable deletion. In children with phonological disorders, the most common phonological processes were: lateral and nonlateral liquid substitution in onset position; nonlateral liquid deletion; fronting of fricatives in onset position; unstressed syllable deletion; semivocalization of nonlateral liquids in coda; and nonlateral liquid deletion in coda position. Phonological processes were highly prevalent in the present sample, and occurred more often in boys than in girls. Information regarding the type and frequency of phonological processes in both typical phonological acquisition and phonological disorders may contribute to early diagnosis and increase the efficiency of treatment planning.

  13. Estimation of Coda Wave Attenuation in Northern Morocco

    NASA Astrophysics Data System (ADS)

    Boulanouar, Abderrahim; Moudnib, Lahcen El; Padhy, Simanchal; Harnafi, Mimoun; Villaseñor, Antonio; Gallart, Josep; Pazos, Antonio; Rahmouni, Abdelaali; Boukalouch, Mohamed; Sebbani, Jamal

    2018-03-01

    We studied the attenuation of coda waves and its frequency and lapse-time dependence in northern Morocco. We analysed three-component broadband seismograms of 66 local earthquakes recorded in this region during 2008, for four lapse-time windows of length 30, 40, 50, and 60 s and at five frequency bands with central frequencies in the range 0.75-12 Hz, and determined coda Q (Qc) based on the single back-scattering model. We determined the frequency-dependent Qc relation for the horizontal (NS and EW) and vertical (Z) component seismograms. The Qc values show strong frequency dependence in the 1.5-12 Hz band, which is related to a high degree of heterogeneity of the medium. The lapse-time dependence of Qc shows that Q0 (Qc at 1 Hz) increases significantly with lapse time, which is related to the depth dependence of attenuation and hence of the level of heterogeneity of the medium. The average frequency-dependent Qc(f) relations are Qc = (143.75 ± 1.09)f^(0.864 ± 0.006), Qc = (149.12 ± 1.08)f^(0.85 ± 0.005) and Qc = (140.42 ± 1.81)f^(0.902 ± 0.004) for the vertical, north-south and east-west components of motion, respectively. The frequency-dependent Qc(f) relations are useful for evaluating source parameters (Singh et al. 2001), which are key inputs for seismic hazard assessment of the region.
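
    A minimal sketch of the single back-scattering estimate of coda Q (Aki and Chouet, 1975) that underlies such measurements: the band-passed coda envelope decays as A(t|f) = S(f) t^-1 exp(-pi f t / Qc), so a line fitted to ln(A t) versus lapse time t has slope -pi f / Qc. Values below are synthetic.

    ```python
    import numpy as np

    # Estimate Qc from a smoothed coda envelope in one frequency band.
    def coda_q(t, envelope, f):
        slope, _ = np.polyfit(t, np.log(envelope * t), 1)
        return -np.pi * f / slope

    f, qc_true = 3.0, 200.0                  # synthetic values
    t = np.linspace(30.0, 60.0, 600)         # 30 s coda window
    env = (1.0 / t) * np.exp(-np.pi * f * t / qc_true)
    print(f"recovered Qc = {coda_q(t, env, f):.0f}")
    ```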

  14. Interferometric Seismic Sources on the Core Mantle Boundary Revealed by Seismic Coda Crosscorrelation

    NASA Astrophysics Data System (ADS)

    Pham, T. S.; Tkalcic, H.; Sambridge, M.

    2017-12-01

    The crosscorrelation of earthquake coda can be used to extract seismic body waves that are sensitive to the deep Earth interior. The retrieved peaks in the crosscorrelation of two seismic records are commonly interpreted as seismic phases that originate at a point source collocated with the first recorder (Huygens-Fresnel principle), reflect upward from prominent underground reflectors, and reach the second recorder. From the time shift of these peaks measured at different interstation distances, new travel time curves can be constructed. This study focuses on a previously unexplained interferometric phase (named temporarily a ghost or "G phase") observed in crosscorrelogram stack sections utilizing seismic coda. In particular, we use waveforms recorded by two regional seismic networks, one in Australia and another in Alaska. We show that the G phase cannot be explained as a reflection. Moreover, we demonstrate that the G phase is explained through the principle of energy partitioning, and specifically, conversions from compressional to shear motions at the core-mantle boundary (CMB). This can be thought of in terms of a continuous distribution of Huygens sources across the CMB that are "activated" in long-range wavefield coda following significant earthquakes. The newly explained phase is renamed cPS, to indicate a CMB origin and the P-to-S conversion. This mechanism explains a range of newly observed global interferometric phases that can be used in combination with existing phases to constrain Earth structure.

  15. ESA Atmospheric Toolbox

    NASA Astrophysics Data System (ADS)

    Niemeijer, Sander

    2017-04-01

    The ESA Atmospheric Toolbox (BEAT) is one of the ESA Sentinel Toolboxes. It consists of a set of software components to read, analyze, and visualize a wide range of atmospheric data products. In addition to the upcoming Sentinel-5P mission it supports a wide range of other atmospheric data products, including those of previous ESA missions, ESA Third Party missions, Copernicus Atmosphere Monitoring Service (CAMS), ground based data, etc. The toolbox consists of three main components that are called CODA, HARP and VISAN. CODA provides interfaces for direct reading of data from earth observation data files. These interfaces consist of command line applications, libraries, direct interfaces to scientific applications (IDL and MATLAB), and direct interfaces to programming languages (C, Fortran, Python, and Java). CODA provides a single interface to access data in a wide variety of data formats, including ASCII, binary, XML, netCDF, HDF4, HDF5, CDF, GRIB, RINEX, and SP3. HARP is a toolkit for reading, processing and inter-comparing satellite remote sensing data, model data, in-situ data, and ground based remote sensing data. The main goal of HARP is to assist in the inter-comparison of datasets. By appropriately chaining calls to HARP command line tools one can pre-process datasets such that two datasets that need to be compared end up having the same temporal/spatial grid, same data format/structure, and same physical unit. The toolkit comes with its own data format conventions, the HARP format, which is based on netcdf/HDF. Ingestion routines (based on CODA) allow conversion from a wide variety of atmospheric data products to this common format. In addition, the toolbox provides a wide range of operations to perform conversions on the data such as unit conversions, quantity conversions (e.g. number density to volume mixing ratios), regridding, vertical smoothing using averaging kernels, collocation of two datasets, etc. VISAN is a cross-platform visualization and analysis application for atmospheric data and can be used to visualize and analyze the data that you retrieve using the CODA and HARP interfaces. The application uses the Python language as the means through which you provide commands to the application. The Python interfaces for CODA and HARP are included so you can directly ingest product data from within VISAN. Powerful visualization functionality for 2D plots and geographical plots in VISAN will allow you to directly visualize the ingested data. All components from the ESA Atmospheric Toolbox are Open Source and freely available. Software packages can be downloaded from the BEAT website: http://stcorp.nl/beat/
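
    A minimal sketch of reading a product through the CODA Python interface and ingesting one with HARP; the file name and the operation string are placeholders, and the record tree returned by coda.fetch depends on the product type being read.

    ```python
    import coda
    import harp

    # Open a product with CODA and read its full record tree.
    product = coda.open("S5P_example_product.nc")   # hypothetical file name
    print(coda.get_product_class(product))          # inspect the product class
    data = coda.fetch(product)                      # read the full record tree
    coda.close(product)

    # HARP ingestion: convert to the common HARP format while keeping only a
    # subset of variables (operation syntax per the HARP documentation).
    result = harp.import_product("S5P_example_product.nc",
                                 operations="keep(latitude,longitude)")
    ```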

  16. Site Effects on Regional Seismograms Recorded in the Vicinity of Weston Observatory

    DTIC Science & Technology

    1993-09-30

    flanks of the active volcanoes of Mauna Loa and Kilauea. The distances between the sites ranged from a few km to over 100 km. Although there is little...on the island of Hawaii using S-wave coda spectral ratios for frequencies between 1.5 and 15 Hz. They used 40 vertical 1-Hz seismometers, and recorded...for the island of Hawaii, Bull. Seism. Soc. Am., 12, No. 3, 1151-1185. Mayeda, K., S. Koyanagi, and K. Aki (1991). Site amplification from S-wave coda in

  17. Comparison of Outcomes of antibiotic Drugs and Appendectomy (CODA) trial: a protocol for the pragmatic randomised study of appendicitis treatment

    PubMed Central

    Davidson, Giana H; Flum, David R; Talan, David A; Kessler, Larry G; Lavallee, Danielle C; Bizzell, Bonnie J; Farjah, Farhood; Stewart, Skye D; Krishnadasan, Anusha; Carney, Erin E; Wolff, Erika M; Comstock, Bryan A; Monsell, Sarah E; Heagerty, Patrick J; Ehlers, Annie P; DeUgarte, Daniel A; Kaji, Amy H; Evans, Heather L; Yu, Julianna T; Mandell, Katherine A; Doten, Ian C; Clive, Kevin S; McGrane, Karen M; Tudor, Brandon C; Foster, Careen S; Saltzman, Darin J; Thirlby, Richard C; Lange, Erin O; Sabbatini, Amber K; Moran, Gregory J

    2017-01-01

    Introduction Several European studies suggest that some patients with appendicitis can be treated safely with antibiotics. A portion of patients eventually undergo appendectomy within a year, with 10%–15% failing to respond in the initial period and a similar additional proportion with suspected recurrent episodes requiring appendectomy. Nearly all patients with appendicitis in the USA are still treated with surgery. A rigorous comparative effectiveness trial in the USA that is sufficiently large and pragmatic to incorporate usual variations in care and measures the patient experience is needed to determine whether antibiotics are as good as appendectomy. Objectives The Comparing Outcomes of Antibiotic Drugs and Appendectomy (CODA) trial for acute appendicitis aims to determine whether the antibiotic treatment strategy is non-inferior to appendectomy. Methods/Analysis CODA is a randomised, pragmatic non-inferiority trial that aims to recruit 1552 English-speaking and Spanish-speaking adults with imaging-confirmed appendicitis. Participants are randomised to appendectomy or 10 days of antibiotics (including an option for complete outpatient therapy). A total of 500 patients who decline randomisation but consent to follow-up will be included in a parallel observational cohort. The primary analytic outcome is quality of life (measured by the EuroQol five dimension index) at 4 weeks. Clinical adverse events, rate of eventual appendectomy, decisional regret, return to work/school, work productivity and healthcare utilisation will be compared. Planned exploratory analyses will identify subpopulations that may have a differential risk of eventual appendectomy in the antibiotic treatment arm. Ethics and dissemination This trial was approved by the University of Washington’s Human Subjects Division. Results from this trial will be presented in international conferences and published in peer-reviewed journals. Trial registration number NCT02800785. PMID:29146633

  18. Waveform anomaly caused by strong attenuation in the crust and upper mantle in the Okinawa Trough region

    NASA Astrophysics Data System (ADS)

    Padhy, S.; Furumura, T.; Maeda, T.

    2017-12-01

    The Okinawa Trough is a young continental back-arc basin located behind the Ryukyu subduction zone in southwestern Japan, where the Philippine Sea Plate dives beneath the trough, resulting in localized mantle upwelling and crustal thinning of the overriding Eurasian Plate. The attenuation structure of the plates and surrounding mantle associated with this complex tectonic environment is poorly documented. Here we present seismological evidence for these features based on high-resolution waveform analyses and 3D finite difference method (FDM) simulation. We analyzed regional broadband waveforms recorded by F-net (NIED) for in-slab events (M>4, H>100 km). Using band-passed (0.5-8 Hz) mean-squared envelopes, we parameterized coda decay in terms of rise time (time from P arrival to maximum amplitude in the P-coda), decay time (time from maximum amplitude to the theoretical S arrival), and energy ratio, defined as the ratio of energy in the P-coda to that in the direct P wave. The following key features are observed. First, there is a striking difference in S-excitation along paths traversing and not traversing the trough: events from SW Japan not crossing the trough show clear S waves, while those occurring in the trough show very weak S waves at a station close to the volcanic front. Second, some trough events exhibit spindle-shaped seismograms with strong P-coda excitation, obscuring the development of S waves, at back-arc stations; these waveforms are characterized by long decay times (>10 s) and high energy ratios (>>1.0), suggesting strong forward scattering along the ray paths. Third, some trough events show weak S-excitation characterized by short decay times (<5 s) and low energy ratios (<1.0) at fore-arc stations, suggesting high intrinsic absorption. To investigate the mechanism of the observed anomalies, we will conduct FDM simulations for a suite of models comprising the key subduction features, such as the localized mantle upwelling and crustal thinning expected in the region. The simulation results are expected to help resolve rift-induced crust and upper-mantle anomalies in the trough, where we observe maximum waveform distortion in broadband records, and to enhance understanding of tectonic processes related to back-arc rifting in the region.

  19. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    NASA Astrophysics Data System (ADS)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
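
    For reference, the estimated heterogeneity spectrum can be evaluated directly; a short sketch with the reported parameters (ε = 0.05, a = 3.1 km), where the wavenumber grid is illustrative:

    ```python
    import numpy as np

    # Evaluate P(m) = 8*pi*eps**2 * a**3 / (1 + a**2 * m**2)**2.
    eps, a = 0.05, 3.1            # fractional fluctuation, correlation length (km)
    m = np.logspace(-2, 1, 50)    # wavenumber (1/km)
    p = 8.0 * np.pi * eps ** 2 * a ** 3 / (1.0 + (a * m) ** 2) ** 2
    print(f"corner wavenumber ~ 1/a = {1.0 / a:.2f} 1/km, "
          f"P(0) = {8.0 * np.pi * eps ** 2 * a ** 3:.3f}")
    ```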

  20. Lapse-time-dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    NASA Astrophysics Data System (ADS)

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-10-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: first, we evaluate the contribution of surface- and body-wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time-dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Second, we compare the lapse-time behaviour in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.

  1. Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training

    PubMed Central

    Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William

    2015-01-01

    Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330
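
    For reference, the sensitivity index underlying the adaptive procedure is d' = z(hit rate) - z(false-alarm rate). A minimal yes/no-style sketch; the study's consonant-identification d' computation may differ in detail.

    ```python
    from statistics import NormalDist

    # Signal-detection sensitivity index d' from response counts.
    def d_prime(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf
        clip = lambda p: min(max(p, 1e-3), 1.0 - 1e-3)   # keep z finite
        hit_rate = clip(hits / (hits + misses))
        fa_rate = clip(false_alarms / (false_alarms + correct_rejections))
        return z(hit_rate) - z(fa_rate)

    print(f"d' = {d_prime(80, 20, 10, 90):.2f}")   # 80% hits, 10% false alarms
    ```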

  2. Lunar Structure from Coda Wave Interferometry

    NASA Astrophysics Data System (ADS)

    Nunn, Ceri; Igel, Heiner

    2017-04-01

    As part of the Apollo lunar missions, four seismometers were deployed on the near-side of the Moon between 1969 and 1972, and operated continuously until 1977. There are many difficulties associated with determining lunar structure from these records. As a result, many properties of the moon, such as the thickness, density and porosity of the crust are poorly constrained. This hampers our ability to determine the structure, geochemical composition of the moon, its evolution, and ultimately the evolution of the solar system. We explore the use of coda wave interferometry to reconstruct the near surface structure within the strongly scattering lunar crust.

  3. Feet and syllables in elephants and missiles: a reappraisal.

    PubMed

    Zonneveld, Wim; van der Pas, Brigit; de Bree, Elise

    2007-01-01

    Using data from a case study presented in Chiat (1989), Marshall and Chiat (2003) compare two different approaches to account for the realization of intervocalic consonants in child phonology: "coda capture theory" and the "foot domain account". They argue in favour of the latter account. In this note, we present a reappraisal of this argument using the same data. We conclude that acceptance of the foot domain account, in the specific way developed by the authors, is unmotivated for both theoretical and empirical reasons. We maintain that syllable-based coda capture is (still) the better approach to account for the relevant facts.

  4. Use of abstracts, orientations, and codas in narration by language-disordered and nondisordered children.

    PubMed

    Sleight, C C; Prinz, P M

    1985-11-01

    In this study language-disordered and nondisordered children viewed a nonverbal film, wrote the story, and narrated it to language-disordered and nondisordered peers who were unfamiliar with the film. The narratives were analyzed for the use of abstracts, orientations (background information), and codas. Language-disordered children made fewer references to the orientation clauses of props and activities than nondisordered children. Neither group modified their language in the areas examined to take into account the communicative status of their listeners. Therapeutic implications for the language-disordered children are presented as are suggestions for future research.

  5. Coda-wave and ambient noise interferometry using an offset vertical array at Iwanuma site, northeast Japan

    NASA Astrophysics Data System (ADS)

    Minami, K.; Yamamoto, M.; Nishimura, T.; Nakahara, H.; Shiomi, K.

    2013-12-01

    Seismic interferometry using vertical borehole arrays is a powerful tool to estimate the shallow subsurface structure and its time-lapse changes. However, the wave fields surrounding borehole arrays are non-isotropic due to the existence of the ground surface and the non-uniform distribution of sources, and do not meet the requirements of seismic interferometry in a strict sense. In this study, to examine differences between the wave fields of coda waves and ambient noise, and to estimate their effects on the results of seismic interferometry, we conducted a temporary seismic experiment using zero-offset and offset vertical arrays. We installed two 3-component seismometers (hereafter called Surface1 and Surface2) at the ground surface in the vicinity of the NIED Iwanuma site (Miyagi Pref., Japan). Surface1 is placed just above the Hi-net downhole seismometer, whose depth is 101 m, and Surface2 is placed 70 m away from Surface1. To extract the wave propagation between these 3 seismometers, we compute the cross-correlation functions (CCFs) of coda waves and ambient noise for each pair of the zero-offset vertical (Hi-net-Surface1), finite-offset vertical (Hi-net-Surface2), and horizontal (Surface1-Surface2) arrays. We use the frequency bands of 4-8 and 8-16 Hz in the CCF computation. The characteristics of the obtained CCFs are summarized as follows: (1) in all frequency bands, the peak lag times of CCFs from coda waves are almost the same for the vertical and offset-vertical arrays irrespective of the different inter-station distances, and those for the horizontal array are around 0 s; (2) the peak lag times of CCFs from ambient noise show slight differences, that is, those obtained from the vertical array are earlier than those from the offset-vertical array, and those from the horizontal array are around 0.05 s; (3) the peak lag times of CCFs for the vertical array obtained from the ambient noise analyses are earlier than those from the coda-wave analyses. These results indicate that the wave field of coda waves is mainly composed of vertically propagating waves, while that of ambient noise is composed of both vertically and horizontally propagating waves. To explain these characteristics of the CCFs obtained from different wave fields, we conducted a numerical simulation of interferometry based on the concept of stationary phase. Here, we assume isotropic upward incidence of SV waves into a homogeneous half-space, and compute CCFs for the zero-offset and finite-offset vertical arrays by taking into account the reflection and conversion of P-SV waves at the free surface. Due to the effectively non-isotropic wave field, the simulated CCF for the zero-offset vertical array shows a slight delay in peak lag time and decreased amplitudes in the acausal part. On the other hand, the simulated CCF for the finite-offset vertical array shows an amplitude decrease and no peak lag-time shift. These results are consistent with the difference in peak lag times obtained from the coda-wave and ambient noise analyses. Our observations and theoretical considerations suggest that careful consideration of the wave fields is important in the application of seismic interferometry to borehole array data.
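
    A minimal sketch of the CCF and peak lag-time pick for a sensor pair; the sampling rate and the synthetic 0.25 s delay between the two traces are illustrative, not values from the experiment.

    ```python
    import numpy as np

    # Full cross-correlation of two traces; returns lags, CCF, and peak lag.
    def ccf_peak_lag(x, y, dt):
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        ccf = np.correlate(y, x, mode="full") / len(x)
        lags = np.arange(-len(x) + 1, len(x)) * dt
        return lags, ccf, lags[np.argmax(ccf)]

    dt = 0.01
    rng = np.random.default_rng(2)
    src = rng.standard_normal(4000)
    rec1, rec2 = src, np.roll(src, int(0.25 / dt))   # 0.25 s propagation delay
    _, _, peak = ccf_peak_lag(rec1, rec2, dt)
    print(f"peak lag time: {peak:.2f} s")
    ```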

  6. Diffusion approximation with polarization and resonance effects for the modelling of seismic waves in strongly scattering small-scale media

    NASA Astrophysics Data System (ADS)

    Margerin, Ludovic

    2013-01-01

    This paper presents an analytical study of the multiple scattering of seismic waves by a collection of randomly distributed point scatterers. The theory assumes that the energy envelopes are smooth, but does not require perturbations to be small, thereby allowing the modelling of strong, resonant scattering. The correlation tensor of seismic coda waves recorded at a three-component sensor is decomposed into a sum of eigenmodes of the elastodynamic multiple scattering (Bethe-Salpeter) equation. For a general moment tensor excitation, a total of four modes is necessary to describe the transport of seismic wave polarization. Their spatio-temporal dependence is given in closed analytical form. Two additional modes transporting exclusively shear polarizations may be excited by antisymmetric moment tensor sources only. The general solution converges towards an equipartition mixture of diffusing P and S waves which allows the retrieval of the local Green's function from coda waves. The equipartition time is obtained analytically and the impact of absorption on Green's function reconstruction is discussed. The process of depolarization of multiply scattered waves and the resulting loss of information is illustrated for various seismic sources. It is shown that coda waves may be used to characterize the source mechanism up to lapse times of the order of a few mean free times only. In the case of resonant scatterers, a formula for the diffusivity of seismic waves incorporating the effect of energy entrapment inside the scatterers is obtained. Application of the theory to high-contrast media demonstrates that coda waves are more sensitive to slow rather than fast velocity anomalies by several orders of magnitude. Resonant scattering appears as an attractive physical phenomenon to explain the small values of the diffusion constant of seismic waves reported in volcanic areas.

  7. Lunar Structure from Ambient Noise and Coda Wave Interferometry

    NASA Astrophysics Data System (ADS)

    Nunn, C.; Igel, H.

    2016-12-01

    As part of the Apollo lunar missions, four seismometers were deployed on the near-side of the Moon between 1969 and 1972, and operated continuously until 1977. There are many difficulties associated with determining lunar structure from these records. As a result, many properties of the moon, such as the thickness, density and porosity of the crust are poorly constrained. This hampers our ability to determine the structure, geochemical composition of the moon, its evolution, and ultimately the evolution of the solar system. We explore the use of ambient noise and coda wave interferometry to reconstruct the near surface structure within the strongly scattering lunar crust.

  8. Vocal clans in sperm whales (Physeter macrocephalus).

    PubMed Central

    Rendell, L E; Whitehead, H

    2003-01-01

    Cultural transmission may be a significant source of variation in the behaviour of whales and dolphins, especially as regards their vocal signals. We studied variation in the vocal output of 'codas' by sperm whale social groups. Codas are patterns of clicks used by female sperm whales in social circumstances. The coda repertoires of all known social units (n = 18, each consisting of about 11 females and immatures with long-term relationships) and 61 out of 64 groups (about two social units moving together for periods of days) that were recorded in the South Pacific and Caribbean between 1985 and 2000 can be reliably allocated into six acoustic 'clans', five in the Pacific and one in the Caribbean. Clans have ranges that span thousands of kilometres, are sympatric, contain many thousands of whales and most probably result from cultural transmission of vocal patterns. Units seem to form groups preferentially with other units of their own clan. We suggest that this is a rare example of sympatric cultural variation on an oceanic scale. Culture may thus be a more important determinant of sperm whale population structure than genes or geography, a finding that has major implications for our understanding of the species' behavioural and population biology. PMID:12614570

  9. Evaluation of capture techniques for long-billed curlews wintering in Texas

    USGS Publications Warehouse

    Woodin, Marc C.; Skoruppa, Mary K.; Edwardson, Jeremy W.; Austin, Jane E.

    2012-01-01

    The Texas coast harbors the largest, eastern-most populations of Long-billed Curlews (Numenius americanus) in North America; however, very little is known about their migration and wintering ecology. Curlews are readily captured on their breeding grounds, but experience with capturing the species during the non-breeding season is extremely limited. We assessed the efficacy of 6 capture techniques for Long-billed Curlews in winter: 1) modified noose ropes, 2) remotely controlled bow net, 3) Coda Netgun, 4) Super Talon net gun, 5) Hawkseye whoosh net, and 6) cast net. The Coda Netgun had the highest rate of captures per unit of effort (CPUE = 0.31; 4 curlew captures/13 d of trapping effort), followed by bow net (CPUE = 0.17; 1 capture/6 d of effort), whoosh net (CPUE = 0.14; 1 capture/7 d of effort), and noose ropes (CPUE = 0.07; 1 capture/15 d of effort). No curlews were captured using the Super Talon net gun or a cast net (3 d and 1 d of effort, respectively). Multiple capture techniques should be readily available for maximum flexibility in matching capture methods with neophobic curlews that often unpredictably change preferred feeding locations among extremely different habitat types.

  10. Time-Lapse Monitoring with 4D Seismic Coda Waves in Active, Passive and Ambient Noise Data

    NASA Astrophysics Data System (ADS)

    Lumley, D. E.; Kamei, R.; Saygin, E.; Shragge, J. C.

    2017-12-01

    The Earth's subsurface is continuously changing, due to temporal variations in fluid flow, stress, temperature, geomechanics and geochemistry, for example. These physical changes occur at broad tectonic and earthquake scales, and also at very detailed near-surface and reservoir scales. Changes in the physical states of the earth cause time-varying changes in the physical properties of rocks and fluids, which can be monitored with natural or manmade seismic waves. Time-lapse (4D) seismic monitoring is important for applications related to natural and induced seismicity, hydrocarbon and groundwater reservoir depletion, CO2 sequestration etc. An exciting new research area involves moving beyond traditional methods in order to use the full complex time-lapse scattered wavefield (4D coda waves) for both manmade active-source 3D/4D seismic data, and also to use continuous recordings of natural-source passive seismic data, especially (micro) earthquakes and ocean ambient noise. This research involves full wave-equation approaches including waveform inversion (FWI), interferometry, Large N sensor arrays, "big data" information theory, and high performance supercomputing (HPC). I will present high-level concepts and recent data results that are quite spectacular and highly encouraging.

  11. Attenuation Characteristics of the Armutlu Peninsula (NW Turkey) Using Coda Q

    NASA Astrophysics Data System (ADS)

    Yavuz, Evrim; Çaka, Deniz; Tunç, Berna; Woith, Heiko; Gottfried Lühr, Birger; Barış, Şerif

    2016-04-01

    Attenuation characteristics of seismic waves were determined using coda Q in the frame of MARsite (MARsite has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement No 308417). Data from 82 earthquakes recorded in 2013-2014 in the Armutlu Peninsula and its vicinity by 9 ARNET seismic stations were used for processing. The earthquake magnitudes (Ml) vary from 1.5 to 3.7 and the depths from 1.2 to 16.9 km. Epicentral distances closer than 90 km were selected to ensure better signal-to-noise ratios. Lapse times between 20 and 40 seconds, at intervals of 5 seconds, were used for the calculation of the coda wave quality factor. The coda windows were band-pass filtered at central frequencies of 1.5, 3, 6, 9 and 12 Hz. To obtain reliable results, only data with signal-to-noise ratios greater than 5 and correlation coefficients higher than 0.7 were used. The SEISAN software and one of its subroutines (CODAQ) were used for data processing and analyses. For the whole study area, Qc=(51±4)f^(0.91±0.04) for 20 s, Qc=(77±7)f^(0.80±0.04) for 30 s and Qc=(112±13)f^(0.72±0.06) for 40 s lapse times were obtained for the coda wave quality factor. The observed quality factor is dependent on frequency and lapse time. The results indicate that the upper lithosphere is more heterogeneous and seismically more active than the lower lithosphere, as expected in a region that is tectonically complex owing to the effects of the North Anatolian Fault Zone. By considering earthquake clusters and recording stations, we mapped the scattering area; the intersection of the scattering areas for the 20 s lapse time covers all stations. The quality factor at 1 Hz (Qo) and the frequency-dependence value (n) were also calculated for the intersection of all scattering areas, giving Qo = 50 and n = 0.89. Hence, the Qo and n values calculated using all stations and those of the intersection area are very close to each other. Additionally, in a detailed review of station TRML, located in the Termal District of Yalova Province, Qc=(46±3)f^(0.97±0.04) for 20 s, Qc=(61±6)f^(1.03±0.06) for 30 s and Qc=(74±6)f^(1.06±0.05) for 40 s lapse times were obtained. The low Qo values, increasing with lapse time, demonstrate high tectonic activity. Furthermore, the n value increasing with lapse time is consistent with the geothermal sources next to the TRML station.

  12. Teleseismic and regional data analysis for estimating depth, mechanism and rupture process of the 3 April 2017 MW 6.5 Botswana earthquake and its aftershock (5 April 2017, MW 4.5)

    NASA Astrophysics Data System (ADS)

    Letort, J.; Guilhem Trilla, A.; Ford, S. R.; Sèbe, O.; Causse, M.; Cotton, F.; Campillo, M.; Letort, G.

    2017-12-01

    We constrain the source, depth, and rupture process of the Botswana earthquake of April 3, 2017, as well as its largest aftershock (5 April 2017, Mw 4.5). This earthquake is the largest recorded event (Mw 6.5) in the East African rift system since 1970, making it an important case study for better understanding source processes in stable continental regions. For the two events, an automatic cepstrum analysis (Letort et al., 2015) is first applied to 215 and 219 teleseismic records, respectively, in order to detect depth-phase arrivals (pP, sP) in the P-coda. Coherent detections of depth phases at different azimuths allow us to estimate the hypocentral depths at 28 and 23 km, respectively, suggesting that the events are located in the lower crust. The same cepstrum analysis is conducted on five other earthquakes with mb>4 in this area (from 2002 to 2017), and confirms a deep crustal seismicity cluster (around 20-30 km). The source mechanisms are then characterized using a joint inversion method by fitting both regional long-period surface waves and teleseismic high-frequency body waves. Combining regional and teleseismic data (as well as systematic comparisons between theoretical and observed regional surface-wave dispersion curves prior to the inversion) allows us to decrease the epistemic uncertainties due to the lack of regional data and poor knowledge of the local velocity structure. Both focal mechanisms are constrained as northwest-trending normal faulting, and the hypocentral depths are confirmed at 28 and 24 km. Finally, in order to study the mainshock rupture process, we apply a kymograph analysis method, an image-processing technique commonly used in the field of cell biology for identifying motions of molecular motors (e.g., Mangeol et al., 2016). Here, the kymograph allows us to better identify high-frequency teleseismic P arrivals inside the P-coda by tracking both reflected depth phases and direct P-wave arrivals radiated from secondary sources during the faulting process. Secondary P arrivals are thus identified with a significant azimuthal variation of their arrival times (up to 4 s), allowing the localization of the source that generated these secondary waves. This analysis shows that the mainshock is probably a mix of at least two events, the second being 20-30 km further northwest along the fault.
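
    A minimal sketch of cepstral depth-phase detection of the kind described here: a pP arrival trailing P by tau seconds produces spectral ripple with period 1/tau, and hence a peak near quefrency tau in the real cepstrum of the P-coda window. The 6 s delay, wavelet, and noise level are illustrative.

    ```python
    import numpy as np

    # Real cepstrum: inverse FFT of the log amplitude spectrum.
    def real_cepstrum(x):
        spectrum = np.abs(np.fft.rfft(x)) + 1e-12    # floor avoids log(0)
        return np.fft.irfft(np.log(spectrum), n=len(x))

    dt, tau, n = 0.01, 6.0, 8192
    impulses = np.zeros(n)
    impulses[500] = 1.0                              # direct P
    impulses[500 + int(tau / dt)] = -0.5             # pP, sign-flipped reflection
    t_w = np.arange(0.0, 0.4, dt)
    wavelet = np.exp(-((t_w - 0.2) / 0.05) ** 2)     # smooth source pulse
    rng = np.random.default_rng(3)
    trace = np.convolve(impulses, wavelet)[:n] + 1e-3 * rng.standard_normal(n)
    ceps = np.abs(real_cepstrum(trace))
    quefrency = np.arange(n) * dt
    band = (quefrency > 2.0) & (quefrency < 10.0)    # search away from origin
    print(f"cepstral peak near {quefrency[band][np.argmax(ceps[band])]:.1f} s")
    ```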

  13. Lateral and depth variations of coda Q in the Zagros region of Iran

    NASA Astrophysics Data System (ADS)

    Irandoust, Mohsen Ahmadzadeh; Sobouti, Farhad; Rahimi, Habib

    2016-01-01

    We have analyzed more than 2800 local earthquakes recorded by the Iranian National Seismic Network (INSN) and the Iranian Seismological Center (IRSC) to estimate the coda wave quality factor, Qc, in the Zagros fold and thrust belt and the Sanandaj-Sirjan metamorphic zone (SSZ) in Iran. We used the single backscattering model to investigate lateral and depth variations of Qc in the study region. In the interior of Zagros, no strong lateral variation in attenuation parameters is observed. In SE Zagros (the Bandar-Abbas region), where the transition to the Makran subduction setting begins, the medium shows lower attenuation. The average frequency relations for the SSZ, the Bandar-Abbas region, and the Zagros are Qc = (124 ± 11)f^(0.82 ± 0.04), Qc = (109 ± 2)f^(0.99 ± 0.01), and Qc = (85 ± 5)f^(1.06 ± 0.03), respectively. To investigate the depth variation of Qc, 18 time windows between 5 and 90 s and two epicentral distance ranges, R < 100 km and 100 < R < 200 km, were considered. It was observed that with increasing coda lapse time, Q0 (Qc at 1 Hz) and n (the frequency dependence factor) show increasing and decreasing trends, respectively. Beneath the SSZ, at depths of about 50 to 80 km, there is a correlation between the reported low-velocity medium and the observed sharp change in the trend of the Q0 and n curves. In comparison with results obtained in other regions of the Iranian plateau, the Zagros, along with the Alborz Mountains in the north, shows the highest coda-wave attenuation and the strongest frequency dependence, an observation that reflects the intense seismicity and active faulting in these mountain ranges. We also observe a stronger depth dependence of attenuation in the Zagros and SSZ compared to central Iran, indicating a thicker lithosphere in the Zagros region than in central Iran.

  14. DEVELOPING AND EXPLOITING A UNIQUE DATASET FROM SOUTH AFRICAN GOLD MINES FOR SOURCE CHARACTERIZATION AND WAVE PROPAGATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Julia, J; Nyblade, A; Gok, R

    2009-07-06

    In this project, we are developing and exploiting a unique seismic dataset to address the characteristics of small seismic events and the associated seismic signals observed at local (< 200 km) and regional (< 2000 km) distances. The dataset is being developed using mining-induced events from three deep gold mines in South Africa recorded on in-mine networks (< 1 km) composed of tens of high-frequency sensors, a network of four broadband stations installed as part of this project at the surface around the mines (1-10 km), and a network of existing broadband seismic stations at local/regional distances (50-1000 km) from the mines. Data acquisition has now been completed and includes: (1) ~2 years (2007 and 2008) of continuous recording by the surface broadband array, and (2) tens of thousands of mine tremors in the -3.4 < ML < 4.4 local magnitude range. Events with positive magnitudes are generally well recorded by the surface-mine stations, while magnitudes of 3.0 and larger are seen at regional distances (up to ~600 km) in high-pass filtered recordings. We have now completed the quality control of the in-mine data gathered at the three gold mines included in this project. The quality control consisted of: (1) identification and analysis of outliers among the P- and S-wave travel-time picks reported by the in-mine network operator and (2) verification of sensor orientations. The outliers have been identified through a 'Wadati filter' that searches for the largest subset of P- and S-wave travel-time picks consistent with a medium of uniform wave speed. We have observed that outliers are generally picked at a few select stations. We have also detected that trigger times were mistakenly reported as origin times by the in-mine network operator, and corrections have been obtained from the intercept times in the Wadati diagrams. Sensor orientations have been verified through rotations into the local ray-coordinate system and, when possible, corrected by correlating waveforms obtained from theoretical and empirical rotation angles. Full moment tensor solutions have been obtained for selected events within the Savuka network volume, with moment magnitudes in the 0.5 < Mw < 2.6 range. The solutions were obtained by inverting P-, SV-, and SH-spectral amplitudes measured on the theoretically rotated waveforms with visually assigned polarities. Most of the solutions have a non-zero implosive contribution (47 out of 76), while a small percentage are purely deviatoric (10 out of 76). The deviatoric moment tensors range from pure double-couple to pure non-double-couple mechanisms. We have also calibrated the regional stations for seismic coda-derived source spectra and moment magnitude using the envelope methodology of Mayeda et al. (2003), tying the coda Mw to independent values from waveform modeling. The resulting coda-based source spectra of shallow mining-related events show significant spectral peaking that is not seen in deeper tectonic earthquakes. This coda peaking may be an independent method of identifying shallow events and is similar to coda peaking previously observed for Nevada explosions, where the frequency of the observed spectral peak correlates with the depth of burial (Murphy et al., 2009).
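
    A minimal sketch of the Wadati construction behind the 'Wadati filter': for a uniform Vp/Vs ratio, ts - tp = (Vp/Vs - 1)(tp - t0), so regressing S-P time on P arrival time recovers the origin time t0 at the line's zero crossing (the intercept correction mentioned above). Pick values below are synthetic; outliers would appear as large residuals from this line.

    ```python
    import numpy as np

    # Regress S-P time on P arrival time; return (origin time, Vp/Vs).
    def wadati_origin_time(tp, ts):
        slope, intercept = np.polyfit(tp, ts - tp, 1)
        return -intercept / slope, 1.0 + slope

    t0_true, vpvs_true = 12.0, 1.73
    tp = t0_true + np.array([1.2, 2.5, 3.1, 4.8, 6.0])   # P travel times added
    ts = t0_true + (tp - t0_true) * vpvs_true
    t0, vpvs = wadati_origin_time(tp, ts)
    print(f"origin time = {t0:.2f} s, Vp/Vs = {vpvs:.2f}")
    ```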

  15. Can repeating glacial seismic events be used to monitor stress changes within the underlying volcano? -Case study from the glacier overlain Katla volcano, Iceland

    NASA Astrophysics Data System (ADS)

    Jonsdottir, K.; Vogfjord, K. S.; Bean, C. J.; Martini, F.

    2013-12-01

    The glacier-covered Katla volcano in South Iceland is one of the most active and hazardous volcanoes in Europe. Katla eruptions result in hazardous glacial floods and intense tephra fall. On average there is an eruption every 50 years, but the volcano is long overdue and we are now witnessing the longest quiescence period in 1000 years, i.e. since the settlement of Iceland. Because of the hazard the volcano poses, it is under constant surveillance and is densely covered by stations of the national seismic network. Every year the seismic network records thousands of seismic events at Katla, with magnitudes seldom exceeding M3. The bulk of the seismicity is, however, not due to volcano tectonics but seems to be caused mainly by shallow processes involving glacial deformation. Katla's ice-filled caldera forms a glacier plateau of ice several hundred meters thick. The 9x14 km oval caldera is surrounded by higher rims where the glacier, in some places gently and in others abruptly, falls off tens to a hundred meters to the surrounding lowland. The glacier surface is marked with a dozen depressions, or cauldrons, which manifest geothermal activity below, probably coinciding with circular faults around the caldera. Our current understanding is that several glacial processes cause seismicity; these include dry calving, where steep valley glaciers fall off cliffs, and movements of glacier ice as the cauldrons deform due to hydraulic changes and geothermal activity at the glacier/bedrock boundary. These glacial events share the common features of low frequency content (2-4 Hz) and long coda. Because of their shallow origin, surface waves are prominent. In our analysis we use waveforms from all of Katla's seismic events between 2003 and 2013 with M > 1 and a minimum of 4 P-wave picks. We correlate the waveforms of these events with each other and group them into families of highly similar events. Looking at the occurrence of these families, we find that individual families are usually clustered in time over several months, and sometimes families reappear even up to several years later. Using families with many events covering long periods (10-20 months), we compare the coda (the tail) of individual events within a family. This is repeated for all the surrounding stations. The analysis, coda wave interferometry (CWI), is a correlation method that builds on the fact that changes in stress in the edifice lead to changes in seismic velocities, to which coda waves are highly sensitive. By using a repeating source, which implies the same source mechanism and the same path, we can track temporal stress changes in the medium between the source and the receiver. Preliminary results from Katla suggest that by using the repeating glacial events and the coda wave interferometry technique we observe annual seismic velocity changes around the volcano of ca. 0.7%. We find that seismic velocities increase from January through July and decrease from August to December. These changes can be explained by pore-water pressure changes and/or loading and unloading of the overlying glacier. We do not find immediate precursors of an impending eruption at Katla; however, we now have a better understanding of its background seismicity.
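
    To make the CWI measurement concrete, the sketch below implements the stretching variant often used for repeating sources: the coda of a later event is stretched in time until it best matches the reference coda, and the best-fitting stretch factor gives the relative velocity change. This is a minimal Python illustration, assuming two already-aligned, band-passed coda windows (ref, cur) sampled at fs; it is not necessarily the authors' exact processing chain.

        import numpy as np

        def stretch_dv_v(ref, cur, fs, eps_max=0.02, n_trials=201):
            """Stretching-method CWI: find the time-stretch of `cur` that best
            matches `ref`; the stretch estimates dt/t, and dv/v = -dt/t."""
            t = np.arange(len(ref)) / fs
            best_cc, best_eps = -np.inf, 0.0
            for eps in np.linspace(-eps_max, eps_max, n_trials):
                # resample `cur` as if its time axis were stretched by (1 + eps)
                trial = np.interp(t, t * (1.0 + eps), cur)
                cc = np.corrcoef(ref, trial)[0, 1]
                if cc > best_cc:
                    best_cc, best_eps = cc, eps
            return -best_eps, best_cc  # dv/v estimate and its correlation

    A grid of about ±2% with ~200 trials is usually enough for sub-percent changes of the kind reported here; finer resolution can be obtained by refining the grid around the first maximum.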

  16. Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, W. K.; Earthquake Seismology

    2010-12-01

    In the present study, the quality factor of coda waves (Qc) and the source parameters have been estimated for Northeastern India, using digital data from ten local earthquakes recorded between April 2001 and November 2002. Earthquakes with magnitudes from 3.8 to 4.9 have been taken into account. The time-domain coda decay method of a single back-scattering model is used to calculate frequency-dependent values of coda Q (Qc), whereas the source parameters, i.e. seismic moment (Mo), stress drop, source radius (r), radiant energy (Wo), and strain drop, are estimated from the displacement amplitude spectrum of body waves using Brune's model. Qc is estimated at six central frequencies: 1.5, 3.0, 6.0, 9.0, 12.0, and 18.0 Hz. In the present work, the Qc values of local earthquakes are estimated to understand the attenuation characteristics, source parameters, and tectonic activity of the region. Based on criteria of homogeneity in the geological characteristics and the constraints imposed by the distribution of available events, the study region has been divided into three zones: the Tibetan Plateau Zone (TPZ), the Bengal Alluvium and Arakan-Yuma Zone (BAZ), and the Shillong Plateau Zone (SPZ). Qc follows the power law Qc = Q0 (f/f0)^n, where Q0 is the quality factor at the reference frequency f0 (1 Hz) and n is the frequency parameter, which varies from region to region. The mean values of Qc reveal a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz. The average frequency-dependent relationship obtained for Northeastern India is Qc = 198 f^1.035, while this relationship varies from region to region: Tibetan Plateau Zone (TPZ), Qc = 226 f^1.11; Bengal Alluvium and Arakan-Yuma Zone (BAZ), Qc = 301 f^0.87; Shillong Plateau Zone (SPZ), Qc = 126 f^0.85. This indicates that Northeastern India is seismically active; comparing the three zones, the Shillong Plateau Zone (Qc = 126 f^0.85) is seismically the most active, the Bengal Alluvium and Arakan-Yuma Zone the least active, and the Tibetan Plateau Zone intermediate. This study may be useful for seismic hazard assessment. The estimated seismic moments (Mo) range from 5.98×10^20 to 3.88×10^23 dyne-cm. The source radii (r) are confined between 152 and 1750 m, the stress drop ranges between 0.0003×10^3 and 1.04×10^3 bar, the average radiant energy is 82.57×10^18 ergs, and the strain drop ranges from 0.00602×10^-9 to 2.48×10^-9. The estimated stress drop values for NE India are scattered for larger seismic moments, whereas they show a more systematic behavior for smaller seismic moments. The estimated source parameters are in agreement with previous work in this type of tectonic setting. Key words: coda wave, seismic source parameters, lapse time, single back-scattering model, Brune's model, stress drop, Northeast India.
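
    The time-domain coda decay measurement used here can be summarized in a few lines. Under the single back-scattering model, the band-passed coda envelope decays as A(t) ≈ S · t^-1 · exp(-π f t / Qc), so ln(A(t)·t) is linear in lapse time t with slope -π f / Qc. The Python sketch below illustrates this fit for one trace and one center frequency; the window limits, filter order, and relative bandwidth are illustrative choices, not the authors' exact parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def coda_qc(trace, fs, fc, t_origin, t1, t2, rel_bw=0.5):
            """Single back-scattering coda Q at center frequency fc (Hz):
            fit ln(envelope * lapse_time) vs lapse time; slope = -pi*fc/Qc."""
            lo, hi = fc * (1 - rel_bw / 2), fc * (1 + rel_bw / 2)
            b, a = butter(4, [lo, hi], btype="band", fs=fs)
            env = np.abs(hilbert(filtfilt(b, a, trace)))   # coda envelope
            t = np.arange(len(trace)) / fs - t_origin      # lapse time from origin
            sel = (t >= t1) & (t <= t2) & (env > 0)        # coda window only
            slope, _ = np.polyfit(t[sel], np.log(env[sel] * t[sel]), 1)
            return -np.pi * fc / slope

    Repeating the fit at each central frequency and regressing ln Qc on ln f then yields the Q0 and n of the power law quoted above.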

  17. Studies of earthquakes stress drops, seismic scattering, and dynamic triggering in North America

    NASA Astrophysics Data System (ADS)

    Escudero Ayala, Christian Rene

    I use the Relative Source Time Function (RSTF) method to determine the source properties of earthquakes within southeastern Alaska-northwestern Canada in the first part of the project, and of earthquakes on the Denali fault in the second part. I deconvolve a small event's P-arrival signal from that of a larger event by the following procedure: select arrivals with a tapered cosine window, fast Fourier transform to obtain the spectrum, apply the water-level deconvolution technique, and bandpass filter before inverse transforming the result to obtain the RSTF. I compare the source processes of earthquakes within the area to determine stress drop differences and their relation to the tectonic setting of the earthquake locations. The results are consistent with previous work: stress drop is independent of moment, implying self-similarity; stress drop correlates with tectonic regime; stress drop is independent of depth; stress drop depends on focal mechanism, with strike-slip events presenting larger stress drops; and stress drop decreases as a function of time. I determine seismic wave attenuation in the central-western United States using coda waves. I select approximately 40 moderate earthquakes (magnitude between 5.5 and 6.5) located along the California-Baja California, California-Nevada, Eastern Idaho, Gulf of California, Hebgen Lake, Montana, Nevada, New Mexico, off coast of Northern California, off coast of Oregon, southern California, southern Illinois, Vancouver Island, Washington, and Wyoming regions. These events were recorded by the EarthScope transportable array (TA) network from 2005 to 2009. We obtain the data from the Incorporated Research Institutions for Seismology (IRIS). In this study we implement a method based on the assumption that coda waves are single back-scattered waves from randomly distributed heterogeneities to calculate the coda Q. The frequencies studied lie between 1 and 15 Hz. The scattering attenuation is calculated for frequency bands centered at 1.5, 3, 5, 7.5, 10.5, and 13.5 Hz. Coda Q correlates strongly with the tectonic and geologic setting, as well as with crustal thickness. I analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify the effects of transient stresses at teleseismic distances. I use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw >= 7) earthquakes, I first identify local earthquakes that occurred before and after the mainshocks. I then group the local earthquakes within a cluster radius of 75 to 200 km. I obtain statistics based on characteristics of both mainshocks and local earthquake clusters, such as cluster-mainshock azimuth, mainshock focal mechanism, and the position of local earthquake clusters within the MASZ. Based on the lateral variations of dip along the subducted oceanic plate, I divide the Mexican subduction zone into four segments. I then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify increases or decreases in local seismicity associated with the passage of surface waves from distant large earthquakes. I identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as a decrease in seismicity in some cases. I find no dependence of seismicity changes on mainshock focal mechanism.
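
    The deconvolution step described above is easy to prototype. The sketch below follows the stated pipeline (cosine taper, FFT, water-level spectral division, band-limit, inverse FFT) in Python with NumPy/SciPy; the water-level fraction and corner frequency are illustrative assumptions, not the values used in the study, and the sharp cutoff is a crude stand-in for a proper bandpass filter.

        import numpy as np
        from scipy.signal.windows import tukey

        def rstf_water_level(big, small, fs, water=0.01, f_hi=5.0):
            """RSTF by water-level deconvolution of an EGF (`small`) from a
            larger, co-located event (`big`); both are P windows at one station."""
            big = big * tukey(len(big), 0.1)            # tapered cosine window
            small = small * tukey(len(small), 0.1)
            n = 1 << int(np.ceil(np.log2(2 * max(len(big), len(small)))))
            B, S = np.fft.rfft(big, n), np.fft.rfft(small, n)
            p = np.abs(S) ** 2
            p = np.maximum(p, water * p.max())          # water level fills spectral holes
            R = B * np.conj(S) / p
            f = np.fft.rfftfreq(n, 1.0 / fs)
            R[f > f_hi] = 0.0                           # crude low-pass before inverse FFT
            return np.fft.irfft(R, n)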

  18. Python based integration of GEM detector electronics with JET data acquisition system

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dalley, Simon; Hogben, Colin; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek; Shumack, Amy

    2014-11-01

    This paper presents a system integrating the dedicated measurement and control electronics for Gas Electron Multiplier (GEM) detectors with the Control and Data Acquisition System (CODAS) at the JET facility in Culham, England. The presented system performs the high-level procedures necessary to calibrate the GEM detector and to protect it against possible malfunctions or dangerous changes in operating conditions. The system also allows CODAS to control the GEM detectors: set their parameters, check their state, start the plasma measurement, and read back the results. The system has been implemented in the Python language, using advanced libraries for network communication protocols, object-based hardware management, and data processing.

  19. Instantaneous phase estimation to measure weak velocity variations: application to noise correlation on seismic data at the exploration scale

    NASA Astrophysics Data System (ADS)

    Corciulo, M.; Roux, P.; Campillo, M.; Dubucq, D.

    2010-12-01

    Passive imaging from noise cross-correlation is a well-established analysis at continental and regional scales, whereas its use at the local scale for seismic exploration purposes is still uncertain. The development of passive imaging by cross-correlation analysis is based on the extraction of the Green's function from seismic noise data. For a field that is completely random in time and space, cross-correlation permits retrieval of the complete Green's function, whatever the complexity of the medium. At the exploration scale and at frequencies above 2 Hz, however, the noise sources are not ideally distributed around the stations, which strongly affects the extraction of the direct arrivals from the noise cross-correlation process. To overcome this problem, the coda waves extracted from noise correlation can be useful. Coda waves travel long, scattered paths that sample the medium in different ways, such that they become sensitive to weak velocity variations without depending on the noise source distribution. Indeed, scatterers in the medium behave as a set of secondary noise sources which randomize the spatial distribution of the noise sources contributing to the coda waves in the correlation process. We developed a new technique to measure weak velocity changes based on the computation of the local phase variations (instantaneous phase variation, or IPV) of the cross-correlated signals. This newly developed technique builds on the doublet and stretching techniques classically used to monitor weak velocity variations from coda waves. We apply IPV to data acquired in North America (Canada) on a 1-km-square seismic network of 397 stations. The data used to study temporal variations are cross-correlated signals computed from 10-minute ambient-noise records in the 2-5 Hz frequency band. As the data set was acquired over five days, about 660 files are processed to perform a complete temporal analysis for each station pair. IPV permits estimation of the phase shift over the whole signal length without any assumption on the medium velocity. The instantaneous phase is computed using the Hilbert transform of the signal. For each station pair, we measure the phase difference between successive correlation functions calculated from 10 minutes of ambient noise. We then fit the instantaneous phase shift with a first-order polynomial; the velocity variation corresponds to the slope of this fit. Compared to other techniques, the advantage of IPV is that it is a very fast procedure which efficiently measures velocity variations on large data sets. Both experimental results and numerical tests on synthetic signals will be presented to assess the reliability of the IPV technique, with comparison to the doublet and stretching methods.
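
    The IPV measurement itself reduces to a Hilbert transform and a line fit. A minimal Python sketch, assuming two correlation functions (ref, cur) band-passed around a center frequency f0 (e.g., 3.5 Hz for the 2-5 Hz band used here); converting the phase-shift slope to dv/v through a single center frequency is an illustrative simplification of the method described above.

        import numpy as np
        from scipy.signal import hilbert

        def ipv_dv_v(ref, cur, fs, f0):
            """Instantaneous Phase Variation: fit the phase difference between
            two correlation functions with a first-order polynomial; the slope
            of the implied time shift gives dv/v (dt/t = -dv/v)."""
            phi_ref = np.unwrap(np.angle(hilbert(ref)))
            phi_cur = np.unwrap(np.angle(hilbert(cur)))
            t = np.arange(len(ref)) / fs
            slope, _ = np.polyfit(t, phi_cur - phi_ref, 1)   # dphi(t) ~ slope * t
            return -slope / (2.0 * np.pi * f0)               # dphi = 2*pi*f0*dt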

  20. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. The time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A·Ω̇^α, where the dot superscript represents the time derivative and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. The rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all resembling accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements, and seismicity. The use of seismic coda, seismic amplitude-derived energy release, and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques for applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too-early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope around the linear least-squares fit, at a specified level of confidence, and an estimated rate at the time of failure.
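
    For the common case α = 2, the graphical technique reduces to a two-parameter line fit: 1/Ω̇ declines linearly with time, and its intercept with the time axis forecasts the failure time. A minimal Python illustration under that assumption, for a clean series of precursor rates (e.g., daily RSAM sums or cumulative coda lengths differenced in time):

        import numpy as np

        def ffm_failure_time(t, rate):
            """Inverse-rate FFM forecast for alpha = 2: fit 1/rate = m*t + c
            and extrapolate to the zero crossing, t_f = -c/m."""
            inv_rate = 1.0 / np.asarray(rate, dtype=float)
            m, c = np.polyfit(t, inv_rate, 1)
            if m >= 0:
                raise ValueError("rate is not accelerating; no finite failure time")
            return -c / m

    For α ≠ 2 the inverse-rate plot curves, and the linearized least-squares technique described above iterates a similar fit on a transformed series.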

  1. About three cases of ulceroglandular tularemia, is this the re-emergence of Francisella tularensis in Belgium?

    PubMed

    Dupont, E; Van Eeckhoudt, S; Thissen, X; Ausselet, N; Fretin, D; Stefanescu, I; Glupczynski, Y; Delaere, B

    2015-10-01

    Tularemia is a zoonosis caused by Francisella tularensis that can be transmitted to humans in several ways and cause different clinical manifestations. We report three clinical cases of tularemia with ulceroglandular presentation in young males, acquired during outdoor activities in Southern Belgium. Confirmation of the diagnosis was established by serology. Only three cases of tularemia were reported in Belgium between 1950 and 2012 by the National Reference Laboratory CODA-CERVA, but the re-emergence of tularemia is established in several European countries, and F. tularensis is also well known to be present in animal reservoirs and vectors in Belgium. The diagnosis of tularemia has to be considered in cases of suggestive clinical presentation associated with epidemiological risk factors.

  2. Using Earthquake Location and Coda Attenuation Analysis to Explore Shallow Structures Above the Socorro Magma Body, New Mexico

    NASA Astrophysics Data System (ADS)

    Schmidt, J. P.; Bilek, S. L.; Worthington, L. L.; Schmandt, B.; Aster, R. C.

    2017-12-01

    The Socorro Magma Body (SMB) is a thin, sill-like intrusion with its top at 19 km depth, covering approximately 3400 km^2 within the Rio Grande Rift. InSAR studies show crustal uplift patterns linked to SMB inflation, with deformation rates of 2.5 mm/yr in the area of maximum uplift and some peripheral subsidence. Our understanding of the emplacement history and the shallow structure above the SMB is limited. We use a large seismic deployment to explore seismicity and crustal attenuation in the SMB region, focusing on the area of highest observed uplift to investigate the possible existence of fluid/magma in the upper crust. We would expect to see shallower earthquakes and/or higher attenuation if high heat flow, fluids, or magma are present in the upper crust. Over 800 short-period vertical-component geophones situated above the northern portion of the SMB were deployed for two weeks in 2015. These data are combined with other broadband and short-period seismic stations to detect and locate earthquakes as well as to estimate seismic attenuation. We use phase arrivals from the full dataset to relocate a set of 33 local/regional earthquakes recorded during the deployment. We also measure amplitude decay after the S-wave arrival to estimate coda attenuation caused by scattering of seismic waves and anelastic processes. Coda attenuation is estimated using the single back-scattering method described by Aki and Chouet (1975), filtering the seismograms at 6, 9, and 12 Hz center frequencies. Earthquakes occurred at 2-13 km depth during the deployment, but no spatial patterns linked with the high-uplift region were observed over this short duration. Attenuation results for this deployment suggest Q values ranging from 130 to 2000 and averaging around 290, comparable to Q estimates from other studies of the western US. With our dense station coverage, we explore attenuation over smaller scales and find higher attenuation for stations in the area of maximum uplift relative to stations outside it, which could indicate upper-crustal heterogeneities and shallow processes above the magma body in this area.

  3. Constraining the depth of the time-lapse changes of P- and S-wave velocities in the first year after the 2011 Tohoku earthquake, Japan

    NASA Astrophysics Data System (ADS)

    Sawazaki, K.; Kimura, H.; Uchida, N.; Takagi, R.; Snieder, R.

    2012-12-01

    Using deconvolutions of vertical-array KiK-net (nationwide strong-motion seismograph network in Japan) records and applying coda wave interferometry (CWI) to Hi-net (high-sensitivity seismograph network in Japan; collocated with the borehole receivers of KiK-net) borehole records, we constrain the depths responsible for the medium changes associated with the 2011 Tohoku earthquake (MW 9.0). There is a systematic reduction in VS of up to 6% in the shallow subsurface that experienced strong dynamic strain during the Tohoku earthquake. In contrast, both positive and negative changes are observed for VP, with magnitudes of less than 2% in both directions. We propose that this discrepancy between the changes in VS and VP is explained by the behavior of the shear and bulk moduli of a porous medium exposed to an increase of excess pore-fluid pressure. At many stations, VS recovers in proportion to the logarithm of the lapse time after the mainshock, and mostly returns within one year to the reference value obtained before the mainshock. However, some stations that were exposed to additional strong motions from aftershocks and/or other earthquakes take much longer to recover. The CWI technique applied to the horizontal components of the S coda reveals a velocity reduction of up to 0.2% widely distributed along the coastline of northeastern Japan. For the vertical component of the P coda, however, the velocity change is mostly less than 0.1% in the same region. From a single-scattering model including P-S and S-P conversion scattering, we verify that both components are sensitive to VS changes around the source, but the vertical component of the P coda is sensitive to VP changes around the receiver. Consequently, the difference in velocity changes revealed by the horizontal and vertical components represents the difference between the VS and VP changes near the receiver. In conclusion, the VS reduction ratio in the deep lithosphere is smaller than that in the shallow subsurface by one to two orders of magnitude.

  4. A study on off-fault aftershock pattern at N-Adria microplate

    NASA Astrophysics Data System (ADS)

    Bressan, Gianni; Barnaba, Carla; Magrin, Andrea; Rossi, Giuliana

    2018-03-01

    The spatial features of the aftershock sequences triggered by three moderate-magnitude events, with coda-duration magnitudes 4.1, 5.1, and 5.6, which occurred in Northeastern Italy and Western Slovenia, were investigated. The fractal dimension and the orientations of the planar features fitting the hypocentral data have been inferred. The spatial organization develops in two temporal phases. The first phase is characterized by a decrease of the fractal dimension and by vertically oriented planes fitting the hypocentral foci. The second phase is marked by an increase of the fractal dimension and by the activation of different planes with more widespread orientations. The aftershock temporal distribution is analysed with a model based on a static fatigue process. The process is favoured by the decrease of the overburden pressure, sharp variations of the mechanical properties of the medium, and the unclamping effect resulting from positive normal stress changes caused by the mainshock stress step.

  5. A model for attenuation and scattering in the Earth's crust

    NASA Astrophysics Data System (ADS)

    Toksöz, M. Nafi; Dainty, Anton M.; Reiter, Edmund; Wu, Ru-Shan

    1988-03-01

    The mechanisms contributing to the attenuation of earthquake ground motion in the distance range of 10 to 200 km are studied with the aid of laboratory data, coda waves, Rg attenuation, strong-motion attenuation measurements in the northeastern United States and Canada, and theoretical models. The frequency range 1-10 Hz has been studied. The relative contributions to attenuation of anelasticity of crustal rocks (constant Q), fluid flow, and scattering are evaluated. Scattering is found to be strong, with an albedo B0 = 0.8-0.9 and a scattering extinction length of 17-32 km. The albedo is defined as the ratio of the total extinction length to the scattering extinction length. The Rg results indicate that Q increases with depth in the upper kilometer or two of the crust, at least in New England. Coda Q appears to be equivalent to intrinsic (anelastic) Q and indicates that this Q increases with frequency as Q = Q0 f^n, where n is in the range 0.2-0.9. The intrinsic attenuation in the crust can be explained by a high constant Q (500 ≤ Q0 ≤ 2000) and a frequency-dependent mechanism most likely due to fluid effects in rocks and cracks. A fluid-flow attenuation model gives a frequency dependence (Q ≃ Q0 f^0.5) similar to those determined from the analysis of coda waves of regional seismograms. Q is low near the surface and high in the body of the crust.

  6. The Construct Validity of the CODA and Repeated Sprint Ability Tests in Football Referees.

    PubMed

    Riiser, Amund; Andersen, Vidar; Castagna, Carlo; Arne Pettersen, Svein; Saeterbakken, Atle; Froyd, Christian; Ylvisaker, Einar; Naess Kjosnes, Terje; Fusche Moe, Vegard

    2018-06-14

    As of 2017, the international football federation introduced the change-of-direction ability test (CODA) and the 5×30 m sprint test for assistant referees (ARs), and continued the 6×40 m sprint test for field referees (FRs), as mandatory tests. The aim of this study was to evaluate the association between performance in these tests and running performance during matches at the top level in Norway. The study included 9 FRs refereeing 21 matches and 19 ARs observed 53 times by a local positioning system at three stadiums during the 2016 season. Running performance during matches was assessed by high-intensity running (HIR) distance, HIR counts, acceleration distance, and acceleration counts. For the ARs, there was no association between the CODA test and high-intensity running or acceleration (P > 0.05). However, the 5×30 m sprint test was associated with HIR count during the entire match (E -12.9, 95% CI -25.4 to -0.4) and the 5-min period with the highest HIR count (E -2.02, 95% CI -3.55 to -0.49). For the FRs, the 6×40 m fitness test was not associated with running performance during matches (P > 0.05). In conclusion, performance in these tests had weak or no associations with accelerations or HIR in top Norwegian referees during match play.

  7. Detectability of temporal changes in fine structures near the inner core boundary beneath the eastern hemisphere

    NASA Astrophysics Data System (ADS)

    Yu, Wen-che

    2016-04-01

    The inner core boundary (ICB), where melting and solidification of the core occur, plays a crucial role in the dynamics of the Earth's interior. To probe temporal changes near the ICB beneath the eastern hemisphere, I analyze differential times of PKiKP (dt(PKiKP)), double differential times of PKiKP-PKPdf, and PKiKP coda waves from repeating earthquakes in the Southwest Pacific subduction zones. Most PKiKP differential times are within ±30 ms, comparable to inherent travel time uncertainties due to inter-event separations, and suggest no systematic changes as a function of calendar time. Double differential times measured between PKiKP codas and PKiKP main phases show promising temporal changes, with absolute values of time shifts of >50 ms for some observations. However, there are discrepancies among results from different seismographs in the same calendar time window. Negligible changes in PKiKP times, combined with changes in PKiKP coda wave times on 5 year timescales, favor a smooth inner core boundary with fine-scale structures present in the upper inner core. Differential times of PKiKP can be interpreted in the context of either melting based on translational convection, or growth based on thermochemical mantle-inner core coupling. Small dt(PKiKP) values with inherent uncertainties do not have sufficient resolution to distinguish the resultant longitudinal (melting) and latitudinal (growth) dependencies predicted on the basis of the two models on 5 year timescales.

  8. Introducing CoDa (Cosmic Dawn): Radiation-Hydrodynamics of Galaxy Formation in the Early Universe

    NASA Astrophysics Data System (ADS)

    Ocvirk, Pierre; Gillet, Nicolas; Shapiro, Paul; Aubert, Dominique; Iliev, Ilian; Romain, Teyssier; Yepes, Gustavo; Choi, Jun-hwan; Sullivan, David; Knebe, Alexander; Gottloeber, Stefan; D'Aloisio, Anson; Park, Hyunbae; Hoffman, Yehuda

    2015-08-01

    CoDa (Cosmic Dawn) is the largest fully coupled radiation-hydrodynamics simulation of the reionization of the local Universe to date. It was performed using RAMSES-CUDATON running on 8192 nodes (i.e. 8192 GPUs) of the Titan supercomputer at Oak Ridge National Laboratory, simulating a 64 h^-1 Mpc box down to z = 4.23. In this simulation, reionization proceeds self-consistently, driven by stellar radiation. We compare the simulation's reionization history, ionizing flux density, cosmic star formation history, and CMB Thomson scattering optical depth with their observational values. Luminosity functions are also in rather good agreement with high-redshift observations, although very bright objects (MAB1600 < -21) are overabundant in CoDa. We investigate the evolution of the intergalactic medium and find that gas filaments present a sheathed structure, with a hot envelope surrounding a cooler core. They are, however, not able to self-shield, while regions denser than 10^-4.5 H atoms per comoving h^-3 cm^3 are. Haloes below M ~ 3×10^9 M⊙ are severely affected by the expanding, rising UV background: their ISM is quickly photo-heated to temperatures above our star formation threshold, and they therefore stop forming stars after local reionization has occurred. Overall, haloes between 10^10 and 10^11 M⊙ dominate the star formation budget of the box for most of the Epoch of Reionization. Several additional studies will follow, looking for instance at environmental effects on galaxy properties and the regimes of accretion.

  9. Crustal structure of the Alps as seen by attenuation tomography

    NASA Astrophysics Data System (ADS)

    Mayor, Jessie; Calvet, Marie; Margerin, Ludovic; Vanderhaeghe, Olivier; Traversa, Paola

    2016-04-01

    We develop a simple tomographic approach exploiting the decay rate of coda waves to map the absorption properties of the crust in a region delimited approximately by the Rhine Graben to the North, the Apennines to the South, the Massif Central to the West, and the Dinarides to the East. Our dataset comprises 40 000 coda records of about 2000 weak to moderate crustal earthquakes, with magnitudes ranging from 2.8 to 6, recorded by broad-band, accelerometric, and short-period stations. After proper choice of a coda window minimizing the effects of variable epicentral distances, we measure the coda quality factor Qc in five non-overlapping frequency windows covering the 1-32 Hz band for all available source-station pairs. These measurements are subsequently converted into maps of the absorption quality factor (Qi) using a linearized, approximate relation between Qc and Qi. In practice, the following procedure is applied in each frequency band: (1) we divide the target region into 40 × 40 km cells; (2) for each source-station pair, we assign the measured Qc value to each pixel intercepted by the direct ray path; (3) the results are averaged over all paths and subsequently smoothed with a 3 × 3 pixel moving window, as in the sketch below. Our approach is consistent with the high sensitivity of Qc to the value of Qi between source and station. The tomography reveals strong lateral variations of absorption with length scales ranging from 100 km to 1000 km. At low frequency (~1 Hz), the correlation with surface geology is clear, Cenozoic and Mesozoic sedimentary basins (resp. crystalline massifs) being recognized as high- (resp. low-) absorption regions. Furthermore, the Qi map delineates finer geological features such as the Ivrea Body, the Rhône Valley, and felsic intrusions in the central Alps. At high frequency (>16 Hz), only the thickest Cenozoic sedimentary deposits show up as high-attenuation regions, and a north/south dichotomy is apparent in the absorption structure. The limit between low-attenuation regions to the North and high-attenuation regions to the South correlates geographically with the location of the Periadriatic Lineament (PL), a major late-Alpine crustal- to lithospheric-scale structure. Furthermore, the attenuation structure seems to prolong the PL to the West along a line marked by large historical earthquakes. The Apennines orogenic belt exhibits a distinct frequency behavior, with high attenuation at low frequency and low attenuation at high frequency. The low-frequency absorption may be explained by the relatively thick cover of Cenozoic sedimentary materials, as well as by shallow geothermal activity. We hypothesize that the frequency dependence of the attenuation structure, in particular in the Apennines, is caused by a change of the wavefield composition which accentuates the sensitivity of the coda to the deeper parts of the medium as the frequency increases.
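
    The three-step mapping procedure lends itself to a very compact implementation. The Python sketch below is a simplified version under stated assumptions (straight rays, a flat 2-D grid in km, ray sampling fine enough that every crossed cell is hit): each path's Qc is assigned to the cells its ray crosses, then averaged per cell; the final 3 × 3 smoothing pass is left out for brevity.

        import numpy as np

        def qc_back_projection(paths, nx, ny, cell_km=40.0):
            """paths: iterable of (xs, ys, xr, yr, qc) with source/receiver
            coordinates in km relative to the grid origin. Returns the
            per-cell mean Qc map (NaN where no ray passes)."""
            acc = np.zeros((ny, nx))
            hits = np.zeros((ny, nx))
            for xs, ys, xr, yr, qc in paths:
                cells = set()
                for s in np.linspace(0.0, 1.0, 256):     # sample the straight ray
                    j = int((xs + s * (xr - xs)) // cell_km)
                    i = int((ys + s * (yr - ys)) // cell_km)
                    if 0 <= i < ny and 0 <= j < nx:
                        cells.add((i, j))                # count each cell once per path
                for i, j in cells:
                    acc[i, j] += qc
                    hits[i, j] += 1
            return np.where(hits > 0, acc / np.maximum(hits, 1), np.nan)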

  10. Influence of syllable structure on L2 auditory word learning.

    PubMed

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation in speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords of the congruent type and the pseudowords with coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  11. Frequency dependent Lg attenuation in south-central Alaska

    USGS Publications Warehouse

    McNamara, D.E.

    2000-01-01

    The characteristics of seismic energy attenuation are determined using high-frequency Lg waves from 27 crustal earthquakes in south-central Alaska. Lg time-domain amplitudes are measured in five pass-bands and inverted to determine a frequency-dependent quality factor, Q(f), model for south-central Alaska. The inversion in this study yields the frequency-dependent quality factor in the form of a power law: Q(f) = Q0 f^η = 220(±30) f^0.66(±0.09) (0.75 ≤ f ≤ 12 Hz). The results from this study are remarkably consistent with frequency-dependent quality factor estimates, using local S-wave coda, in south-central Alaska. The consistency between S-coda Q(f) and Lg Q(f) enables constraints to be placed on the mechanism of crustal attenuation in south-central Alaska. For the range of frequencies considered in this study, both scattering and intrinsic attenuation mechanisms likely play an equal role.
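
    Fitting such a power law from band-averaged Q estimates is a one-line regression in log-log space. A minimal Python sketch with purely illustrative synthetic numbers (not the study's measurements):

        import numpy as np

        def q_power_law(freqs, q_values):
            """Fit Q(f) = Q0 * f**eta by least squares on log Q vs log f."""
            eta, ln_q0 = np.polyfit(np.log(freqs), np.log(q_values), 1)
            return np.exp(ln_q0), eta

        # illustrative band-center frequencies and Q estimates only
        f = np.array([0.75, 1.5, 3.0, 6.0, 12.0])
        q = np.array([180.0, 290.0, 450.0, 720.0, 1150.0])
        q0, eta = q_power_law(f, q)   # about 219 and 0.67 for these synthetic values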

  12. Using Co-located Rotational and Translational Ground-Motion Sensors to Characterize Seismic Scattering in the P-Wave Coda

    NASA Astrophysics Data System (ADS)

    Bartrand, J.; Abbott, R. E.

    2017-12-01

    We present data and analysis from a seismic data collection at the site of a historical underground nuclear explosion at Yucca Flat, a sedimentary basin on the Nevada National Security Site, USA. The data presented here consist of active-source, six-degree-of-freedom seismic signals. The translational signals were collected with a Nanometrics Trillium Compact Posthole seismometer and the rotational signals with an ATA Proto-SMHD, a prototype rotational ground-motion sensor. The source for the experiment was the Seismic Hammer (a 13,000 kg weight drop), deployed on two-kilometer, orthogonal arms centered on the site of the nuclear explosion. By leveraging the fact that compressional waves have no rotational component, we generated a map of subsurface scattering and compared the results to known subsurface features. To determine scattering intensity, signals were cut to include only the P-wave and its coda. The ratio of the time-domain signal magnitudes of angular velocity and translational acceleration was sectioned into three time windows within the coda and averaged within each window. Preliminary results indicate an increased rotation/translation ratio in the vicinity of the explosion-generated chimney, suggesting mode conversion of P-wave energy to S-wave energy at that location. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
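
    The windowing-and-ratio step is straightforward to express in code. A minimal Python sketch, assuming co-located angular-velocity and acceleration traces already cut to start at the P arrival; the window count and variable names are illustrative, not the authors' processing code.

        import numpy as np

        def rotation_translation_ratio(rot, acc, fs, coda_end_s, n_win=3):
            """Mean |angular velocity| / |translational acceleration| in
            consecutive windows spanning the P wave and its coda."""
            n = min(len(rot), len(acc), int(coda_end_s * fs))
            ratio = np.abs(rot[:n]) / np.maximum(np.abs(acc[:n]), 1e-12)
            edges = np.linspace(0, n, n_win + 1).astype(int)
            return [ratio[a:b].mean() for a, b in zip(edges[:-1], edges[1:])]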

  13. SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R E; Mayeda, K; Walter, W R

    2008-07-08

    The objectives of this study are to improve low-magnitude (concentrating on M2.5-5) regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately our knowledge at small magnitudes (i.e., mb < ~4.0) is poorly resolved, and source scaling remains a subject of ongoing debate in the earthquake seismology community. Recently there have been a number of empirical studies suggesting that the scaling of micro-earthquakes is non-self-similar, yet there are an equal number of compelling studies that suggest otherwise. It is not clear whether different studies obtain different results because they analyze different earthquakes or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies conducted half-way around the world in inter-plate regions. We investigate earthquake sources and scaling in different tectonic settings, comparing direct-wave and coda-wave analysis methods that both make use of empirical Green's function (EGF) earthquakes to remove path effects. Analysis of locally recorded direct waves is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But finding well-recorded earthquakes with 'perfect' EGF events for direct-wave analysis is difficult, which limits the number of earthquakes that can be studied. We begin with closely located, well-correlated earthquakes. We use a multi-taper method to obtain time-domain source time functions by frequency division. We only accept an earthquake and EGF pair if they are able to produce a clear, time-domain source pulse. We fit the spectral ratios and perform a grid search about the preferred parameters to ensure the fits are well constrained. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We analyze three clusters of aftershocks from the well-recorded sequence following the M5 Au Sable Forks, NY, earthquake to obtain some of the first accurate source parameters for small earthquakes in eastern North America. Each cluster contains an M~2 event, and two contain M~3 events, as well as smaller aftershocks. We find that the corner frequencies and stress drops are high (averaging 100 MPa), confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We also demonstrate that a scaling breakdown suggested by earlier work is simply an artifact of their more band-limited data. We calculate radiated energy and find that the ratio of energy to seismic moment is also high, around 10^-4. We estimate source parameters for the M5 mainshock using similar methods, but these results are less certain because we do not have an EGF event that meets our preferred criteria. The stress drop and energy/moment ratio for the mainshock are slightly higher than for the aftershocks. Our improved and simplified coda-wave analysis method uses spectral ratios (as for the direct waves) but relies on the averaging nature of the coda waves to use EGF events that do not meet the strict similarity criteria required for the direct-wave analysis. We have applied the coda-wave spectral-ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks, and also to several sequences in Italy with M~6 mainshocks. The Italian earthquakes have higher stress drops than the Hector Mine sequence, but lower than Au Sable Forks. These results show a departure from self-similarity, consistent with previous studies using similar regional datasets: the larger earthquakes have higher stress drops and energy/moment ratios. We perform a preliminary comparison of the two methods using the M5 Au Sable Forks earthquake. Both methods give very consistent results, and we are applying the comparison to further events.
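
    The core of both the direct-wave and coda-wave variants is fitting an observed spectral ratio of a target event over its EGF with the ratio of two omega-square (Brune) source spectra. The Python sketch below illustrates such a fit with a simple grid search; the grid, plateau estimate, and names are illustrative assumptions, not the study's implementation.

        import numpy as np

        def brune_ratio(f, moment_ratio, fc_big, fc_small):
            """Ratio of two omega-square source spectra (large event / EGF)."""
            return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_big) ** 2)

        def fit_spectral_ratio(f, obs_ratio, fc_grid):
            """Grid-search corner frequencies fc_big < fc_small that minimize
            the log-domain misfit to an observed spectral ratio."""
            m0_ratio = np.median(obs_ratio[:max(3, len(f) // 10)])  # crude low-f plateau
            best = (np.inf, None, None)
            for fc_big in fc_grid:
                for fc_small in fc_grid:
                    if fc_small <= fc_big:
                        continue
                    resid = np.log(obs_ratio) - np.log(brune_ratio(f, m0_ratio, fc_big, fc_small))
                    misfit = np.sum(resid ** 2)
                    if misfit < best[0]:
                        best = (misfit, fc_big, fc_small)
            return best[1], best[2], m0_ratio

    Given a corner frequency and an independent seismic moment, stress drop then follows from the standard Brune relation, in which stress drop scales as M0 times the cube of the corner frequency.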

  14. Position paper: appropriate use of pharmacotherapeutic agents by the orofacial pain dentist.

    PubMed

    Heir, Gary M; Haddox, J David; Crandall, Jeffrey; Eliav, Eli; Radford, Steven Graff; Schwartz, Anthony; Jaeger, Bernadette; Ganzberg, Steven; Aquino, Carlos M; Benoliel, Rafael

    2011-01-01

    Orofacial Pain Dentistry is concerned with the prevention, evaluation, diagnosis, treatment, and management of persistent and recurrent orofacial pain disorders. The American Dental Association, through the Commission on Dental Accreditation (CODA), now recognizes Orofacial Pain as an area of advanced education in Dentistry. It is mandated by CODA that postgraduate orofacial pain programs be designed to provide advanced knowledge and skills beyond those of the standard curriculum leading to the DDS or DMD degrees. Postgraduate programs in orofacial pain must include specific curricular content to comply with CODA standards. The intent of CODA standards is to assure that training programs develop specific educational goals and objectives that describe the student/resident's expected knowledge and skills upon successful completion of the program. A standardized core curriculum, required for accreditation of dental orofacial pain training programs, has now been adopted. Among the various topics mandated in the curriculum are pharmacology and, specifically, pharmacotherapeutics. The American Academy of Orofacial Pain (AAOP) recommends, and the American Board of Orofacial Pain (ABOP) requires, that the minimally competent orofacial pain dentist be knowledgeable in the management of orofacial pain conditions using medications when indicated. Basic knowledge of the appropriate use of pharmacotherapeutics is essential for the orofacial pain dentist and, therefore, constitutes part of the examination specifications of the ABOP. The minimally competent orofacial pain clinician must demonstrate knowledge, diagnostic skills, and treatment expertise in many areas, such as musculoskeletal, neurovascular, and neuropathic pain syndromes; sleep disorders related to orofacial pain; orofacial dystonias; and intraoral, intracranial, extracranial, and systemic disorders that cause orofacial pain or dysfunction. The orofacial pain dentist has the responsibility to diagnose and treat patients in pain that is often chronic, multifactorial, and complex. Failure to understand pain mechanisms can lead to inaccurate diagnoses and ineffective, delayed, or harmful treatment. It is the responsibility of the orofacial pain dentist to accurately diagnose the cause(s) of the pain and decide if treatment should be dentally, medically, or psychologically oriented, or if optimal management requires a combination of all three treatment approaches. Management may consist of a number of interdisciplinary modalities including, e.g., physical medicine, behavioral medicine, and pharmacology or, in rare instances, surgical interventions. Among the essential armamentarium is the knowledge and proper use of pharmacologic agents.

  15. Coda Wave Attenuation Characteristics for North Anatolian Fault Zone, Turkey

    NASA Astrophysics Data System (ADS)

    Sertcelik, Fadime; Guleroglu, Mehmet

    2017-10-01

    The North Anatolian Fault Zone, along which large earthquakes have in the past migrated regularly from east to west, is one of the most active faults in the world. The purpose of this study is to estimate the coda wave quality factor (Qc) along the fault and for each of five sub-regions determined according to the fault ruptures of these large earthquakes. 978 records have been analyzed at 1.5, 3, 6, 9, 12, and 18 Hz using the single back-scattering method. Along the fault, the variation of Qc with lapse time is Qc = (136±25) f^(0.96±0.027), Qc = (208±22) f^(0.85±0.02), and Qc = (307±28) f^(0.72±0.025) at 20, 30, and 40 s lapse times, respectively. The estimated average frequency-dependent quality factors over all lapse times are Qc(f) = (189±26) f^(0.86±0.02) for the Karliova-Tokat region, Qc(f) = (216±19) f^(0.76±0.018) for the Tokat-Çorum region, Qc(f) = (232±18) f^(0.76±0.019) for the Çorum-Adapazari region, Qc(f) = (280±28) f^(0.79±0.021) for the Adapazari-Yalova region, and Qc(f) = (252±26) f^(0.81±0.022) for the Yalova-Gulf of Saros region. Over all lapse times and frequencies, the coda wave quality factor in the study area is Qc(f) = (206±15) f^(0.85±0.012). The largest change of Qc with lapse time is found in the Yalova-Saros region, which may indicate that the degree of heterogeneity decreases more rapidly toward the deep crust there than in the other sub-regions. Moreover, the highest Qc is calculated between Adapazari and Yalova, interpreted as a result of the seismic energy released by the 1999 Kocaeli earthquake. No causal relationship could be established between the regional variation of Qc with frequency and lapse time and the migration of the large earthquakes. These results are interpreted to indicate that the attenuation mechanism is affected both by regional heterogeneity and by whether the fault structure consists of a single strand or multiple strands.

  16. The Trickster, the Bad Nigga, and the New Urban Ethnography: An Initial Report and Editorial Coda

    ERIC Educational Resources Information Center

    Milner, Richard B.

    1972-01-01

    The author first describes a new way of doing ethnographic research, contrasting it with the prevalent academic style, and then discusses the studies of black prostitution done by him and his wife. (JM)

  17. The neural correlates of highly iconic structures and topographic discourse in French Sign Language as observed in six hearing native signers.

    PubMed

    Courtin, C; Hervé, P-Y; Petit, L; Zago, L; Vigneau, M; Beaucousin, V; Jobard, G; Mazoyer, B; Mellet, E; Tzourio-Mazoyer, N

    2010-09-01

    "Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four-dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language specific manner via the use of signing-space and spatial-classifier signs. We used functional magnetic resonance imaging (fMRI) to compare the neural correlates of topographic discourse and highly iconic structures in French Sign Language (LSF) in six hearing native signers, children of deaf adults (CODAs), and six LSF-naïve monolinguals. LSF materials consisted of videos of a lecture excerpt signed without spatially organized discourse or highly iconic structures (Lect LSF), a tale signed using highly iconic structures (Tale LSF), and a topographical description using a diagrammatic format and spatial-classifier signs (Topo LSF). We also presented texts in spoken French (Lect French, Tale French, Topo French) to all participants. With both languages, the Topo texts activated several different regions that are involved in mental navigation and spatial working memory. No specific correlate of LSF spatial discourse was evidenced. The same regions were more activated during Tale LSF than Lect LSF in CODAs, but not in monolinguals, in line with the presence of signing-space structure in both conditions. Motion processing areas and parts of the fusiform gyrus and precuneus were more active during Tale LSF in CODAs; no such effect was observed with French or in LSF-naïve monolinguals. These effects may be associated with perspective-taking and acting during personal transfers. 2010 Elsevier Inc. All rights reserved.

  18. Regional variation of coda Q in Kopili fault zone of northeast India and its implications

    NASA Astrophysics Data System (ADS)

    Bora, Nilutpal; Biswas, Rajib; Dobrynina, Anna A.

    2018-01-01

    The Kopili fault has experienced heightened seismic and tectonic activity in recent years. Such active tectonics can be examined through coda-wave attenuation and its frequency dependence. Exploiting the single back-scattering model, we measure coda Q and its associated parameters, the frequency-dependence factor (n) and the attenuation coefficient (γ), for seven lapse-time windows spanning 30 to 90 s at central frequencies of 1.5, 3.5, 6, 9, and 12 Hz. The average estimated values of Qc increase with frequency and lapse-time window length, from 114 at 1.5 Hz to 1563 at 12 Hz for the 30 s window, and from 305 at 1.5 Hz to 2135 at 12 Hz for the 90 s window. The values of Q0 and n are also estimated for the entire Kopili fault zone: Q0 varies from 62 to 348 and n from 0.57 to 1.51 within the 1.5-12 Hz frequency range. Furthermore, the depth variation of attenuation in this region reveals a velocity anomaly at 210-220 km depth, where sharp changes in γ and n arise; this is supported by data reported by other researchers for this region. Finally, we have attempted to separate intrinsic and scattering attenuation for this area. The entire region is dominated mainly by scattering attenuation, but intrinsic attenuation increases with depth at two stations, TZR and BKD. The obtained results are comparable with available global data.

  19. Variation of coda wave attenuation in the Alborz region and central Iran

    NASA Astrophysics Data System (ADS)

    Rahimi, H.; Motaghi, K.; Mukhopadhyay, S.; Hamzehloo, H.

    2010-06-01

    More than 340 earthquakes recorded from 1996 to 2004 by the short-period stations of the Institute of Geophysics, University of Tehran (IGUT) were analysed to estimate S-coda attenuation in the Alborz region, the northern part of the Alpine-Himalayan orogen in western Asia, and in central Iran, the foreland of this orogen. The coda quality factor, Qc, was estimated using the single back-scattering model in frequency bands of 1-25 Hz. In this research, the lateral and depth variations of Qc in the Alborz region and central Iran are studied. In the Alborz region, no significant lateral variation in Qc is observed; the average frequency relation for this region is Qc = (79 ± 2) f^(1.07±0.08). Two anomalous high-attenuation areas in central Iran are recognized around the stations LAS and RAZ. The average frequency relation for central Iran, excluding the values of these two stations, is Qc = (94 ± 2) f^(0.97±0.12). To investigate the variation of attenuation with depth, Qc was calculated for 14 lapse times (25, 30, 35, ..., 90 s) for two data sets with epicentral distance ranges R < 100 km (data set 1) and 100 < R < 200 km (data set 2) in each area. It is observed that Qc increases with depth; however, the rate of increase is not uniform across the study area. Beneath central Iran the rate of increase of Qc is greater at depths less than 100 km than at larger depths, indicating the existence of a high-attenuation anomalous structure under the lithosphere of central Iran. In addition, below ~180 km the Qc value does not vary much with depth beneath either study area, indicating the presence of a relatively transparent mantle under them.

  20. Monitoring Unstable Glaciers with Seismic Noise Interferometry

    NASA Astrophysics Data System (ADS)

    Preiswerk, L. E.; Walter, F.

    2016-12-01

    Gravity-driven glacier instabilities are a threat to human infrastructure in alpine terrain, and this hazard is likely to increase with future changes in climate. Seismometers have been used previously on hazardous glaciers to monitor natural englacial seismicity. In some situations, an increase in "icequake" activity may indicate fracture growth and thus an imminent major break-off. However, without independent constraints on unstable volumes, such mere event counting is of little use. A promising new approach to monitoring unstable masses in Alpine terrain is coda wave interferometry of ambient noise. While already established in the solid earth, application to glaciers is not straightforward, because the lack of inhomogeneities typically suppresses seismic coda waves in glacier ice. Only glaciers with pervasive crevasses provide enough scattering to generate long codas. This requirement is likely met for highly dynamic, unstable glaciers. Here, we report preliminary results from a temporary 5-station on-ice array of seismometers (corner frequency: 1 Hz, array aperture: 500 m) on Bisgletscher (Switzerland). The seismometers were deployed in shallow boreholes, directly above the unstable tongue of the glacier. In the frequency band 4-12 Hz, we find stable noise cross-correlations, which in principle allows monitoring on a subdaily scale. The origin and source processes of the ambient noise at these frequencies are, however, uncertain. As a first step, we evaluate the stability of the sources in order to separate the effects of changing source parameters from changes in englacial properties. Since icequakes occurring every few seconds may dominate the noise field, we compare their temporal and spatial occurrences with the cross-correlation functions (stability over time, asymmetry between the causal and acausal parts of the cross-correlation functions) as well as with results from beamforming to assess the influence of these transient events on the noise field.
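
    The basic measurement behind such monitoring, a stacked noise cross-correlation between two stations, can be sketched in a few lines. A simplified Python version, assuming two synchronous noise segments; the segment length, spectral whitening, and normalization choices are illustrative rather than the authors' processing.

        import numpy as np

        def noise_ccf(u, v, fs, max_lag_s=10.0, eps=1e-10):
            """Cross-correlation of two whitened ambient-noise segments;
            stacking the output over many segments stabilizes the CCF."""
            n = 2 * max(len(u), len(v))                  # zero-pad to avoid wrap-around
            U, V = np.fft.rfft(u, n), np.fft.rfft(v, n)
            U /= np.abs(U) + eps                         # spectral whitening
            V /= np.abs(V) + eps                         # suppresses transient events
            cc = np.fft.irfft(U * np.conj(V), n)
            k = int(max_lag_s * fs)
            return np.concatenate([cc[-k:], cc[:k + 1]]) # lags -max_lag .. +max_lag

    The asymmetry between the causal (positive-lag) and acausal (negative-lag) halves of the returned function is one of the source-stability diagnostics mentioned above.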

  1. Cosmic Dawn (CoDa): the First Radiation-Hydrodynamics Simulation of Reionization and Galaxy Formation in the Local Universe

    NASA Astrophysics Data System (ADS)

    Ocvirk, Pierre; Gillet, Nicolas; Shapiro, Paul R.; Aubert, Dominique; Iliev, Ilian T.; Teyssier, Romain; Yepes, Gustavo; Choi, Jun-Hwan; Sullivan, David; Knebe, Alexander; Gottlöber, Stefan; D'Aloisio, Anson; Park, Hyunbae; Hoffman, Yehuda; Stranex, Timothy

    2016-12-01

    Cosmic reionization by starlight from early galaxies affected their evolution, thereby impacting reionization itself. Star formation suppression, for example, may explain the observed underabundance of Local Group dwarfs relative to N-body predictions for cold dark matter. Reionization modelling requires simulating volumes large enough [~(100 Mpc)^3] to sample reionization `patchiness', while resolving millions of galaxy sources above ~10^8 M⊙, combining gravitational and gas dynamics with radiative transfer. Modelling the Local Group requires initial cosmological density fluctuations pre-selected to form the well-known structures of the Local Universe today. Cosmic Dawn (`CoDa') is the first such fully coupled, radiation-hydrodynamics simulation of reionization of the Local Universe. Our new hybrid CPU-GPU code, RAMSES-CUDATON, performs hundreds of radiative transfer and ionization rate-solver timesteps on the GPUs for each hydro-gravity timestep on the CPUs. CoDa simulated (91 Mpc)^3 with 4096^3 particles and cells, to redshift 4.23, on the ORNL supercomputer Titan, utilizing 8192 cores and 8192 GPUs. Global reionization ended slightly later than observed. However, a simple temporal rescaling which brings the evolution of ionized fraction into agreement with observations also reconciles ionizing flux density, cosmic star formation history, CMB electron scattering optical depth and galaxy UV luminosity function with their observed values. Photoionization heating suppressed the star formation of haloes below ~2 × 10^9 M⊙, decreasing the abundance of faint galaxies around M_AB,1600 = [-10, -12]. For most of reionization, star formation was dominated by haloes between 10^10 and 10^11 M⊙, so low-mass halo suppression was not reflected by a distinct feature in the global star formation history. Intergalactic filaments display sheathed structures, with hot envelopes surrounding cooler cores, but do not self-shield, unlike regions denser than 100⟨ρ⟩.

  2. High Frequency Ground Motion from Finite Fault Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Crempien, Jorge G. F.

    There are many tectonically active regions on Earth with little or no recorded ground motion. The Eastern United States is a typical example of a region with active faults but low-to-medium seismicity, which has prevented sufficient ground motion recordings. Because of this, it is necessary to use synthetic ground motion methods to estimate the earthquake hazard of a region. Ground motion prediction equations for spectral acceleration typically have geometric attenuation proportional to the inverse of distance from the fault. Earthquakes simulated with one-dimensional layered earth models have larger geometric attenuation than observed ground motion recordings. We show that as incidence angles of rays increase at welded boundaries between homogeneous flat layers, the transmitted rays decrease dramatically in amplitude. As the receiver distance from the source increases, the angle of incidence of up-going rays increases, producing negligible transmitted ray amplitude and thus increasing the geometrical attenuation. To work around this problem we propose a model in which we separate wave propagation for low and high frequencies at a crossover frequency, typically 1 Hz. The high-frequency portion of strong ground motion is computed with a homogeneous half-space and amplified with the available and more complex one- or three-dimensional crustal models using the quarter-wavelength method. We also make use of seismic coda energy density observations as scattering impulse response functions. We incorporate scattering impulse response functions into our Green's functions by convolving the high-frequency homogeneous half-space Green's functions with normalized synthetic scatterograms to reproduce physical scattering effects in recorded seismograms. This method was validated against ground motion for earthquakes recorded in California and Japan, yielding results that capture the duration and spectral response of strong ground motion.
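
    The convolution step described here is straightforward to sketch. Below, the scatterogram is normalized to unit energy before convolution so that the added coda redistributes rather than adds energy; that normalization choice, and the function names, are our assumptions, not the dissertation's implementation:

        import numpy as np
        from scipy.signal import fftconvolve

        def scattered_greens(g_halfspace, scatterogram, dt):
            """Convolve a high-frequency half-space Green's function with a
            unit-energy synthetic scatterogram to graft a scattering coda
            onto the direct arrival."""
            s = scatterogram / np.sqrt(np.sum(scatterogram**2) * dt)
            return fftconvolve(g_halfspace, s, mode="full")[: g_halfspace.size] * dt

        g = np.zeros(1000); g[100] = 1.0                 # impulsive half-space arrival
        rng = np.random.default_rng(3)
        scat = np.exp(-np.arange(1000) * 0.01) * rng.standard_normal(1000)
        print(scattered_greens(g, scat, dt=0.01).shape)  # (1000,)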

  3. Long codas of coupled wave systems in seismic basins

    NASA Astrophysics Data System (ADS)

    Seligman, Thomas H.

    2002-11-01

    Quite some time ago it was pointed out that the damage patterns and Fourier spectra of the 1985 earthquake in Mexico City are only compatible with a resonant effect of horizontal waves with approximately the speed of sound waves in water [see Flores et al., Nature 326, 783 (1987)]. In a more recent paper it was pointed out that this will indeed occur, with a very specific frequency selection, for a coupled system of Rayleigh waves at the interface between the bottom of the ancient lakebed and the more solid deposits, and an evanescent sound wave in the mud above [see J. Flores et al., Bull. Seismol. Soc. Am. 89, 14-21 (1999)]. In the present talk we shall go over these arguments again and show that strong reflection at the edges of the lake must occur to account for the strong magnification, necessarily entailing a long coda, and that the mechanism can be understood in the same terms.

  4. An application of the Continuous Opinions and Discrete Actions (CODA) model to adolescent smoking initiation.

    PubMed

    Sun, Ruoyan; Mendez, David

    2017-01-01

    We investigated the impact of peers' opinions on the smoking initiation process among adolescents. We applied the Continuous Opinions and Discrete Actions (CODA) model to study how social interactions change adolescents' opinions and behaviors about smoking. Through agent-based modeling (ABM), we simulated a population of 2500 adolescents and compared smoking prevalence to data from 9 cohorts of adolescents in the National Survey on Drug Use and Health (NSDUH) from 2001 to 2014. Our model fits the NSDUH data well according to pseudo-R² values of at least 96%. Optimal parameter values indicate that adolescents exhibit imitator characteristics with regard to smoking opinions. The imitator characteristic suggests that teenagers tend to update their opinions consistently according to what others do, and these opinions later translate into smoking behaviors. As a result, peer influence from social networks plays a major role in the smoking initiation process and should be an important driver in policy formulation.
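
    In CODA-style models each agent holds a continuous opinion but displays only a discrete action, and observing a peer's action nudges the continuous opinion by a fixed increment. A toy sketch of that update loop (the population size matches the abstract; the step size, initial distribution and interaction count are illustrative, not the paper's calibrated values):

        import numpy as np

        rng = np.random.default_rng(0)
        n, steps, a = 2500, 50000, 0.2       # agents, pair interactions, opinion step
        nu = rng.normal(-1.0, 0.5, n)        # log-odds that smoking is favorable

        for _ in range(steps):
            i, j = rng.integers(0, n, 2)     # agent i observes a random peer j
            nu[i] += a if nu[j] > 0 else -a  # j's discrete action nudges i's opinion

        print(np.mean(nu > 0))               # fraction whose opinion now favors smoking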

  5. The role of syllabic structure in French visual word recognition.

    PubMed

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.

  6. Measuring the effects of pore-pressure changes on seismic amplitude using crosswell continuous active-source seismic monitoring (CASSM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchesini, Pierpaolo; Daley, Thomas; Ajo-Franklin, Jonathan

    Monitoring of time-varying reservoir properties, such as the state of stress, is a primary goal of geophysical investigations, including for geological sequestration of CO2, enhanced hydrocarbon recovery (EOR), and other subsurface engineering activities. In this work, we used Continuous Active-Source Seismic Monitoring (CASSM), with cross-well geometry, to measure variation in seismic coda amplitude as a consequence of effective stress change (in the form of changes in pore fluid pressure). To our knowledge, the presented results are the first in-situ example of such crosswell measurement at reservoir scale and in field conditions. The data complement the findings of our previous work, which investigated the relationship between pore fluid pressure and seismic velocity (velocity-stress sensitivity) using the CASSM system at the same field site (Marchesini et al., 2017, in review). We find that P-wave coda amplitude decreases with decreasing pore pressure (increasing effective stress).

  7. Earthquake stress via event ratio levels: Application to the 2011 and 2016 Oklahoma seismic sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, William R.; Yoo, Seung-Hoon; Mayeda, Kevin

    Here, we develop a new methodology for determining earthquake stress drop and apparent stress values via spectral ratio asymptotic levels. With sufficient bandwidth, the stress ratio for a pair of events can be directly related to these low- and high-frequency levels. This avoids the need to assume a particular spectral model and derive stress drop from cubed corner frequency measures. The method can be applied to spectral ratios for any pair of closely related earthquakes and is particularly well suited for coda envelope methods that provide good azimuthally averaged, point-source measures. We apply the new method to the 2011 Prague and 2016 Pawnee earthquake sequences in Oklahoma. The sequences show stress scaling with size and depth, with the largest events having apparent stress levels near 1 MPa and smaller and/or shallower events having systematically lower stress values.

  8. Earthquake stress via event ratio levels: Application to the 2011 and 2016 Oklahoma seismic sequences

    DOE PAGES

    Walter, William R.; Yoo, Seung-Hoon; Mayeda, Kevin; ...

    2017-04-03

    Here, we develop a new methodology for determining earthquake stress drop and apparent stress values via spectral ratio asymptotic levels. With sufficient bandwidth, the stress ratio for a pair of events can be directly related to these low- and high-frequency levels. This avoids the need to assume a particular spectral model and derive stress drop from cubed corner frequency measures. The method can be applied to spectral ratios for any pair of closely related earthquakes and is particularly well suited for coda envelope methods that provide good azimuthally averaged, point-source measures. We apply the new method to the 2011 Prague and 2016 Pawnee earthquake sequences in Oklahoma. The sequences show stress scaling with size and depth, with the largest events having apparent stress levels near 1 MPa and smaller and/or shallower events having systematically lower stress values.
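
    To make the event-ratio logic concrete under an omega-square source assumption (our illustration, not necessarily the authors' exact formulation): the spectral ratio of two co-located events has a low-frequency level L = M01/M02 and a high-frequency level H = (M01 fc1^2)/(M02 fc2^2); since stress drop scales as M0 fc^3, the stress ratio is H^(3/2)/L^(1/2), with no explicit corner-frequency fit required:

        import numpy as np

        def stress_ratio(low_level, high_level):
            """Stress ratio from the asymptotic levels of a spectral ratio,
            assuming omega-square spectra:
            L = M01/M02, H = (M01*fc1**2)/(M02*fc2**2)
            => stress ratio = (M01*fc1**3)/(M02*fc2**3) = H**1.5 / L**0.5."""
            return high_level**1.5 / np.sqrt(low_level)

        # example: moment ratio 10, corner-frequency ratio 0.8
        L, H = 10.0, 10.0 * 0.8**2
        print(stress_ratio(L, H))   # 10 * 0.8**3 = 5.12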

  9. META II Complex Systems Design and Analysis (CODA)

    DTIC Science & Technology

    2011-08-01

    Table-of-contents excerpt: 3.8.7 Variables, Parameters and Constraints; 3.8.8 Objective...; Figure 7: Inputs, States, Outputs and Parameters of System Requirements Specifications; Design Rule Based on Device Parameter; Figure 35: AEE Device Design Rules (excerpt)

  10. Allophonic Variation in the Spanish Sibilant Fricative

    ERIC Educational Resources Information Center

    Garcia, Alison

    2013-01-01

    In Spanish, the phoneme /s/ has two variants: [z] occurs in the coda when preceding a voiced consonant, and [s] occurs elsewhere. However, recent research has revealed irregular voicing patterns with regard to this phone. This dissertation examines two of these allophonic variations. It first investigates how speech rate and speech formality…

  11. Subsyllabic Unit Preference in Young Chinese Children

    ERIC Educational Resources Information Center

    Wang, Min; Cheng, Chenxi

    2008-01-01

    We report three experiments investigating subsyllabic unit preference in young Chinese children. In Experiment 1, a Chinese sound similarity judgment task was designed in which 48 pairs of stimuli varied in terms of shared subsyllabic units (i.e., vowel, body, rime, onset-coda). Grade 1 Chinese-speaking monolingual children judged pairs with…

  12. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98% have been manually identified as explosions or noise and 2% as earthquakes. The SVM method correctly identifies 94% of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5% of the automatic event determinations, i.e., the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
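
    A per-station classifier of the kind described reduces to fitting an SVM on an 80-dimensional feature vector per event. A hedged scikit-learn sketch with placeholder synthetic features (the kernel choice and all numbers below are our assumptions):

        import numpy as np
        from sklearn.svm import SVC

        # stand-in for one station's training set: 80 features per event =
        # STA energies in 20 narrow bands x 4 windows (P, P coda, S, S coda)
        rng = np.random.default_rng(0)
        X_eq = rng.normal(0.0, 1.0, (120, 80))   # placeholder earthquake features
        X_nx = rng.normal(0.8, 1.0, (500, 80))   # placeholder explosion/noise features
        X = np.vstack([X_eq, X_nx])
        y = np.array([1] * 120 + [-1] * 500)     # +1 earthquake, -1 non-earthquake

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # one model per station
        clf.fit(X, y)
        print(clf.predict(X[:5]))                # station votes, combined network-wide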

  13. Simultaneous inversion of intrinsic and scattering attenuation parameters incorporating multiple scattering effect

    NASA Astrophysics Data System (ADS)

    Ogiso, M.

    2017-12-01

    Heterogeneous attenuation structure is important not only for understanding Earth structure and seismotectonics, but also for ground motion prediction. Attenuation of ground motion in the high-frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects its duration. Hence, estimating both attenuation parameters will help refine ground motion prediction. In this study, we estimate both parameters in southwestern Japan in a tomographic manner. We fit envelopes of the seismic coda, since the coda is sensitive to both intrinsic attenuation and the scattering coefficient. Recently, Takeuchi (2016) successfully calculated differential envelopes when these parameters fluctuate. We adopted his equations to calculate partial derivatives with respect to these parameters, since they do not require assuming a homogeneous velocity structure. The matrix for a straightforward inversion of the structural parameters would be too large to solve, so we adopted the ART-type Bayesian Reconstruction Method (Hirahara, 1998) to project the envelope differences onto the structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with 0.4 degree intervals in the horizontal direction and 20 km intervals in depth. The reconstructed structure reproduced the assumed pattern well in the shallower part but not at depth. Since the inversion kernel has large sensitivity around sources and stations, resolution at depth is limited by the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we must correct for the source and site amplification terms. We consider these issues in estimating the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.
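
    The row-action scheme referenced here belongs to the ART (algebraic reconstruction technique) family, whose core is the Kaczmarz iteration: sweep over the data equations one row at a time instead of forming the full normal equations. The Bayesian prior terms of Hirahara (1998) are omitted in this bare-bones sketch:

        import numpy as np

        def art_invert(G, d, n_iter=50, relax=1.0):
            """Kaczmarz/ART row-action solver for G @ m = d: project the model
            onto one data equation at a time with relaxation, avoiding the
            (huge) normal-equation matrix."""
            m = np.zeros(G.shape[1])
            for _ in range(n_iter):
                for i in range(G.shape[0]):
                    g = G[i]
                    gg = np.dot(g, g)
                    if gg > 0.0:
                        m += relax * (d[i] - np.dot(g, m)) / gg * g
            return m

        G = np.random.default_rng(6).standard_normal((200, 50))
        m_true = np.zeros(50); m_true[10] = 1.0
        print(art_invert(G, G @ m_true)[10].round(2))   # ~1.0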

  14. Ontology-Based Combinatorial Comparative Analysis of Adverse Events Associated with Killed and Live Influenza Vaccines

    PubMed Central

    Sarntivijai, Sirarat; Xiang, Zuoshuang; Shedden, Kerby A.; Markel, Howard; Omenn, Gilbert S.; Athey, Brian D.; He, Yongqun

    2012-01-01

    Vaccine adverse events (VAEs) are adverse bodily changes occurring after vaccination. Understanding adverse event (AE) profiles is a crucial step in identifying serious AEs. Two types of seasonal influenza vaccine have been used on the market: trivalent (killed) inactivated influenza vaccine (TIV) and trivalent live attenuated influenza vaccine (LAIV). The different adverse event profiles induced by these two groups of seasonal influenza vaccines were studied based on data drawn from the CDC Vaccine Adverse Event Reporting System (VAERS). Extracted from VAERS were 37,621 AE reports for four TIVs (Afluria, Fluarix, Fluvirin, and Fluzone) and 3,707 AE reports for the only LAIV (FluMist). The AE report data were analyzed by a novel combinatorial, ontology-based detection of AEs method (CODAE). CODAE detects AEs using the Proportional Reporting Ratio (PRR), the Chi-square significance test, and base level filtration, and groups identified AEs by ontology-based hierarchical classification. In total, 48 TIV-enriched and 68 LAIV-enriched AEs were identified (PRR > 2, Chi-square score > 4, and number of cases > 0.2% of total reports). These AE terms were classified using the Ontology of Adverse Events (OAE), MedDRA, and SNOMED-CT. The OAE method provided better classification results than the two other methods. Thirteen of the 48 TIV-enriched AEs were related to neurological and muscular processing, such as paralysis, movement disorders, and muscular weakness. In contrast, 15 of the 68 LAIV-enriched AEs were associated with inflammatory response and respiratory system disorders. There was evidence of two severe adverse events (Guillain-Barré syndrome and paralysis) in TIV. Although these severe adverse events occurred at a low incidence rate, they were significantly more enriched in TIV-vaccinated patients than in LAIV-vaccinated patients. Therefore, our novel combinatorial bioinformatics analysis discovered that LAIV had a lower chance of inducing these two severe adverse events than TIV. In addition, our meta-analysis found that all previously reported positive correlations between GBS and influenza vaccine immunization were based on trivalent influenza vaccines rather than monovalent influenza vaccines. PMID:23209624
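
    The PRR and Chi-square screen described here operates on a 2x2 contingency table per AE term. A hedged sketch with invented counts (the thresholds follow the abstract: PRR > 2, Chi-square > 4, cases > 0.2% of total reports):

        import numpy as np
        from scipy.stats import chi2_contingency

        def prr(a, b, c, d):
            """Proportional Reporting Ratio: a/b = target-vaccine reports
            with/without the AE; c/d = the comparator's counts."""
            return (a / (a + b)) / (c / (c + d))

        a, b, c, d = 60, 3647, 200, 37421     # illustrative counts only
        chi2, p, _, _ = chi2_contingency(np.array([[a, b], [c, d]]))
        flagged = prr(a, b, c, d) > 2 and chi2 > 4 and a > 0.002 * (a + b)
        print(round(prr(a, b, c, d), 2), round(chi2, 1), flagged)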

  15. Social Identity in Hearing Youth Who Have Deaf Parents: A Qualitative Study

    ERIC Educational Resources Information Center

    Knight, Tracy Rouly

    2013-01-01

    The purpose of this research study is to describe the perspectives of young children of deaf adults regarding their linguistic and cultural identity. The researcher defined young Children of Deaf Adults (Codas) as Kids of Deaf Adults (Kodas). Kodas represented an interesting subgroup of bilingual, bicultural, and bimodal children with diverse…

  16. Feet and Syllables in "Elephants" and "Missiles": A Reappraisal

    ERIC Educational Resources Information Center

    Zonneveld, Wim; van der Pas, Brigit; de Bree, Elise

    2007-01-01

    Using data from a case study presented in Chiat (1989), Marshall and Chiat (2003) compare two different approaches to account for the realization of intervocalic consonants in child phonology: "coda capture theory" and the "foot domain account". They argue in favour of the latter account. In this note, we present a reappraisal…

  17. The Emergence of Sub-Syllabic Representations

    ERIC Educational Resources Information Center

    Lee, Yongeun; Goldrick, Matthew

    2008-01-01

    In a variety of experimental paradigms speakers do not treat all sub-syllabic sequences equally. In languages like English, participants tend to group vowels and codas together to the exclusion of onsets (i.e., /bet/=/b/-/et/). Three possible accounts of these patterns are examined. A hierarchical account attributes these results to the presence…

  18. Mozart to Michelangelo: Software to Hone Your Students' Fine Arts Skills.

    ERIC Educational Resources Information Center

    Smith, Russell

    2000-01-01

    Describes 15 art and music computer software products for classroom use. "Best bets" (mostly secondary level) include Clearvue Inc.'s Art of Seeing, Sunburst Technology's Curious George Paint & Print Studio, Inspiration Software's Inspiration 6.0, Harmonic Vision's Music Ace 2, and Coda Music Technology's PrintMusic! 2000 and SmartMusic Studio.…

  19. Physical constraints and the comparative ecology of coastal ecosystems across the US Great Lakes, with a coda

    EPA Science Inventory

    One of my favorite papers by Scott Nixon (1988) was the story he built around the observation that marine fisheries yields were higher than those of temperate lakes. The putative agent for the freshwater/marine difference involved a higher energy of mixing due to tides in marine environm...

  20. Use of Abstracts, Orientations, and Codas in Narration by Language-Disordered and Nondisordered Children.

    ERIC Educational Resources Information Center

    Sleight, Christine C.; Prinz, Philip M.

    1985-01-01

    Forty language-disordered and nondisordered elementary children viewed a nonverbal film, wrote the story, and narrated it to language-disordered and nondisordered peers unfamiliar with the film. Language-disordered Ss made fewer references to the orientation clauses of props and activities than nondisordered Ss. Neither group modified their…

  1. Contextual Variability in American English Dark-L

    ERIC Educational Resources Information Center

    Oxley, Judith; Roussel, Nancye; Buckingham, Hugh

    2007-01-01

    This paper presents a four-subject study that examines the relative influence of syllable position and stress, together with vowel context on the colouring of the dark-l characteristic of speakers of General American English. Most investigators report lighter /l/ tokens in syllable onsets and darker tokens in coda positions. The present study…

  2. Lithospheric structure of the southern French Alps inferred from broadband analysis

    NASA Astrophysics Data System (ADS)

    Bertrand, E.; Deschamps, A.

    2000-11-01

    Broadband receiver function analysis is commonly used to evaluate the fine-scale S-velocity structure of the lithosphere. We analyse teleseismic P-waves and their coda from 30 selected teleseismic events recorded at three seismological stations of the French TGRS network in the Alpes Maritimes. Receiver functions are computed in the time domain using an SVD matrix inversion method. A dipping Moho and lateral heterogeneities beneath the array are inferred from the amplitude, arrival time and polarity of locally generated Ps phases. We propose that the Moho dips 11° towards 25°±10°N below station CALF, in the outer part of the Alpine belt. At this station, we determine a Moho depth of about 20±2 km; the same depth is suggested below station SAOF, also located in the fold-thrust belt. Beneath station STET, located in the inner part of the Alpine belt, the Moho depth increases to 30 km and the Moho dips towards the N-NW. Moreover, 1-D modelling of the summed receiver function from station STET constrains a crustal structure significantly different from that observed at stations located in the outer part of the Alps. Indeed, beneath the CALF and SAOF stations we need a 2 km thick shallow low-velocity layer to best fit the observed receiver functions, whereas this layer appears to be absent beneath STET. Because recent P-coda studies have shown that near-receiver scattering can dominate teleseismic P-wave recordings in tectonically complicated areas, we assess the effect of scattered energy in our records using array measurements. As the array aperture is wide relative to the heterogeneity scale length in the area, the array analysis produces only a smoothed image of scatterers beneath the stations.
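
    Time-domain receiver function deconvolution can be posed as a damped least-squares problem: find the radial-over-vertical transfer function rf minimizing ||Z rf - r||² + damp||rf||², where Z is the Toeplitz convolution matrix of the vertical trace. The sketch below solves it with numpy's SVD-based lstsq; it is a generic illustration, not the authors' exact SVD scheme:

        import numpy as np
        from scipy.linalg import toeplitz

        def receiver_function(z, r, nrf, damp=1e-3):
            """Damped time-domain deconvolution of radial (r) by vertical (z)."""
            col = np.zeros(r.size); col[: z.size] = z
            Z = toeplitz(col, np.zeros(nrf))            # convolution matrix
            A = np.vstack([Z, np.sqrt(damp) * np.eye(nrf)])
            b = np.concatenate([r, np.zeros(nrf)])
            rf, *_ = np.linalg.lstsq(A, b, rcond=None)  # SVD-based solve
            return rf

        # synthetic check: radial = vertical convolved with a two-spike RF
        z = np.random.default_rng(4).standard_normal(300)
        true_rf = np.zeros(50); true_rf[0], true_rf[20] = 1.0, 0.4
        r = np.convolve(z, true_rf)[:300]
        print(receiver_function(z, r, 50)[[0, 20]].round(2))  # ~[1.0, 0.4]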

  3. The Use of Barker Coded Signal on the Measurement of Wave Velocity of Rock

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Wu, H.

    2016-12-01

    The wave velocity of rock is an important petrophysical parameter: it can be used to calculate elastic parameters and to monitor variations in the stress on rock, and velocity anisotropy reflects rock anisotropy. Furthermore, since the coda wave is more sensitive to changes in rock properties, coda velocity variations have been used to monitor changes in rock structure caused by varying temperature, stress, water saturation and other factors. However, velocity measurements depend heavily on the signal-to-noise ratio (SNR) of the signals, because a low SNR makes the arrivals difficult to identify. Coded excitation, widely used in radar and medical imaging systems, can solve this problem. Although the technique effectively improves the SNR and resolution of the received signal, very high sidelobes remain after traditional matched filtering, so a pseudo-inverse filter was successfully applied to suppress the sidelobes. After comparing different coded signals, Barker-coded signals were selected to measure the P-wave velocity of Plexiglas, sandstone, granite and marble with an automatic measurement method, and the results were compared with single-pulse measurements; the coded-signal measurements were closer to the manual measurements. Moreover, coda-wave measurements of granite under loading were also made with Barker-coded signals, and these likewise showed that coded signals detect better than a single pulse. In conclusion, the experiments verify the effectiveness and reliability of coded signals for measuring the wave velocity of rock.
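
    The pulse-compression idea is easy to demonstrate: transmitting a Barker-13 code and correlating the received trace against it concentrates the code's energy into one sample, with autocorrelation sidelobes no larger than 1. A minimal sketch (the noise level and delay are illustrative; the pseudo-inverse sidelobe filter from the abstract is not included):

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

        rng = np.random.default_rng(1)
        n, delay = 400, 150
        trace = np.zeros(n)
        trace[delay : delay + 13] = barker13      # buried coded arrival
        trace += 0.5 * rng.standard_normal(n)     # noise that would hide a single pulse

        matched = np.correlate(trace, barker13, mode="valid")  # matched filter
        print(np.argmax(np.abs(matched)))         # ~150: recovered arrival sample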

  4. Approximate Seismic Diffusive Models of Near-Receiver Geology: Applications from Lab Scale to Field

    NASA Astrophysics Data System (ADS)

    King, Thomas; Benson, Philip; De Siena, Luca; Vinciguerra, Sergio

    2017-04-01

    This paper presents a novel and simple method of seismic envelope analysis that can be applied at multiple scales, e.g. field (m to km) and laboratory (mm to cm), and utilises the diffusive approximation of the seismic wavefield (Wegler, 2003). Coefficient values for diffusion and attenuation are obtained from seismic coda energies and describe the rate at which seismic energy is scattered and attenuated in the local medium around a receiver. Values are acquired by performing a linear least-squares inversion of coda energies calculated in successive time windows along a seismic trace. Acoustic emission data were taken from piezoelectric transducers (PZTs) with typical resonance frequencies of 1-5 MHz, glued around rock samples during deformation laboratory experiments carried out using a servo-controlled triaxial testing machine, in which a shear/damage zone is generated under compression after the nucleation, growth and coalescence of microcracks. Passive field data were collected from conventional geophones during the 2004-2008 eruption of Mount St. Helens volcano (MSH), USA, when a sudden reawakening of volcanic activity and new dome growth occurred. The laboratory study shows a strong correlation between variations of the coefficients over time and the increase of differential stress as the experiment progresses. The field study links structural variations present in the near-surface geology, including those seen in previous geophysical studies of the area, to these same coefficients. Both studies show a correlation between frequency and structural feature size, i.e. landslide slip-planes and microcracks, with higher frequencies being much more sensitive to smaller-scale features and vice versa.
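
    In the diffusive approximation the coda energy density at source-receiver distance r behaves as E(t) ∝ (4πdt)^(-3/2) exp(-r²/4dt) exp(-bt), which becomes linear in the unknowns after taking logarithms. A hedged numpy sketch of that least-squares step (the synthetic values are ours):

        import numpy as np

        def fit_diffusion(t, E, r):
            """Fit E(t) ~ (4*pi*d*t)**-1.5 * exp(-r**2/(4*d*t)) * exp(-b*t)
            by linear least squares:
            ln(E) + 1.5*ln(t) = c - (r**2/(4*d))*(1/t) - b*t.
            Returns diffusivity d and intrinsic absorption parameter b."""
            y = np.log(E) + 1.5 * np.log(t)
            G = np.column_stack([np.ones_like(t), 1.0 / t, t])
            (c, m1, m2), *_ = np.linalg.lstsq(G, y, rcond=None)
            return r**2 / (-4.0 * m1), -m2

        # synthetic check: d = 0.5 km^2/s, b = 0.05 1/s, r = 2 km
        t = np.linspace(2.0, 30.0, 100)
        E = (4 * np.pi * 0.5 * t) ** -1.5 * np.exp(-4.0 / (2.0 * t) - 0.05 * t)
        print(fit_diffusion(t, E, 2.0))   # ~(0.5, 0.05)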

  5. A Comparison of Diarrheal Severity Scores in the MAL-ED Multisite Community-Based Cohort Study

    PubMed Central

    Lee, Gwenyth O.; Richard, Stephanie A.; Kang, Gagandeep; Houpt, Eric R.; Seidman, Jessica C.; Pendergast, Laura L.; Bhutta, Zulfiqar A.; Ahmed, Tahmeed; Mduma, Estomih R.; Lima, Aldo A.; Bessong, Pascal; Jennifer, Mats Steffi; Hossain, Md. Iqbal; Chandyo, Ram Krishna; Nyathi, Emanuel; Lima, Ila F.; Pascal, John; Soofi, Sajid; Ladaporn, Bodhidatta; Guerrant, Richard L.; Caulfield, Laura E.; Black, Robert E.; Kosek, Margaret N.

    2016-01-01

    ABSTRACT Objectives: There is a lack of consensus on how to measure diarrheal severity. Within the context of a multisite, prospective cohort study, we evaluated the performance of a modified Vesikari score (MAL-ED), 2 previously published scores (Clark and CODA [a diarrheal severity score (Community DiarrheA) published by Lee et al]), and a modified definition of moderate-to-severe diarrhea (MSD) based on dysentery and health care worker diagnosed dehydration. Methods: Scores were built using maternally reported symptoms or fieldworker-reported clinical signs obtained during the first 7 days of a diarrheal episode. The association between these and the risk of hospitalization were tested using receiver operating characteristic analysis. Severity scores were also related to illness etiology, and the likelihood of the episode subsequently becoming prolonged or persistent. Results: Of 10,159 episodes from 1681 children, 143 (4.0%) resulted in hospitalization. The area under the curve of each score as a predictor of hospitalization was 0.84 (95% confidence interval: 0.81, 0.87) (Clark), 0.85 (0.82, 0.88) (MAL-ED), and 0.87 (0.84, 0.89) (CODA). Severity was also associated with etiology and episode duration. Although families were more likely to seek care for severe diarrhea, approximately half of severe cases never reached the health system. Conclusions: Community-based diarrheal severity scores are predictive of relevant child health outcomes. Because they require no assumptions about health care access or utilization, they are useful in refining estimates of the burden of diarrheal disease, in estimating the effect of disease control interventions, and in triaging children for referral in low- and middle-income countries in which the rates of morbidity and mortality after diarrhea remain high. PMID:27347723

  6. Attenuation and scattering tomography of the deep plumbing system of Mount St. Helens

    USGS Publications Warehouse

    De Siena, Luca; Thomas, Christine; Waite, Greg P.; Moran, Seth C.; Klemme, Stefan

    2014-01-01

    We present a combined 3-D P wave attenuation, 2-D S coda attenuation, and 3-D S coda scattering tomography model of fluid pathways, feeding systems, and sediments below Mount St. Helens (MSH) volcano between depths of 0 and 18 km. High-scattering and high-attenuation shallow anomalies are indicative of magma and fluid-rich zones within and below the volcanic edifice down to 6 km depth, where a high-scattering body outlines the top of deeper aseismic velocity anomalies. Both the volcanic edifice and these structures induce a combination of strong scattering and attenuation on any seismic wavefield, particularly those recorded on the northern and eastern flanks of the volcanic cone. North of the cone between depths of 0 and 10 km, a low-velocity, high-scattering, and high-attenuation north-south trending trough is attributed to thick piles of Tertiary marine sediments within the St. Helens Seismic Zone. A laterally extended 3-D scattering contrast at depths of 10 to 14 km is related to the boundary between upper and lower crust and caused in our interpretation by the large-scale interaction of the Siletz terrane with the Cascade arc crust. This contrast presents a low-scattering, 4–6 km2 “hole” under the northeastern flank of the volcano. We infer that this section represents the main path of magma ascent from depths greater than 6 km at MSH, with a small north-east shift in the lower plumbing system of the volcano. We conclude that combinations of different nonstandard tomographic methods, leading toward full-waveform tomography, represent the future of seismic volcano imaging.

  7. CSDP: The seismology of continental thermal regimes

    NASA Astrophysics Data System (ADS)

    Aki, K.

    1991-05-01

    The past year continued to be extremely productive, following up two major breakthroughs made in the preceding year. One of the breakthroughs was the derivation of an integral equation for time-dependent power spectra, which unified all the existing theories on seismic scattering, including the radiative transfer theory for total energy and single/multiple scattering theories based on the ray approach. We successfully applied the method to data from the United States Geological Survey (USGS) regional seismic arrays in central California, Long Valley and the Island of Hawaii, and obtained convincing results on the scattering Q^-1 and intrinsic Q^-1 in these areas for the frequency range from 1 Hz to 20 Hz. The frequency dependence of scattering Q^-1 is then interpreted in terms of a random medium with continuous or discrete scatterers. The other breakthrough was the application of the T-matrix formulation to the seismic scattering problem. We are currently working on two-dimensional inclusions with high and low velocity contrast with the surrounding medium. In addition to the above two main lines of research, we were able to use the so-called 'T-phase' observed on the Island of Hawaii to map the Q value with good spatial resolution. The T-phase consists of seismic waves converted from acoustic waves propagated through the SOFAR channel of the ocean. We found that we can eliminate remarkably well the frequency-dependent recording site effect from the T-phase amplitude using the amplification factor for coda waves, further confirming the fundamental separability of source, path and site effects for coda waves, and proving the effectiveness of stochastic modeling of high-frequency seismic waves.

  8. Modeling of high‐frequency seismic‐wave scattering and propagation using radiative transfer theory

    USGS Publications Warehouse

    Zeng, Yuehua

    2017-01-01

    This is a study of the nonisotropic scattering process based on radiative transfer theory and its application to the observation of the M 4.3 aftershock recording of the 2008 Wells earthquake sequence in Nevada. Given a wide range of recording distances from 29 to 320 km, the data provide a unique opportunity to discriminate scattering models based on their distance‐dependent behaviors. First, we develop a stable numerical procedure to simulate nonisotropic scattering waves based on the 3D nonisotropic scattering theory proposed by Sato (1995). By applying the simulation method to the inversion of M 4.3 Wells aftershock recordings, we find that a nonisotropic scattering model, dominated by forward scattering, provides the best fit to the observed high‐frequency direct S waves and S‐wave coda velocity envelopes. The scattering process is governed by a Gaussian autocorrelation function, suggesting a Gaussian random heterogeneous structure for the Nevada crust. The model successfully explains the common decay of seismic coda independent of source–station locations as a result of energy leaking from multiple strong forward scattering, instead of backscattering governed by the diffusion solution at large lapse times. The model also explains the pulse‐broadening effect in the high‐frequency direct and early arriving S waves, as other studies have found, and could be very important to applications of high‐frequency wave simulation in which scattering has a strong effect. We also find that regardless of its physical implications, the isotropic scattering model provides the same effective scattering coefficient and intrinsic attenuation estimates as the forward scattering model, suggesting that the isotropic scattering model is still a viable tool for the study of seismic scattering and intrinsic attenuation coefficients in the Earth.

  9. Mapping Stripe Rust Resistance in a BrundageXCoda Winter Wheat Recombinant Inbred Line Population

    PubMed Central

    Case, Austin J.; Naruoka, Yukiko; Chen, Xianming; Garland-Campbell, Kimberly A.; Zemetra, Robert S.; Carter, Arron H.

    2014-01-01

    A recombinant inbred line (RIL) mapping population developed from a cross between the winter wheat (Triticum aestivum L.) cultivars Coda and Brundage was evaluated for reaction to stripe rust (caused by Puccinia striiformis f. sp. tritici). Two hundred and sixty-eight RILs from the population were evaluated in replicated field trials in a total of nine site-years in the U.S. Pacific Northwest. Seedling reaction to stripe rust races PST-100, PST-114 and PST-127 was also examined. A linkage map consisting of 2,391 polymorphic DNA markers was developed covering all chromosomes of wheat with the exception of 1D. Two QTL on chromosome 1B were associated with adult plant and seedling reaction and were the most significant QTL detected. Together these QTL reduced adult plant infection type from a score of seven to a score of two, reduced disease severity by an average of 25%, and provided protection against races PST-100, PST-114 and PST-127 at the seedling stage. The location of these QTL and the race specificity they provide suggest that the observed effects at this locus are due to a complementation of the previously known but defeated resistances of the cultivar Tres combining with that of Madsen (the two parent cultivars of Coda). Two additional QTL on chromosome 3B and one on 5B were associated with adult plant reaction only, and a single QTL on chromosome 5D was associated with seedling reaction to PST-114. Coda has been resistant to stripe rust since its release in 2000, indicating that combining multiple resistance genes for stripe rust provides durable resistance, especially when all-stage resistance genes are combined in a fashion that maximizes the number of races they protect against. The identified molecular markers will allow for efficient transfer of these genes into other cultivars, thereby continuing to provide excellent resistance to stripe rust. PMID:24642574

  10. Change of Direction Ability Performance in Cerebral Palsy Football Players According to Functional Profiles

    PubMed Central

    Reina, Raúl; Sarabia, Jose M.; Yanci, Javier; García-Vaquero, María P.; Campayo-Piernas, María

    2016-01-01

    The aims of the present study were to evaluate the validity and reliability of two different change of direction ability (CODA) tests in elite football players with cerebral palsy (CP) and to analyse the differences in performance of this ability between current functional classes (FT) and controls. The sample consisted of 96 international football players with cerebral palsy (FPCP) and 37 football players. Participants were divided into four different groups according to the International Federation of Cerebral Palsy Football (IFCPF) classes and a control group (CG): FT5 (n = 8); FT6 (n = 12); FT7 (n = 62); FT8 (n = 14); and CG (n = 37). The reproducibility of the Modified Agility Test (MAT) and the Illinois Agility Test (IAT) was good to excellent (ICC = 0.82–0.95, SEM = 2.5–5.8%). In the two CODA tests, the CG recorded faster times than the FPCP classes (p < 0.01, d = 1.76–3.26). In the IAT, comparisons of the FT8 class with the other classes gave: FT5 (p = 0.047, d = 1.05), FT6 (p = 0.055, d = 1.19), and FT7 (p = 0.396, d = 0.56). With regard to the MAT, the FT8 class was also compared with FT5 (p = 0.006, d = 1.30), FT6 (p = 0.061, d = 0.93), and FT7 (p = 0.033, d = 1.01). No significant differences were found between the FT5, FT6, and FT7 classes. According to these results, the IAT and MAT could be useful, reliable and valid tests to analyse CODA in FPCP. Each test (IAT and MAT) could be applied considering the cut point that classifiers need when making a decision between the FT8 class and the other FT classes (FT5, FT6, and FT7). PMID:26779037

  11. Lateral variation of seismic attenuation in Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Thirunavukarasu, Ajaay; Kumar, Ajay; Mitra, Supriyo

    2017-01-01

    We use data from local earthquakes (mb ≥ 3.0) recorded by the Sikkim broad-band seismograph network to study the frequency-dependent attenuation of the crust and uppermost mantle. These events have been relocated using body-wave phase data from local and regional seismograms. The decay of coda amplitudes at a range of central frequencies (1 to 12 Hz) has been measured for 74 earthquake-receiver pairs. These measurements are combined to estimate the frequency-dependent coda Q of the form Q(f) = Q0 f^η. The estimated Q0 values range from 80 to 200, with an average of 123 ± 29; and η ranges from 0.92 to 1.04, with an average of 0.98 ± 0.04. To study the lateral variation of Q0 and η, we regionalized the measured Q values by combining all the earthquake-receiver path measurements through a back-projection algorithm. We consider a single back-scatter model for the coda waves with elliptical sampling and parametrize the sampled area using 0.2° square grids. A nine-point spatial smoothing (similar to a spatial Gaussian filter) is applied to stabilize the inversion. This is done at every frequency to observe the spatial variation of Q(f), and the results are subsequently combined to obtain η variations. Results of our study reveal that the Sikkim Himalaya is characterized by low Q0 (80-100) compared to the foreland basin to its south (150-200) and the Nepal Himalaya to its west (140-160). The low Q0 and high η in the Sikkim Himalaya are attributed to extrinsic scattering attenuation from structural heterogeneity and active faults within the crust, and intrinsic attenuation due to anelasticity in the hotter lithosphere beneath the actively deforming mountain belt. Similarly low Q0 and high η values have also been observed in the northwest and Garhwal-Kumaun Himalaya.
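
    Once per-band coda-Q estimates are in hand, the power law Q(f) = Q0 f^η is a straight line in log-log space. A small sketch of that regression (the frequencies and values below are synthetic, chosen to match the study's averages):

        import numpy as np

        def q_power_law(freqs, qc):
            """Fit Q(f) = Q0 * f**eta in log-log space; returns (Q0, eta)."""
            eta, log_q0 = np.polyfit(np.log(freqs), np.log(qc), 1)
            return np.exp(log_q0), eta

        f = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
        qc = 123.0 * f**0.98
        print(q_power_law(f, qc))   # ~(123.0, 0.98)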

  12. Coda--After "The End of the End of Ideology"

    ERIC Educational Resources Information Center

    Jost, John T.

    2007-01-01

    Replies to comments by M. Glassman and D. Karno and R. K. Unger, on the author's original article on ideology. J. T. Jost thanks Glassman and Karno for returning him to his philosophical roots. Glassman and Karno argued in favor of an "instrumental pragmatist" approach to the study of ideology that emphasizes the strategic, purposive,…

  13. Mrs. Klein and Paulo Freire: Coda for the Pain of Symbolization in the Lifeworld of the Mind

    ERIC Educational Resources Information Center

    Britzman, Deborah P.

    2017-01-01

    The preceding symposium articles speculate on the psychosocial dynamics of discrimination as reverberating with grief, mourning, melancholia, and denial. They invite a psychoanalytic paradox on the fate of inchoate loss and its complex relation to oppression and depression: constellations of attachment to loss met with its social and psychical…

  14. Physical constraints and the comparative ecology of coastal ecosystems across the US Great Lakes, with a coda, presentation

    EPA Science Inventory

    One of my favorite papers by Scott Nixon (1988) was the story he built around the observation that marine fisheries yields were higher per unit area or per unit primary production than those of temperate lakes. The story, and the putative agent for the freshwater/marine difference, involv...

  15. Reviewing Sonority for Word-Final Sonorant+Obstruent Consonant Cluster Development in Turkish

    ERIC Educational Resources Information Center

    Topbas, Seyhun; Kopkalli-Yavuz, Handan

    2008-01-01

    The purpose of this study is to investigate the acquisition patterns of sonorant+obstruent coda clusters in Turkish to determine whether Turkish data support the prediction the Sonority Sequencing Principle (SSP) makes as to which consonant (i.e. C1 or C2) is more likely to be preserved in sonorant+obstruent clusters, and the error patterns of…

  16. End-word dysfluencies in young children: a clinical report.

    PubMed

    MacMillan, Verity; Kokolakis, Artemi; Sheedy, Stacey; Packman, Ann

    2014-01-01

    We report on 12 children with end-word dysfluencies (EWDs). Our aim was to document this little-reported type of dysfluency and to develop a possible explanation for them and how they relate to developmental stuttering. Audio recordings were made for 9 of the 12 children in the study. The EWDs were identified by consensus of two specialist speech pathologists and confirmed on acoustic displays. A segment of participant 1's speech was transcribed, including phonetic transcription of EWDs. The EWDs typically consisted of repetitions of the nucleus and/or the coda. However, there were also some EWDs that consisted of fixed postures on the nucleus (when in final position) or coda. We also report on the infrequent occurrence of broken words. Ten of the 12 children also stuttered, with 9 of them coming from four families, each with a history of stuttering. This study indicates that EWDs may be more prevalent than previously thought, but they may go largely unnoticed due to their perceptually fleeting nature. The hypothesis was developed that EWDs be regarded as another type of developmental dysfluency, along with stuttering and cluttering. Ideas for further research are suggested. © 2014 S. Karger AG, Basel.

  17. Thermal Cracking in Westerly Granite Monitored Using Direct Wave Velocity, Coda Wave Interferometry, and Acoustic Emissions

    NASA Astrophysics Data System (ADS)

    Griffiths, L.; Lengliné, O.; Heap, M. J.; Baud, P.; Schmittbuhl, J.

    2018-03-01

    To monitor both the permanent (thermal microcracking) and the nonpermanent (thermo-elastic) effects of temperature on Westerly Granite, we combine acoustic emission monitoring and ultrasonic velocity measurements at ambient pressure during three heating and cooling cycles to a maximum temperature of 450°C. For the velocity measurements we use both P wave direct traveltime and coda wave interferometry techniques, the latter being more sensitive to changes in S wave velocity. During the first cycle, we observe a high acoustic emission rate and large—and mostly permanent—apparent reductions in velocity with temperature (P wave velocity is reduced by 50% of the initial value at 450°C, and 40% upon cooling). Our measurements are indicative of extensive thermal microcracking during the first cycle, predominantly during the heating phase. During the second cycle we observe further—but reduced—microcracking, and less still during the third cycle, where the apparent decrease in velocity with temperature is near reversible (at 450°C, the P wave velocity is decreased by roughly 10% of the initial velocity). Our results, relevant for thermally dynamic environments such as geothermal reservoirs, highlight the value of performing measurements of rock properties under in situ temperature conditions.

  18. Localized water reverberation phases and its impact on back-projection images

    NASA Astrophysics Data System (ADS)

    Yue, H.; Castillo, J.; Yu, C.; Meng, L.; Zhan, Z.

    2017-12-01

    Coherent radiators imaged by back-projection (BP) are commonly interpreted as part of the rupture process. Nevertheless, artifacts introduced by structure-related phases are rarely discriminated from the rupture process. In this study, we adopt the logic of empirical Green's function (EGF) analysis to discriminate between rupture and structure effects. We re-examine the waveforms and BP images of the 2012 Mw 7.2 Indian Ocean earthquake and an EGF event (Mw 6.2). The P-wave codas of both events present a similar shape with a characteristic period of approximately 10 s, and are back-projected as coherent radiators near the trench. The S-wave BP does not image energy radiation near the trench. We interpret these coda waves as localized water reverberation phases excited near the trench. We perform 2-D waveform modeling using a realistic bathymetry model, and find that the sharp near-trench bathymetry traps the acoustic water waves, forming localized reverberation phases. These waves can be imaged as coherent near-trench radiators with features similar to those in the observations. We present a set of methods to discriminate between rupture and propagation effects in BP images, which can serve as a criterion for subevent identification.

  19. TOMO-ETNA Experiment -Etna volcano, Sicily, investigated with active and passive seismic methods

    NASA Astrophysics Data System (ADS)

    Luehr, Birger-G.; Ibanez, Jesus M.; Díaz-Moreno, Alejandro; Prudencio, Janire; Patane, Domenico; Zieger, Toni; Cocina, Ornella; Zuccarello, Luciano; Koulakov, Ivan; Roessler, Dirk; Dahm, Torsten

    2017-04-01

    The TOMO-ETNA experiment, part of the European Union project "MEDiterranean SUpersite Volcanoes (MED-SUV)", was devised to image the crustal structure beneath Etna using state-of-the-art passive and active seismic methods. Activities on land and offshore aimed to obtain new high-resolution seismic images to improve knowledge of the crustal structures beneath Etna volcano and northeast Sicily up to the Aeolian Islands. In a first phase (June 15 - July 24, 2014), two temporary seismic networks were installed at Etna volcano and surrounding areas, composed of 80 short-period and 20 broadband stations, in addition to the existing network of the "Istituto Nazionale di Geofisica e Vulcanologia" (INGV). In total, air-gun shots could thus be recorded by 168 stations onshore plus 27 ocean-bottom instruments offshore in the Tyrrhenian and Ionian Seas. Offshore activities were performed by Spanish and Italian research vessels. In a second phase, the broadband seismic network remained operative until October 28, 2014, and further offshore surveys were carried out during November 19-27, 2014. Active seismic sources were generated by an array of air-guns mounted on the Spanish oceanographic vessel "Sarmiento de Gamboa", with a capacity of up to 5,200 cubic inches. In total more than 26,000 shots were fired, and more than 450 local and regional earthquakes were recorded and will be analyzed. For resolving a volcanic structure, the investigation of attenuation and scattering of seismic waves is important. In contrast to existing studies, which are almost exclusively based on S-wave signals emitted by local earthquakes, here air-gun signals were investigated by applying a new methodology based on the coda energy ratio, defined as the ratio between the energy of the direct P-wave and the energy in a later coda window. It is based on the assumption that scattering caused by heterogeneities removes energy from the direct P-wave, the earliest possible arrival, and transfers it to later parts of the seismic wave train. As an independent proxy of the scattering strength along the ray path, we measure the peak delay time of the direct P-wave, which is well correlated with the coda energy ratio. As a result, the distribution of heterogeneities around Etna could be visualized by projecting the observations along the directions of the incident rays at the stations. Increased seismic scattering could be detected in the volcano and east of it. The strongly heterogeneous zone towards the east coast of Sicily supports earlier observations and is interpreted as a potential signature of the eastward-sliding volcano flank. Besides the investigation of P-wave scattering, the new seismic tomography software PARTOS (Passive Active Ray Tomography Software) has been developed, based on a joint inversion of active and passive seismic sources. With PARTOS, real-data inversion has been carried out using three different subsets: i) active data; ii) passive data; and iii) the joint dataset, permitting a new tomographic model of the region.
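
    Both single-trace proxies named here, the coda energy ratio and the P-wave peak delay, take only a few lines to compute. A hedged sketch (the window and lag lengths are illustrative choices, not the experiment's processing parameters):

        import numpy as np
        from scipy.signal import hilbert

        def scattering_proxies(trace, dt, t_p, win=5.0, lag=15.0):
            """Coda energy ratio (direct-P energy over a later coda window of
            equal length) and the onset-to-peak delay of the direct P envelope.
            t_p: direct P arrival time in seconds from trace start."""
            env = np.abs(hilbert(trace))                 # signal envelope
            i0, n, il = int(t_p / dt), int(win / dt), int(lag / dt)
            direct = env[i0 : i0 + n]
            coda = env[i0 + il : i0 + il + n]
            ratio = np.sum(direct**2) / np.sum(coda**2)
            peak_delay = np.argmax(direct) * dt          # onset-to-envelope-peak time
            return ratio, peak_delay

        rng = np.random.default_rng(5)
        tr = rng.standard_normal(4000) * np.exp(-np.arange(4000) / 1500.0)
        print(scattering_proxies(tr, dt=0.01, t_p=2.0))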

  20. Mechanical Strain Measurement from Coda Wave Interferometry

    NASA Astrophysics Data System (ADS)

    Azzola, J.; Schmittbuhl, J.; Zigone, D.; Masson, F.; Magnenet, V.

    2017-12-01

    Coda Wave Interferometry (CWI) aims at tracking small changes in solid materials such as rocks, in which elastic waves diffuse. Because the diffuse waves sample the medium intensively, the technique is much more sensitive than those relying on direct wave arrivals. Application of CWI to ambient seismic noise has found a wide range of applications over the past years, from multiscale imaging to the monitoring of complex structures such as regional faults or reservoirs (Lehujeur et al., 2015). Physically, the observed changes are typically interpreted as small variations of seismic velocities. However, this interpretation remains questionable. Here, a specific focus is put on the influence of the elastic deformation of the medium on CWI measurements. The goal of the present work is to show, from direct numerical and experimental modeling, that a deformation signal also exists in CWI measurements, which might provide new outcomes for the technique. For this purpose, we model seismic wave propagation within a diffusive medium using a spectral element approach (SPECFEM2D) during an elastic deformation of the medium. The mechanical behavior is obtained from a finite element approach (Code ASTER), keeping the mesh of the sample constant during the whole procedure to limit numerical artifacts. The CWI of the late arrivals in the synthetic seismograms is performed using both a stretching technique in the time domain and a frequency cross-correlation method. Both show that the elastic deformation of the scatterers is fully correlated with the CWI time shifts, in a manner distinct from an acoustoelastic effect. As an illustration, the modeled sample is chosen as an effective medium aiming to mechanically and acoustically reproduce a typical granitic reservoir rock. Our numerical approach is compared to experimental results in which multiple scattering of an acoustic wave through a perforated, loaded Au4G (Dural) plate is measured at laboratory scale. Experimental and numerical results on the strain influence on CWI are shown to be consistent. Lehujeur, M., J. Vergne, J. Schmittbuhl, and A. Maggi. Characterization of ambient seismic noise near a deep geothermal reservoir and implications for interferometric methods: a case study in northern Alsace, France. Geothermal Energy, 3(1):1-17, 2015.
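
    The time-domain stretching technique mentioned here resamples the current coda on dilated time axes and keeps the dilation that best matches the reference. A compact numpy sketch (the synthetic 0.3% stretch and grid settings are ours):

        import numpy as np

        def stretching_dvv(ref, cur, dt, eps_max=0.01, n_eps=401):
            """Return the stretching factor eps (and its correlation) that best
            maps the current coda onto the reference; a homogeneous velocity
            change shifts all coda arrivals proportionally, so eps is read as
            the relative velocity change (sign convention varies by author)."""
            t = np.arange(ref.size) * dt
            best = (-1.0, 0.0)                       # (correlation, eps)
            for eps in np.linspace(-eps_max, eps_max, n_eps):
                stretched = np.interp(t * (1.0 + eps), t, cur)
                c = np.corrcoef(ref, stretched)[0, 1]
                if c > best[0]:
                    best = (c, eps)
            return best[1], best[0]

        dt = 0.01
        t = np.arange(2000) * dt
        ref = np.sin(2 * np.pi * 5 * t) * np.exp(-t / 8.0)
        cur = np.interp(t * 1.003, t, ref)           # impose a 0.3% stretch
        print(stretching_dvv(ref, cur, dt))          # eps ~ -0.003 undoes it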

  1. Teleseismic Lg of Semipalatinsk and Novaya Zemlya Nuclear Explosions Recorded by the GRF (Gräfenberg) Array: Comparison with Regional Lg (BRV) and their Potential for Accurate Yield Estimation

    NASA Astrophysics Data System (ADS)

    Schlittenhardt, J.

    - A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (calculated from the log rms amplitudes using an appropriate calibration) with P-wave magnitude determinations of global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger data set from Novaya Zemlya and East Kazakh explosions, supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot yet be conclusively assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the specific path to GRF.

  2. Gait outcome following outpatient physiotherapy based on the Bobath concept in people post stroke.

    PubMed

    Lennon, Sheila; Ashburn, Ann; Baxter, David

    The purpose of this study was to characterize the gait cycle of patients with hemiplegia before and after a period of outpatient physiotherapy based on the Bobath concept. Nine patients, at least 6 weeks post stroke and recently discharged from a stroke unit, were measured before and after a period of outpatient physiotherapy (mean duration = 17.4 weeks). Therapy was documented using a treatment checklist for each patient. The primary outcome measures were a number of gait variables related to the therapists' treatment hypothesis, recorded during the gait cycle using the CODA motion analysis system. Other secondary outcome measures were the Motor Assessment Scale, Modified Ashworth Scale, subtests of the Sodring Motor Evaluation Scale, the Step test, a 10-m walk test, the Barthel Index and the London Handicap Score. Recovery of more normal gait patterns in the gait cycle (using motion analysis) did not occur. Significant changes in temporal parameters (loading response, single support time) for both legs, in one kinematic (dorsiflexion during stance) and one kinetic variable on the unaffected side (hip flexor moment), and most of the clinical measures of impairment, activity and participation (with the exception of the Modified Ashworth Scale and the 10-m walk) were noted. Study findings did not support the hypothesis that the Bobath approach restored more normal movement patterns to the gait cycle. Further research is required to investigate the treatment techniques that are effective at improving walking ability in people after stroke.

  3. U.S. Army Medical Department Journal, January-March 2006

    DTIC Science & Technology

    2006-03-01

    ...Commission on Dental Accreditation (CODA) in association with the American Dental Association (ADA). Advanced training in general dentistry is... available. In 2004, the first Army resident went to the Orofacial Pain Fellowship at the Orofacial Pain Center, Naval Postgraduate Dental School... presented by orofacial pain patients. ...Advanced Education in General Dentistry Program. DODI 6000.13 notes that "while internship...

  4. An Acoustically Based Sociolinguistic Analysis of Variable Coda /s/ Production in the Spanish of New York City

    ERIC Educational Resources Information Center

    Erker, Daniel Gerard

    2012-01-01

    This study examines a major linguistic event underway in New York City. Of its 10 million inhabitants, nearly a third are speakers of Spanish. This community is socially and linguistically diverse: Some speakers are recent arrivals from Latin America while others are lifelong New Yorkers. Some have origins in the Caribbean, the historic source of…

  5. Coda: The Slow Fuse of Change--Obama, the Schools, Imagination, and Convergence

    ERIC Educational Resources Information Center

    Greene, Maxine

    2009-01-01

    The author began writing this essay the day after waves of euphoria swept over what appeared to be a profoundly altered public space. Americans had seen the most diverse gathering of people coming freely together to affirm a common purpose no one could quite yet define. No one had instructed them to come out in the cold of that inauguration…

  6. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P-to-S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related to this, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, oriented parallel to and of similar scale as the basins and ranges at the surface. This structure would result in an anisotropic lower crust.
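    For readers unfamiliar with stochastic velocity models of this kind, the short sketch below generates a 2-D random medium by filtering white noise with a von Karman power spectrum (correlation length a, Hurst exponent nu) and scaling it to a target RMS perturbation; all parameter values are placeholders, not those estimated from the geologic maps in the study.

```python
import numpy as np

def von_karman_medium(nx, nz, dx, a=500.0, nu=0.3, rms=0.05, seed=0):
    """Fractional velocity perturbation field with von Karman statistics."""
    rng = np.random.default_rng(seed)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    kz = 2.0 * np.pi * np.fft.fftfreq(nz, dx)
    k2 = kx[:, None] ** 2 + kz[None, :] ** 2
    # 2-D von Karman power spectrum ~ (1 + k^2 a^2)^-(nu + 1).
    psd = (1.0 + k2 * a * a) ** -(nu + 1.0)
    noise = np.fft.fft2(rng.standard_normal((nx, nz)))
    field = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
    return rms * field / field.std()

# Usage: add the perturbation to a deterministic background model, e.g.
# v = v_background * (1.0 + von_karman_medium(512, 256, 100.0))
```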

  7. Fine-scale structure of the mid-mantle characterised by global stacks of PP precursors

    NASA Astrophysics Data System (ADS)

    Bentham, H. L. M.; Rost, S.; Thorne, M. S.

    2017-08-01

    Subduction zones are likely a major source of compositional heterogeneities in the mantle, which may preserve a record of the subduction history and mantle convection processes. The fine-scale structure associated with mantle heterogeneities can be studied using the scattered seismic wavefield that arrives as coda to, or as energy preceding, many body wave arrivals. In this study we analyse precursors to PP by creating stacks recorded at globally distributed stations. We create stacks aligned on the PP arrival in 5° distance bins (range 70-120°) from 600 earthquakes recorded at 193 stations, stacking a total of 7320 seismic records. As the energy trailing the direct P arrival, the P coda, interferes with the PP precursors, we suppress the P coda by subtracting a best-fitting exponential curve from this energy. The resultant stacks show that PP precursors related to scattering from heterogeneities in the mantle are present at all distances. Lateral variations are explored by producing two regional stacks across the Atlantic and Pacific hemispheres, but we find only negligible differences in the precursory signature between these two regions. The similarity of these two regions suggests that well-mixed subducted material can survive at upper- and mid-mantle depths. To describe the scattered wavefield in the mantle, we compare the global stacks to synthetic seismograms generated using a Monte Carlo phonon scattering technique. We propose a best-fitting layered heterogeneity model, BRT2017, characterised by a three-layer mantle with a background heterogeneity strength (ɛ = 0.8%) and a depth interval of increased heterogeneity strength (ɛ = 1%) between 1000 km and 1800 km. The scale length of heterogeneity is found to be 8 km throughout the mantle. Since mantle heterogeneity of 8 km scale may be linked to subducted oceanic crust, the detection of increased heterogeneity at mid-mantle depths could be associated with stalled slabs due to increases in viscosity, supporting recent observations of mantle viscosity increases due to the iron spin transition at depths of ∼1000 km.

  8. Coda Q and its Frequency Dependence in the Eastern Himalayan and Indo-Burman Plate Boundary Systems

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Kumar, A.

    2015-12-01

    We use broadband waveform data from 305 local earthquakes in the Eastern Himalayan and Indo-Burman plate boundary systems to model seismic attenuation in NE India. We measure the decay in amplitude of coda waves at discrete frequencies (between 1 and 12 Hz) to evaluate the quality factor (Qc) as a function of frequency. We combine these measurements to evaluate the frequency dependence of Qc of the form Qc(f) = Qo f^η, where Qo is the quality factor at 1 Hz and η is the frequency dependence. Computed Qo values range from 80 to 360 and η ranges from 0.85 to 1.45. To study the lateral variation in Qo and η, we regionalise Qc by combining all source-receiver measurements using a back-projection algorithm. For a single-backscatter model, the coda waves sample an elliptical area with the epicenter and receiver at the two foci. We parameterize the region using square grids. The algorithm calculates the overlap in area and distributes Qc in the sampled grids using the average Qc as the boundary value. This is done in an iterative manner, by minimising the misfit between the observed and computed Qc within each grid. This process is repeated for all frequencies, and η is computed for each grid by combining Qc for all frequencies. Our results reveal strong variation in Qo and η across NE India. The highest Qo are in the Bengal Basin (210-280) and the Indo-Burman subduction zone (300-360). The Shillong Plateau and Mikir Hills have intermediate Qo (~160), and the lowest Qo (~80) is observed in the Naga fold-thrust belt. This variation in Qo demarcates the boundary between the continental crust beneath the Shillong Plateau and Mikir Hills and the transitional to oceanic crust beneath the Bengal Basin and Indo-Burman subduction zone. The thick pile of sedimentary strata in the Naga fold-thrust belt results in the low Qo. The frequency dependence (η) of Qc across NE India is observed to be very high, with regions of high Qo being associated with relatively higher η.
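    The single-backscatter measurement described above linearizes the coda envelope A(f, t) ≈ S(f) t⁻¹ exp(-π f t / Qc), so Qc follows from the slope of ln(A·t) versus lapse time, and Qo and η then follow from a log-log fit of Qc(f) = Qo f^η. A hedged sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def coda_qc(envelope, t, f):
    """Qc at centre frequency f from a smoothed coda envelope A(t)."""
    y = np.log(envelope * t)        # ln(A t) = const - pi * f * t / Qc
    slope, _ = np.polyfit(t, y, 1)  # slope = -pi * f / Qc
    return -np.pi * f / slope

def q0_eta(freqs, qcs):
    """Fit Qc(f) = Qo * f**eta by linear regression in log-log space."""
    eta, log_q0 = np.polyfit(np.log(freqs), np.log(qcs), 1)
    return np.exp(log_q0), eta
```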

  9. Frequency-dependent Lg-wave attenuation in northern Morocco

    NASA Astrophysics Data System (ADS)

    Noriega, Raquel; Ugalde, Arantza; Villaseñor, Antonio; Harnafi, Mimoun

    2015-11-01

    Frequency-dependent attenuation (Q⁻¹) in the crust of northern Morocco is estimated from Lg-wave spectral amplitude measurements every quarter octave in the frequency band 0.8 to 8 Hz. This study takes advantage of the improved broadband data coverage in the region provided by the deployment of the IberArray seismic network. Earthquake data consist of 71 crustal events with magnitudes 4 ≤ mb ≤ 5.5 recorded on 110 permanent and temporary seismic stations between January 2008 and December 2013 with hypocentral distances between 100 and 900 km. 1274 high-quality Lg waveforms provide dense path coverage of northern Morocco, crossing a region with a complex structure and heterogeneous tectonic setting as a result of continuous interactions between the African and Eurasian plates. We use two different methods: the coda normalization (CN) analysis, which allows removal of the source and site effects from the Lg spectra, and the spectral amplitude decay (SAD) method, which simultaneously inverts for source, site, and path attenuation terms. The CN and SAD methods return similar results, indicating that the Lg Q models are robust to differences in the methodologies. Larger errors and no significant frequency dependence are observed for frequencies lower than 1.5 Hz. For distances up to 400 km and the frequency band 1.5 ≤ f ≤ 4.5 Hz, the model functions Q(f) = (529 +23/-22)(f/1.5)^(0.23 ± 0.06) and Q(f) = (457 ± 7)(f/1.5)^(0.44 ± 0.02) are obtained using the CN and SAD methods, respectively. A change in the frequency dependence is observed above 4.5 Hz for both methods, which may be related to the influence of the Sn energy on the Lg window. The frequency-dependent Q⁻¹ estimates represent an average attenuation beneath a broad region including the Rif and Tell mountains, the Moroccan and Algerian mesetas, the Atlas Mountains and the Sahara Platform structural domains, and correlate well with areas of moderate seismicity where intermediate Q values have been obtained.

  10. Improvement of coda phase detectability and reconstruction of global seismic data using frequency-wavenumber methods

    NASA Astrophysics Data System (ADS)

    Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng

    2018-02-01

    Due to uneven earthquake source and receiver distributions, our abilities to isolate weak signals from interfering phases and reconstruct missing data are fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain based approach using a `Projection Onto Convex Sets' (POCS) algorithm. POCS takes advantage of the sparsity of the dominating energies of phase arrivals in the fk domain, which enables an effective detection and reconstruction of the weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
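    For intuition, a minimal POCS-style loop of the kind described above alternates a sparsity projection (hard thresholding in the f-k, i.e. 2-D Fourier, domain) with a data-consistency projection (re-inserting the recorded traces). The linearly decaying threshold schedule and iteration count below are illustrative assumptions, not the paper's choices, and the band-stop filtering step is omitted.

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=50):
    """data: 2-D section (time x trace), zero-filled at missing traces.
    mask: boolean array, True where samples were actually recorded."""
    model = data.copy()
    spec_max = np.abs(np.fft.fft2(data)).max()
    for k in range(n_iter):
        spec = np.fft.fft2(model)
        thresh = spec_max * (1.0 - k / n_iter)  # decaying threshold schedule
        spec[np.abs(spec) < thresh] = 0.0       # projection 1: f-k sparsity
        model = np.real(np.fft.ifft2(spec))
        model[mask] = data[mask]                # projection 2: data fidelity
    return model
```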

  11. Modeling Events in the Lower Imperial Valley Basin

    NASA Astrophysics Data System (ADS)

    Tian, X.; Wei, S.; Zhan, Z.; Fielding, E. J.; Helmberger, D. V.

    2010-12-01

    The Imperial Valley below the US-Mexican border has few seismic stations but many significant earthquakes. Many of these events, such as the recent El Mayor-Cucapah event, have complex mechanisms involving a mixture of strike-slip and normal slip patterns, with now over 30 aftershocks of magnitude over 4.5. Unfortunately, many earthquake records from the Southern Imperial Valley display a great deal of complexity, i.e., strong Rayleigh-wave multipathing and extended codas. In short, regional recordings in the US are too complex to easily separate source properties from complex propagation. Fortunately, the Dec 30 foreshock (Mw=5.9) has excellent recordings teleseismically and regionally, and moreover is observed with InSAR. We use this simple strike-slip event to calibrate paths. In particular, we are finding record segments involving Pnl (including depth phases) and some surface waves (mostly Love waves) that appear well behaved, i.e., can be approximated by synthetics from 1D local models and events modeled with the Cut-and-Paste (CAP) routine. Simple events can then be identified along with path calibration. Modeling of the more complicated paths can then be started with known mechanisms. We will report on both the aftershocks and historic events.

  12. High-resolution Imaging of the Philippine Sea Plate subducting beneath Central Japan

    NASA Astrophysics Data System (ADS)

    Padhy, S.; Furumura, T.

    2016-12-01

    Thermal models predict that the oceanic crust of the young (<20 Ma) and warmer Philippine Sea plate (PHP) is more prone to melting. Deriving a high-resolution image of the PHP, including slab melting and other features of the subduction zone, is key to understanding the basics of earthquake occurrence and the origin of magma in a complex subduction zone like central Japan, where both the PHP and Pacific (PAC) plates subduct. To this end, we analyzed high-resolution waveforms of moderate-sized (M 4-6), intermediate-to-deep (>150 km) PAC earthquakes occurring in central Japan and conducted numerical simulation to derive a fine-scale PHP model, which is not constrained in earlier studies. Observations show spindle-shaped seismograms with strong converted phases and extended coda with very slow decay from a group of PAC events occurring in the northern part of central Japan and recorded by high-sensitivity seismograph network (Hi-net) stations in the region. We investigate the mechanism of propagation of these anomalous waveforms using finite difference method (FDM) simulation of wave propagation through the subduction zone. We examine the effects on waveform changes of major subduction zone features, such as the melting of oceanic crust in the PHP, the serpentinized mantle wedge, the hydrated layer on the PAC due to slab dehydration, and an anomaly in the upper mantle between the PAC and PHP. Simulation results show that the waveform anomaly is primarily explained by strong scattering and absorption of high-frequency energy by the low-velocity anomalous mantle structure, with strong coda excitation yielding spindle-shaped waveforms. The data are secondarily explained by melting of the PHP basaltic crust. The location of the mantle anomaly is tightly constrained by the observation and evidence of PAC thinning in the region; these localized low-velocity structures aid the ascent of slab-derived fluids around the slab thinning. We expect that the results of this study will enhance our present understanding of the mechanism of intermediate-to-deep earthquakes in the region.

  13. Inner core boundary topography explored with reflected and diffracted P waves

    NASA Astrophysics Data System (ADS)

    deSilva, Susini; Cormier, Vernon F.; Zheng, Yingcai

    2018-03-01

    The existence of topography of the inner core boundary (ICB) can affect the amplitude, phase, and coda of body waves incident on the inner core. By applying pseudospectral and boundary element methods to synthesize compressional waves interacting with the ICB, these effects are predicted and compared with waveform observations in pre-critical, critical, post-critical, and diffraction ranges of the PKiKP wave reflected from the ICB. These data sample overlapping regions of the inner core beneath the circum-Pacific belt and the Eurasian, North American, and Australian continents, but exclude large areas beneath the Pacific and Indian Oceans and the poles. In the pre-critical range, PKiKP waveforms require an upper bound of 2 km at 1-20 km wavelength for any ICB topography. Higher topography sharply reduces PKiKP amplitude and produces time-extended coda not observed in PKiKP waveforms. The existence of topography of this scale smooths over minima and zeros in the pre-critical ICB reflection coefficient predicted from standard earth models. In the range surrounding critical incidence (108-130°), this upper bound of topography does not strongly affect the amplitude and waveform behavior of PKIKP + PKiKP at 1.5 Hz, which is relatively insensitive to 10-20 km wavelength topography height approaching 5 km. These data, however, have a strong overlap in the regions of the ICB sampled by pre-critical PKiKP that require a 2 km upper bound to topography height. In the diffracted range (>152°), topography as high as 5 km attenuates the peak amplitudes of PKIKP and PKPCdiff by similar amounts, leaving the PKPCdiff/PKIKP amplitude ratio unchanged from that predicted by a smooth ICB. The observed decay of PKPCdiff into the inner core shadow and the PKIKP-PKPCdiff differential travel time are consistent with a flattening of the outer core P velocity gradient near the ICB and iron enrichment at the bottom of the outer core.

  14. Electronic laboratory quality assurance program: A method of enhancing the prosthodontic curriculum and addressing accreditation standards.

    PubMed

    Moghadam, Marjan; Jahangiri, Leila

    2015-08-01

    An electronic quality assurance (eQA) program was developed to replace a paper-based system and to address standards introduced by the Commission on Dental Accreditation (CODA) and to improve educational outcomes. This eQA program provides feedback to predoctoral dental students on prosthodontic laboratory steps at New York University College of Dentistry. The purpose of this study was to compare the eQA program of performing laboratory quality assurance with the former paper-based format. Fourth-year predoctoral dental students (n=334) who experienced both the paper-based and the electronic version of the quality assurance program were surveyed about their experiences. Additionally, data extracted from the eQA program were analyzed to identify areas of weakness in the curriculum. The study findings revealed that 73.8% of the students preferred the eQA program to the paper-based version. The average number of treatments that did not pass quality assurance standards was 119.5 per month. This indicated a 6.34% laboratory failure rate. Further analysis of these data revealed that 62.1% of the errors were related to fixed prosthodontic treatment, 27.9% to partial removable dental prostheses, and 10% to complete removable dental prostheses in the first 18 months of program implementation. The eQA program was favored by dental students who have experienced both electronic and paper-based versions of the system. Error type analysis can yield the ability to create customized faculty standardization sessions and refine the didactic and clinical teaching of the predoctoral students. This program was also able to link patient care activity with the student's laboratory activities, thus addressing the latest requirements of the CODA regarding the competence of graduates in evaluating laboratory work related to their patient care.

  15. Effectiveness of a Littoral Combat Ship as a Major Node in a Wireless Mesh Network

    DTIC Science & Technology

    2017-03-01

    [Front-matter excerpt: lists of figures and acronyms; only the definitions are recoverable.] Figure 6. Cloud Relay Groups. Source: Persistent Systems (2014a). Figure 7. SolarWinds Network Performance Monitor... CIG Commander's Initiative Group; CLI Command Line Interface; CN Core Network; CODA Common Optical Digital Architecture; CPS Cyber-Physical Systems; CSBA Center for Strategic and Budgetary...; CSG Carrier Strike Group; DAMA Demand Assigned Multiple Access; DDG Guided Missile Destroyer; DL Distributed...

  16. Predicting Explosion-Generated Sn and Lg Coda Using Synthetic Seismograms

    DTIC Science & Technology

    2008-09-01

    velocities in the upper crust are based on borehole data, geologic and gravity data, refraction studies and seismic experiments (McLaughlin et al., 1983)... realizations of random media. We have estimated the heterogeneity parameters for the NTS using available seismic and geologic data. Lateral correlation... variance and coherence measures between seismic traces are estimated from clusters of nuclear explosions and well-log data. The horizontal von Karman

  17. A rondo in three flats

    NASA Astrophysics Data System (ADS)

    Hatheway, Alson E.

    2008-08-01

    "Rondo: an instrumental composition typically with a refrain recurring four times in the tonic and with three couplets in contrasting keys." --G. & C. Merriam Co. New York 1973 The composition will be played on three instruments, a cryogenic space surveillance sensor, a document scanner and an optical image correlator. The performer will take the liberty of including an introduction and a coda. After the first couplet the listeners may sing along with the performer.

  18. Introduction to special issue: Robert Jay Kastenbaum (1932-2013).

    PubMed

    Fulton, Robert; Klass, Dennis; Doka, Kenneth J; Kastenbaum, Beatrice

    The three pieces in this section introduce the Festschrift celebrating the works and influence of Omega: Journal of Death and Dying's founding editor, Robert Kastenbaum. Robert Fulton, an early Associate Editor of the Journal begins with some personal reflections on Kastenbaum. Klass and Doka then describe the nature of the Festschrift. A closing coda by Robert Kastenbaum's wife, Beatrice Kastenbaum, reminds us of the person behind the work.

  19. Current Status of Postdoctoral and Graduate Programs in Dentistry.

    PubMed

    Assael, Leon

    2017-08-01

    Advanced dental education has evolved in the context of societal needs and economic trends to its current status. Graduate programs have positioned their role in the context of health systems and health science education trends in hospitals, interprofessional clinical care teams, and dental schools and oral health care systems. Graduate dental education has been a critical factor in developing teams in trauma care, craniofacial disorders, pediatric and adult medicine, and oncology. The misalignment of the mission of graduate dental programs and the demands of private practice has posed a challenge in the evolution of programs, as educational programs have been directed towards tertiary and indigent care while the practice community focuses on largely healthy, affluent patients for complex clinical interventions. Those seeking graduate dental education today are smaller in number and include more international dental graduates than in the past. Graduate dental education in general dentistry and in the nine recognized dental specialties now includes Commission on Dental Accreditation (CODA) recognition of training standards as part of its accreditation process and a CODA accreditation process for areas of clinical education not recognized as specialties by the American Dental Association. Current types of programs include fellowship training for students in recognized specialties. This article was written as part of the project "Advancing Dental Education in the 21st Century."

  20. Spectral identification of sperm whales from Littoral Acoustic Demonstration Center passive acoustic recordings

    NASA Astrophysics Data System (ADS)

    Sidorovskaia, Natalia A.; Richard, Blake; Ioup, George E.; Ioup, Juliette W.

    2005-09-01

    The Littoral Acoustic Demonstration Center (LADC) made a series of passive broadband acoustic recordings in the Gulf of Mexico and Ligurian Sea to study noise and marine mammal phonations. The collected data contain a large number of various types of sperm whale phonations, such as isolated clicks and communication codas. It was previously reported that the spectrograms of the extracted clicks and codas contain well-defined null patterns that seem to be unique to individuals. The null pattern is formed by individual features of the sound production organs of an animal. These observations motivated the present studies of adapting human speech identification techniques for deep-diving marine mammal phonations. A three-state trained hidden Markov model (HMM) was used with the phonation spectra of sperm whales. The HMM algorithm gave 75% accuracy in identifying individuals when initially tested on the acoustic data set correlated with visual observations of sperm whales. A comparison of the identification accuracy based on null-pattern similarity analysis and the HMM algorithm is presented. The results can establish the foundation for developing an acoustic identification database for sperm whales and possibly other deep-diving marine mammals that would be difficult to observe visually. [Research supported by ONR.]

  1. Modeling Array Stations in SIG-VISA

    NASA Astrophysics Data System (ADS)

    Ding, N.; Moore, D.; Russell, S.

    2013-12-01

    We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, which is false when many stations are close together, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information from array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform 'probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having highest posterior probability under the model, conditioned on the signals at array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.
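    A compact sketch of the kriging idea described above, under strong simplifying assumptions: a squared-exponential covariance over array-element coordinates, a fixed noise level, and lengthscale selection by grid search on the Gaussian log marginal likelihood (up to an additive constant). None of the names below are SIG-VISA's own.

```python
import numpy as np

def learn_lengthscale(X, y, lengthscales, noise=1e-2):
    """X: (n, d) element coordinates; y: (n,) one event attribute, demeaned."""
    best_ll, best_l = -np.inf, None
    for l in lengthscales:
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-0.5 * d2 / l ** 2) + noise * np.eye(len(y))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        # Log marginal likelihood, dropping the constant -n/2 * log(2*pi).
        ll = -0.5 * y @ alpha - np.log(np.diag(L)).sum()
        if ll > best_ll:
            best_ll, best_l = ll, l
    return best_l
```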

  2. Equatorial anisotropy in the inner part of Earth's inner core from autocorrelation of earthquake coda

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Song, Xiaodong; Xia, Han H.

    2015-03-01

    The Earth's solid inner core exhibits strong anisotropy, with wave velocity dependent on the direction of propagation due to the preferential alignment of iron crystals. Variations in the anisotropic structure, laterally and with depth, provide markers for measuring inner-core rotation and offer clues into the formation and dynamics of the inner core. Previous anisotropy models of the inner core have assumed a cylindrical anisotropy in which the symmetry axis is parallel to the Earth's spin axis. An inner part of the inner core with a distinct form of anisotropy has been suggested, but there is considerable uncertainty regarding its existence and characteristics. Here we analyse the autocorrelation of earthquake coda measured by global broadband seismic arrays between 1992 and 2012, and find that the differential travel times of two types of core-penetrating waves vary at low latitudes by up to 10 s. Our findings are consistent with seismic anisotropy in the innermost inner core that has a fast axis near the equatorial plane through Central America and Southeast Asia, in contrast to the north-south alignment of anisotropy in the outer inner core. The different orientations and forms of anisotropy may represent a shift in the evolution of the inner core.

  3. Separable Roles for Attentional Control Sub-Systems in Reading Tasks: A Combined Behavioral and fMRI Study

    PubMed Central

    Ihnen, S.K.Z.; Petersen, Steven E.; Schlaggar, Bradley L.

    2015-01-01

    Attentional control is important both for learning to read and for performing difficult reading tasks. A previous study invoked 2 mechanisms to explain reaction time (RT) differences between reading tasks with variable attentional demands. The present study combined behavioral and neuroimaging measures to test the hypotheses that there are 2 mechanisms of interaction between attentional control and reading; that these mechanisms are dissociable both behaviorally and neuro-anatomically; and that the 2 mechanisms involve functionally separable control systems. First, RT evidence was found in support of the 2-mechanism model, corroborating the previous study. Next, 2 sets of brain regions were identified as showing functional magnetic resonance imaging blood oxygen level-dependent activity that maps onto the 2-mechanism distinction. One set included bilateral Cingulo-opercular regions and mostly right-lateralized Dorsal Attention regions (CO/DA+). This CO/DA+ region set showed response properties consistent with a role in reporting which processing pathway (phonological or lexical) was biased for a particular trial. A second set was composed primarily of left-lateralized Frontal-parietal (FP) regions. Its signal properties were consistent with a role in response checking. These results demonstrate how the subcomponents of attentional control interact with subcomponents of reading processes in healthy young adults. PMID:24275830

  4. Comment on "Localized water reverberation phases and its impact on back-projection images" by Yue et al. [2017]

    NASA Astrophysics Data System (ADS)

    Fan, W.; Shearer, P. M.

    2017-12-01

    Fan and Shearer [2016] analyzed the 2012 Mw 7.2 Sumatra earthquake and reported that the earthquake dynamically triggered early aftershock(s) 150 km away from the mainshock and 50 s later. The early aftershock(s) were detected with teleseismic P-wave back-projection, coincided with passing surface waves, and showed observable seismic waveforms in a wide frequency range (0.02-5 Hz). Recently, however, Yue et al. [2017] interpreted these coda arrivals as water reverberations from the mainshock, based mostly on empirical Green's function (EGF) analysis of a nearby M6 earthquake and a water-phase synthetic test. Here, we show detailed back-projection and waveform analysis of three M6 earthquakes within 100 km of the Mw 7.2 earthquake, including the EGF event analyzed in Yue et al. [2017]. In addition, we examine the waveforms of three M5.5 reverse-faulting earthquakes close to our detected early aftershock landward of the trench. Our results show that the coda energy in question is more likely caused by a separate earthquake near the trench than by a mainshock water reverberation phase, thus supporting our earlier conclusion that the detected coherent radiators are likely dynamically triggered early aftershock(s).

  5. Moon meteoritic seismic hum: Steady state prediction

    USGS Publications Warehouse

    Lognonne, P.; Feuvre, M.L.; Johnson, C.L.; Weber, R.C.

    2009-01-01

    We use three different statistical models describing the frequency of meteoroid impacts on Earth to estimate the seismic background noise due to impacts on the lunar surface. Because of diffraction, seismic events on the Moon are typically characterized by long codas, lasting 1 h or more. We find that the small but frequent impacts generate seismic signals whose codas overlap in time, resulting in a permanent seismic noise that we term the "lunar hum," by analogy with the Earth's continuous background seismic hum. We find that the Apollo-era impact detection rates and amplitudes are well explained by a model that parameterizes (1) the net seismic impulse due to the impactor and resulting ejecta and (2) the effects of diffraction and attenuation. The formulation permits the calculation of a composite waveform at any point on the Moon due to simulated impacts at any epicentral distance. The root-mean-square amplitude of this waveform yields a background noise level that is about 100 times lower than the resolution of the Apollo long-period seismometers. At 2 s periods, this noise level is more than 1000 times lower than the low noise model prediction for Earth's microseismic noise. Sufficiently sensitive seismometers will allow the future detection of several impacts per day at body wave frequencies.
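    The steady-state argument above can be illustrated with a toy superposition, using entirely made-up rates, amplitudes, and decay constants: Poisson-distributed impact times each launch an exponentially decaying coda envelope, and the RMS of their sum gives a stationary background level.

```python
import numpy as np

def lunar_hum_rms(rate_per_day=200.0, duration=86400.0, dt=1.0,
                  tau=1800.0, seed=0):
    """RMS of superposed impact codas (all parameter values are toy choices)."""
    rng = np.random.default_rng(seed)
    n_impacts = rng.poisson(rate_per_day * duration / 86400.0)
    t = np.arange(0.0, duration, dt)
    trace = np.zeros_like(t)
    for t0 in rng.uniform(0.0, duration, n_impacts):
        late = t >= t0
        # Lognormal size distribution and exponential coda decay (both toy).
        trace[late] += rng.lognormal(0.0, 1.0) * np.exp(-(t[late] - t0) / tau)
    return np.sqrt(np.mean(trace ** 2))
```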

  6. The interface between morphology and phonology: Exploring a morpho-phonological deficit in spoken production

    PubMed Central

    Cohen-Goldberg, Ariel M.; Cholin, Joana; Miozzo, Michele; Rapp, Brenda

    2013-01-01

    Morphological and phonological processes are tightly interrelated in spoken production. During processing, morphological processes must combine the phonological content of individual morphemes to produce a phonological representation that is suitable for driving phonological processing. Further, morpheme assembly frequently causes changes in a word's phonological well-formedness that must be addressed by the phonology. We report the case of an aphasic individual (WRG) who exhibits an impairment at the morpho-phonological interface. WRG was tested on his ability to produce phonologically complex sequences (specifically, coda clusters of varying sonority) in heteromorphemic and tautomorphemic environments. WRG made phonological errors that reduced coda sonority complexity in multimorphemic words (e.g., passed→[pæstɪd]) but not in monomorphemic words (e.g., past). WRG also made similar insertion errors to repair stress clash in multimorphemic environments, confirming his sensitivity to cross-morpheme well-formedness. We propose that this pattern of performance is the result of an intact phonological grammar acting over the phonological content of morphemic representations that were weakly joined because of brain damage. WRG may constitute the first case of a morpho-phonological impairment—these results suggest that the processes that combine morphemes constitute a crucial component of morpho-phonological processing. PMID:23466641

  7. CoDA 2014 special issue: Exploring data-focused research across the department of energy: Editorial

    DOE PAGES

    Myers, Kary Lynn

    2015-10-05

    Here, this collection of papers, written by researchers at the national labs, in academia, and in industry, presents real problems, massive and complex datasets, and novel statistical approaches motivated by the challenges of experimental and computational science. You'll find explorations of the trajectories of aircraft and of the light curves of supernovae, of computer network intrusions and of nuclear forensics, of photovoltaics and of overhead imagery.

  8. Near Source Contributions to Teleseismic P Wave Coda and Regional Phases

    DTIC Science & Technology

    1991-04-27


  9. Regional Magnitude Research Supporting Broad-Area Monitoring of Small Seismic Events

    DTIC Science & Technology

    2007-09-01

    detonated at the Nevada Test Site (NTS) and the Semipalatinsk Test Site (STS). Observations for both test sites show that Pn amplitudes yield scale 10...identification procedures, and yield, via direct comparison to test site results for high frequencies (>1 Hz). Coda techniques are known to be effective...2006). Source spectral modeling of regional P/S discriminants at nuclear test sites in China and the former Soviet Union, Bull. Seismol. Soc. Am

  10. Modeling the Combined Effects of Deterministic and Statistical Structure for Optimization of Regional Monitoring

    DTIC Science & Technology

    2014-06-30

    ...stations in Eurasia. This is accomplished by synthesizing seismograms using a radiative transport technique to predict the high-frequency coda (>5 Hz...

  11. Half a century of cross-cultural psychology: A grateful coda.

    PubMed

    Lonner, Walter J

    2015-11-01

    This article provides brief commentaries on culture-oriented research in psychology and a synopsis of the author's 50-year involvement in cross-cultural psychology. Overviews of several areas with which he is more familiar are given. These include his career-long stewardship of the Journal of Cross-Cultural Psychology, of which he is founding and special issues editor, continuous involvement with the International Association for Cross-Cultural Psychology, ongoing interest in the search for psychological universals, studying the influence of cultures on personality, values, and other psychological dimensions, monitoring the inclusion of culture in introductory psychology texts, contributions to cross-cultural counseling, and sustained involvement with the Online Readings in Psychology and Culture since its inception. Also included are comments on both the ever-expanding research on culture's influence on behavior and thought by a growing network of scholars who have different, yet complementary, agendas and research methods.

  12. Temporal Change of Seismic Earth's Inner Core Phases: Inner Core Differential Rotation Or Temporal Change of Inner Core Surface?

    NASA Astrophysics Data System (ADS)

    Yao, J.; Tian, D.; Sun, L.; Wen, L.

    2017-12-01

    Since Song and Richards [1996] first reported seismic evidence for temporal change of the PKIKP wave (a compressional wave refracted in the inner core) and proposed inner core differential rotation as its explanation, the finding has generated enormous interest in the scientific community and the public, and has motivated many studies of the implications of inner core differential rotation. However, since Wen [2006] reported seismic evidence for temporal change of the PKiKP wave (a compressional wave reflected from the inner core boundary) that requires temporal change of the inner core surface, both interpretations for the temporal change of inner core phases have coexisted, i.e., inner core rotation and temporal change of the inner core surface. In this study, we discuss the interpretation of the observed temporal changes of those inner core phases and conclude that inner core differential rotation is not only not required but also in contradiction with three lines of seismic evidence from global repeating earthquakes. Firstly, inner core differential rotation provides an implausible explanation for a disappearing inner core scatterer between a doublet in the South Sandwich Islands (SSI), which is located beneath northern Brazil based on the PKIKP and PKiKP coda waves of the earlier event of the doublet. Secondly, temporal change of PKIKP and its coda waves among a cluster in the SSI is inconsistent with the interpretation of inner core differential rotation, with one set of the data requiring inner core rotation and the other requiring non-rotation. Thirdly, it is not reasonable to invoke inner core differential rotation to explain travel time change of PKiKP waves on a very short time scale (several months), which is observed for repeating earthquakes in the Middle America subduction zone. On the other hand, temporal change of the inner core surface could provide a consistent explanation for all the observed temporal changes of PKIKP and PKiKP and their coda waves. We conclude that the observed temporal changes of the inner core phases are caused by temporal changes of the inner core surface. The temporal changes of the inner core surface are found to occur in localized regions within a short time scale (years to months), a phenomenon that should provide important clues to a potentially fundamental change of our understanding of core dynamics.

  13. Comparisons of Source Characteristics between Recent Inland Crustal Earthquake Sequences inside and outside of Niigata-Kobe Tectonic Zone, Japan

    NASA Astrophysics Data System (ADS)

    Somei, K.; Asano, K.; Iwata, T.; Miyakoshi, K.

    2012-12-01

    After the 1995 Kobe earthquake, many M7-class inland earthquakes occurred in Japan. Some of those events (e.g., the 2004 Chuetsu earthquake) occurred in a tectonic zone which is characterized as a high-strain-rate zone by GPS observation (Sagiya et al., 2000) or by a dense distribution of active faults. That belt-like zone along the Japan Sea coast of the Tohoku and Chubu districts, and north of the Kinki district, is called the Niigata-Kobe tectonic zone (NKTZ; Sagiya et al., 2000). We investigate the seismic scaling relationship for recent inland crustal earthquake sequences in Japan and compare source characteristics between events occurring inside and outside the NKTZ. We used the S-wave coda for estimating source spectra. The source spectral ratio is obtained from the S-wave coda spectral ratio between the records of large and small events occurring close to each other, using the nationwide strong motion networks (K-NET and KiK-net) and the broadband seismic network (F-net), to remove propagation-path and site effects. We carefully examined the commonality of the decay of coda envelopes between event-pair records and modeled the observed spectral ratio by the source spectral ratio function, assuming omega-square source models for the large and small events. We estimated the corner frequencies and seismic moment (ratio) from the modeled spectral ratio function. We determined Brune's stress drops of 356 events (Mw: 3.1-6.9) in ten earthquake sequences occurring in the NKTZ and six sequences occurring outside the NKTZ. Most of the source spectra obey omega-square source spectra. There is no obvious systematic difference between stress drops of events in the NKTZ and those outside it. We may conclude that a systematic difference in seismic source scaling between events occurring inside and outside the NKTZ does not exist and that the average source scaling relationship can be effective for inland crustal earthquakes. Acknowledgements: Waveform data were provided by K-NET, KiK-net and F-net, operated by the National Research Institute for Earth Science and Disaster Prevention, Japan. This study is supported by the Multidisciplinary research project for the Niigata-Kobe tectonic zone promoted by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.

  14. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects are: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for and broaden the applicability of estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by grid search over strike, dip, rake and depth, and the seismic moment, or equivalently the moment magnitude MW, is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes, CAP+.
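    A heavily simplified sketch of the CAP-style fitting loop summarized above: the Green's-function engine is abstracted behind a hypothetical make_synthetic callable (not a real library API), the mechanism grid is deliberately coarse, and each window's time shift is found by cross-correlation, with a circular shift as a further simplification.

```python
import numpy as np
from itertools import product

def shifted_misfit(obs, syn, max_shift):
    """L2 misfit after the best lag within +/- max_shift samples."""
    xc = np.correlate(obs, syn, mode="full")
    mid = len(syn) - 1                      # index of zero lag
    lag = np.argmax(xc[mid - max_shift:mid + max_shift + 1]) - max_shift
    return np.sum((obs - np.roll(syn, lag)) ** 2), lag

def cap_grid_search(windows, make_synthetic, max_shift=20):
    """windows: dict mapping window name (e.g. 'Pnl_z') to an observed trace."""
    best_misfit, best_mech = np.inf, None
    for strike, dip, rake, depth in product(range(0, 360, 10),
                                            range(0, 91, 10),
                                            range(-90, 91, 10),
                                            (5, 10, 15)):
        total = 0.0
        for name, obs in windows.items():
            syn = make_synthetic(name, strike, dip, rake, depth)
            misfit, _ = shifted_misfit(obs, syn, max_shift)
            total += misfit
        if total < best_misfit:
            best_misfit, best_mech = total, (strike, dip, rake, depth)
    return best_mech, best_misfit
```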

  15. Waveform complexity caused by near trench structure and its impact on earthquake source study: application to the 2015 Illapel earthquake sequence

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wei, S.; Wu, W.; Ni, S.

    2017-12-01

    Among the various types of 3D heterogeneity in the Earth, trenches might be the most complex systems, including rapidly varying bathymetry and usually thick sediment beneath the water layer. These structural complexities can cause substantial waveform complexity on seismograms, but their impact on earthquake source studies has not yet been well understood. Here we explore those effects via studies of two moderate aftershocks (one near the coast, the other close to the Peru-Chile trench axis) in the 2015 Illapel earthquake sequence. The horizontal locations and depths of these two events are poorly constrained, and the results reported by various agencies display substantial variations. Thus, we first relocated the epicenters using the P-wave first arrivals and determined the other parameters by waveform fitting. Using a jackknifing approach, we found that the trench event shows large differences between regional and teleseismic solutions, in particular for depth, while the coastal event shows consistent results. The teleseismic P/Pdiff waves of these two events also display distinctly different features. More specifically, the trench event has more complex P/Pdiff waves and stronger coda waves, in terms of amplitude and duration (longer than 100 s). The coda waves are coherent across stations at different distances and azimuths, indicating a likely origin in waves scattered by 3D heterogeneity near the trench. To quantitatively model those 3D effects, we adopted a hybrid waveform simulation approach that computes the 3D wavefield in the source region with the Spectral Element Method (SEM) and then propagates the wavefield to teleseismic and shadow-zone distances through the Direct Solution Method (DSM). We incorporated the GEBCO bathymetry and the water layer into the SEM simulations and assumed the IASP91 1D model for the DSM computation. Compared with the poor fit of the 1D synthetics to the data, we obtain dramatic improvements in the 3D waveform fits across a series of frequency bands. With sensitivity tests of 3D waveform modeling, the centroid longitude and depth of the near-trench event are refined. Our study suggests that the complex trench structure must be taken into account for a reliable analysis of shallow earthquakes near the trench, in particular for the shallowest tsunamigenic earthquakes.

  16. Continent-Wide Maps of Lg Coda Q Variation and Rayleigh-wave Attenuation Variation for Eurasia

    DTIC Science & Technology

    2007-01-30

    lithosphere and crustal strain lead us to infer that fluids, originating by hydrothermal release from subducting lithosphere or other upper mantle heat...relatively low Qo values in the Arabian Peninsula are produced by fluids that have been released in the upper mantle by hydrothermal processes and have...Advection of plumes in mantle flow: Implications for hotspot motion, mantle viscosity and plume distribution, Geophys. J. Int., 132, 412–434. Talebian, M

  17. Analysis of Deep Seafloor Arrivals Observed on NPAL04

    DTIC Science & Technology

    2012-12-03

    transmission station to the scattering point (black line) to compute the time spent on the PE-predicted path to the scattering point. This time would...arrives at the OBSs at times corresponding to caustics of the PE predicted time fronts, there are large amplitude, late arrivals that occur between... caustics and even after the PE predicted coda. Similar analysis was done for T500 to T2300 with similar results and is discussed in Section 4 of

  18. Modeling the Combined Effects of Deterministic and Statistical Structure for Optimization of Regional Monitoring

    DTIC Science & Technology

    2015-06-30

    ...synthesizing seismograms using a radiative transport technique to predict the high-frequency coda (2 to 4 Hz) of regional seismic phases at stations...

  19. Scattering - a probe to Earth's small scale structure

    NASA Astrophysics Data System (ADS)

    Rost, S.; Earle, P.

    2009-05-01

    Much of the short-period teleseismic wavefield shows strong evidence for scattered waves in extended codas trailing the main arrivals predicted by ray theory. This energy mainly originates from high-frequency body waves interacting with fine-scale volumetric heterogeneities in the Earth. Studies of this energy have revealed much of what we know about Earth's structure at scale lengths around 10 km, from crust to core. From these data we can gain important information about the mineral-physical and geochemical constitution of the Earth that is inaccessible to many other seismic imaging techniques. Previous studies used scattered energy related to PKP, PKiKP, and Pdiff to identify and map the small-scale structure of the mantle and core. We will present observations related to the core phases PKKP and P'P' to study fine-scale mantle heterogeneities. These phases are maximum travel-time phases with respect to perturbations at their reflection points. This allows observation of the scattered energy as precursors to the main phase, avoiding common problems with traditional coda phases, which arrive after the main pulse. The precursory arrival of the scattered energy allows the separation of deep Earth and crustal contributions to the scattered wavefield for certain source-receiver configurations. Using the information from these scattered phases, we identify regions of the mantle that show increased scattering potential, likely linked to larger-scale mantle structure identified in seismic tomography and geodynamical models.

  20. PubMed

    de Quadros, Ronice Müller; Cruz, Carina Rebello; Pizzio, Aline Lemos

    2012-01-01

    This study compared the performance in phonological memory tasks of bimodal bilingual hearing children (children of deaf parents) and deaf children with cochlear implants (children of deaf parents and of hearing parents) with different contexts of access to Brazilian Sign Language (Libras). We used two tests: Portuguese Pseudowords (Santos and Bueno, 2003) and Libras Pseudosigns (developed by researchers from the Bimodal Bilingual Development Project). Moreover, we included two control groups, one of deaf children with deaf parents growing up with Libras, and the other of hearing adult Codas, bimodal bilinguals with deaf parents. In the analysis of the results, regarding performance among the groups tested, it was found that the bimodal bilingual children had higher scores on both tests. However, when we analyzed the performance of the deaf child with a cochlear implant and deaf parents, with full access to sign language, compared to the other children with cochlear implants, with restricted access to Libras, we found that this child performed similarly to the Coda children. The cochlear-implanted children with restricted access to Libras, and therefore more access to Portuguese, had lower scores on both tests, with the worst score on the Portuguese test. The results show that children with cochlear implants can benefit when they have access to Libras, performing similarly to hearing bimodal bilingual children.

  1. Anisotropy of the Earth's inner inner core from autocorrelations of earthquake coda in China Regional Seismic Networks

    NASA Astrophysics Data System (ADS)

    Xia, H.; Song, X.; Wang, T.

    2014-12-01

    The Earth's inner core possesses strong cylindrical anisotropy with the fast symmetry axis parallel to the rotation axis. However, a recent study has suggested that the inner part of the inner core has a fast symmetry axis near the equator, with a different form of anisotropy from the outer part (Wang et al., this session). To confirm the observation, we use data from the dense seismic arrays of the China Regional Seismic Networks. We perform autocorrelation (ACC) of the coda after major earthquakes (Mw ≥ 7.0) at each station and then stack the ACCs at each cluster of stations. The PKIKP2 and PKIIKP2 phases (round-trip phases reflected once at the Earth's surface) can be clearly extracted from the stacked empirical Green's functions. We observe systematic variation of the differential times between the PKIKP2 and PKIIKP2 phases, which are sensitive to the bulk anisotropy of the inner core. The differential times show large variations with both latitude and longitude, even though our ray paths are not polar (our stations lie at mid-range latitudes of about 20 to 45 degrees). The observations cannot be explained by an averaged anisotropy model with the fast axis along the rotation axis. The pattern appears consistent with an inner inner core that has a fast axis near the equator.
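    A bare-bones sketch of this coda-autocorrelation workflow, under assumed simplifications: equal-length coda windows, light spectral whitening against a smoothed amplitude spectrum, autocorrelation via the power spectrum, and a plain mean stack over the stations of one cluster.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def coda_autocorrelation_stack(traces, smooth_bins=50):
    """traces: list of equal-length 1-D coda windows from one station cluster."""
    stacks = []
    for x in traces:
        X = np.fft.rfft(x - x.mean())
        # Whiten against a smoothed amplitude spectrum, keeping fine structure.
        X /= uniform_filter1d(np.abs(X), size=smooth_bins) + 1e-12
        acc = np.fft.irfft(X * np.conj(X))      # autocorrelation (Wiener-Khinchin)
        stacks.append(acc / np.abs(acc).max())
    return np.mean(stacks, axis=0)              # empirical Green's function proxy
```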

  2. Numerical modeling of time-lapse monitoring of CO2 sequestration in a layered basalt reservoir

    USGS Publications Warehouse

    Khatiwada, M.; Van Wijk, K.; Clement, W.P.; Haney, M.

    2008-01-01

    As part of the Big Sky Carbon Sequestration Partnership (BSCSP) plans to inject CO2 into layered basalt, we numerically investigate seismic methods as a noninvasive monitoring technique. Basalt seems to have geochemical advantages as a reservoir for CO2 storage (CO2 mineralizes quite rapidly when exposed to basalt), but it poses a considerable challenge in terms of seismic monitoring: strong scattering from the layering of the basalt complicates surface seismic imaging. We perform numerical tests using the Spectral Element Method (SEM) to identify possibilities and limitations of seismic monitoring of CO2 sequestration in a basalt reservoir. While surface seismic is unlikely to detect small physical changes in the reservoir due to the injection of CO2, the results from Vertical Seismic Profiling (VSP) simulations are encouraging. As a perturbation, we make a 5% change in wave velocity, which produces significant changes in VSP images of pre-injection and post-injection conditions. Finally, we perform an analysis using Coda Wave Interferometry (CWI) to quantify these changes in the reservoir properties due to CO2 injection.

  3. A comparison of methods to estimate seismic phase delays--Numerical examples for coda wave interferometry

    USGS Publications Warehouse

    Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.

    2015-01-01

    Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depends on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences among the three methods using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
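
    For orientation, a bare-bones banded DTW of the kind compared in this study fits in a few lines of Python; the band width `max_lag` and the synthetic traces below are illustrative choices, not values from the paper:

        import numpy as np

        def dtw_shifts(a, b, max_lag=30):
            # Accumulate |a[i] - b[j]| costs inside a +/- max_lag band, then
            # backtrack the globally optimal alignment path.
            n = len(a)
            D = np.full((n + 1, n + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(max(1, i - max_lag), min(n, i + max_lag) + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            i, j, lags = n, n, []
            while i > 0 and j > 0:
                lags.append(j - i)  # local time shift, in samples
                step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return np.array(lags[::-1])

        t = np.linspace(0, 20, 400)
        a = np.sin(t)
        b = np.interp(t * 1.01, t, a)  # a weakly "stretched" copy of a
        print(dtw_shifts(a, b, max_lag=20)[-5:])  # shifts grow with lapse time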

  4. The involvement of a polyphenol-rich extract of black chokeberry in oxidative stress on experimental arterial hypertension.

    PubMed

    Ciocoiu, Manuela; Badescu, Laurentiu; Miron, Anca; Badescu, Magda

    2013-01-01

    The aim of this study is to characterize the content of Aronia melanocarpa Elliott (black chokeberry) extract and also to estimate the influence of polyphenolic compounds contained in chokeberries on oxidative stress in an L-NAME-induced experimental model of arterial hypertension. The rat blood pressure values were recorded using a CODA Noninvasive Blood Pressure System. HPLC/DAD coupled with ElectroSpray Ionization-Mass Spectrometry allowed identification of five phenolic compounds in the berries' ethanolic extract, as follows: chlorogenic acid, kuromanin, rutin, hyperoside, and quercetin. The serum activity of glutathione peroxidase (GSH-Px) has significantly lower values in the hypertensive (AHT) group as compared to the group protected by polyphenols (AHT + P). The total antioxidant capacity (TAC) values are lower in the AHT group and significantly higher in the AHT + P group. All the measured blood pressure components revealed a biostatistically significant blood pressure drop between the AHT group and the AHT + P group. The results reveal the normalization of the reduced glutathione (GSH) concentration as well as a considerable reduction in the malondialdehyde (MDA) serum concentration in the AHT + P group. Ethanolic extract of black chokeberry fruits not only has potential value as a prophylactic agent but also may function as a nutritional supplement in the management of arterial hypertension.

  5. Fault zone reverberations from cross-correlations of earthquake waveforms and seismic noise

    NASA Astrophysics Data System (ADS)

    Hillers, Gregor; Campillo, Michel

    2016-03-01

    Seismic wavefields interact with low-velocity fault damage zones. Waveforms of ballistic fault zone head waves, trapped waves, reflected waves and signatures of trapped noise can provide important information on structural and mechanical fault zone properties. Here we extend the class of observable fault zone waves and reconstruct in-fault reverberations or multiples in a strike-slip faulting environment. Manifestations of the reverberations are significant, consistent wave fronts in the coda of cross-correlation functions that are obtained from scattered earthquake waveforms and seismic noise recorded by a linear fault zone array. The physical reconstruction of Green's functions is evident from the high similarity between the signals obtained from the two different scattered wavefields. Modal partitioning of the reverberation wavefield can be tuned using different data normalization techniques. The results imply that fault zones create their own ambiance, and that the reverberations reconstructed here are a key seismic signature of wear zones. Using synthetic waveform modelling, we show that reverberations can be used for the imaging of structural units by estimating the location, extent and magnitude of lateral velocity contrasts. The robust reconstruction of the reverberations from noise records suggests the possibility of resolving the response of the damage zone material to various external and internal loading mechanisms.

  6. INL Seismic Monitoring Annual Report: January 1, 2006 - December 31, 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. J. Payne; N. S. Carpenter; J. M. Hodges

    During 2006, the Idaho National Laboratory (INL) recorded 1,998 independent triggers from earthquakes both within the region and from around the world. Fifteen small to moderate size earthquakes ranging in magnitude from 3.0 to 4.5 occurred within and outside the 161-km (100-mile) radius of INL. There were 357 earthquakes with magnitudes up to 4.5 that occurred within the 161-km radius of the INL. The majority of earthquakes occurred in the Basin and Range Province surrounding the eastern Snake River Plain (ESRP). The largest of these earthquakes had a body-wave magnitude (mb) of 4.5 and occurred on February 5, 2006. It was located northeast of Spencer, Idaho, near the east-west trending Centennial fault along the Idaho-Montana border. The earthquake did not trigger SMAs located within INL buildings. Three earthquakes occurred within the ESRP, two of which occurred within the INL boundaries. One earthquake of coda magnitude (Mc) 1.7 occurred on October 18, 2006 and was located southeast of Pocatello, Idaho. The two earthquakes within the INL boundaries included the local magnitude (ML) 2.0 event on July 31, 2006, located near the southern termination of the Lemhi fault, and the Mc 0.4 event on August 6, 2006, located near the center of INL. The ML 2.0 earthquake was well recorded by most of the INL seismic stations and had a focal depth of 8.98 km. First motions were used to compute a focal mechanism, which indicated normal faulting along one of two possible fault planes that may strike N76°W and dip 70±3°SW or strike N55°W and dip 20±13°NE. Slip along a normal fault that strikes N76°W and dips 70±3°SW is consistent with slip along a possible segment of the NW-trending Lemhi normal fault.

  7. Aided and Unaided Speech Perception by Older Hearing Impaired Listeners

    PubMed Central

    Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William

    2015-01-01

    The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise ratio (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423

  8. Surface-wave potential for triggering tectonic (nonvolcanic) tremor

    USGS Publications Warehouse

    Hill, D.P.

    2010-01-01

    Source processes commonly posed to explain instances of remote dynamic triggering of tectonic (nonvolcanic) tremor by surface waves include frictional failure and various modes of fluid activation. The relative potential for Love- and Rayleigh-wave dynamic stresses to trigger tectonic tremor through failure on critically stressed thrust and vertical strike-slip faults under the Coulomb-Griffith failure criteria as a function of incidence angle is anticorrelated over the 15- to 30-km-depth range that hosts tectonic tremor. Love-wave potential is high for strike-parallel incidence on low-angle reverse faults and null for strike-normal incidence; the opposite holds for Rayleigh waves. Love-wave potential is high for both strike-parallel and strike-normal incidence on vertical, strike-slip faults and minimal for ~45° incidence angles. The opposite holds for Rayleigh waves. This pattern is consistent with documented instances of tremor triggered by Love waves incident on the Cascadia mega-thrust and the San Andreas fault (SAF) in central California resulting from shear failure on weak faults (apparent friction, μ* ≤ 0.2). However, documented instances of tremor triggered by surface waves with strike-parallel incidence along the Nankai megathrust beneath Shikoku, Japan, are associated primarily with Rayleigh waves. This is consistent with the tremor bursts resulting from mixed-mode failure (crack opening and shear failure) facilitated by near-lithostatic ambient pore pressure, low differential stress, with a moderate friction coefficient (μ ~ 0.6) on the Nankai subduction interface. Rayleigh-wave dilatational stress is relatively weak at tectonic tremor source depths and seems unlikely to contribute significantly to the triggering process, except perhaps for an indirect role on the SAF in sustaining tremor into the Rayleigh-wave coda that was initially triggered by Love waves.

  9. Surface-wave potential for triggering tectonic (nonvolcanic) tremor-corrected

    USGS Publications Warehouse

    Hill, David P.

    2012-01-01

    Source processes commonly posed to explain instances of remote dynamic triggering of tectonic (nonvolcanic) tremor by surface waves include frictional failure and various modes of fluid activation. The relative potential for Love- and Rayleigh-wave dynamic stresses to trigger tectonic tremor through failure on critically stressed thrust and vertical strike-slip faults under the Coulomb-Griffith failure criteria as a function of incidence angle are anticorrelated over the 15- to 30-km-depth range that hosts tectonic tremor. Love-wave potential is high for strike-parallel incidence on low-angle reverse faults and null for strike-normal incidence; the opposite holds for Rayleigh waves. Love-wave potential is high for both strike-parallel and strike-normal incidence on vertical, strike-slip faults and minimal for ~45° incidence angles. The opposite holds for Rayleigh waves. This pattern is consistent with documented instances of tremor triggered by Love waves incident on the Cascadia megathrust and the San Andreas fault (SAF) in central California resulting from shear failure on weak faults (apparent friction is μ* ≤ 0.2). Documented instances of tremor triggered by surface waves with strike-parallel incidence along the Nankai megathrust beneath Shikoku, Japan, however, are associated primarily with Rayleigh waves. This is consistent with the tremor bursts resulting from mixed-mode failure (crack opening and shear failure) facilitated by near-lithostatic ambient pore pressure, low differential stress, with a moderate friction coefficient (μ ~ 0.6) on the Nankai subduction interface. Rayleigh-wave dilatational stress is relatively weak at tectonic tremor source depths and seems unlikely to contribute significantly to the triggering process, except perhaps for an indirect role on the SAF in sustaining tremor into the Rayleigh-wave coda that was initially triggered by Love waves.

  10. Source spectral variation and yield estimation for small, near-source explosions

    NASA Astrophysics Data System (ADS)

    Yoo, S.; Mayeda, K. M.

    2012-12-01

    Significant S-wave generation is always observed from explosion sources, which can lead to difficulty in discriminating explosions from natural earthquakes. While there are numerous S-wave generation mechanisms that are currently the topic of significant research, the mechanisms all remain controversial and appear to depend on the near-source emplacement conditions of each particular explosion. To better understand the generation and partitioning of the P and S waves from explosion sources and to enhance the identification and discrimination capability for explosions, we investigate near-source explosion data sets from the 2008 New England Damage Experiment (NEDE), the Humble-Redwood (HR) series of explosions, and a Massachusetts quarry explosion experiment. We estimate source spectra and characteristic source parameters using moment tensor inversions, multi-taper analysis of direct P and S waves, and improved coda spectral analysis, using high quality waveform records from explosions in a variety of emplacement conditions (e.g., slow/fast burning explosive, fully tamped, partially tamped, single/ripple-fired, and below/above ground explosions). The results from direct and coda waves are compared to theoretical explosion source model predictions. These well-instrumented experiments provide us with excellent data from which to document the characteristic spectral shape, relative partitioning between P and S waves, and amplitude/yield dependence as a function of height of burst/depth of burial (HOB/DOB). The final goal of this study is to populate a comprehensive seismic source reference database for small yield explosions based on the results and to improve nuclear explosion monitoring capability.
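
    As a sketch of one spectral ingredient named above, a multi-taper spectral estimate can be formed with SciPy's DPSS tapers; the time-bandwidth product `nw` and taper count `k` are generic choices, not those of the study:

        import numpy as np
        from scipy.signal.windows import dpss

        def multitaper_spectrum(trace, fs, nw=4.0, k=7):
            # Average periodograms over k orthogonal DPSS tapers to reduce
            # spectral leakage and variance relative to a single-taper estimate.
            n = len(trace)
            tapers = dpss(n, nw, Kmax=k)  # shape (k, n)
            spectra = np.abs(np.fft.rfft(tapers * trace, axis=1)) ** 2
            return np.fft.rfftfreq(n, d=1.0 / fs), spectra.mean(axis=0) / fs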

  11. Quantitative trait loci analysis for resistance to Cephalosporium stripe, a vascular wilt disease of wheat.

    PubMed

    Quincke, Martin C; Peterson, C James; Zemetra, Robert S; Hansen, Jennifer L; Chen, Jianli; Riera-Lizarazu, Oscar; Mundt, Christopher C

    2011-05-01

    Cephalosporium stripe, caused by Cephalosporium gramineum, can cause severe loss of wheat (Triticum aestivum L.) yield and grain quality and can be an important factor limiting adoption of conservation tillage practices. Selecting for resistance to Cephalosporium stripe is problematic, however, as optimum conditions for disease do not occur annually under natural conditions, inoculum levels can be spatially heterogeneous, and little is known about the inheritance of resistance. A population of 268 recombinant inbred lines (RILs) derived from a cross between two wheat cultivars was characterized using field screening and molecular markers to investigate the inheritance of resistance to Cephalosporium stripe. Whiteheads (sterile heads caused by pathogen infection) were measured on each RIL in three field environments under artificially inoculated conditions. A linkage map for this population was created based on 204 SSR and DArT markers. A total of 36 linkage groups were resolved, representing portions of all chromosomes except chromosome 1D, which lacked a sufficient number of polymorphic markers. Quantitative trait locus (QTL) analysis identified seven regions associated with resistance to Cephalosporium stripe, with approximately equal additive effects. Four QTL derived from the more susceptible parent (Brundage) and three from the more resistant parent (Coda), but the cumulative, additive effect of QTL from Coda was greater than that of Brundage. Additivity of QTL effects was confirmed through regression analysis and demonstrates the advantage of accumulating multiple QTL alleles to achieve high levels of resistance.

  12. Coherent Seismic Arrivals in the P Wave Coda of the 2012 Mw 7.2 Sumatra Earthquake: Water Reverberations or an Early Aftershock?

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; Shearer, Peter M.

    2018-04-01

    Teleseismic records of the 2012 Mw 7.2 Sumatra earthquake contain prominent phases in the P wave train, arriving about 50 to 100 s after the direct P arrival. Azimuthal variations in these arrivals, together with back-projection analysis, led Fan and Shearer (https://doi.org/10.1002/2016GL067785) to conclude that they originated from early aftershock(s), located ˜150 km northeast of the mainshock and landward of the trench. However, recently, Yue et al. (https://doi.org/10.1002/2017GL073254) argued that the anomalous arrivals are more likely water reverberations from the mainshock, based mostly on empirical Green's function analysis of a M6 earthquake near the mainshock and a water phase synthetic test. Here we present detailed back-projection and waveform analyses of three M6 earthquakes within 100 km of the Mw 7.2 earthquake, including the empirical Green's function event analyzed in Yue et al. (https://doi.org/10.1002/2017GL073254). In addition, we examine the waveforms of three M5.5 reverse-faulting earthquakes close to the inferred early aftershock location in Fan and Shearer (https://doi.org/10.1002/2016GL067785). These results suggest that the reverberatory character of the anomalous arrivals in the mainshock coda is consistent with water reverberations, but the origin of this energy is more likely an early aftershock rather than delayed and displaced water reverberations from the mainshock.

  13. Enhanced Rayleigh waves tomography of Mexico using ambient noise cross-correlations (C1) and correlations of coda of correlations (C3)

    NASA Astrophysics Data System (ADS)

    Spica, Z. J.; Perton, M.; Calo, M.; Cordoba-Montiel, F.; Legrand, D.; Iglesias, A.

    2015-12-01

    Standard applications of seismic ambient noise tomography require synchronous records at stations for Green's function retrieval. More recent theoretical and experimental observations have shown the possibility of applying correlation of the coda of noise correlations (C3) to obtain Green's functions between stations of asynchronous seismic networks, making it possible to dramatically increase the databases available for imaging the Earth's interior. However, this possibility has not been fully exploited yet, and to date C3 data have not been included in tomographic inversions to refine seismic structures. Here we show for the first time how to incorporate C1 and C3 data to calculate Rayleigh wave dispersion maps in the period range of 10-120 s, and how merging these datasets improves the resolution of the imaged structures. Tomographic images are obtained for an area covering Mexico, the Gulf of Mexico and the southern U.S. We show dispersion maps calculated using both the C1 data alone and the complete dataset (C1+C3). The latter provides new details of the seismic structure of the region, allowing a better understanding of its role in the geodynamics of the study area. The resolving power obtained in our study is several times higher than in previous studies based on ambient noise. This demonstrates the new possibilities for imaging the Earth's crust and upper mantle using this enlarged database.

  14. Change of direction ability test differentiates higher level and lower level soccer referees

    PubMed Central

    Los Arcos, A.; Grande, I.; Casajús, J. A.

    2016-01-01

    This report examines the agility and acceleration capacity of Spanish soccer referees and investigates possible differences between field referees of different categories. The speed test consisted of 3 maximum acceleration stretches of 15 metres. The change of direction ability (CODA) test used in this study was a modification of the Modified Agility Test (MAT). The study included a sample of 41 Spanish soccer field referees from the Navarre Committee of Soccer Referees divided into two groups: i) the higher level group (G1, n = 20): 2ndA, 2ndB and 3rd division referees from the Spanish National Soccer League (28.43 ± 1.39 years); and ii) the lower level group (G2, n = 21): Navarre Provincial League soccer referees (29.54 ± 1.87 years). Significant differences were found in the CODA test between G1 (5.72 ± 0.13 s) and G2 (6.06 ± 0.30 s), while no differences were found between groups in acceleration ability. No significant correlations were obtained in G1 between agility and the capacity to accelerate. Significant correlations were found between sprint and agility times in G2 and in the total group. The results of this study showed that agility can be used as a discriminating factor for differentiating between national and regional field referees; however, no observable differences were found over the 5 and 15 m sprint tests. PMID:27274111

  15. Dental students' self-assessment of operative preparations using CAD/CAM: a preliminary analysis.

    PubMed

    Mays, Keith A; Levine, Eric

    2014-12-01

    The Commission on Dental Accreditation (CODA)'s accreditation standards for dental schools state that "graduates must demonstrate the ability to self-assess." Therefore, dental schools have developed preclinical and clinical self-assessment (SA) protocols aimed at fostering a reflective process. This study, comparing students' visual SA with students' digital SA and with faculty assessment, was designed to test the hypothesis that higher agreement would occur when utilizing a digital evaluation. Twenty-five first-year dental students at one dental school participated by preparing a mesial occlusal preparation on tooth #30 and performing both types of SA. A faculty evaluation was then performed both visually and digitally using the same evaluation criteria. The Kappa statistic was used to measure agreement between evaluators. The results showed statistically significant moderate agreement between the faculty visual and faculty digital modes of evaluation for occlusal shape (K=0.507, p=0.002), proximal shape (K=0.564, p=0.001), orientation (K=0.425, p=0.001), and definition (K=0.480, p=0.001). There was slight to poor agreement between the student visual and faculty visual assessments, except for preparation orientation: occlusal shape (K=0.164, p=0.022), proximal shape (K=-0.227, p=0.032), orientation (K=0.253, p=0.041), and definition (K=-0.027, p=0.824). This study showed that the students had challenges in self-assessing even when using CAD/CAM, and the digital assessment did not improve the amount of student/faculty agreement.
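
    For reference, the Kappa statistic used to measure inter-rater agreement is simple to compute from paired ratings; the categories below are invented for illustration:

        import numpy as np

        def cohens_kappa(r1, r2):
            # Agreement between two raters, corrected for chance agreement.
            cats = sorted(set(r1) | set(r2))
            idx = {c: k for k, c in enumerate(cats)}
            m = np.zeros((len(cats), len(cats)))
            for a, b in zip(r1, r2):
                m[idx[a], idx[b]] += 1
            m /= m.sum()
            po = np.trace(m)                    # observed agreement
            pe = m.sum(axis=1) @ m.sum(axis=0)  # agreement expected by chance
            return (po - pe) / (1 - pe)

        student = ["good", "good", "fair", "poor", "good", "fair"]
        faculty = ["good", "fair", "fair", "poor", "fair", "fair"]
        print(round(cohens_kappa(student, faculty), 3))  # 0.5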

  16. Breakdown of equipartition in diffuse fields caused by energy leakage

    NASA Astrophysics Data System (ADS)

    Margerin, L.

    2017-05-01

    Equipartition is a central concept in the analysis of random wavefields which stipulates that in an infinite scattering medium all modes and propagation directions become equally probable at long lapse time in the coda. The objective of this work is to examine quantitatively how this conclusion is affected in an open waveguide geometry, with a particular emphasis on seismological applications. To carry out this task, the problem is recast as a spectral analysis of the radiative transfer equation. Using a discrete ordinate approach, the smallest eigenvalue and associated eigenfunction of the transfer equation, which control the asymptotic intensity distribution in the waveguide, are determined numerically with the aid of a shooting algorithm. The inverse of this eigenvalue may be interpreted as the leakage time of the diffuse waves out of the waveguide. The associated eigenfunction provides the depth and angular distribution of the specific intensity. The effect of boundary conditions and scattering anisotropy is investigated in a series of numerical experiments. Two propagation regimes are identified, depending on the ratio H∗ between the thickness of the waveguide and the transport mean free path in the layer. The thick layer regime H∗ > 1 has been thoroughly studied in the literature in the framework of diffusion theory and is briefly considered. In the thin layer regime H∗ < 1, we find that both boundary conditions and scattering anisotropy leave a strong imprint on the leakage effect. A parametric study reveals that in the presence of a flat free surface, the leakage time is essentially controlled by the mean free time of the waves in the layer in the limit H∗ → 0. By contrast, when the free surface is rough, the travel time of ballistic waves propagating through the crust becomes the limiting factor. For fixed H∗, the efficacy of leakage, as quantified by the inverse coda quality factor, increases with scattering anisotropy. For sufficiently thin layers H∗ ≈ 1/5, the energy flux is predominantly directed parallel to the surface and equipartition breaks down. Qualitatively, the anisotropy of the intensity field is found to increase with the inverse non-dimensional leakage time, with the scattering mean free time as the time scale. Because it enhances leakage, a rough free surface may result in stronger anisotropy of the intensity field than a flat surface, for the same bulk scattering properties. Our work identifies leakage as a potential explanation for the large deviation from isotropy observed in the coda of body waves.

  17. Proceedings of the Interservice/Industry Training Systems Conference (9th), Held at Washington, DC, on 30 November - 2 December 1987

    DTIC Science & Technology

    1987-12-01

    requires much more data, but holds fast to the idea that the FV approach, or some other model, is critical if the job analysis process is to have its...Ada compiled code executes twice as fast as Microsoft’s Fortran compiled code. This conclusion is at variance with the results obtained from...finish is not so important. Hence, if a design methodology produces coda that will not execute fast enough on processors suitable for flight

  18. Dynamic triggering potential of large earthquakes recorded by the EarthScope U.S. Transportable Array using a frequency domain detection method

    NASA Astrophysics Data System (ADS)

    Linville, L. M.; Pankow, K. L.; Kilb, D. L.; Velasco, A. A.; Hayward, C.

    2013-12-01

    Because of the abundance of data from the EarthScope U.S. Transportable Array (TA), data paucity and station sampling bias in the US are no longer significant obstacles to understanding some of the physical parameters driving dynamic triggering. Initial efforts to determine locations of dynamic triggering in the US following large earthquakes (M ≥ 8.0) during the TA deployment relied on a time domain detection algorithm which used an optimized short-term average to long-term average (STA/LTA) filter and resulted in an unmanageably large number of false positive detections. Specific site sensitivities and characteristic noise, when coupled with changes in detection rates, often resulted in misleading output. To navigate this problem, we develop a frequency domain detection algorithm that first pre-whitens each seismogram and then computes a broadband frequency stack of the data using a three-hour time window beginning at the origin time of the mainshock. This method is successful because of the broadband nature of earthquake signals compared with the more band-limited high frequency picks that clutter results from time domain picking algorithms. Preferential band filtering of the frequency stack for individual events can further increase the accuracy and drive the detection threshold to below magnitude one, but at a general cost to detection levels across large-scale data sets. Of the 15 mainshocks studied, 12 show evidence of discrete spatial clusters of local earthquake activity occurring within the array during the mainshock coda. Most of this activity is in the Western US, with notable sequences in Northwest Wyoming, Western Texas, Southern New Mexico and Western Montana. Repeat stations (associated with 2 or more mainshocks) are generally rare, but when they do occur, it is exclusively in California and Nevada. Notably, two of the most prolific regions of seismicity following a single mainshock occurred after the 2009 magnitude 8.1 Samoa (Sep 29, 2009, 17:48:10) event, in areas with few or no known Quaternary faults and sparse historic seismicity. To gain a better understanding of the potential interaction between local events during the mainshock coda and the local stress changes induced by the passing surface waves, we juxtapose the local earthquake locations on maps of peak stress changes (e.g., radial, tangential and horizontal). Preliminary results reveal that triggering in the US is perhaps not as common as previously thought, and that dynamic triggering most likely involves a more complicated interplay among physical parameters (e.g., amplitude threshold, wave orientation, tectonic environment) than can be explained by a single dominant driver.
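
    Conceptually, the pre-whitening and broadband frequency stack might look like the following Python sketch; the window length and band limits are assumptions for illustration, not the authors' parameters:

        import numpy as np
        from scipy.signal import spectrogram

        def detection_function(trace, fs):
            # Pre-whiten so no single frequency band dominates, then stack the
            # spectrogram across frequency: broadband (earthquake-like) energy
            # stands out over band-limited noise.
            spec = np.fft.rfft(trace)
            white = np.fft.irfft(spec / (np.abs(spec) + 1e-12), n=len(trace))
            f, t, S = spectrogram(white, fs=fs, nperseg=256)
            band = (f >= 1.0) & (f <= 0.4 * fs)  # assumed stacking band
            return t, S[band].mean(axis=0)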

  19. Constraints on small-scale heterogeneity in the lowermost mantle from observations of near podal PcP precursors

    NASA Astrophysics Data System (ADS)

    Zhang, Baolong; Ni, Sidao; Sun, Daoyuan; Shen, Zhichao; Jackson, Jennifer M.; Wu, Wenbo

    2018-05-01

    Volumetric heterogeneities on large (∼>1000 km) and intermediate scales (∼>100 km) in the lowermost mantle have been established with seismological approaches. However, there are controversies regarding the level of heterogeneity in the lowermost mantle at small scales (a few kilometers to tens of kilometers), with lower bound estimates ranging from 0.1% to a few percent. We take advantage of the small amplitude PcP waves at near podal distances (0-12°) to constrain the level of small-scale heterogeneity within 250 km above the CMB. First, we compute short period synthetic seismograms with a finite difference code for a series of volumetric heterogeneity models in the lowermost mantle, and find that PcP is not identifiable if the small-scale heterogeneity in the lowermost mantle is above 2.5%. We then use a functional form appropriate for coda decay to suppress P coda contamination. By comparing the corrected envelope of PcP and its precursors with synthetic seismograms, we find that the perturbation of small-scale (∼8 km) heterogeneity in the lowermost mantle is ∼0.2-0.5% beneath the China-Myanmar border area, the Okhotsk Sea, and South America, whereas strong perturbations (∼1.0%) are found beneath Central America. In the regions studied, we find that this particular type of small-scale heterogeneity in the lowermost mantle is weak, yet there are some regions requiring heterogeneity up to 1.0%. Where scattering is stronger, such as under Central America, more chemically complex mineral assemblages may be present at the core-mantle boundary.
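
    A standard functional form for coda decay is A(t) = A0 t^(-alpha) exp(-beta t), which can be fit by linear least squares in the log domain and then divided out of the envelope; whether the authors used exactly this parameterization is not stated, so the following Python sketch is generic:

        import numpy as np

        def fit_coda_decay(t, env):
            # Least-squares fit of log A = log A0 - alpha*log t - beta*t.
            G = np.column_stack([np.ones_like(t), -np.log(t), -t])
            m, *_ = np.linalg.lstsq(G, np.log(env), rcond=None)
            return np.exp(m[0]), m[1], m[2]  # A0, alpha, beta

        def correct_envelope(t, env):
            # Divide out the fitted P-coda trend so PcP and its precursors
            # stand out in the corrected envelope.
            a0, alpha, beta = fit_coda_decay(t, env)
            return env / (a0 * t ** -alpha * np.exp(-beta * t))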

  20. Consonant and Vowel Processing in Word Form Segmentation: An Infant ERP Study.

    PubMed

    Von Holzen, Katie; Nishibayashi, Leo-Lyuki; Nazzi, Thierry

    2018-01-31

    Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life, and it has been proposed that these facilitate language acquisition. We used event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation, and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might be related to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation is developing from a positive to a negative polarity at this age. Although as a group infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority of infants showed a negative-going response (Negative Responders), while a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, while Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to previous literature. Responses to coda consonant mispronunciations revealed neither sensitivity nor a lack of sensitivity. We found that infants showing a more mature, negative response to newly segmented words compared to control words (evaluating segmentation skill) and mispronunciations (evaluating phonological processing) at test also had greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.

  1. Changes in Seismic Velocity During the 2004 - 2008 Eruption of Mount St. Helens Volcano

    NASA Astrophysics Data System (ADS)

    Hotovec-Ellis, A. J.; Vidale, J. E.; Gomberg, J. S.; Moran, S. C.; Thelen, W. A.

    2013-12-01

    Mount St. Helens (MSH) effusively erupted in late 2004, following an 18-year quiescence. Many swarms of repeating earthquakes accompanied the extrusion, and in some cases the waveforms from these earthquakes evolved slowly, possibly reflecting changes in the properties of the volcano that affect seismic wave propagation. We use coda-wave interferometry to quantify these changes in terms of small (usually <1%) changes in seismic velocity structure by determining how relatively condensed or stretched the coda is between two similar earthquakes. We then utilize several hundred distinct families of repeating earthquakes at once to create a continuous function of velocity change observed at any station in the seismic network. The rate of earthquakes allows us to track these changes on a daily or even hourly time scale. Following years of no seismic velocity changes larger than those due to climatic processes (tenths of a percent), we observed decreases in seismic velocity of >1% coincident with the onset of increased earthquake activity beginning September 23, 2004. These changes are largest near the summit of the volcano, and likely related to shallow deformation as magma first worked its way to the surface. Changes in velocity are often attributed to deformation, especially volumetric strain and the opening or closing of cracks, but also to nonlinear responses to ground shaking and fluid intrusion. We compare velocity changes across the eruption with other available observations, such as deformation (e.g., GPS, tilt, photogrammetry), to better constrain the relationships between velocity change and its possible causes.
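
    The stretching measurement that underlies this kind of coda-wave interferometry can be sketched as a grid search over a stretch factor; the search range and the sign convention (dv/v = -epsilon at the correlation maximum) are common choices rather than details taken from this study:

        import numpy as np

        def stretching_dvv(ref, cur, fs, eps_max=0.02, n_eps=201):
            # Grid-search the factor that best maps the current coda onto the
            # reference coda; a uniform velocity change stretches lapse times.
            t = np.arange(len(ref)) / fs
            best_eps, best_cc = 0.0, -np.inf
            for eps in np.linspace(-eps_max, eps_max, n_eps):
                stretched = np.interp(t * (1.0 + eps), t, cur)
                cc = np.corrcoef(ref, stretched)[0, 1]
                if cc > best_cc:
                    best_eps, best_cc = eps, cc
            return -best_eps, best_cc  # (dv/v, correlation at the optimum)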

  2. A continuous record of intereruption velocity change at Mount St. Helens from coda wave interferometry

    USGS Publications Warehouse

    Hotovec-Ellis, Alicia J.; Gomberg, Joan S.; Vidale, John; Creager, Ken C.

    2014-01-01

    In September 2004, Mount St. Helens volcano erupted after nearly 18 years of quiescence. However, it is unclear from the limited geophysical observations when or if the magma chamber replenished following the 1980–1986 eruptions in the years before the 2004–2008 extrusive eruption. We use coda wave interferometry with repeating earthquakes to measure small changes in the velocity structure of Mount St. Helens volcano that might indicate magmatic intrusion. By combining observations of relative velocity changes from many closely located earthquake sources, we solve for a continuous function of velocity changes with time. We find that seasonal effects dominate the relative velocity changes. Seismicity rates and repeating earthquake occurrence also vary seasonally; therefore, velocity changes and seismicity are likely modulated by snow loading, fluid saturation, and/or changes in groundwater level. We estimate hydrologic effects impart stress changes on the order of tens of kilopascals within the upper 4 km, resulting in annual velocity variations of 0.5 to 1%. The largest nonseasonal change is a decrease in velocity at the time of the deep Mw = 6.8 Nisqually earthquake. We find no systematic velocity changes during the most likely times of intrusions, consistent with a lack of observable surface deformation. We conclude that if replenishing intrusions occurred, they did not alter seismic velocities where this technique is sensitive due to either their small size or the finite compressibility of the magma chamber. We interpret the observed velocity changes and shallow seasonal seismicity as a response to small stress changes in a shallow, pressurized system.

  3. Identification of T-Waves in the Alboran Sea

    NASA Astrophysics Data System (ADS)

    Carmona, Enrique; Almendros, Javier; Alguacil, Gerardo; Soto, Juan Ignacio; Luzón, Francisco; Ibáñez, Jesús M.

    2015-11-01

    Analyses of seismograms from ~1,100 north-Moroccan earthquakes recorded at stations of the Red Sísmica de Andalucía (Southern Spain) reveal the systematic presence of late phases embedded in the earthquake codas. These phases have distinctive frequency contents, similar to the P and S spectra and quite different from the frequency contents of the earthquake codas. They are best detected at near-shore stations. Their amplitudes decay significantly with distance to the shoreline. The delays with respect to the P-wave onsets of the preceding earthquakes are consistently around 85 s. Late phases are only detected for earthquakes located in a small region of about 100 × 60 km centered at 35.4°N, 4.0°W near the northern coast of Morocco. Several hypotheses could, in principle, explain the presence of these late phases in the seismograms, for example, the occurrence of low-energy aftershocks, efficient wave reflections, or Rayleigh waves generated along the source-station paths. However, we conclude that the most likely origin of these phases corresponds to the incidence of T-waves (generated by conversion from elastic to acoustic energy at the north-Moroccan coast) on the southern coast of the Iberian Peninsula. T-waves are thought to be generated by energy trapping in low-velocity channels along long oceanic paths; in this case, we demonstrate that they can be produced over much shorter paths as well. Although T-waves have already been documented in other areas of the Mediterranean Sea, this is the first time that they have been identified in the Alboran Sea.

  4. Spatio-temporal distribution of energy radiation from low frequency tremor

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Obara, K.

    2007-12-01

    Recent fine-scale hypocenter locations of low frequency tremors (LFTs) estimated by a cross-correlation technique (Shelly et al. 2006; Maeda et al. 2006) and the new finding of very low frequency earthquakes (Ito et al. 2007) suggest that these slow events occur at the plate boundary in association with slow slip events (Obara and Hirose, 2006). However, the number of tremors detected by these techniques is limited, since continuous tremor waveforms are too complicated. An envelope correlation method (ECM; Obara, 2002) enables us to locate epicenters of LFTs without arrival time picks; however, ECM fails to locate LFTs precisely, especially during the most active stage of tremor activity, because of the low correlation of envelope amplitudes. To reveal the total energy release of LFTs, we propose here a new method for estimating the location of LFTs together with the radiated energy from the tremor source by using envelope amplitudes. The tremor amplitude observed at NIED Hi-net stations in western Shikoku simply decays in proportion to the reciprocal of the source-receiver distance after correction for site-amplification factors, even though the phases of the tremor are very complicated. We therefore model the observed mean square envelope amplitude by time-dependent energy radiation with a geometrical spreading factor. The model requires no origin time for the tremor, since we assume that the source radiates energy continuously. Travel-time differences between stations estimated by the ECM technique are also incorporated in our locating algorithm together with the amplitude information. Three-component 1-hour Hi-net continuous velocity waveforms with a pass-band of 2-10 Hz are used for the inversion, after correction for the site amplification factor at each station estimated by the coda normalization method (Takahashi et al. 2005) applied to regular earthquakes in the region. The source location and energy are estimated by applying a least squares inversion to each 1-min window iteratively. As a first application of our method, we estimated the spatio-temporal distribution of energy radiation for the May 2006 episodic tremor and slip event that occurred in the western Shikoku, Japan, region. Tremor locations and their radiated energy are estimated for every 1 minute. We counted the number of located LFTs and summed their total energy at each grid node (0.05-degree spacing) for each day to map the spatio-temporal distribution of tremor energy release. The resulting spatial distribution of radiated energy is concentrated in a specific region. Additionally, we see daily changes in both the location and amount of released energy, which correspond to the migration of tremor activity. The spatio-temporal distribution of tremor energy radiation is in good agreement with the spatio-temporal slip distribution of the slow slip event estimated from Hi-net tiltmeter records (Hirose et al. 2007). This suggests that small continuous tremors occur in association with the rupture process of slow slip.
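
    The amplitude-based location step reduces to a grid search in which, at each trial source, a 1/r spreading model is fit to the site-corrected envelope amplitudes; the arrays `sta_xy`, `amps`, and `grid_xy` in this sketch are hypothetical inputs:

        import numpy as np

        def locate_by_amplitude(sta_xy, amps, grid_xy):
            # At each trial source the model is A_i = S / r_i (geometrical
            # spreading); S has a closed-form least-squares solution, and the
            # best grid node minimizes the residual.
            best = (np.inf, None, None)
            for xy in grid_xy:
                r = np.linalg.norm(sta_xy - xy, axis=1)
                g = 1.0 / r
                s = (g @ amps) / (g @ g)
                res = np.sum((amps - s * g) ** 2)
                if res < best[0]:
                    best = (res, xy, s)
            return best[1], best[2]  # location and source amplitude term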

  5. Attenuation tomography of the main volcanic regions of the Campanian Plain.

    NASA Astrophysics Data System (ADS)

    de Siena, Luca; Del Pezzo, Edoardo; Bianco, Francesca

    2010-05-01

    Passive, high resolution attenuation tomography is used to image the geological structure in the upper 4 km of the shallow crust beneath the Campanian Plain. Images were produced by two separate attenuation tomography studies of the main volcanic regions of the Campanian Plain, Southern Italy: Mt. Vesuvius volcano and the Campi Flegrei caldera. The three-dimensional S wave attenuation tomography of Mt. Vesuvius was obtained with multiple measurements of coda-normalized S-wave spectra of local small magnitude earthquakes. P-wave attenuation tomography was performed using classical spectral methods. The images were obtained by inverting the spectral data with a multiple resolution approach expressly designed for attenuation tomography. This allowed us to obtain a robust attenuation image of the volumes under the central cone at a maximum resolution of 300 m. The same approach was applied to a data set recorded in the Campi Flegrei area during the 1982-1984 seismic crisis. Inversion ensures a minimum cell size resolution of 500 meters in the zones with sufficient ray coverage, and 1000 meters outside these zones. The study of the resolution matrix as well as the synthetic tests guarantees an optimal reproduction of the input anomalies in the center of the caldera, between 0 and 3.5 km in depth. The results allowed an unprecedented view of several features of the medium, such as the residual part of solidified magma from the last eruption under the central cone of Mt. Vesuvius, and the feeding systems and the top of the carbonate basement at 3 km depth below both volcanic areas. Vertical Q contrasts image important fault zones, such as the La Starza fault, as well as high attenuation structures that correspond to gas or fluid reservoirs, and reveal the upper part of gas-bearing conduits connecting these high attenuation volumes with the magma sill revealed at about 7 km depth by passive travel-time tomography under the whole Campanian Plain.
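
    The coda-normalization measurement at the core of this kind of study divides the direct S spectral amplitude by the coda spectral amplitude at a fixed lapse time, cancelling the source and site terms; a minimal single-trace sketch (window length and tapering are assumptions):

        import numpy as np

        def coda_normalized_amplitude(trace, fs, s_onset, coda_lapse, win=5.0):
            # Spectral ratio of a direct-S window to a late-coda window;
            # both windows share the source and site terms, which cancel.
            nwin = int(win * fs)
            def window_spec(t0):
                i0 = int(t0 * fs)
                seg = trace[i0:i0 + nwin] * np.hanning(nwin)
                return np.abs(np.fft.rfft(seg))
            return window_spec(s_onset) / window_spec(coda_lapse)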

  6. Extracting near-surface QL between 1-4 Hz from higher-order noise correlations in the Euroseistest area, Greece

    NASA Astrophysics Data System (ADS)

    Haendel, A.; Ohrnberger, M.; Krüger, F.

    2016-11-01

    Knowledge of the quality factor of near-surface materials is of fundamental interest in various applications. Attenuation can be very strong close to the surface and thus needs to be properly assessed. In recent years, several researchers have studied the retrieval of attenuation coefficients from the cross correlation of ambient seismic noise. Yet, the determination of exact amplitude information from noise-correlation functions is, in contrast to the extraction of traveltimes, not trivial. Most of the studies estimated attenuation coefficients on the regional scale and within the microseism band. In this paper, we investigate the possibility of deriving attenuation coefficients from seismic noise at much shallower depths and higher frequencies (>1 Hz). The Euroseistest area in northern Greece offers ideal conditions to study quality factor retrieval from ambient noise for different rock types. Correlations are computed between the stations of a small-scale array experiment (station spacings <2 km) that was carried out in the Euroseistest area in 2011. We employ the correlation of the coda of the correlation (C3) method instead of simple cross correlations to mitigate the effect of uneven noise source distributions on the correlation amplitude. Transient removal and temporal flattening are applied instead of 1-bit normalization in order to retain relative amplitudes. The C3 method leads to improved correlation results (higher signal-to-noise ratio and improved time symmetry) compared to simple cross correlations. The C3 functions are rotated from the ZNE to the ZRT system and we focus on Love wave arrivals on the transverse component and on Love wave quality factors QL. The analysis is performed for selected stations situated either on soft soil or on weathered rock. Phase slowness is extracted using a slant-stack method. Attenuation parameters are inferred by inspecting the relative amplitude decay of Love waves with increasing interstation distance. We observe that the attenuation coefficient γ and QL can be reliably extracted for stations situated on soft soil, whereas the derivation of attenuation parameters is more problematic for stations located on weathered rock. The results are in acceptable agreement with theoretical Love wave attenuation curves that were computed using 1-D shear wave velocity and quality factor profiles from the Euroseistest area.
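
    The final step, estimating QL from the relative amplitude decay with interstation distance, amounts to a linear fit after correcting for 1/sqrt(r) surface-wave spreading; a minimal sketch, assuming amplitudes `amp` at distances `r` for one narrow frequency band:

        import numpy as np

        def love_wave_q(r, amp, freq, phase_vel):
            # Fit ln(A * sqrt(r)) = c - gamma * r, where gamma is the
            # attenuation coefficient; then Q_L = pi * f / (gamma * c_L).
            y = np.log(amp * np.sqrt(r))
            gamma = -np.polyfit(r, y, 1)[0]
            return np.pi * freq / (gamma * phase_vel)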

  7. Formal Models of Hardware and Their Applications to VLSI Design Automation.

    DTIC Science & Technology

    1986-12-24

    ...are classified as belonging to one of six different types. The dimensions of the routing channel are defined as functions of these random variables

  8. The VELA Program. A Twenty-Five Year Review of Basic Research

    DTIC Science & Technology

    1985-01-01

    The VELA Program: A Twenty-Five Year Review of Basic Research...Modeling in the Inelastic Region of Underground Nuclear Explosions...Spall Contribution to...In-Situ Strain Paths and Stress Bounds with Application to Desert Alluvium...Modeling Codas of P-SV and SH by Vertical...

  9. Interpreting Continental Break-Up From Surface Observations: Analysis of 1D Partial Melting Using Synthetic Waveform Propagation

    NASA Astrophysics Data System (ADS)

    Franken, T.; Armitage, J. J.; Fuji, N.; Fournier, A.

    2017-12-01

    Low shear-wave velocity zones underneath margins of continental break-up are believed to be related to the presence of melt. Many models attempt to capture the process of melt production and transportation during mantle upwelling, yet there is a disconnect between geodynamic models, seismic observations, and petrological studies of melt flow velocities. Geodynamic models that emulate melt retention of 2%, suggested by shear-wave velocity anomalies (Forsyth & MELT Seismic Team, 1998), fail to adequately reproduce the seismic signal as seen in receiver functions (Rychert, 2012; Armitage et al., 2015). Furthermore, numerical models of melt migration predict mean melt flow velocities up to 1.3 m yr-1 (Weatherley & Katz, 2015), whereas uranium isotope migration rates suggest velocities up to two orders of magnitude higher. This study aims to reconcile these diverging assertions on the partial melting process by analysing the effect of melt presence on the coda of the seismic signal. A 1D forward model has been created to emulate melt production and transportation in an upwelling mantle environment. Scenarios have been modelled for variable upwelling velocities v (1 - 100 mm yr-1), initial temperatures T0 (1200 - 1800 °C) and permeabilities k0 (10-9 - 10-5 m2). The 1D model parameters are converted to anharmonic seismic parameters using look-up tables from phase diagrams (Goes et al., 2012) to generate synthetic seismograms with the Direct Solution Method. The maximum frequency content of the synthetics is 1.25 Hz, sampled at 20 Hz with a low-pass filter of 0.1 Hz. A comparison between the synthetics and seismic observations of the La Reunion mantle plume from the RER Geoscope receiver is performed using a Monte-Carlo approach. The synthetic seismograms show the highest sensitivity to the presence of melt in S-waves within epicentral distances of 0-20 degrees. In the 0-10 degree range, only a time shift is observed, proportional to the melt fraction at the onset of melting. Within the 10-20 degree range, the presence of melt causes an additional change in the coda of the signal compared to a no-melt model. By analysing these altered synthetic waveforms, we search for a seismic signature corresponding to melt presence, to form a benchmark for the comparison between the Monte-Carlo results and the seismic observations.

  10. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and minimization of parameter trade-offs in attenuation studies. We have searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes for speeding up the final EGF analysis by implementing automation and graphical user interface procedures. The codes have been fully tested with a subset of the screened data and we are currently applying them to all of it. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg and coda), with information on any directivity, on any dependence of pulse durations on wave type, on scaling relations between pulse duration and event size, and on the estimated source static stress drops.
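
    The deconvolution step of the EGF method is commonly stabilized with a water level; the authors' actual implementation is not specified here, so the following frequency-domain sketch is generic:

        import numpy as np

        def water_level_deconvolution(big, small, level=0.01):
            # Deconvolve a small-event (EGF) record from a large-event record;
            # the water level fills spectral holes to stabilize the division,
            # yielding an approximate relative source time function.
            n = len(big)
            B = np.fft.rfft(big)
            S = np.fft.rfft(small, n=n)
            floor = level * np.max(np.abs(S) ** 2)
            return np.fft.irfft(B * np.conj(S) / np.maximum(np.abs(S) ** 2, floor), n=n)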

  11. Constraining the source location of the 30 May 2015 (Mw 7.9) Bonin deep-focus earthquake using seismogram envelopes of high-frequency P waveforms: Occurrence of deep-focus earthquake at the bottom of a subducting slab

    NASA Astrophysics Data System (ADS)

    Takemura, Shunsuke; Maeda, Takuto; Furumura, Takashi; Obara, Kazushige

    2016-05-01

    In this study, the source location of the 30 May 2015 (Mw 7.9) deep-focus Bonin earthquake was constrained using P wave seismograms recorded across Japan. We focus on the propagation characteristics of high-frequency P waves. Deep-focus intraslab earthquakes typically show spindle-shaped seismogram envelopes with peak delays of several seconds and subsequent long-duration coda waves; however, both the mainshock and aftershock of the 2015 Bonin event exhibited pulse-like P wave propagation with high apparent velocities (~12.2 km/s). Such P wave propagation features were reproduced by finite-difference method simulations of seismic wave propagation for a source at the slab bottom. The pulse-like P wave seismogram envelopes observed from the 2015 Bonin earthquake show that its source was located at the bottom of the Pacific slab at a depth of ~680 km, rather than within its middle or upper regions.

  12. Modeling the blockage of Lg waves from 3-D variations in crustal structure

    NASA Astrophysics Data System (ADS)

    Sanborn, Christopher J.; Cormier, Vernon F.

    2018-05-01

    Composed of S waves trapped in the Earth's crust, the high frequency (2-10 Hz) Lg wave is important for discriminating earthquakes from explosions by comparing its amplitude and waveform to those of Pg and Pn waves. Lateral variations in crustal structure, including variations in crustal thickness, intrinsic attenuation, and scattering, affect the efficiency of Lg propagation and its consistency as a source discriminant at regional (200-1500 km) distances. To investigate the effects of laterally varying Earth structure on the efficiency of propagation of Lg and Pg, we apply a radiative transport algorithm to model complete, high-frequency (2-4 Hz) regional coda envelopes. The algorithm propagates packets of energy with ray theory through large-scale 3-D structure, and includes the stochastic effects of multiple scattering by small-scale heterogeneities within the large-scale structure. Source-radiation patterns are described by moment tensors. Seismograms of explosion and earthquake sources are synthesized in canonical models to predict the effects on waveforms of paths crossing regions of crustal thinning (pull-apart basins and ocean/continent transitions) and thickening (collisional mountain belts). For paths crossing crustal thinning regions, Lg is amplified at receivers within the thinned region but strongly disrupted and attenuated at receivers beyond the thinned region. For paths crossing regions of crustal thickening, Lg amplitude is attenuated at receivers within the thickened region, but experiences little or no reduction in amplitude at receivers beyond the thickened region. The length of the Lg propagation path within a thickened region and the complexity of over- and under-thrust crustal layers can produce localized zones of Lg amplification or attenuation. Regions of intense scattering within laterally homogeneous models of the crust increase Lg attenuation but do not disrupt its coda shape.

  13. Dynamic acousto-elastic testing of concrete with a coda-wave probe: comparison with standard linear and nonlinear ultrasonic techniques.

    PubMed

    Shokouhi, Parisa; Rivière, Jacques; Lake, Colton R; Le Bas, Pierre-Yves; Ulrich, T J

    2017-11-01

    The use of nonlinear acoustic techniques in solids consists of measuring wave distortion arising from compliant features such as cracks, soft intergrain bonds, and dislocations. As such, they provide very powerful nondestructive tools to monitor the onset of damage within materials. In particular, a recent technique called dynamic acousto-elastic testing (DAET) gives unprecedented details on the nonlinear elastic response of materials (classical and non-classical nonlinear features including hysteresis, transient elastic softening and slow relaxation). Here, we provide a comprehensive set of linear and nonlinear acoustic responses on two prismatic concrete specimens: one intact and one pre-compressed to about 70% of its ultimate strength. The two linear techniques used are Ultrasonic Pulse Velocity (UPV) and Resonance Ultrasound Spectroscopy (RUS), while the nonlinear ones include DAET (fast and slow dynamics) as well as Nonlinear Resonance Ultrasound Spectroscopy (NRUS). In addition, the DAET results correspond to a configuration where the (incoherent) coda portion of the ultrasonic record is used to probe the samples, as opposed to the (coherent) first-arrival wave in standard DAET tests. We find that the two visually identical specimens are indistinguishable based on parameters measured by the linear techniques (UPV and RUS). In contrast, the nonlinear parameters extracted from NRUS and DAET are consistent with one another and orders of magnitude greater for the damaged specimen than for the intact one. This compiled set of linear and nonlinear ultrasonic testing data, including the most advanced technique (DAET), provides a benchmark comparison for their use in the field of material characterization.

  14. Constraints on Small-scale Heterogeneity in the Lowermost Mantle from Observations of Near Podal PcP Precursors

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Ni, S.; Sun, D.; Shen, Z.; Jackson, J. M.; Wu, W.

    2017-12-01

    Volumetric heterogeneity at large scales (>1000 km) and intermediate scales (>100 km) in the lowermost mantle has been established with seismological approaches. However, there are controversies regarding the level of heterogeneity in the lowermost mantle at small scales (a few kilometers to tens of kilometers), with lower-bound estimates ranging from 0.1% to a few percent. We take advantage of the small-amplitude PcP waves at near-podal distances (0-12°) to constrain the level of small-scale heterogeneity in the lowermost mantle. First, we compute short-period synthetic seismograms with a finite difference code for a series of volumetric heterogeneity models in the lowermost mantle, and find that PcP is not identifiable if the small-scale heterogeneity in the lowermost mantle is above 2.0%. We then use a functional form appropriate for coda decay to suppress P coda contamination. By comparing the corrected envelope of PcP and its precursors with synthetic seismograms, we find that the perturbation of small-scale (~8 km) heterogeneity in the lowermost mantle is 0.2% beneath regions to the east of the China-Myanmar border area, north of the Okhotsk Sea, and South America. The perturbation is 0.5% beneath the south of the Okhotsk Sea and the west of the China-Myanmar border area, whereas strong perturbations (~1.0%) are found beneath Central America. In the regions studied, we find that this particular type of small-scale heterogeneity in the lowermost mantle is weak, yet there are some regions requiring heterogeneity up to 1.0%. Where scattering is stronger, such as under Central America, more chemically complex mineral assemblages may be present at the core-mantle boundary.

  15. Inflammatory biomarkers and radiologic measurements in never-smokers with COPD: A cross-sectional study from the CODA cohort

    PubMed Central

    Lee, Hyun; Hong, Yoonki; Lim, Myoung Nam; Bak, So Hyeon; Kim, Min-Ji; Kim, Kyunga; Kim, Woo Jin; Park, Hye Yun

    2017-01-01

    Various biomarkers have emerged as potential surrogates to represent various subgroups of chronic obstructive pulmonary disease (COPD), which manifest with different phenotypes. However, the biomarkers representing never-smokers with COPD have not yet been well elucidated. The aim of this study was to evaluate the associations of certain serum and radiological biomarkers with the presence of COPD in never-smokers. To explore these associations, we conducted a cross-sectional study of never-smokers from the COPD in Dusty Areas (CODA) cohort, consisting of subjects living in dusty areas near cement plants in South Korea. Of the 131 never-smokers in the cohort, 77 (58.8%) had COPD. There were no significant differences in the number of subjects with high levels of inflammatory biomarkers (>90th percentile of never-smokers without COPD), including white blood cell count, total bilirubin, interleukin (IL)-6, IL-8, and C-reactive protein, or radiologic measurements (including emphysema index and mean wall area percentage) between never-smokers with COPD and those without COPD. However, the number of subjects with high uric acid was significantly higher in never-smokers with COPD than in those without COPD (31.2% (24/77) vs. 11.1% (6/54); p = 0.013). In addition, multivariate analysis revealed that high uric acid was significantly associated with the presence of COPD in never-smokers (adjusted relative risk: 1.63; 95% confidence interval: 1.21, 2.18; p = 0.001). Our study suggests that a high serum level of uric acid might be a potential biomarker for assessing the presence of COPD in never-smokers. PMID:29117798

  16. The Inhomogeneous Reionization Times of Present-day Galaxies

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre; Shapiro, Paul R.; Iliev, Ilian T.; Yepes, Gustavo; Gottlöber, Stefan; Hoffman, Yehuda; Teyssier, Romain

    2018-04-01

    Today’s galaxies experienced cosmic reionization at different times in different locations. For the first time, reionization (50% ionized) redshifts, z_R, at the location of their progenitors are derived from a new, fully coupled radiation-hydrodynamics simulation of galaxy formation and reionization at z > 6, matched to an N-body simulation evolved to z = 0. Constrained initial conditions were chosen to form the well-known structures of the local universe, including the Local Group and Virgo, in a (91 Mpc)^3 volume large enough to model both global and local reionization. The reionization simulation CoDa I-AMR, by the CPU-GPU code EMMA, used 2048^3 particles and 2048^3 initial cells, adaptively refined, while the N-body simulation CoDa I-DM2048, by Gadget2, used 2048^3 particles, to find reionization times for all galaxies at z = 0 with masses M(z = 0) ≥ 10^8 M_⊙. Galaxies with M(z = 0) ≳ 10^11 M_⊙ reionized earlier than the universe as a whole, by up to ∼500 Myr, with significant scatter. For Milky Way-like galaxies, z_R ranged from 8 to 15. Galaxies with M(z = 0) ≲ 10^11 M_⊙ typically reionized as late as or later than globally averaged 50% reionization at ⟨z_R⟩ = 7.8, in neighborhoods where reionization was completed by external radiation. The spread of reionization times within galaxies was sometimes as large as the galaxy-to-galaxy scatter. The Milky Way and M31 reionized earlier than global reionization but later than is typical for their mass, neither dominated by external radiation. Their most-massive progenitors at z > 6 had z_R = 9.8 (MW) and 11 (M31), while their total masses had z_R = 8.2 (both).

  17. Assuring dental hygiene clinical competence for licensure: a national survey of dental hygiene program directors.

    PubMed

    Fleckner, Lucinda M; Rowe, Dorothy J

    2015-02-01

    To conduct a national survey of dental hygiene program directors to gain their opinions of alternative assessments of clinical competency as qualifications for initial dental hygiene licensure. A 22-question survey, composed of statements eliciting Likert-scale responses, was developed and distributed electronically to 341 U.S. dental hygiene program directors. Responses were tabulated and analyzed using University of California, San Francisco Qualtrics® software. Data were summarized as frequencies of responses to each item on the survey. The response rate was 42% (n=143). The majority of respondents (65%) agreed that graduating from a Commission on Dental Accreditation (CODA)-approved dental hygiene program and passing the national board examination was the best measure to assure competence for initial licensure. Adding "successfully completing all program competency evaluations" to the above core qualifications yielded a similar percentage of agreement. Most (73%) agreed that "the variability of live patients as test subjects is a barrier to standardizing the state and regional examinations," while only 29% agreed that the "use of live patients as test subjects is essential to assure competence for initial licensure." The statement that the one-time state and regional examinations have "low validity in reflecting the complex responsibilities of the dental hygienist in practice" had a high (77%) level of agreement. Most dental hygiene program directors agree that graduating from a CODA-approved dental hygiene program and passing the national board examination would ensure that a graduate has achieved clinical competence and readiness to provide comprehensive patient-centered care as a licensed dental hygienist.

  18. Local infrasound observations of large ash explosions at Augustine Volcano, Alaska, during January 11–28, 2006

    USGS Publications Warehouse

    Petersen, Tanja; De Angelis, Silvio; Tytgat, Guy; McNutt, Stephen R.

    2006-01-01

    We present and interpret acoustic waveforms associated with a sequence of large explosion events that occurred during the initial stages of the 2006 eruption of Augustine Volcano, Alaska. During January 11–28, 2006, 13 large explosion events created ash-rich plumes that reached up to 14 km a.s.l., and generated atmospheric pressure waves that were recorded on scale by a microphone located at a distance of 3.2 km from the active vent. The variety of recorded waveforms included sharp N-shaped waves with durations of a few seconds, impulsive signals followed by complex codas, and extended signals with emergent character and durations up to minutes. Peak amplitudes varied between 14 and 105 Pa; inferred acoustic energies ranged between 2×10^8 and 4×10^9 J. A simple N-shaped short-duration signal recorded on January 11, 2006 was associated with the vent-opening blast that marked the beginning of the explosive eruption sequence. During the following days, waveforms with impulsive onsets and extended codas accompanied the eruptive activity, which was characterized by explosion events that generated large ash clouds and pyroclastic flows along the flanks of the volcano. Continuous acoustic waveforms that lacked a clear onset were more common during this period. On January 28, 2006, the occurrence of four large explosion events marked the end of this explosive eruption phase at Augustine Volcano. After a transitional period of about two days, characterized by many small discrete bursts, the eruption changed into a stage of more sustained and less explosive activity accompanied by the renewed growth of a summit lava dome.

  19. Graduate Periodontics Programs' Integration of Implant Provisionalization in Core Curricula: Implementation of CODA Standard 4-10.2.d.

    PubMed

    Barwacz, Christopher A; Pantzlaff, Ed; Allareddy, Veerasathpurush; Avila-Ortiz, Gustavo

    2017-06-01

    The aim of this descriptive study was to provide an overview of the status of implementation of Commission on Dental Accreditation (CODA) Standard 4-10.2.d (Provisionalization of Dental Implants) by U.S. graduate periodontics programs since its introduction in 2013. Surveys were sent in May 2015 to 56 accredited postdoctoral periodontics program directors to ascertain program director characteristics; the status of planning, implementation, and curriculum resulting from adoption of Standard 4-10.2.d; preferred clinical protocols for implant provisionalization; interdisciplinary educational collaborators; and competency assessment mechanisms. The survey response rate was 52% (N=29); the majority of respondents were male, aged 55 or older, and had held their position for less than ten years. Among the responding programs, 93% had formal educational curricula established in implant provisionalization. Graduate periodontics (96%) and prosthodontics (63%) faculty members were predominantly involved with curriculum planning. Of these programs, 96% used immediate implant provisionalization, with direct (chairside) provisionalization protocols (86%) preferred over indirect protocols (14%) and polyetheretherketone (PEEK) provisional abutments (75%) preferred to titanium (25%) provisional abutments. Straight and concave transmucosal emergence profile designs (46% each) were preferred in teaching, with only 8% of programs favoring convex transmucosal profiles. A majority of responding programs (67%) lacked protocols for communicating to the restorative referral a mechanism to duplicate the mature peri-implant mucosal architecture. Regional location did not play a significant role in any educational component related to implant provisionalization for these graduate periodontal programs. Overall, this study found that a clear majority of graduate periodontics programs had established formal curricula related to implant provisionalization, with substantial clinical and philosophical consensus within the specialty.

  20. Patient satisfaction survey of mandibular two-implant-retained overdentures in a predoctoral program.

    PubMed

    Dias, Renata; Moghadam, Marjan; Kuyinu, Esther; Jahangiri, Leila

    2013-08-01

    In response to the Commission on Dental Accreditation (CODA) mandate of a competency in the "replacement of teeth including fixed, removable and implant" prostheses, a predoctoral implant curriculum was implemented at New York University College of Dentistry. The assessment of the success or failure of such a program should include an assessment of patient satisfaction with the treatment received in the predoctoral clinics. The purpose of this study was to measure patient satisfaction with the mandibular two-implant-retained overdenture therapy received in the predoctoral program at New York University College of Dentistry. A telephone survey of patients who received an implant-retained overdenture in the predoctoral clinics (n=101) was conducted. Two of the authors contacted patients for participation in the survey and, using a prepared script, asked about their satisfaction with items such as function, comfort, and esthetics, in addition to their overall satisfaction with the treatment they received. Data were analyzed with descriptive statistics. The study revealed that 79% of participants were satisfied with their masticatory ability, 84% were satisfied with the comfort of the prosthesis, and 89% were satisfied with the esthetics of their new prosthesis. Additionally, 85% of participants reported satisfaction with the overall treatment experience, and 90% would recommend that a friend receive the same treatment. The results of this study support the incorporation of treatment with an implant-retained mandibular overdenture as part of the routine care provided in the predoctoral education program to meet the mandates of CODA.

  1. Support for equatorial anisotropy of Earth's inner-inner core from seismic interferometry at low latitudes

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Song, Xiaodong

    2018-03-01

    Anisotropy of Earth's inner core plays a key role in understanding its evolution and the Earth's magnetic field. Recently, using autocorrelations of earthquake coda, we found an equatorial anisotropy of the inner-inner core (IIC), in apparent contrast to the polar anisotropy of the outer-inner core (OIC). To reduce the influence of the polar anisotropy and possible contaminations from the large Fresnel zone of the PKIKP2 and PKIIKP2 phases at low frequencies, we processed coda noise of large earthquakes (10,000-40,000 s after events of magnitude ≥7.0) from stations at low latitudes (within ±35°) during 1990-2013. Using a number of improved procedures for both autocorrelation and cross-correlation, we extracted 52 array-stacked high-quality empirical Green's functions (EGFs), an increase of over 60% from our previous study. The high-quality data allow us to measure relative arrival times by automatic waveform cross-correlation. The results show large variation (∼10.9 s) in the differential times between the PKIKP2 and PKIIKP2 phases. The estimated influence of the Fresnel zone is insignificant (<1.1 s) compared to the observed data variation and measurement uncertainty. The observed time residuals match previous IIC models with a quasi-equatorial fast axis (near Central America and Southeast Asia) very well, and the spatial pattern from the low-latitude measurements is similar to that of the previous global dataset, including the fast axis and two low-velocity open rings, thus providing further support for the equatorial anisotropy model of the IIC. Speculations on the shift of the fast axis between the OIC and the IIC include a change of deformation regimes during the inner core's history, a change of the geomagnetic field, and a proto-inner core.
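
    The core of the EGF extraction described above is autocorrelation of late-coda noise, stacked over many windows so that coherent body-wave reverberations (such as PKIKP2 and PKIIKP2) emerge. A minimal sketch of that stacking step, without the whitening, filtering, and array-stacking refinements the study applies (names illustrative):

        import numpy as np

        def stacked_coda_autocorrelation(coda, win_len, dt):
            """Stack autocorrelations of successive coda windows to build
            an empirical Green's function estimate."""
            nwin = len(coda) // win_len
            stack = np.zeros(2 * win_len - 1)
            for i in range(nwin):
                w = coda[i * win_len:(i + 1) * win_len].astype(float)
                w -= w.mean()
                norm = np.sqrt(np.sum(w ** 2))        # energy-normalize window
                if norm > 0:
                    stack += np.correlate(w / norm, w / norm, mode="full")
            lags = (np.arange(stack.size) - (win_len - 1)) * dt
            return lags, stack / nwin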

  2. Features of Radiation and Propagation of Seismic Waves in the Northern Caucasus: Manifestations in Regional Coda

    NASA Astrophysics Data System (ADS)

    Kromskii, S. D.; Pavlenko, O. V.; Gabsatarova, I. P.

    2018-03-01

    Based on records from the Anapa (ANN) seismic station of 40 earthquakes (M_W > 3.9) that occurred within 300 km of the station from 2002 to the present, the source parameters and the quality factor Q(f) of the Earth's crust and upper mantle are estimated for S-waves in the 1-8 Hz frequency band. Regional coda analysis techniques, which allow separating the effects associated with the seismic source (source effects) from those associated with the propagation path of seismic waves (path effects), are employed. The Q-factor estimates are obtained in the form Q(f) = 90 × f^0.7 for epicentral distances r < 120 km and Q(f) = 90 × f^1.0 for r > 120 km. The established Q(f) and source parameters are close to estimates for Central Japan, which is probably due to the similar tectonic structure of the two regions. The shapes of the source spectra are found to be independent of earthquake magnitude in the range 3.9-5.6; however, the radiation of the high-frequency components (f > 4-5 Hz) is enhanced with source depth (down to h ≈ 60 km). The Q(f) estimates determined from records of the Sochi, Anapa, and Kislovodsk seismic stations allowed a more accurate determination of the seismic moments and magnitudes of Caucasian earthquakes. The studies will be continued to obtain Q(f) estimates, geometrical spreading functions, and frequency-dependent amplification of seismic waves in the Earth's crust in other regions of the Northern Caucasus.
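
    Power-law parameters such as the Q(f) = 90 × f^0.7 relation quoted above come from a linear regression of log Q on log f. A minimal sketch of that fit (illustrative, not the authors' code):

        import numpy as np

        def fit_q_power_law(freqs, q_values):
            """Fit Q(f) = Q0 * f**n by linear regression in log-log space;
            returns (Q0, n)."""
            n, log_q0 = np.polyfit(np.log(freqs), np.log(q_values), 1)
            return np.exp(log_q0), n

    For example, band-wise Q estimates of roughly 90, 146, and 237 at 1, 2, and 4 Hz would return Q0 ≈ 90 and n ≈ 0.7.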

  3. The reionization times of z=0 galaxies

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique

    2018-05-01

    We study the inhomogeneity of the reionization process by comparing the reionization times of z = 0 galaxies as a function of their mass. For this purpose, we combine the results of the CoDa I-AMR radiative hydrodynamics simulation of reionization with halo merger trees from a pure dark-matter tree-code simulation evolved to z = 0 from the same set of initial conditions. We find that galaxies with M(z = 0) > 10^11 M⊙ reionized earlier than the whole Universe, with, e.g., MW-like haloes reionized between 100 and 300 million years before the diffuse IGM. Lighter galaxies reionized as late as the global volume, probably from external radiation.

  4. Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method

    NASA Astrophysics Data System (ADS)

    Li, Jiahang; Shen, Yang; Zhang, Wei

    2018-02-01

    At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear waves, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D in Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures, and that the method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.
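
    The P/S mode separation step mentioned above is, in its standard (unmodified) form, a Helmholtz decomposition: the divergence of the back-propagated velocity field isolates the P (curl-free) part, and the curl isolates the S part. A minimal 2-D Cartesian sketch of the standard operator, not the paper's modified spherical-coordinate operator:

        import numpy as np

        def separate_p_s_2d(vx, vz, dx):
            """Helmholtz-style mode separation on a regular 2-D grid:
            divergence -> P contribution, y-curl -> SV contribution
            (overall sign is immaterial for energy imaging)."""
            p_part = np.gradient(vx, dx, axis=0) + np.gradient(vz, dx, axis=1)
            s_part = np.gradient(vx, dx, axis=1) - np.gradient(vz, dx, axis=0)
            return p_part, s_part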

  5. Pinpointing the North Korea Nuclear tests with body waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Bao, X.; Flinders, A. F.

    2017-12-01

    On September 3, 2017, North Korea conducted its sixth and by far largest nuclear test at the Punggye-ri test site. In this work, we apply a novel full-wave location method that combines a non-linear grid-search algorithm with a 3D strain Green's tensor database to locate this event. We use the first arrivals (Pn waves) and their immediate codas, which are likely dominated by waves scattered by the surface topography near the source, to pinpoint the source location. We assess each solution in the search volume using a least-squares misfit between the observed and synthetic waveforms, which are obtained using a collocated-grid finite difference method on curvilinear grids. We calculate the one-standard-deviation level of the 'best' solution as a posterior error estimate. Our results show that the waveform-based location method allows us to obtain accurate solutions with a small number of stations. The solutions are absolute locations, as opposed to relative locations based on relative travel times, because topography-scattered waves depend on the geometric relations between the source and the unique topography near it. Moreover, we use both differential waveforms and traveltimes to locate pairs of the North Korea tests in 2016 and 2017 to further reduce the effects of inaccuracies in the reference velocity model (CRUST 1.0). Finally, we compare our solutions with those of other studies based on satellite images and relative traveltimes.
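
    The grid-search location scheme described above reduces, in essence, to evaluating a least-squares waveform misfit at every trial node against precomputed synthetics. A minimal sketch, assuming synthetics for every node are available from a Green's tensor database (all names illustrative):

        import numpy as np

        def grid_search_location(observed, synthetics_by_node):
            """Return the trial node minimizing the L2 waveform misfit
            summed over stations. `observed` maps station -> trace;
            `synthetics_by_node` maps node -> {station -> trace}."""
            best_node, best_misfit = None, np.inf
            for node, synthetics in synthetics_by_node.items():
                misfit = sum(np.sum((observed[sta] - synthetics[sta]) ** 2)
                             for sta in observed)
                if misfit < best_misfit:
                    best_node, best_misfit = node, misfit
            return best_node, best_misfit

    In practice the traces would be band-passed and windowed around Pn and its immediate coda before the comparison, and the misfit values over the whole grid define the one-standard-deviation uncertainty region.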

  6. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate variable to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method, carried out with Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, and hence the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
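
    A model of this shape can also be sampled without BUGS; the sketch below uses a plain random-walk Metropolis step for the probit-link regression P(y = 1) = Phi(a + b*x), with x in {0, 1} encoding the two populations. This is a simplified stand-in (flat priors, no scale parameter), not the paper's BUGS/ARS implementation:

        import numpy as np
        from scipy.stats import norm

        def metropolis_probit(y, x, n_iter=5000, step=0.05, seed=0):
            """Posterior draws of (a, b) for P(y=1) = Phi(a + b*x)."""
            rng = np.random.default_rng(seed)

            def loglik(theta):
                p = np.clip(norm.cdf(theta[0] + theta[1] * x), 1e-12, 1 - 1e-12)
                return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

            theta = np.zeros(2)
            ll = loglik(theta)
            draws = np.empty((n_iter, 2))
            for i in range(n_iter):
                proposal = theta + step * rng.standard_normal(2)
                ll_prop = loglik(proposal)
                if np.log(rng.random()) < ll_prop - ll:   # accept/reject
                    theta, ll = proposal, ll_prop
                draws[i] = theta
            return draws

    Convergence of such chains is exactly what the CODA package diagnostics mentioned in the abstract assess (trace plots, autocorrelations, Gelman-Rubin statistics).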

  7. Terrestrial Planet Finder: Coda to 10 Years of Technology Development

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.

    2009-01-01

    The Terrestrial Planet Finder (TPF) was proposed as a mission concept to the 2000 Decadal Survey, and received a very high ranking amongst the major initiatives that were then reviewed. As proposed, it was a formation flying array of four 3-m class mid-infrared telescopes, linked together as an interferometer. Its science goal was to survey 150 nearby stars for the presence of Earth-like planets, to detect signs of life or habitability, and to enable revolutionary advances in high angular resolution astrophysics. The Decadal Survey Committee recommended that $200M be invested to advance TPF technology development in the Decade of 2000-2010. This paper presents the results of NASA's investment.

  8. AxIOM: Amphipod crustaceans from insular Posidonia oceanica seagrass meadows

    PubMed Central

    Heughebaert, André; Lepoint, Gilles

    2016-01-01

    Background: The Neptune grass, Posidonia oceanica (L.) Delile, 1813, is the most widespread seagrass of the Mediterranean Sea. This foundation species forms large meadows that, through habitat and trophic services, act as biodiversity hotspots. In Neptune grass meadows, amphipod crustaceans are one of the dominant groups of vagile invertebrates, forming an abundant and diverse taxocenosis. They are key ecological components of the complex, pivotal, yet critically endangered Neptune grass ecosystems. Nevertheless, comprehensive qualitative and quantitative data about amphipod fauna found in Mediterranean Neptune grass meadows remain scarce, especially in insular locations. New information: Here, we provide in-depth metadata about AxIOM, a sample-based dataset published on the GBIF portal. AxIOM is based on an extensive and spatially hierarchized sampling design with multiple years, seasons, day periods, and methods. Samples were taken along the coasts of Calvi Bay (Corsica, France) and of the Tavolara-Punta Coda Cavallo Marine Protected Area (Sardinia, Italy). In total, AxIOM contains 187 samples documenting the occurrence (1775 records) and abundance (10720 specimens) of amphipod crustaceans belonging to 72 species spanning 29 families. The dataset is available at http://ipt.biodiversity.be/resource?r=axiom. PMID:27660521

  9. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    NASA Astrophysics Data System (ADS)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

    Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear-wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate bias in the determination of source parameters. The Qβ(f) values have been estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation is obtained as Qβ(f) = (152.9 ± 7) f^(0.82±0.005) by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress, and radiated energy) are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop, and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focus earthquakes with low stress drop. The estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by the partial stress drop and low effective stress models. The presence of subsurface fluids at seismogenic depth certainly influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution regarding the effects of finite bandwidth, attenuation, and site corrections. The scaling can be improved further by integrating a large dataset of microearthquakes and using a stable and robust approach.
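
    The coda wave normalization step used here cancels source and site terms by dividing the direct S amplitude by the coda amplitude at a fixed lapse time; the distance decay of the normalized amplitude then gives Qβ. A minimal sketch of that regression, assuming body-wave (1/r) geometrical spreading (names and the default velocity are illustrative):

        import numpy as np

        def coda_normalized_q(r, s_amp, coda_amp, freq, beta=3.5):
            """ln(A_S * r / A_coda) decays linearly with hypocentral
            distance r; slope = -pi*f/(Q*beta), so Q = -pi*f/(slope*beta).
            r in km, beta in km/s, freq in Hz."""
            y = np.log(s_amp * r / coda_amp)
            slope, _ = np.polyfit(r, y, 1)
            return -np.pi * freq / (slope * beta)

    Repeating the fit band by band yields the Qβ(f) values to which the power law Qβ(f) = (152.9 ± 7) f^(0.82±0.005) was fitted.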

  10. Dense array recordings in the San Bernardino Valley of landers-big bear aftershocks: Basin surface waves, Moho reflections, and three-dimensional simulations

    USGS Publications Warehouse

    Frankel, Arthur

    1994-01-01

    Fourteen GEOS seismic recorders were deployed in the San Bernardino Valley to study the propagation of short-period (T ≈ 1 to 3 sec) surface waves and Moho reflections. Three dense arrays were used to determine the direction and speed of propagation of arrivals in the seismograms. The seismograms for a shallow (d ≈ 1 km) M 4.9 aftershock of the Big Bear earthquake exhibit a very long duration (60 sec) of sustained shaking at periods of about 2 sec. Array analysis indicates that these late arrivals are dominated by surface waves traveling in various directions across the Valley. Some energy arrives from a direction 180° from the epicenter and was apparently reflected from the edge of the Valley opposite the source. A close-in aftershock (Δ = 25 km, depth = 7 km) displays substantial short-period surface waves at deep-soil sites. A three-dimensional (3D) finite difference simulation produces synthetic seismograms with durations similar to those of the observed records for this event, indicating the importance of S-wave to surface-wave conversion near the edge of the basin. Flat-layered models severely underpredict the duration and spectral amplification at this deep-soil site. I show an example where the coda wave amplitude ratio at 1 to 2 Hz between a deep-soil and a rock site does not equal the S-wave amplitude ratio, because of the presence of surface waves in the coda at the deep-soil site. For one of the events studied (Δ ≈ 90 km), there are sizable phases that are critically reflected from the Moho (PmP and SmS). At one of the rock sites, the SmS phase has a more peaked spectrum than the direct S wave.

  11. Lapse time and frequency-dependent coda wave attenuation for Delhi and its surrounding regions

    NASA Astrophysics Data System (ADS)

    Das, Rabin; Mukhopadhyay, Sagarika; Singh, Ravi Kant; Baidya, Pushap R.

    2018-07-01

    Attenuation of seismic wave energy in Delhi and its surrounding regions has been estimated using the coda of local earthquakes. Estimated quality factor (Qc) values are strongly dependent on frequency and lapse time. The frequency dependence of Qc has been estimated from the relationship Qc(f) = Q0 f^n for different lapse-time window lengths. Q0 and n vary from 73 to 453 and from 0.97 to 0.63, respectively, for lapse-time window lengths of 15 s to 90 s. The average frequency-dependent relation for the entire region is Qc(f) = (135 ± 8) f^(0.96±0.02) for a window length of 30 s, where the average Qc value varies from 200 at 1.5 Hz to 1962 at 16 Hz. These values show that the region is seismically active and highly heterogeneous. The entire study region is divided into two sub-regions according to the geology of the area to investigate whether there is a spatial variation in attenuation characteristics. It is observed that at smaller lapse times both regions have similar Qc values. However, at larger lapse times the rate of increase of Qc with frequency is larger for Region 2 than for Region 1. This is understandable, as Region 2 is closer to the tectonically more active Himalayan ranges and is seismically more active than Region 1. The difference in the variation of Qc with frequency between the two regions is such that at larger lapse times and higher frequencies Region 2 shows higher Qc than Region 1; at lower frequencies the opposite holds. This indicates a systematic variation in attenuation characteristics from the south (Region 1) to the north (Region 2) in the deeper part of the study area, which can be explained by an increase in heat flow and a decrease in the age of the rocks from south to north.
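
    The single-backscattering model behind such Qc estimates predicts a coda envelope A(t|f) proportional to t^-1 exp(-pi f t / Qc), so Qc follows from a straight-line fit of ln(A*t) against lapse time t over the chosen window. A minimal sketch, assuming a band-passed, smoothed envelope as input (illustrative names):

        import numpy as np

        def coda_qc(envelope, lapse_times, freq):
            """Aki-Chouet-style coda-Q: fit ln(A(t)*t) = c - (pi*f/Qc)*t
            over the coda window; longer windows sample deeper structure,
            which is why Qc grows with lapse time."""
            y = np.log(envelope * lapse_times)
            slope, _ = np.polyfit(lapse_times, y, 1)
            return -np.pi * freq / slope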

  12. Magnitude Based Discrimination of Manmade Seismic Events From Naturally Occurring Earthquakes in Utah, USA

    NASA Astrophysics Data System (ADS)

    Koper, K. D.; Pechmann, J. C.; Burlacu, R.; Pankow, K. L.; Stein, J. R.; Hale, J. M.; Roberson, P.; McCarter, M. K.

    2016-12-01

    We investigate the feasibility of using the difference between local (ML) and coda duration (MC) magnitude as a means of discriminating manmade seismic events from naturally occurring tectonic earthquakes in and around Utah. Using a dataset of nearly 7,000 well-located earthquakes in the Utah region, we find that ML-MC is on average 0.44 magnitude units smaller for mining induced seismicity (MIS) than for tectonic seismicity (TS). MIS occurs within near-surface low-velocity layers that act as a waveguide and preferentially increase coda duration relative to peak amplitude, while the vast majority of TS occurs beneath the near-surface waveguide. A second dataset of more than 3,700 probable explosions in the Utah region also has significantly lower ML-MC values than TS, likely for the same reason as the MIS. These observations suggest that ML-MC, or related measures of peak amplitude versus signal duration, may be useful for discriminating small explosions from earthquakes at local-to-regional distances. ML and MC can be determined for small events with relatively few observations, hence an ML-MC discriminant can be effective in cases where moment tensor inversion is not possible because of low data quality or poorly known Green's functions. Furthermore, an ML-MC discriminant does not rely on the existence of the fast attenuating Rg phase at regional distances. ML-MC may provide a local-to-regional distance extension of the mb-MS discriminant that has traditionally been effective at identifying large nuclear explosions with teleseismic data. This topic is of growing interest in forensic seismology, in part because the Comprehensive Nuclear Test Ban Treaty (CTBT) is a zero tolerance treaty that prohibits all nuclear explosions, no matter how small. If the CTBT were to come into force, source discrimination at local distances would be required to verify compliance.
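
    A toy version of the proposed screen is a simple threshold on the magnitude difference; the cutoff below is purely illustrative (loosely motivated by the 0.44-unit mean offset quoted above), not a calibrated decision rule:

        import numpy as np

        def ml_mc_screen(ml, mc, threshold=-0.22):
            """Flag events with anomalously low ML - MC as candidate
            manmade events (mining-induced seismicity, explosions)."""
            return (np.asarray(ml) - np.asarray(mc)) < threshold

    A calibrated discriminant would instead set the threshold from the empirical ML - MC distributions of known explosions and earthquakes, trading missed detections against false alarms.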

  13. Trade-off of Elastic Structure and Q in Interpretations of Seismic Attenuation

    NASA Astrophysics Data System (ADS)

    Deng, Wubing; Morozov, Igor B.

    2017-10-01

    The quality factor Q is an important phenomenological parameter measured from seismic or laboratory seismic data and representing wave-energy dissipation rate. However, depending on the types of measurements and models or assumptions about the elastic structure, several types of Qs exist, such as intrinsic and scattering Qs, coda Q, and apparent Qs observed from wavefield fluctuations. We consider three general types of elastic structures that are commonly encountered in seismology: (1) shapes and dimensions of rock specimens in laboratory studies, (2) geometric spreading or scattering in body-, surface- and coda-wave studies, and (3) reflectivity on fine layering in reflection seismic studies. For each of these types, the measured Q strongly trades off with the (inherently limited) knowledge about the respective elastic structure. For the third of the above types, the trade-off is examined quantitatively in this paper. For a layered sequence of reflectors (e.g., an oil or gas reservoir or a hydrothermal zone), reflection amplitudes and phases vary with frequency, which is analogous to a reflection from a contrast in attenuation. We demonstrate a quantitative equivalence between phase-shifted reflections from anelastic zones and reflections from elastic layering. Reflections from the top of an elastic layer followed by weaker reflections from its bottom can appear as resulting from a low Q within or above this layer. This apparent Q can be frequency-independent or -dependent, according to the pattern of thin layering. Due to the layering, the interpreted Q can be positive or negative, and it can depend on source-receiver offsets. Therefore, estimating Q values from frequency-dependent or phase-shifted reflection amplitudes always requires additional geologic or rock-physics constraints, such as sparseness and/or randomness of reflectors, the absence of attenuation in certain layers, or specific physical mechanisms of attenuation. Similar conclusions about the necessity of extremely detailed models of the elastic structure apply to other types of Q measurements.
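
    The reflection-layering trade-off quantified in this paper can be reproduced with a purely elastic toy model: a thin layer whose top and bottom reflections interfere produces a frequency-dependent composite reflection that can be misread as attenuation. A minimal sketch with illustrative parameter values:

        import numpy as np

        def thin_layer_reflection(freqs, r_top=0.10, r_bot=-0.08,
                                  thickness=20.0, velocity=3000.0):
            """Composite reflection coefficient of an elastic thin layer:
            R(f) = r_top + r_bot * exp(-2j*pi*f*tau), tau = two-way time
            in the layer. The amplitude and phase variation with frequency
            can mimic a low (or even negative) apparent Q."""
            tau = 2.0 * thickness / velocity
            return r_top + r_bot * np.exp(-2j * np.pi * freqs * tau)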

  14. Deep heterogeneous structure of active faults in the Kinki region, southwest Japan: Inversion analysis of coda envelopes

    NASA Astrophysics Data System (ADS)

    Nishigami, K.

    2006-12-01

    It is essential to estimate the deep structure of active faults related to the earthquake rupture process, as well as the crustal structure related to the propagation of seismic waves, in order to improve the accuracy of estimates of strong ground motion caused by future large inland earthquakes. In the Kinki region, southwest Japan, there are several active fault zones near large cities such as Osaka and Kyoto, and the evaluation of realistic strong ground motion is an important subject. We have been carrying out the Special Project for Earthquake Disaster Mitigation in Urban Areas in the Kinki region for these purposes. In this presentation we show the results of estimating fault structure models of the Biwako-seigan, Hanaore, and Arima-Takatsuki fault zones. We estimated the 3-D distribution of relative scattering coefficients in the Kinki region, and in the vicinity of each active fault zone, by inversion of coda envelopes from local earthquakes. We analyzed 758 seismograms from 52 events that occurred in 2003, recorded at 50 stations of Kyoto University, Hi-net, and JMA. The preliminary result shows that active fault zones can be imaged as having higher scattering than their surroundings. Based on previous studies of scattering properties in the crust, we consider that relatively weaker-scattering (more homogeneous) parts of a fault plane may act as asperities during future large earthquakes, and that relatively stronger-scattering (more heterogeneous) parts may become initiation points of rupture. We are also studying the detailed distribution of microearthquakes, b-values, and velocity anomalies along these active fault zones. Combining these results, we will construct a possible fault model for each of the active fault zones. This study is sponsored by the Special Project for Earthquake Disaster Mitigation in Urban Areas from the Ministry of Education, Culture, Sports, Science and Technology of Japan.

  15. Rotational motions from the 2016, Central Italy seismic sequence, as observed by an underground ring laser gyroscope

    NASA Astrophysics Data System (ADS)

    Simonelli, A.; Igel, H.; Wassermann, J.; Belfi, J.; Di Virgilio, A.; Beverini, N.; De Luca, G.; Saccorotti, G.

    2018-05-01

    We present an analysis of rotational and translational ground motions from earthquakes recorded during October and November 2016, in association with the Central Italy seismic sequence. We use co-located measurements of the vertical ground rotation rate from a large ring laser gyroscope (RLG) and the three components of ground velocity from a broadband seismometer. Both instruments are positioned in a deep underground environment, within the Gran Sasso National Laboratories (LNGS) of the Istituto Nazionale di Fisica Nucleare (INFN). We collected dozens of events spanning the magnitude range 3.5-5.9 and epicentral distances between 30 km and 70 km. This data set constitutes an unprecedented observation of the vertical rotational motions associated with an intense seismic sequence at local distances. Under the plane-wave approximation, we process the data set to obtain experimental estimates of the events' back azimuths. Peak values of rotation rate (PRR) and horizontal acceleration (PGA) are markedly correlated, with a scaling constant consistent with previous measurements from different earthquake sequences. We used a prediction model in use for Italy to calculate the expected PGA at the recording site, thereby obtaining predictions for PRR. Within the modeling uncertainties, predicted rotations are consistent with the observed ones, suggesting the possibility of establishing specific attenuation models for ground rotations, analogous to the scaling of peak velocity and peak acceleration in empirical ground-motion prediction relationships. In a second step, after identifying the direction of the incoming wavefield, we extract phase velocity data using the spectral ratio of the translational and rotational components. This analysis is performed over time windows associated with the P coda, S coda, and Lg phase. Results are consistent with independent estimates of shear-wave velocities in the shallow crust of the Central Apennines.
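
    For an SH-type plane wave, co-located transverse acceleration and vertical rotation rate are proportional, with the apparent phase velocity as the scale factor; this is the basis of the spectral-ratio step mentioned above. A minimal sketch using raw spectra (in practice one smooths the spectra and restricts to coherent frequency bands):

        import numpy as np

        def phase_velocity_spectral_ratio(acc_transverse, rot_rate_z, dt):
            """c(f) = |A_t(f)| / (2 |Omega_z(f)|) for a plane SH wave,
            from co-located transverse acceleration and vertical
            rotation-rate records sampled at interval dt."""
            freqs = np.fft.rfftfreq(len(acc_transverse), dt)
            A = np.abs(np.fft.rfft(acc_transverse))
            R = np.abs(np.fft.rfft(rot_rate_z))
            return freqs, A / (2.0 * R)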

  16. IRIS DMC products help explore the Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Trabant, C.; Hutko, A. R.; Bahavar, M.; Ahern, T. K.; Benson, R. B.; Casey, R.

    2011-12-01

    Within two hours after the great March 11, 2011 Tohoku earthquake, the IRIS DMC started publishing automated data products through its Searchable Product Depository (SPUD), which provides quick viewing of many aspects of the data and preliminary analysis of this great earthquake. These products are part of the DMC's data product development effort and serve many purposes: stepping-stones for future research projects, data visualizations, data characterization, research result comparisons, and outreach material. Our current and soon-to-be-released products that allow users to explore this and other global M>6.0 events include: 1) Event Plots, a suite of maps, record sections, regional vespagrams, and P-coda stacks; 2) USArray Ground Motion Visualizations that show the vertical and horizontal global seismic wavefield sweeping across USArray, including minor- and major-arc surface waves and their polarizations; 3) back-projection movies that show the time history of short-period energy from the rupture; 4) R1 source-time functions that show approximate duration and source directivity; and 5) aftershock sequence maps and statistics movies based on NEIC alerts that self-update every hour in the first few days following the mainshock. Higher-order information about the Tohoku event that can be inferred from our products includes a rupture duration of order 150 sec (P-coda stacks, back-projections, R1 STFs) that ruptured approximately 400 km along strike, primarily towards the south (back-projections, R1 STFs, aftershock animation), with a very low rupture velocity (back-projections, R1 STFs). All of our event-based products are automated and consistently produced shortly after the event so that they may serve as familiar baselines for the seismology research community. More details on these and other existing products are available at: http://www.iris.edu/dms/products/

  17. Time reversal seismic imaging using laterally reflected surface waves in southern California

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Tromp, J.; Plesch, A.; Shaw, J. H.

    2010-12-01

    We use observed post-surface-wave seismic waveforms to image shallow (upper 10 km) lateral reflectors in southern California. Our imaging technique employs the 3D crustal model m16 of Tape et al. (2009), which is accurate for most local earthquakes over the period range 2-30 s. Model m16 captures the resonance of the major sedimentary basins in southern California, as well as some lateral surface wave reflections associated with these basins. We apply a 3D Gaussian smoothing function (12 km horizontal, 2 km vertical) to model m16. This smoothing has the effect of suppressing synthetic waveforms within the period range of interest (3-10 s) that are associated with reflections (single and multiple) from these basins. The smoothed 3D model serves as the background model within which we propagate an "adjoint wavefield" composed of time-reversed windows of post-surface-wave coda waveforms that are initiated at the respective station locations. This adjoint wavefield constructively interferes with the regular wavefield at the locations of potential reflectors. The potential reflectors are revealed in an "event kernel," which is the time-integrated volumetric field for each earthquake. By summing (or "stacking") the event kernels from 28 well-recorded earthquakes, we identify several reflectors using this imaging procedure. The most prominent lateral reflectors occur in proximity to the southernmost San Joaquin basin, the Los Angeles basin, the San Pedro basin, the Ventura basin, the Manix basin, the San Clemente-Santa Cruz-Santa Barbara ridge, and isolated segments of the San Jacinto and San Andreas faults. The correspondence between observed coherent coda waveforms and the imaged reflectors provides a solid basis for interpreting the kernel features as material contrasts. The 3D spatial extent and amplitude of the kernel features provide constraints on the geometry and material contrast of the imaged reflectors.

  18. Equatorial anisotropy of the Earth's inner inner core from autocorrelations of earthquake coda

    NASA Astrophysics Data System (ADS)

    Wang, T.; Song, X.; Xia, H.

    2014-12-01

    The anisotropic structure of the inner core seems complex with significant depth and lateral variations. An innermost inner core has been suggested with a distinct form of anisotropy, but it has considerable uncertainties in its form, size, or even existence. All the previous inner-core anisotropy models have assumed a cylindrical anisotropy with the symmetry axis parallel (or nearly parallel) to the Earth's spin axis. In this study, we obtain inner-core phases, PKIIKP2 and PKIKP2 (the round-trip phases between the station and its antipode that passes straight through the center of the Earth and that is reflected from the inner-core boundary, respectively), from stackings of autocorrelations of earthquake coda at seismic station clusters around the world. The differential travel times PKIIKP2 - PKIKP2, which are sensitive to inner-core structure, show fast arrivals at high latitudes. However, we also observed large variations of up to 10 s along equatorial paths. These observations can be explained by a cylindrical anisotropy in the inner inner core (IIC) (with a radius of slightly less than half the inner core radius) that has a fast axis aligned near the equator and a cylindrical anisotropy in the outer inner core (OIC) that has a fast axis along the north-south direction. The equatorial fast axis of the IIC is near the Central America and the Southeast Asia. The form of the anisotropy in the IIC is distinctly different from that in the OIC and the anisotropy amplitude in the IIC is about 70% stronger than in the OIC. The different forms of anisotropy may be explained by a two-phase system of iron in the inner core (hcp in the OIC and bcc in the IIC). These results may suggest a major shift of the tectonics of the inner core during its formation and growth.

  19. Equatorial anisotropy of the Earth's inner-inner core

    NASA Astrophysics Data System (ADS)

    Song, X.; Wang, T.; Xia, H.

    2015-12-01

    Anisotropy of Earth's inner core is key to understanding its evolution and the generation of the Earth's magnetic field. All previous inner core anisotropy models have assumed a cylindrical anisotropy with the symmetry axis parallel (or nearly parallel) to the Earth's spin axis. However, we have recently found, from inner-core waves extracted from earthquake coda, that the fast axis in the inner part of the inner core is close to the equator. We obtained the inner core phases PKIIKP2 and PKIKP2 (round-trip phases between a station and its antipode that pass straight through the center of the Earth and that are reflected from the inner core boundary, respectively) from stackings of autocorrelations of the coda of large earthquakes (10,000-40,000 s after Mw ≥ 7.0 earthquakes) at seismic station clusters around the world. We observed large variations of up to 10 s along equatorial paths in the differential travel times PKIIKP2 - PKIKP2, which are sensitive to inner-core structure. The observations can be explained by a cylindrical anisotropy in the inner inner core (IIC) (with a radius of slightly less than half the inner core radius) that has a fast axis aligned near the equator, and a cylindrical anisotropy in the outer inner core (OIC) that has a fast axis along the north-south direction. We have obtained more observations using the combination of autocorrelations and cross-correlations at low-latitude station arrays. The results further confirm that the IIC has an equatorial anisotropy with a pattern different from the OIC. The equatorial fast axis of the IIC is near Central America and Southeast Asia. The drastic change in the fast axis and the form of anisotropy from the IIC to the OIC may suggest a phase change of the iron or a major shift in crystallization and deformation during the formation and growth of the inner core.

  20. Attenuation Characteristics of High Frequency Seismic Waves in Southern India

    NASA Astrophysics Data System (ADS)

    Sivaram, K.; Utpal, Saikia; Kanna, Nagaraju; Kumar, Dinesh

    2017-07-01

    We present a systematic study of seismic attenuation and its related Q structure derived from the spectral analysis of P- and S-waves in southern India. The study region is separated into parts of the Eastern Dharwar Craton (EDC), Western Dharwar Craton (WDC), and Southern Granulite Terrain (SGT). The study is carried out in the frequency range 1-20 Hz using a single-station spectral ratio technique. We make use of about 45 earthquakes with magnitudes (M_L) varying from 1.6 to 4.5, recorded on a network of about 32 broadband three-component seismograph stations, to estimate the average body-wave attenuation quality factors Q_P and Q_S. Their estimated average values fit the power-law form Q = Q_0 f^n. The averaged power-law relations for the southern Indian region as a whole are Q_P = (95 ± 1.12) f^(1.32±0.01) and Q_S = (128 ± 1.84) f^(1.49±0.01). Based on the stations and recorded local earthquakes, the average power-law estimates for parts of the EDC, WDC, and SGT are: Q_P = (97 ± 5) f^(1.40±0.03) and Q_S = (116 ± 1.5) f^(1.48±0.01) for the EDC region; Q_P = (130 ± 7) f^(1.20±0.03) and Q_S = (103 ± 3) f^(1.49±0.02) for the WDC region; and Q_P = (68 ± 2) f^(1.4±0.02) and Q_S = (152 ± 6) f^(1.48±0.02) for the SGT region. These estimates are weighed against coda-Q (Q_C) estimates obtained using the coda decay technique, which is based on weak backscattering of S-waves. Major observations from the body-wave analysis are low body-wave Q (Q_0 < 200), a moderately high frequency exponent n (>0.5), and Q_S/Q_P ≫ 1, suggesting that scattering is the dominant mode of seismic wave propagation over lateral stretches of the region. This could primarily be attributed to possible thermal anomalies and the spread of partially fluid-saturated rock masses in the crust and upper mantle of southern India, although this needs further laboratory study. Such physical conditions might partly correlate with the active seismicity and intraplate tectonism, especially in the SGT and EDC regions, as indicated by the observed low Q_P and Q_S values. Additionally, the enrichment of coda waves and the significance of scattering mechanisms are evidenced by our observation that Q_C > Q_S. A lapse-time study shows Q_C values increasing with lapse time; high Q_C values at 40 s lapse time in the WDC indicate that it may be a relatively stable region. In the absence of detailed body-wave attenuation studies in this region, the frequency-dependent Q relationships developed here are useful for estimating earthquake source parameters. These relations may also be used for simulating strong ground motions, which are required for seismic hazard estimation and for geotechnical and retrofitting analyses of critical structures in the region.

  1. High-resolution imaging of the low velocity layer in Alaskan subduction zone with scattered waves and interferometry

    NASA Astrophysics Data System (ADS)

    Kim, D.; Keranen, K. M.; Abers, G. A.; Kim, Y.; Li, J.; Shillington, D. J.; Brown, L. D.

    2017-12-01

    The physical factors that control the rupture process of great earthquakes at convergent plate boundaries remain incompletely understood. Recent developments in imaging with the teleseismic wavefield have led to marked advances at wavelengths of a few kilometers to tens of kilometers, but the teleseismic wavefield is fundamentally limited by its low frequency content; higher-resolution imaging of the rupture zone would therefore improve parameter estimation. This study compares and evaluates two seismic imaging techniques that use high-frequency signals, from teleseismic coda and from earthquake scattered waves, to image the subducting Yakutat oceanic plateau in the Alaska subduction zone. We use earthquakes recorded by the MOOS PASSCAL broadband deployment in southern Alaska. In our first method, we select local earthquakes that lie directly beneath and laterally near the recording array, and extract body wave information via a simple autocorrelation and stacking; profiles analogous to seismic reflection profiles are constructed using the near-vertically travelling waves. In our second method, we compute teleseismic receiver functions within the 0.02-1.0 Hz frequency band. Both methods image interfaces that we associate with the subducting oceanic plate in the Alaska-Aleutian system, with greater resolution than commonly used methods with teleseismic sources. The structural details in our results can further our understanding of the conditions and materials that characterize subduction megathrusts, and the techniques can be employed in other regions along the Alaska-Aleutian system and at other convergent margins with suitable seismic arrays.

  2. Making waves.

    NASA Astrophysics Data System (ADS)

    Townes, C. H.

    The author takes the reader on a behind-the-scenes tour of his way of working. Along the way, one learns about how the author came upon his surprising findings and how he managed to avoid obstacles in his path. He introduces the reader to the wonders of the universe, from the submicroscopic, most minute - the workings of atoms and the even smaller particles that make them up - to the vast outer reaches of space. His tour takes one along paths Townes pioneered: quantum electronics, microwave spectroscopy and the frontiers of our galaxy where he explored the dark, rarefied clouds of gas and dust where new stars form. The book concludes with a uniquely personal coda in which Townes suggests that science and religion occupy the same terrain.

  3. Observation and modeling of source effects in coda wave interferometry at Pavlof volcano

    USGS Publications Warehouse

    Haney, M.M.; van, Wijik K.; Preston, L.A.; Aldridge, D.F.

    2009-01-01

    Sorting out source and path effects for seismic waves at volcanoes is critical for the proper interpretation of underlying volcanic processes. Source or path effects imply that seismic waves interact strongly with the volcanic subsurface, either through partial resonance in a conduit (Garces et al., 2000; Sturton and Neuberg, 2006) or by random scattering in the heterogeneous volcanic edifice (Wegler and Luhr, 2001). As a result, both source and path effects can cause seismic waves to repeatedly sample parts of the volcano, leading to enhanced sensitivity to small changes in material properties at those locations. The challenge for volcano seismologists is to detect and reliably interpret these subtle changes for the purpose of monitoring eruptions. © 2009 Society of Exploration Geophysicists.

  4. Scientific Travels in the Irish Countryside

    NASA Astrophysics Data System (ADS)

    Greenslade, Thomas B.

    Ireland is one of the most popular destinations for American travelers. The market is enormous: while there are only 3.6 million Irish in the Republic of Ireland and 1.6 million in Northern Ireland, there are 40 million Americans of Irish descent. Almost every person you speak with in Ireland has a cousin in Chicago, an aunt in Boston and a brother in Los Angeles. The author has no Irish relatives at all, but went to Ireland in June 1998 and September 1999 to visit scientific sites. This article describes three of them: The collections of historical apparatus at the Universities in Maynooth and Galway, and the Great Rosse Telescope in Birr. I have added a short coda about geologic sites in Ireland.

  5. A two-component Bayesian mixture model to identify implausible gestational age.

    PubMed

    Mohammadian-Khoshnoud, Maryam; Moghimbeigi, Abbas; Faradmal, Javad; Yavangi, Mahnaz

    2016-01-01

    Background: Birth weight and gestational age are two important variables in obstetric research. The primary measure of gestational age is based on a mother's recall of her last menstrual period. This recall may introduce random or systematic errors. The objective of this study is therefore to utilize a Bayesian mixture model to identify implausible gestational ages. Methods: In this cross-sectional study, medical documents of 502 preterm infants born and hospitalized in Hamadan Fatemieh Hospital from 2009 to 2013 were gathered. Preterm infants were classified as less than 28 weeks and 28 to 31 weeks. A two-component Bayesian mixture model was used to identify implausible gestational ages; the first component models the probability of correct classification of gestational age and the second the probability of incorrect classification. The data were analyzed with OpenBUGS 3.2.2 and the 'coda' package of R 3.1.1. Results: The means (SD) of the second component for the less than 28 weeks and the 28 to 31 weeks groups were 1179 (0.0123) and 1620 (0.0074), respectively. These values were larger than the means of the first component for both groups, which were 815.9 (0.0123) and 1061 (0.0074), respectively. Conclusion: Errors in the recorded gestational ages of these two groups of preterm infants included recording a gestational age smaller than the actual value at birth. Developing scientific methods to correct such errors is therefore essential for providing desirable health services and computing accurate health indicators.
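
    For intuition, a frequentist stand-in for the two-component idea is sketched below: an EM-fitted two-component Gaussian mixture in which the second component plays the role of the 'implausible' class. This is an illustrative assumption, not the OpenBUGS model fitted in the paper.

        import numpy as np
        from scipy.stats import norm

        def two_component_em(x, iters=200):
            """EM for a two-component Gaussian mixture over a continuous
            variable; responsibilities of component 2 flag suspect records."""
            mu = np.percentile(x, [25, 75]).astype(float)
            sd = np.array([x.std(), x.std()])
            w = np.array([0.5, 0.5])
            for _ in range(iters):
                dens = np.vstack([w[k] * norm.pdf(x, mu[k], sd[k]) for k in range(2)])
                r = dens / dens.sum(axis=0)               # E-step: responsibilities
                w = r.mean(axis=1)                        # M-step: weights,
                mu = (r * x).sum(axis=1) / r.sum(axis=1)  # means,
                sd = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / r.sum(axis=1))
            return w, mu, sd, r

        x = np.random.randn(500) * 300 + 1000    # placeholder data
        weights, means, sds, resp = two_component_em(x)
        flagged = resp[1] > 0.9                  # records most likely mis-recorded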

  6. Visco-elastic controlled-source full waveform inversion without surface waves

    NASA Astrophysics Data System (ADS)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image method to ensure a stress-free condition at the surface. During the forward modeling, the time-domain data are Fourier-transformed at every point in the model space for a given set of frequencies. The motivation for this approach is the reduced memory requirement when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows the resolution matrix to be computed. To reduce the size of the Frechet derivative matrix and to stabilize the inversion, an adapted inverse mesh is used, with node spacing controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S), we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code in a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and of muting surface waves in controlled-source full waveform inversion.
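
    The memory saving mentioned above comes from accumulating a running discrete Fourier sum during time stepping instead of storing the full time history. A minimal sketch, where `step` is a hypothetical stand-in for any time-domain wavefield update:

        import numpy as np

        def run_with_dft(step, u0, dt, nt, freqs):
            """Advance a time-domain solver while accumulating the DFT of the
            wavefield at a few selected frequencies (one complex field each)."""
            u = u0
            spec = {f: np.zeros_like(u0, dtype=complex) for f in freqs}
            for it in range(nt):
                u = step(u)                                  # user-supplied update
                for f in freqs:
                    spec[f] += u * np.exp(-2j * np.pi * f * it * dt) * dt
            return spec  # frequency-domain wavefields for kernel computation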

  7. Relative role of intrinsic and scattering attenuation beneath the Andaman Islands, India and tectonic implications

    NASA Astrophysics Data System (ADS)

    Singh, Chandrani; Biswas, Rahul; Srijayanthi, G.; Ravi Kumar, M.

    2017-10-01

    The attenuation characteristics of seismic waves traversing the Andaman Nicobar subduction zone (ANSZ) are investigated using high quality data from a network of broadband stations operational since 2009. We initially studied the coda wave attenuation (Qc^-1) under the assumption of a single isotropic scattering model. Subsequently, following the multiple isotropic scattering hypothesis, we isolated the relative contributions of intrinsic (Qi^-1) and scattering (Qsc^-1) attenuation employing the Multiple Lapse Time Window Analysis (MLTWA) method within the frequency range 1.5-18 Hz. Results reveal a highly attenuative crust, with the values of Qc being frequency dependent. Intrinsic absorption is generally found to dominate over scattering attenuation. The dominance of Qi^-1 in the crust may be attributed to the presence of fluids associated with the subducted slab. Our results are consistent with the low velocity zone reported for the region. A comparison of our results with those from other regions of the globe shows that the ANSZ falls into the category of high intrinsic attenuation zones. Interestingly, the character of the ANSZ is identical to that of the eastern Himalaya and southern Tibet, but entirely different from the Garhwal-Kumaun Himalaya and the source zone of the Chamoli earthquake, owing to differences in the underlying mechanisms causing high attenuation.
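
    The single isotropic scattering model admits a compact Qc estimator: with the coda envelope A(t) proportional to t^-1 exp(-pi f t / Qc), the quantity ln(A t) decays linearly with lapse time t. A minimal sketch on synthetic data (not the ANSZ measurements):

        import numpy as np

        def coda_qc(t, env, f):
            """Qc from a band-passed coda envelope, single isotropic scattering:
            ln(A t) = const - (pi * f / Qc) * t."""
            slope, _ = np.polyfit(t, np.log(env * t), 1)
            return -np.pi * f / slope

        t = np.linspace(30.0, 60.0, 300)  # lapse times (s) in the coda window
        env = 1e-3 * t ** -1 * np.exp(-np.pi * 6.0 * t / 400.0)  # synthetic, Qc = 400
        print(coda_qc(t, env, 6.0))       # ~ 400 at 6 Hz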

  8. Using discrete wavelet transform features to discriminate between noise and phases in seismic waveforms

    NASA Astrophysics Data System (ADS)

    Forrest, R.; Ray, J.; Hansen, C. W.

    2017-12-01

    Currently, simple polarization metrics such as the horizontal-to-vertical ratio are used to discriminate between noise and various phases in three-component seismic waveform data collected at regional distances. Accurately establishing the identity and arrival times of these waves in adverse signal-to-noise environments is helpful in detecting and locating seismic events. In this work, we explore the use of multiresolution decompositions to discriminate between noise and event arrivals. A segment of the waveform lying inside a time window that spans the coda of an arrival is subjected to a discrete wavelet decomposition. Multiresolution classification features as well as statistical tests are derived from these wavelet decomposition quantities to quantify their discriminating power. Furthermore, we move to streaming data and address the problem of false positives by introducing ensembles of classifiers. We describe in detail results of these methods, tuned using data from Coronel Fontana, Argentina (CFAA), as well as Stephens Creek, Australia (STKA). Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
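
    A minimal version of such multiresolution features can be sketched with the pywt package (the wavelet, level and window length below are illustrative choices, not those of the study): each candidate window is reduced to its relative energy per decomposition scale.

        import numpy as np
        import pywt

        def dwt_features(window, wavelet="db4", level=5):
            """Relative energy per scale of a windowed trace: a compact feature
            vector for a noise-vs-phase classifier."""
            coeffs = pywt.wavedec(window, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        features = dwt_features(np.random.randn(1024))  # placeholder window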

  9. Magmatic arc structure around Mount Rainier, WA, from the joint inversion of receiver functions and surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Obrebski, Mathias; Abers, Geoffrey A.; Foster, Anna

    2015-01-01

    The deep magmatic processes in volcanic arcs are often poorly understood. We analyze the shear wave velocity (VS) distribution in the crust and uppermost mantle below Mount Rainier, in the Cascades arc, resolving the main velocity contrasts based on converted phases within P coda via source normalization or receiver function (RF) analysis. To alleviate the trade-off between depth and velocity, we use long period phase velocities (25-100 s) obtained from earthquake surface waves, and at shorter period (7-21 s) we use seismic noise cross correlograms. We use a transdimensional Bayesian scheme to explore the model space (VS in each layer, number of interfaces and their respective depths, level of noise on data). We apply this tool to 15 broadband stations from permanent and Earthscope temporary stations. Most results fall into two groups with distinctive properties. Stations east of the arc (Group I) have comparatively slower middle-to-lower crust (VS = 3.4-3.8 km/s at 25 km depth), a sharp Moho and faster uppermost mantle (VS = 4.2-4.4 km/s). Stations in the arc (Group II) have a faster lower crust (VS = 3.7-4 km/s) overlying a slower uppermost mantle (VS = 4.0-4.3 km/s), yielding a weak Moho. Lower crustal velocities east of the arc (Group I) most likely represent ancient subduction mélanges mapped nearby. The lower crust for Group II ranges from intermediate to felsic. We propose that intermediate-felsic to felsic rocks represent the prearc basement, while intermediate composition indicates the mushy andesitic crustal magmatic system plus solidified intrusion along the volcanic conduits. We interpret the slow upper mantle as partial melt.

  10. Imaging the Lowermost Mantle (D'') Beneath the Pacific Ocean with SKKS coda waves

    NASA Astrophysics Data System (ADS)

    Yu, Z.; Shang, X.; van der Hilst, R. D.

    2013-12-01

    Previous studies indicate considerable complexity in the lowermost mantle beneath the Pacific Ocean on a variety of spatial scales, such as the large low-shear-velocity province (LLSVP), intermittent D'' discontinuities and isolated ultra-low-velocity zones (ULVZs). However, the resolution of travel time tomography is typically greater than 1000 km in the deep mantle, and only a few regions satisfy the sampling requirements for waveform modeling. On the other hand, the generalized Radon transform (GRT) has higher resolution (~400 km horizontally and ~30 km vertically) and relaxes the restrictions on source-receiver configuration. It has been successfully applied to Central America and East Asia, which are speculated to be graveyards of subducted slabs. In this study we apply the GRT to obtain a large-scale, high-resolution image beneath (almost the whole) Pacific Ocean near the core-mantle boundary (CMB). More than 400,000 traces from ~8,000 events (5.8

  11. MatSeis and the GNEM R&E regional seismic analysis tools.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chael, Eric Paul; Hart, Darren M.; Young, Christopher John

    2003-08-01

    To improve the nuclear event monitoring capability of the U.S., the NNSA Ground-based Nuclear Explosion Monitoring Research & Engineering (GNEM R&E) program has been developing a collection of products known as the Knowledge Base (KB). Though much of the focus for the KB has been on the development of calibration data, we have also developed numerous software tools for various purposes. The Matlab-based MatSeis package and the associated suite of regional seismic analysis tools were developed to aid in the testing and evaluation of some Knowledge Base products for which existing applications were either not available or ill-suited. This presentation will provide brief overviews of MatSeis and each of the tools, emphasizing features added in the last year. MatSeis was begun in 1996 and is now a fairly mature product. It is a highly flexible seismic analysis package that provides interfaces to read data from either flatfiles or an Oracle database. All of the standard seismic analysis tasks are supported (e.g., filtering, 3-component rotation, phase picking, event location, magnitude calculation), as well as a variety of array processing algorithms (beaming, FK, coherency analysis, vespagrams). The simplicity of Matlab coding and the tremendous number of available functions make MatSeis/Matlab an ideal environment for developing new monitoring research tools (see the regional seismic analysis tools below). New MatSeis features include: addition of evid information to events in MatSeis, options to screen picks by author, input and output of origerr information, improved performance in reading flatfiles, improved speed in FK calculations, and significant improvements to Measure Tool (filtering, multiple phase display), Free Plot (filtering, phase display and alignment), Mag Tool (maximum likelihood options), and Infra Tool (improved calculation speed, display of an F statistic stream). Work on the regional seismic analysis tools (CodaMag, EventID, PhaseMatch, and Dendro) began in 1999, and the tools vary in their level of maturity. All rely on MatSeis to provide necessary data (waveforms, arrivals, origins, and travel time curves). CodaMag Tool implements magnitude calculation by scaling to fit the envelope shape of the coda for a selected phase type (Mayeda, 1993; Mayeda and Walter, 1996). New tool features include: calculation of a yield estimate based on the source spectrum, display of a filtered version of the seismogram based on the selected band, and the output of codamag data records for processed events. EventID Tool implements event discrimination using phase ratios of regional arrivals (Hartse et al., 1997; Walter et al., 1999). New features include: bandpass filtering of displayed waveforms, screening of reference events based on SNR, multivariate discriminants, use of libcgi to access correction surfaces, and the output of discrim_data records for processed events. PhaseMatch Tool implements match filtering to isolate surface waves (Herrin and Goforth, 1977). New features include: display of the signal's observed dispersion and an option to use a station-based dispersion surface. Dendro Tool implements agglomerative hierarchical clustering using dendrograms to identify similar events based on waveform correlation (Everitt, 1993). New features include: modifications to include arrival information within the tool, and the capability to automatically add/re-pick arrivals based on the picked arrivals for similar events.

  12. Crustal Properties Across the Mid-Continent Rift via Transfer Function Analysis

    NASA Astrophysics Data System (ADS)

    Frederiksen, A. W.; Tyomkin, Y.; Campbell, R.; van der Lee, S.; Zhang, H.

    2015-12-01

    The Mid-Continent Rift (MCR), a failed Proterozoic rift structure in central North America, is a dominant feature of North American gravity maps. The rift underwent a combination of extension, magmatism, and later compression, and it is difficult to predict how these events affected the overall crustal thickness and bulk composition in the vicinity of the rift axis, though the associated gravity high indicates that large-volume mafic magmatism took place. The Superior Province Rifting Earthscope Experiment (SPREE) project instrumented the MCR with Flexible Array broadband seismographs from 2011 through 2013 in Minnesota and Wisconsin, along two lines crossing the rift axis as well as a line following the axis. We examine teleseismic P-coda data from SPREE and nearby Transportable Array instruments using a new technique: transfer-function analysis. In this approach, possible models of crustal structure are used to generate a predicted transfer function relating the radial and vertical components of the P coda at a particular site. The transfer function then allows generation of a misfit (between the true radial component and a synthetic radial component predicted from the vertical trace) without the need to perform receiver-function deconvolution, thus avoiding the deconvolution problems encountered with receiver functions in sedimentary basins. We use the transfer-function approach to perform a grid search over three crustal properties: crustal thickness, crustal P/S velocity ratio, and the thickness of an overlying sedimentary basin. Results for our SPREE/TA data set indicate that the crust is significantly thickened along the rift axis, with maximum thicknesses approaching 50 km; the crust is thinner (ca. 40 km) outside of the rift zone. The crustal thickness structure is particularly complex beneath southeastern Minnesota, where very strong Moho topography is present, as well as up to 2 km of sediment; further north, the Moho is smoother and the basin is not present. P/S ratio varies along the rift axis, suggesting a higher mafic component (higher ratio) in southern Minnesota. The complexity we see along the MCR is consistent with the results obtained by Zhang et al. (this conference) using receiver function analysis.
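
    The core of the transfer-function approach fits in a few lines: for a candidate (crustal thickness, P/S velocity ratio, basin thickness) triple, a model-based transfer function predicts the radial trace from the vertical one, and the misfit is scored with no deconvolution step. The sketch below assumes `transfer` has already been computed for the trial model; it illustrates the idea, not the authors' code.

        import numpy as np

        def transfer_misfit(vertical, radial, transfer):
            """Misfit between the observed radial trace and the radial trace
            predicted from the vertical one through a model-based transfer
            function (frequency domain, same length as rfft of the traces)."""
            pred = np.fft.irfft(np.fft.rfft(vertical) * transfer, n=len(vertical))
            return np.sum((radial - pred) ** 2) / np.sum(radial ** 2)

        # Grid search: evaluate transfer_misfit with T(omega) recomputed for each
        # trial triple; the minimum-misfit triple is the preferred model.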

  13. Seismo-acoustic signals associated with degassing explosions recorded at Shishaldin Volcano, Alaska, 2003-2004

    USGS Publications Warehouse

    Petersen, T.

    2007-01-01

    In summer 2003, a Chaparral Model 2 microphone was deployed at Shishaldin Volcano, Aleutian Islands, Alaska. The pressure sensor was co-located with a short-period seismometer on the volcano's north flank at a distance of 6.62 km from the active summit vent. The seismo-acoustic data exhibit a correlation between impulsive acoustic signals (1-2 Pa) and long-period (LP, 1-2 Hz) earthquakes. Since it last erupted in 1999, Shishaldin has been characterized by sustained seismicity consisting of many hundreds to two thousand LP events per day. The activity is accompanied by discrete gas puffs rising up to ~200 m above the small summit vent, but no significant eruptive activity has been confirmed. The acoustic waveforms are similar throughout the data set (July 2003-November 2004), indicating a repetitive source mechanism. The simplicity of the acoustic waveforms, the impulsive onsets with relatively short (~10-20 s) gradually decaying codas, and the waveform similarities suggest that the acoustic pulses are generated at the fluid-air interface within an open-vent system. SO2 measurements have revealed a low SO2 flux, suggesting a hydrothermal system with magmatic gases leaking through. This hypothesis is supported by the steady-state nature of Shishaldin's volcanic system since 1999. Time delays between the seismic LP and infrasound onsets were measured for a representative day of seismo-acoustic data, and a simple model was used to estimate source depths. The short seismo-acoustic delay times reveal that the seismic and acoustic sources are co-located at a depth of 240±200 m below the crater rim. This shallow depth is confirmed by resonance of the upper portion of the open conduit, which produces standing waves with f = 0.3 Hz in the acoustic waveform codas. The infrasound data allow us to relate Shishaldin's LP earthquakes to degassing explosions created by gas volume ruptures at a fluid-air interface.
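
    The flavor of such a delay-time depth estimate can be shown with a deliberately simple geometry: an acoustic path from the vent to the station through air, and a straight seismic path from a source at depth z beneath the vent. Apart from the 6.62 km vent-station distance, every number below is an assumption, and the answer is a toy value, not the published 240±200 m.

        import numpy as np
        from scipy.optimize import brentq

        L, c_air, v_p = 6620.0, 340.0, 2000.0   # m and m/s; both speeds are assumed

        def delay(z):
            """Acoustic (vent -> station) minus seismic (depth z -> station) time."""
            return L / c_air - np.hypot(L, z) / v_p

        # Source depth whose modelled delay matches a toy observed delay of 16.1 s;
        # note how weakly the delay depends on z in this geometry.
        z_src = brentq(lambda z: delay(z) - 16.1, 0.0, 2000.0)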

  14. Effects of Two Different Volume-Equated Weekly Distributed Short-Term Plyometric Training Programs on Futsal Players' Physical Performance.

    PubMed

    Yanci, Javier; Castillo, Daniel; Iturricastillo, Aitor; Ayarra, Rubén; Nakamura, Fábio Y

    2017-07-01

    Yanci, J, Castillo, D, Iturricastillo, A, Ayarra, R, and Nakamura, FY. Effects of two different volume-equated weekly distributed short-term plyometric training programs on futsal players' physical performance. J Strength Cond Res 31(7): 1787-1794, 2017-The aim was to analyze the effect of 2 different plyometric training programs (i.e., 1 vs. 2 sessions per week, same total weekly volume) on physical performance in futsal players. Forty-four futsal players were divided into 3 training groups differing in weekly plyometric training load: the 2 days per week plyometric training group (PT2D, n = 15), the 1 day per week plyometric training group (PT1D, n = 12), and the control group (CG, n = 12) which did not perform plyometric training. The results of this study showed that in-season futsal training per se was capable of improving repeat sprint ability (RSA) (effect size [ES] = -0.59 to -1.53). However, while change of direction ability (CODA) was maintained during the training period (ES = 0.00), 15-m sprint (ES = 0.73), and vertical jump (VJ) performance (ES = -0.30 to -1.37) were significantly impaired. By contrast, PT2D and PT1D plyometric training were effective in improving futsal players' 15-m sprint (ES = -0.64 to -1.00), CODA (ES = -1.83 to -5.50), and horizontal jump (ES = 0.33-0.64) performance. Nonetheless, all groups (i.e., PT2D, PT1D, and CG) presented a reduction in VJ performance (ES = -0.04 to -1.37). Regarding RSA performance, PT1D showed a similar improvement compared with CG (ES = -0.65 to -1.53) after the training intervention, whereas PT2D did not show significant change (ES = -0.04 to -0.38). These results may have considerable practical relevance for the optimal design of plyometric training programs for futsal players, given that a 1-day-per-week plyometric training program is more efficient than a 2-day-per-week plyometric training program to improve the futsal players' physical performance.

  15. Rotational motions from the 2016, Central Italy seismic sequence, as observed by an underground ring laser gyroscope

    NASA Astrophysics Data System (ADS)

    Simonelli, Andreino; Belfi, Jacopo; Beverini, Nicolò; Di Virgilio, Angela; Maccioni, Enrico; De Luca, Gaetano; Saccorotti, Gilberto; Wassermann, Joachim; Igel, Heiner

    2017-04-01

    We present analyses of rotational and translational ground motions from earthquakes recorded during October-November 2016, in association with the Central Italy seismic sequence. We use co-located measurements of the vertical ground rotation rate from a large ring laser gyroscope (RLG) and the three components of ground velocity from a broadband seismometer. Both instruments are positioned in a deep underground environment, within the Gran Sasso National Laboratories (LNGS) of the Istituto Nazionale di Fisica Nucleare (INFN). We collected dozens of events spanning the magnitude range 3.5-5.9 and epicentral distances between 40 km and 80 km. This data set constitutes an unprecedented observation of the vertical rotational motions associated with an intense seismic sequence at local distances. In theory, assuming plane wave propagation, the ratio between the vertical rotation rate and the transverse acceleration permits, in a single-station approach, the estimation of the apparent phase velocity of SH arrivals or the real phase velocity of Love surface waves. This is a standard approach for the analysis of earthquakes at teleseismic distances, and the results reported in the literature are compatible with the phase velocities expected from the PREM model. Here we extend the same approach to local events, thus exploring higher frequency ranges and larger rotation rate amplitudes. We use a novel approach to joint rotation/acceleration analysis based on the continuous wavelet transform (CWT). Wavelet coherence (WTC) is used as a filter to identify those regions of the time-period plane where the rotation rate and transverse acceleration signals exhibit significant coherence. This allows phase velocities to be estimated over the period range spanned by correlated arrivals. Coherence between ground rotation and translation is also observed throughout the coda of the P-wave arrival, an observation we interpret in terms of near-receiver P-SH converted energy due to 3D effects. These particular coda waves, however, exhibit a large variability in the rotation/acceleration ratio, likely a consequence of differences in the wavepath and/or source mechanism.
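
    The plane-wave relation invoked here, rotation rate = -a_t / (2c), yields a one-line velocity estimator. A least-squares version over one coherent window is sketched below; real use requires the band-passed, coherence-selected segments described above.

        import numpy as np

        def apparent_velocity(acc_t, rot_rate):
            """Plane-wave SH/Love estimate: fit a_t = -2c * rotation_rate in a
            least-squares sense and return c (m/s for m/s^2 and rad/s inputs)."""
            slope = np.dot(acc_t, rot_rate) / np.dot(rot_rate, rot_rate)
            return -slope / 2.0

        # c_hat = apparent_velocity(acc_t_window, rot_rate_window)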

  16. Uncertainty Analyses for Back Projection Methods

    NASA Astrophysics Data System (ADS)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far, few comprehensive error analyses for back projection methods have been conducted, although it is evident that high-frequency seismic waves can easily be affected by earthquake depth, focal mechanisms and the Earth's 3D structure. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths and focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme combining the Direct Solution Method and the Spectral Element Method. We then back project the synthetic data using MUSIC and CS. The synthetic tests show that depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth; e.g., the error in horizontal location can exceed 20 km for a depth of 40 km. If the array is located near the nodal direction of the direct P-waves, the teleseismic P-waves are dominated by depth phases, and back projections are then actually imaging the reflection points of the depth phases rather than the rupture front. Besides depth phases, the strong and long-lasting coda waves caused by 3D effects near the trench lead to additional complexities, which we also test here. The contrast in strength of different frequency contents in the rupture models also produces some variation in the back projection results. In the synthetic tests, MUSIC and CS give consistent results; MUSIC is more computationally efficient, while CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer earthquake rupture physics.
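
    For orientation, the elementary delay-and-sum scheme that MUSIC and CS refine can be sketched as follows (grid, window length and travel times are placeholders, and the windows are assumed to lie inside the records):

        import numpy as np

        def back_project(traces, t0, dt, travel_times, nwin=200):
            """Delay-and-sum back projection: stack array traces at the predicted
            travel time for each trial grid point; travel_times[i, j] is the
            predicted time from grid point i to station j (from a velocity model)."""
            n_grid, n_sta = travel_times.shape
            power = np.zeros(n_grid)
            for i in range(n_grid):
                idx = np.round((travel_times[i] - t0) / dt).astype(int)
                beam = sum(traces[j, idx[j]:idx[j] + nwin] for j in range(n_sta))
                power[i] = np.sum(beam ** 2)  # brightest point = inferred radiator
            return power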

  17. Evaluation of normalization methods in mammalian microRNA-Seq data

    PubMed Central

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated by next generation sequencing technology. However, a systematic evaluation of normalization methods on microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods: global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method developed for RNA-Seq normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the DE results. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
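
    Of the methods compared, quantile normalization is the simplest to state compactly: every sample (column) is forced onto the mean empirical distribution. A minimal numpy sketch, with a synthetic matrix standing in for real count data:

        import numpy as np

        def quantile_normalize(counts):
            """Map each column onto the across-sample mean distribution,
            preserving only the within-column ranks."""
            order = np.argsort(counts, axis=0)
            ranks = np.empty_like(order)
            for j in range(counts.shape[1]):
                ranks[order[:, j], j] = np.arange(counts.shape[0])
            mean_dist = np.sort(counts, axis=0).mean(axis=1)  # reference distribution
            return mean_dist[ranks]

        x = np.abs(np.random.randn(1000, 4)) * [1, 2, 5, 10]  # library-size bias
        x_norm = quantile_normalize(x)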

  18. Tsunami waves generated by dynamically triggered aftershocks of the 2010 Haiti earthquake

    NASA Astrophysics Data System (ADS)

    Ten Brink, U. S.; Wei, Y.; Fan, W.; Miller, N. C.; Granja, J. L.

    2017-12-01

    Dynamically-triggered aftershocks, thought to be set off by the passage of surface waves, are currently not considered in tsunami warnings, yet they may produce enough seafloor deformation to generate tsunamis on their own, as judged from new findings about the January 12, 2010 Haiti earthquake tsunami in the Caribbean Sea. This tsunami followed the Mw7.0 Haiti mainshock, which resulted from a complex rupture along the north shore of the Tiburon Peninsula, not beneath the Caribbean Sea. The mainshock, moreover, had a mixed strike-slip and thrust focal mechanism. There were no recorded aftershocks in the Caribbean Sea, only small coastal landslides and rock falls on the south shore of the Tiburon Peninsula. Nevertheless, a tsunami was recorded on deep-sea DART buoy 42407 south of the Dominican Republic and on the Santo Domingo tide gauge, and run-ups of ≤3 m were observed along a 90-km-long stretch of the SE Haiti coast. Three dynamically-triggered aftershocks south of Haiti have recently been identified within the coda of the mainshock (<200 s) by analyzing P wave arrivals recorded by dense seismic arrays, parsing the arrivals into 20-s-long stacks, and back-projecting the arrivals to the vicinity of the mainshock (50-300 km). Two of the aftershocks, coming 20-40 s and 40-60 s after the mainshock, plot along NW-SE-trending submarine ridges in the Caribbean Sea south of Haiti. The third event, 120-140 s after the mainshock, was located along the steep eastern slope of the Bahoruco Peninsula, which is delineated by a normal fault. Forward tsunami models show that the DART buoy and tide gauge arrival times are best fit by the earliest of the three aftershocks, with a Caribbean source 60 km SW of the mainshock rupture zone. Preliminary inversion of the DART buoy time series for fault locations and orientations confirms the location of the first source, but requires an additional unidentified source closer to shore, 40 km SW of the mainshock rupture zone. This overall agreement between earthquake and tsunami analyses suggests that land-based earthquake ruptures and/or non-thrust mainshocks can generate tsunamis by means of dynamically-triggered aftershocks. It also provides an independent verification of the back-projection seismic method, and it indicates that the active NE-SW shortening of Hispaniola extends southward into the Caribbean Sea.

  19. Locating low-frequency earthquakes using amplitude signals from seismograph stations: Examples from events at Montserrat, West Indies and from synthetic data

    NASA Astrophysics Data System (ADS)

    Jolly, A.; Jousset, P.; Neuberg, J.

    2003-04-01

    We determine locations for low-frequency earthquakes occurring prior to a collapse on June 25th, 1997, using signal amplitudes from a 7-station local seismograph network at the Soufriere Hills volcano on Montserrat, West Indies. Locations are determined by averaging the signal amplitude over the event waveform and inverting these data using an assumed amplitude decay model comprising geometrical spreading and attenuation. The resulting locations are centered beneath the active dome from 500 to 2000 m below sea level, assuming body-wave geometrical spreading and a quality factor of Q = 22. Locations for the same events shift systematically shallower by about 500 m if surface-wave geometrical spreading is assumed. The locations are consistent with results obtained using arrival-time methods. The validity of the method is tested against synthetic low-frequency events constructed with a 2-D finite-difference model including visco-elastic properties. Two example events are tested: one from a point source triggered in a low-velocity conduit extending 100-1100 m below the surface, and a second triggered in a conduit located 1500-2500 m below the surface. The resulting seismograms have emergent onsets and extended codas, and include the effect of conduit resonance. Employing geometrical spreading and attenuation from the finite-difference modelling, we obtain locations within the respective model conduits, validating our approach. The location depths are sensitive to the assumed geometrical spreading and Q model; we can distinguish between two sources separated by about 1000 m only if we know the decay parameters.
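
    In its simplest grid-search form, the amplitude inversion described here fits observed log-amplitudes with a spreading-plus-attenuation decay and an event-dependent source term. A sketch under the paper's body-wave assumptions (1/r spreading, Q = 22); the coordinates, frequency and velocity below are placeholders:

        import numpy as np

        def amplitude_locate(amps, stations, grid, f=2.0, q=22.0, v=2000.0):
            """Grid search: at each trial point, predict relative log-amplitudes
            from 1/r spreading and exp(-pi*f*r/(q*v)) attenuation, remove the
            best-fitting source term, and keep the lowest-misfit point."""
            best, best_misfit = None, np.inf
            log_a = np.log(amps)
            for x in grid:
                r = np.linalg.norm(stations - x, axis=1)
                resid = log_a - (-np.log(r) - np.pi * f * r / (q * v))
                resid -= resid.mean()          # source term projected out
                m = np.sum(resid ** 2)
                if m < best_misfit:
                    best, best_misfit = x, m
            return best, best_misfit

        # stations: (n, 3) coordinates in m; amps: mean waveform amplitudes;
        # grid: (m, 3) trial source positions beneath the dome.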

  20. Alternative Fuels Data Center: Maps and Data

    Science.gov Websites

    [Table residue from the source web page: counts of alternative-fuel vehicle models by manufacturer (Acura, Audi, Bentley Motors, BMW, Buick, Cadillac, Chevrolet, Chrysler, Coda Automotive, Dodge, ...) and model year, circa 2014-2016; the table structure is not recoverable.]

  1. Learning Theory Foundations of Simulation-Based Mastery Learning.

    PubMed

    McGaghie, William C; Harris, Ilene B

    2018-06-01

    Simulation-based mastery learning (SBML), like all education interventions, has learning theory foundations. Recognition and comprehension of SBML learning theory foundations are essential for thoughtful education program development, research, and scholarship. We begin with a description of SBML followed by a section on the importance of learning theory foundations to shape and direct SBML education and research. We then discuss three principal learning theory conceptual frameworks that are associated with SBML-behavioral, constructivist, social cognitive-and their contributions to SBML thought and practice. We then discuss how the three learning theory frameworks converge in the course of planning, conducting, and evaluating SBML education programs in the health professions. Convergence of these learning theory frameworks is illustrated by a description of an SBML education and research program in advanced cardiac life support. We conclude with a brief coda.

  2. An Alternative Approach to Analyze Ipsative Data. Revisiting Experiential Learning Theory.

    PubMed

    Batista-Foguet, Joan M; Ferrer-Rosell, Berta; Serlavós, Ricard; Coenders, Germà; Boyatzis, Richard E

    2015-01-01

    The ritualistic use of statistical models regardless of the type of data actually available is a common practice across disciplines, which we dare to call a type zero error. Statistical models involve a series of assumptions whose existence is often neglected altogether; this is especially the case with ipsative data. This paper illustrates the consequences of this ritualistic practice within Kolb's Experiential Learning Theory (ELT) operationalized through its Learning Style Inventory (KLSI). We show how, using a methodology well known in other disciplines, compositional data analysis (CODA) and log-ratio transformations, KLSI data can be properly analyzed. In addition, the method has theoretical implications: a third dimension of the KLSI is unveiled, providing room for future research. This third dimension describes an individual's relative preference for learning by prehension rather than by transformation. Using a sample of international MBA students, we relate this dimension to another self-assessment instrument, the Philosophical Orientation Questionnaire (POQ), and to an observer-assessed instrument, the Emotional and Social Competency Inventory (ESCI-U). Both show plausible statistical relationships. An intellectual operating philosophy (IOP) is linked to a preference for prehension, whereas a pragmatic operating philosophy (POP) is linked to transformation. Self-management and social awareness competencies are linked to a learning preference for transforming knowledge, whereas relationship management and cognitive competencies are more related to approaching learning by prehension.

  3. An Alternative Approach to Analyze Ipsative Data. Revisiting Experiential Learning Theory

    PubMed Central

    Batista-Foguet, Joan M.; Ferrer-Rosell, Berta; Serlavós, Ricard; Coenders, Germà; Boyatzis, Richard E.

    2015-01-01

    The ritualistic use of statistical models regardless of the type of data actually available is a common practice across disciplines, which we dare to call a type zero error. Statistical models involve a series of assumptions whose existence is often neglected altogether; this is especially the case with ipsative data. This paper illustrates the consequences of this ritualistic practice within Kolb's Experiential Learning Theory (ELT) operationalized through its Learning Style Inventory (KLSI). We show how, using a methodology well known in other disciplines, compositional data analysis (CODA) and log-ratio transformations, KLSI data can be properly analyzed. In addition, the method has theoretical implications: a third dimension of the KLSI is unveiled, providing room for future research. This third dimension describes an individual's relative preference for learning by prehension rather than by transformation. Using a sample of international MBA students, we relate this dimension to another self-assessment instrument, the Philosophical Orientation Questionnaire (POQ), and to an observer-assessed instrument, the Emotional and Social Competency Inventory (ESCI-U). Both show plausible statistical relationships. An intellectual operating philosophy (IOP) is linked to a preference for prehension, whereas a pragmatic operating philosophy (POP) is linked to transformation. Self-management and social awareness competencies are linked to a learning preference for transforming knowledge, whereas relationship management and cognitive competencies are more related to approaching learning by prehension. PMID:26617561

  4. A Study of Regional Wave Source Time Functions of Central Asian Earthquakes

    NASA Astrophysics Data System (ADS)

    Xie, J.; Perry, M. R.; Schult, F. R.; Wood, J.

    2014-12-01

    Despite the extensive use of seismic regional waves in seismic event identification and attenuation tomography, very little is known about how seismic sources radiate energy into these waves. For example, whether the regional Lg wave has the same source spectrum as the local S wave was questioned by Harr et al. and Frenkel et al. three decades ago; many current investigators assume that the source spectra of Lg, Sn, Pg, Pn and Lg coda waves have either the same or very similar corner frequencies, in contrast to local P and S spectra, whose corner frequencies differ. The most complete information on how finite source ruptures radiate energy into regional waves is contained in the time-domain source time functions (STFs). To estimate the STFs of regional waves using the empirical Green's function (EGF) method, we have been substantially modifying a semi-automated computer procedure to cope with the increasingly diverse and inconsistent naming patterns of new data files from the IRIS DMC. We are applying the modified procedure to many earthquakes in central Asia to study the STFs of various regional waves, to see whether they have the same durations and pulse shapes, and how frequently source directivity occurs. When applicable, we also examine the differences between the STFs of local P and S waves and those of regional waves. The results of these analyses will be presented at the meeting.
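
    The EGF step at the heart of this procedure is commonly implemented as a water-level spectral division; a minimal sketch follows (the water-level value is an illustrative assumption, and the authors' procedure is more elaborate):

        import numpy as np

        def egf_stf(main, egf, water=0.01):
            """Relative source time function by water-level deconvolution of a
            large event by a co-located small (empirical Green's function) event."""
            M, G = np.fft.rfft(main), np.fft.rfft(egf)
            denom = np.maximum(np.abs(G) ** 2, water * np.max(np.abs(G) ** 2))
            return np.fft.irfft(M * np.conj(G) / denom, n=len(main))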

  5. Site response variation due to the existence of near-field cracks based on strong motion records in the Shi-Wen river valley, southern Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Chi-Shin; Yu, Teng-To; Peng, Wen-Fei; Yeh, Yeoin-Tein; Lin, Sih-Siao

    2014-10-01

    Site effect analysis has been applied to investigate soil classification, alluvium depth, and fracture detection, although the majority of previous studies have focused on the response of large-scale single structures. In contrast, we investigated the site effect of small-scale cracks in a case study in southern Taiwan, to provide a means of monitoring slope stability or foundation integrity in situ using only an accelerometer. We adopted both the reference-site and horizontal-to-vertical spectral ratio methods. We obtained seismograms associated with the typhoon-related development of a crack set (52 m long, 5 m deep) in a steep slope and compared the resonance frequency between the two conditions (with and without cracks). Moreover, we divided the seismic waves into P, S, and coda waves and examined the seismic source effect. Our results demonstrate that frequencies of 14.5-17.5 Hz are most sensitive to these cracks, particularly for the E-W component of the P-waves, which coincides with the crack's strike. Peak ground acceleration, which is controlled by seismic moment and attenuation with distance, is another important factor determining the resonance results. Our results demonstrate that the ratio of temporal seismic waves can be used to detect the existence of nearby subsurface cracks.
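
    A bare-bones horizontal-to-vertical spectral ratio, one of the two methods adopted here, can be computed as below (single window, no smoothing or multi-window averaging, which any real application would add):

        import numpy as np

        def hv_ratio(e, n, z, fs, nfft=4096):
            """H/V spectral ratio of one windowed three-component record."""
            f = np.fft.rfftfreq(nfft, d=1.0 / fs)
            E, N, Z = (np.abs(np.fft.rfft(x, n=nfft)) for x in (e, n, z))
            h = np.sqrt((E ** 2 + N ** 2) / 2.0)  # combined horizontal spectrum
            return f, h / Z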

  6. Subduction zone guided waves in Northern Chile

    NASA Astrophysics Data System (ADS)

    Garth, Thomas; Rietbrock, Andreas

    2016-04-01

    Guided wave dispersion is observed in subduction zones as high frequency energy is retained and delayed by low velocity structure in the subducting slab, while lower frequency energy is able to travel at the faster velocities associated with the surrounding mantle material. As subduction zone guided waves spend longer interacting with the low velocity structure of the slab than any other seismic phase, they have a unique capability to resolve these low velocity structures. In Northern Chile, guided wave arrivals are clearly observed at two permanent stations of the IPOC network in the Chilean fore-arc. High frequency (> 5 Hz) P-wave arrivals are delayed by approximately 2 seconds compared to the low frequency (< 2 Hz) P-wave arrivals. Full waveform finite difference modelling is used to test the low velocity slab structure that causes this P-wave dispersion. The synthetic waveforms produced by these models are compared to the recorded waveforms. Spectrograms are used to compare the relative arrival times of different frequencies, while the velocity spectrum is used to constrain the relative amplitude of the arrivals. Constraining the waveform in these two ways means that the full waveform is also matched, and the low-pass filtered observed and synthetic waveforms can be compared. A combined misfit between synthetic and observed waveforms is then calculated following Garth & Rietbrock (2014). Based on this misfit criterion we constrain the velocity model using a grid search approach. Modelling the guided wave arrivals suggests that the observed dispersion cannot be accounted for solely by a single low velocity layer, as suggested by previous guided wave studies. Including dipping low velocity normal fault structures in the synthetic model not only accounts for the observed strong P-wave coda, but also produces a clear first motion dispersion. We therefore propose that the lithospheric mantle of the subducting Nazca plate is highly hydrated at intermediate depths by dipping low velocity normal faults. Additionally, we show that the low velocity oceanic crust persists to depths of up to 200 km, well beyond the depth range where the eclogite transition is expected to have occurred. Our results suggest that young subducting lithosphere also has the potential to carry much larger amounts of water to the mantle than has previously been appreciated.

  7. Stress Domain Effects in French Phonology and Phonological Development.

    PubMed

    Rose, Yvan; Dos Santos, Christophe

    In this paper, we discuss two distinct data sets. The first relates to the so-called allophonic process of closed-syllable laxing in Québec French, which targets final (stressed) vowels even though these vowels are arguably syllabified in open syllables in lexical representations. The second is found in the forms produced by a first-language learner of European French, who displays an asymmetry in her production of CVC versus CVCV target (adult) forms. The former display full preservation (with concomitant manner harmony) of both consonants. The latter undergo deletion of the initial syllable if the consonants are not manner-harmonic in the input. We argue that both patterns can be explained through a phonological process of prosodic strengthening targeting the head of the prosodic domain which, in the contexts described above, yields the incorporation of final consonants into the coda of the stressed syllable.

  8. Seismic Characterization of EGS Reservoirs

    NASA Astrophysics Data System (ADS)

    Templeton, D. C.; Pyle, M. L.; Matzel, E.; Myers, S.; Johannesson, G.

    2014-12-01

    To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance the traditional microearthquake detection and location methodologies at two EGS systems. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP are typically smaller magnitude events or events that occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event seismic location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining whether a seismic lineation is real or simply within the anticipated error range. We apply this methodology to the Basel EGS data set and compare it to another EGS data set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Monitoring stress-related velocity variation in concrete with a 2 × 10^-5 relative resolution using diffuse ultrasound.

    PubMed

    Larose, Eric; Hall, Stephen

    2009-04-01

    Ultrasonic waves propagating in solids have stress-dependent velocities. The relation between stress (or strain) and velocity forms the basis of non-linear acoustics. In homogeneous solids, conventional time-of-flight techniques have measured this dependence with spectacular precision. In heterogeneous media such as concrete, the direct (ballistic) wave around 500 kHz is strongly attenuated and conventional techniques are less efficient. In this manuscript, the effect of weak stress changes on the late arrivals constituting the acoustic diffuse coda is tracked. A resolution of 2 × 10^-5 in relative velocity change is attained, which corresponds to a sensitivity to stress change of better than 50 kPa. The technique described here therefore provides an original way to measure the non-linear parameter with stress variations on the order of tens of kPa.
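
    Tracking the late coda arrivals is commonly done with a stretching estimator: the relative velocity change is the stretch factor that best maps the current coda onto a reference. A minimal sketch (the grid limits are illustrative, and sign conventions vary in the literature):

        import numpy as np

        def stretching_dvov(ref, cur, t, eps_grid=np.linspace(-1e-4, 1e-4, 201)):
            """Coda-stretching dv/v: find the stretch eps maximizing the
            correlation between the reference and the stretched current coda."""
            best_eps, best_cc = 0.0, -np.inf
            for eps in eps_grid:
                stretched = np.interp(t * (1.0 + eps), t, cur)
                cc = np.corrcoef(ref, stretched)[0, 1]
                if cc > best_cc:
                    best_eps, best_cc = eps, cc
            return -best_eps, best_cc   # dv/v = -dt/t under this convention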

  10. Inferring the thermal structure of the Panama Basin by seismic attenuation

    NASA Astrophysics Data System (ADS)

    Vargas-Jimenez, C. A.; Pulido, J. E.; Hobbs, R. W.

    2017-12-01

    Using recordings of earthquakes on Oceanic Bottom Seismographs and onshore stations on the coastal margins of Colombia, Panama, and Ecuador, we discriminate intrinsic and scattering attenuation processes in the upper lithosphere of the Panama Basin. The tomographic images of the derived coda-Q values are correlated with estimates of Curie Point Depth and measured and theoretical heat flow. Our study reveals three tectonic domains where magmatic/hydrothermal activity or lateral variations of the lithologic composition in the upper lithosphere can account for the modelled thermal structure and the anelasticity. We find that the Costa Rica Ridge and the Panama Fracture Zone are significant tectonic features in the study area. We interpret a large and deep intrinsic attenuation anomaly as related to the heat source at this ocean spreading center and show how interactions with regional fault systems cause contrasting attenuation anomalies.

  11. Thermal structure of the Panama Basin by analysis of seismic attenuation

    NASA Astrophysics Data System (ADS)

    Vargas, Carlos A.; Pulido, José E.; Hobbs, Richard W.

    2018-04-01

    Using recordings of earthquakes on Oceanic Bottom Seismographs and onshore stations on the coastal margins of Colombia, Panama, and Ecuador, we estimate attenuation parameters in the upper lithosphere of the Panama Basin. The tomographic images of the derived coda-Q values are correlated with estimates of Curie Point Depth and measured and theoretical heat flow. Our study reveals three tectonic domains where magmatic/hydrothermal activity or lateral variations of the lithologic composition in the upper lithosphere can account for the modeled thermal structure and the anelasticity. We find that the Costa Rica Ridge and the Panama Fracture Zone are significant tectonic features probably related to thermal anomalies detected in the study area. We interpret a large and deep intrinsic attenuation anomaly as related to the heat source at the Costa Rica Ridge and show how interactions with regional fault systems cause contrasting attenuation anomalies.

  12. Reflectometry diagnostics on TCV

    NASA Astrophysics Data System (ADS)

    Molina Cabrera, Pedro; Coda, Stefano; Porte, Laurie; Offeddu, Nicola; Tcv Team

    2017-10-01

    Both a profile reflectometer and a Doppler back-scattering (DBS) diagnostic are being developed for the TCV Tokamak using a steerable quasi-optical launcher and universal polarizers; first results will be presented. A pulse reflectometer is being developed to complement Thomson Scattering measurements of electron density, greatly increasing temporal resolution and effectively enabling fluctuation measurements. Pulse reflectometry consists of sending short pulses of varying frequency and measuring the round-trip group delay with precise chronometers. A fast arbitrary waveform generator is used as a pulse source feeding frequency multipliers that bring the pulses to V-band. A DBS diagnostic is currently operational on TCV. DBS may be used to infer the perpendicular velocity and wave-number spectrum of electron density fluctuations in the 3-15 cm^-1 wave-number range. Off-the-shelf transceiver modules, originally used for VNA measurements, are being used in a Doppler radar configuration. See the author list of S. Coda et al., 2017 Nucl. Fusion 57 102011.

  13. Early comprehension of the Spanish plural*

    PubMed Central

    Arias-Trejo, Natalia; Cantrell, Lisa M.; Smith, Linda B.; Alva Canto, Elda A.

    2015-01-01

    Understanding how linguistic cues map to the environment is crucial for early language comprehension and may provide a way for bootstrapping and learning words. Research has suggested that learning how plural syntax maps to the perceptual environment may show a trajectory in which children first learn surrounding cues (verbs, modifiers) before a full mastery of the noun morpheme alone. The Spanish plural system of simple codas, dominated by one allomorph -s, and with redundant agreement markers, may facilitate early understanding of how plural linguistic cues map to novel referents. Two-year-old Mexican children correctly identified multiple novel object referents when multiple verbal cues in a phrase indicated plurality as well as in instances when the noun morphology in novel nouns was the ONLY indicator of plurality. These results demonstrate Spanish-speaking children’s ability to use plural noun inflectional morphology to infer novel word referents which may have implications for their word learning. PMID:24560441

  14. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    PubMed

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased for reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for this bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different normalization strategies are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis, and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data set globally or in segments for the differential expression analysis affects the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced the variation between technical replicates the most in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization also performed systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
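
    The evaluation criterion "variation between technical replicates" is straightforward to operationalize. The sketch below pairs a baseline median normalization with a pooled within-replicate-group variance score; log-scale intensities are assumed, and Vsn itself is more involved than this baseline.

        import numpy as np

        def median_normalize(x):
            """Equalize sample medians (columns = samples, log-scale assumed)."""
            med = np.median(x, axis=0)
            return x - med + med.mean()

        def replicate_variance(x, groups):
            """Mean within-group variance across features: lower after
            normalization means technical variation was reduced."""
            return np.mean([x[:, g].var(axis=1).mean() for g in groups])

        # x: log2 intensities (features x samples); groups: lists of replicate
        # columns, e.g. groups = [[0, 1, 2], [3, 4, 5]]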

  15. Location of early aftershocks of the 2004 Mid-Niigata Prefecture Earthquake (M = 6.8) in central Japan using seismogram envelopes as templates

    NASA Astrophysics Data System (ADS)

    Kosuga, M.

    2013-12-01

    The location of early aftershocks is very important for obtaining information on the mainshock fault; however, it is often difficult due to the long-lasting coda waves of the mainshock and the successive occurrence of aftershocks. To overcome this difficulty, we developed a location method that uses seismogram envelopes as templates, and applied it to the early aftershock sequence of the 2004 Mid-Niigata Prefecture (Chuetsu) Earthquake (M = 6.8) in central Japan. The location method consists of three processes. The first is the calculation of cross-correlation coefficients between a continuous (target) envelope and template envelopes. We prepare envelopes by taking the logarithm of the root-mean-squared amplitude of band-pass filtered seismograms, and perform the calculation by shifting the time window to obtain a set of cross-correlation values for each template. The second process is event detection (template selection) and magnitude estimation. We search for events in descending order of cross-correlation in a time window, excluding the dead times around previously detected events. Magnitude is calculated from the amplitude ratio of the target and template envelopes. The third process is the relative location of each event with respect to its selected template. We applied this method to the Chuetsu earthquake, a large inland earthquake with extensive aftershock activity. The number of detected events depends on the number of templates, the frequency range, and the threshold value of the cross-correlation; we set the threshold to 0.5 by referring to the histogram of cross-correlation values. During the one-hour period after the mainshock, we detected more events than are listed in the JMA catalog, with locations generally near the catalog locations. Although the methods of relative location and magnitude estimation should be improved, we conclude that the proposed method works adequately even immediately after the mainshock of a large inland earthquake. Acknowledgement: We thank JMA, NIED, and the University of Tokyo for providing arrival time data and waveform data. This work was supported by JSPS KAKENHI Grant Number 23540487.
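
    The first of the three processes reduces to a sliding normalized cross-correlation between log-RMS envelopes. A minimal sketch (envelope construction and the dead-time bookkeeping described above are omitted):

        import numpy as np

        def envelope_cc(target_env, template_env):
            """Sliding Pearson correlation of a template envelope against a
            continuous envelope; peaks above a threshold (0.5 in the study)
            flag candidate events hidden in the mainshock coda."""
            n = len(template_env)
            tmpl = (template_env - template_env.mean()) / template_env.std()
            cc = np.empty(len(target_env) - n + 1)
            for i in range(len(cc)):
                win = target_env[i:i + n]
                cc[i] = np.dot(tmpl, win - win.mean()) / (n * win.std())
            return cc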

  16. Significant Variation of Post-critical SsPmp Amplitude as a Result of Variation in Near-surface Velocity: Observations from the Yellowknife Array

    NASA Astrophysics Data System (ADS)

    Ferragut, G.; Liu, T.; Klemperer, S. L.

    2017-12-01

    In recent years, Virtual Deep Seismic Sounding (VDSS) has emerged as a novel method to image the Moho. It uses post-critical P-wave reflections at the Moho (SsPmp) generated by teleseismic S waves converted at the free surface near the receivers. However, observed SsPmp sometimes has significantly lower amplitude than predicted, raising doubts in the seismic community about the theoretical basis of the method. With over two decades of continuous digital broadband records and major subduction zones in the range of 30-50 degrees, the Yellowknife Array in northern Canada provides a rich opportunity for observation of post-critical SsPmp. We analyze the S-wave coda of events with epicentral distances of 30-50°, paying special attention to earthquakes in a narrow azimuth range that encompasses the Kamchatka Peninsula. Among 21 events with strong direct S energy on the radial components, we observe significant variation of SsPmp energy. After associating the SsPmp energy with the virtual source location of each event, we observe a general trend of decreasing SsPmp energy from NE to SW. As the trend coincides with the transition from exposed basement of the Slave Craton to Paleozoic platform covered by Phanerozoic sediment, we interpret the decreasing SsPmp energy as a result of lower S velocity at the virtual sources, which reduces S-to-P reflection coefficients. We plan to include more events from the Aleutian Islands, the virtual sources of which are primarily located in the Paleozoic platform. This will allow us to further investigate the relationship between SsPmp amplitude and near-surface velocity.

  17. Word structures of Granada Spanish-speaking preschoolers with typical versus protracted phonological development.

    PubMed

    Bernhardt, B May; Hanson, R; Perez, D; Ávila, C; Lleó, C; Stemberger, J P; Carballo, G; Mendoza, E; Fresneda, D; Chávez-Peón, M

    2015-01-01

    Research on children's word structure development is limited. Yet, phonological intervention aims to accelerate the acquisition of both speech sounds and word structure, such as word length, stress, or shapes of CV sequences. Until normative studies and meta-analyses provide in-depth information on this topic, smaller investigations can provide initial benchmarks for clinical purposes. The aim was to provide preliminary reference data for word structure development in a variety of Spanish with highly restricted coda use: Granada Spanish (similar to many Hispano-American varieties). To be clinically applicable, such data would need to show differences by age, developmental typicality and word structure complexity. Thus, older typically developing (TD) children were expected to show higher accuracy than younger children and those with protracted phonological development (PPD). Complex or phonologically marked forms (e.g. multisyllabic words, clusters) were expected to be late developing. Participants were 59 children aged 3-5 years in Granada, Spain: 30 TD children, and 29 with PPD and no additional language impairments. Single words were digitally recorded by a native Spanish speaker using a 103-word list, and transcribed by native Spanish speakers, with confirmation by a second transcriber team and acoustic analysis. The program Phon 1.5 provided quantitative data. In accordance with expectations, the TD and older age groups had better-established word structures than the younger children and those with PPD. Complexity was also relevant: more structural mismatches occurred in multisyllabic words, initial unstressed syllables and clusters. Heterosyllabic consonant sequences were more accurate than syllable-initial sequences. The most common structural mismatch pattern overall was consonant deletion, with syllable deletion most common in 3-year-olds and children with PPD. The current study provides preliminary reference data for word structure development in a Spanish variety with restricted coda use, both by age and by type of word structure. Between ages 3 and 5 years, global measures (whole word match, word shape match) distinguished children with typical versus protracted phonological development. By age 4, children with typical development showed near-mastery of word structures, whereas 4- and 5-year-olds with PPD continued to show syllable deletion and cluster reduction, especially in multisyllabic words. The results underline the relevance of multisyllabic words and words with clusters in Spanish phonological assessment and the utility of word structure data for identification of protracted phonological development. © 2014 Royal College of Speech and Language Therapists.

  18. Diffuse Waves and Energy Densities Near Boundaries

    NASA Astrophysics Data System (ADS)

    Sanchez-Sesma, F. J.; Rodriguez-Castellanos, A.; Campillo, M.; Perton, M.; Luzon, F.; Perez-Ruiz, J. A.

    2007-12-01

    The Green function can be retrieved by averaging cross-correlations of motions within a diffuse field. In fact, it has been shown that for an inhomogeneous, anisotropic elastic medium under equipartitioned, isotropic illumination, the average cross-correlations are proportional to the imaginary part of the Green function. For instance, coda waves are due to multiple scattering and their intensities follow diffusive regimes. Coda waves and noise sample the medium and effectively carry information along their paths. In this work we explore the consequences of assuming both source and receiver at the same point. From the observational side, the autocorrelation is proportional to the energy density at a given point. On the other hand, the imaginary part of the Green function at the source itself is finite, because the singularity of the Green function is restricted to the real part. The energy density at a point is proportional to the trace of the imaginary part of the Green function tensor at the source itself. The availability of the Green function may allow establishing the theoretical energy density of a seismic diffuse field generated by a background equipartitioned excitation. We study an elastic layer with a free surface overlying a half-space and compute the imaginary part of the Green function for various depths. We show that the resulting spectrum is indeed closely related to the layer's dynamic response, and the corresponding resonant frequencies are revealed. One implication of the present findings is that spatial variations may be useful in detecting the presence of a target by its signature in the distribution of diffuse energy. These results may be useful in assessing the seismic response of a given site if strong ground motions are scarce; it suffices to have reasonable illumination from micro-earthquakes and noise. We consider the imaginary part of the Green function at the source to be a spectral signature of the site. The relative importance of the peaks of this energy spectrum, ruling out non-linear effects, may influence the seismic response for future earthquakes. Partial support from DGAPA-UNAM, Project IN114706, Mexico; from Project MCyT CGL2005-05500-C02/BTE, Spain; from project DyETI of INSU-CNRS, France; and from the Instituto Mexicano del Petróleo is greatly appreciated.
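
    In the notation commonly used for diffuse-field interferometry, the two proportionalities invoked above can be written compactly (a schematic restatement of the abstract's identities, not a derivation; C_ij denotes the averaged cross-correlation of components i and j):

        \langle C_{ij}(\mathbf{x}_A, \mathbf{x}_B, \omega) \rangle \;\propto\; \mathrm{Im}\, G_{ij}(\mathbf{x}_A, \mathbf{x}_B, \omega),
        \qquad
        E(\mathbf{x}, \omega) \;\propto\; \sum_{i=1}^{3} \mathrm{Im}\, G_{ii}(\mathbf{x}, \mathbf{x}, \omega)

    The second relation, the energy density as the trace of the imaginary part of the Green tensor at coincident source and receiver, is the quantity computed at various depths in the layered model described above.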

  19. Rg to Lg Scattering Observations and Modeling

    NASA Astrophysics Data System (ADS)

    Baker, G. E.; Stevens, J. L.; Xu, H.

    2005-12-01

    Lg is important to explosion yield estimation and earthquake/explosion discrimination, but the source of explosion-generated Lg is still an area of active investigation. We investigate the contribution of Rg scattering to Lg. Common spectral nulls in vertical-component Rg and Lg have been interpreted as evidence that scattered Rg is the dominant source of Lg in some areas. The nulls are assumed to result from non-spherical components of the explosion source, modeled as a CLVD located above the explosion. We compare Rg with three-component Sg and Lg spectra in different source areas. Wavenumber synthetics and nonlinear source calculations constrain the predicted source spectra of Rg and directly generated Lg. Modal scattering calculations place bounds on the contribution of Rg to Lg relative to pS, S*, and directly generated S-waves. Rg recorded east and west of the Quartz 3 Deep Seismic Sounding explosion has persistent spectral nulls, but at different frequencies. This azimuthal dependence of the source spectra suggests that the null may not be simply related to a CLVD source. The spectral nulls of Sg, Lg, and Lg coda do not correspond to the Rg spectral nulls, so for this overburied source the spectral observations do not indicate that Rg scattering is a dominant contributor to Lg. Preliminary comparisons of Rg with Lg spectra for events from the Semipalatinsk Test Site yield a similar result. We compare Rg at 20-100 km with Lg at 650 km for Balapan and Degelen explosions with known yield and source depth. The events range from 130 to 50 percent of theoretical containment depth, so relative contributions from a CLVD are expected to vary significantly. For previously studied NTS and Kazakh depth-of-burial data, the use of three components provides further insight into scattering between components. In a complementary analysis, to assess whether S-wave generation is affected by source depth or scaled depth, we have examined regional phase amplitudes of 13 Degelen explosions with known yields and source depths. Initial Pn, the entire P wavetrain, Sn, Lg, and Lg coda have similar log amplitude vs. log yield curves. The slope of those curves varies with frequency, from approximately 0.84 at 0.6 Hz to 0.65 at 6 Hz. We will complement these results with similar observations of Balapan explosion records.

  20. In-situ changes in the elastic wave velocity of rock with increasing temperature using high-resolution coda wave interferometry

    NASA Astrophysics Data System (ADS)

    Griffiths, Luke; Heap, Michael; Lengliné, Olivier; Schmittbuhl, Jean; Baud, Patrick

    2017-04-01

    Rock undergoes fluctuations in temperature in various settings in Earth's crust, including areas of volcanic or geothermal activity, and in industrial environments such as hydrocarbon or geothermal reservoirs. Changes in temperature can cause thermal stresses that result in the formation of microcracks, which affect the mechanical, physical, and transport properties of rocks. Of the affected physical properties, the elastic wave velocity of rock is particularly sensitive to microcracking. Monitoring the evolution of elastic wave velocity during the thermal stressing of rock therefore provides valuable insight into thermal cracking processes. One monitoring technique is Coda Wave Interferometry (CWI), which infers high-resolution changes in the medium from changes in multiply scattered elastic waves. We have designed a new experimental setup to perform CWI whilst cyclically heating and cooling samples of granite (cylinders of 20 mm diameter and 40 mm length). In our setup, the samples are held between two pistons within a tube furnace and are heated and cooled at a rate of 1 °C/min to temperatures of up to 300 °C. Two high-temperature piezo-transducers are each in contact with an opposing face of the rock sample. The servo-controlled uniaxial press compensates for the thermal expansion and contraction of the pistons and the sample, keeping the coupling between the transducers and the sample, and the axial force acting on the sample, constant throughout. Our setup is designed for simultaneous acoustic emission (AE) monitoring (AE is commonly used as a proxy for microcracking), so we can follow thermal microcracking precisely by combining the AE and CWI techniques. We find that during the first heating/cooling cycle, the onset of thermal microcracking occurs at a relatively low temperature of around 65 °C. The CWI shows that elastic wave velocity decreases with increasing temperature and increases during cooling. Upon cooling back to room temperature, there is an irreversible relative decrease in velocity of several percent, associated with the presence of new thermal microcracks. Our data suggest that few new microcracks formed when the same sample was subjected to subsequent identical heating/cooling cycles, as the changes in elastic wave velocity are then near-reversible. Our results shed light on the temperature conditions required for thermal microcracking and the influence of temperature on elastic wave velocity, with applications to a wide variety of geoscientific disciplines.
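
    A standard way to turn the CWI comparison into a velocity change is the time-stretching approach. The sketch below is a generic, hypothetical implementation, not the authors' processing code: it stretches a reference trace over a grid of trial dv/v values and keeps the value whose stretched version best correlates with the current trace in a chosen coda window (window, grid and sampling are invented for the example).

        import numpy as np

        def stretching_dvv(ref, cur, fs, tmin, tmax,
                           eps_grid=np.linspace(-0.03, 0.03, 601)):
            """Estimate a homogeneous relative velocity change dv/v by coda
            stretching: a change dv/v rescales arrival times, so the current
            trace is compared with time-stretched copies of the reference."""
            t = np.arange(len(ref)) / fs
            win = (t >= tmin) & (t <= tmax)   # coda window used for comparison
            best_eps, best_cc = 0.0, -np.inf
            for eps in eps_grid:
                stretched = np.interp(t * (1.0 + eps), t, ref)
                cc = np.corrcoef(stretched[win], cur[win])[0, 1]
                if cc > best_cc:
                    best_eps, best_cc = eps, cc
            return best_eps, best_cc

        # Synthetic check: a +0.5% velocity change advances coda arrivals by 0.5%.
        fs = 1000.0
        t = np.arange(0, 2.0, 1 / fs)
        ref = np.random.default_rng(1).standard_normal(len(t))
        cur = np.interp(t * 1.005, t, ref)
        print(stretching_dvv(ref, cur, fs, 0.5, 1.5))  # dv/v close to +0.005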

  1. Inferences on the Physical Nature of Earth's Inner Core Boundary Region from Observations of Antipodal PKIKP and PKIIKP Waves

    NASA Astrophysics Data System (ADS)

    Cormier, V. F.; Attanayake, J.; Thomas, C.; Koper, K. D.; Miller, M. S.

    2017-12-01

    The Earth's Inner Core Boundary (ICB) is considered a uniform and sharp liquid-to-solid transition in standard Earth models such as PREM and AK135-F. By analysing seismic wave reflections from the ICB, this hypothesis of a simple ICB can be tested. Observed absolute and relative amplitudes and coda of the PKiKP phase, which reflects off the topside of the ICB, suggest that the ICB is neither uniform nor simple in structure. Similarly, waves reflected from the underside of the ICB - the PKIIKP phase - can be used to determine the physical nature of the region immediately below the ICB. Using high-frequency synthetic waveform experiments, we confirm that antipodal PKIIKP amplitudes can discriminate the state of the uppermost 10 km of the inner core: a standard liquid-to-solid ICB (high shear velocity/shear modulus discontinuity) produces a maximum PKIIKP amplitude of only about 0.14 times the PKIKP amplitude, whereas a non-standard liquid-to-near-liquid ICB (low shear velocity/shear modulus discontinuity) can produce PKIIKP amplitudes comparable to PKIKP. We searched for PKIIKP in individual and stacked array waveforms in the 170° - 180° distance range for the 2000 to 2016 time period globally to compare with our synthetic results. We attribute the lack of PKIIKP detections in the stacked array recordings to (1) ranges closer to 170° than 180°, where the PKIIKP signal-to-noise ratio is very poor; (2) scattered coda following PKIKP masking the PKIIKP phase; and (3) large azimuthal variations of array recordings closer to 180° preventing the formation of an accurate beam. Envelopes of individual recordings in the 178° - 180° distance range, however, clearly show energy peaks correlating with the travel time of the PKIIKP phase. Our global set of PKIIKP/PKIKP energy ratio measurements varies between 0.1 and 1.1, indicating significant structural complexity immediately below the ICB. While complex inner core anisotropy and ICB topography could influence these energy ratios, we favor the hypothesis of a thin transition layer (thickness < 10 km) below the ICB with laterally varying shear modulus (or shear velocity) to explain the observed rapid lateral variations of PKIIKP/PKIKP energy ratios.

  2. Synthetic Pn and Sn phases and the frequency dependence of Q of oceanic lithosphere

    NASA Astrophysics Data System (ADS)

    Sereno, Thomas J., Jr.; Orcutt, John A.

    1987-04-01

    The oceanic lithosphere is an extremely efficient waveguide for high-frequency seismic energy. In particular, the propagation of the regional to teleseismic oceanic Pn and Sn phases is largely controlled by properties of the oceanic plates. The shallow velocity gradient in the sub-Moho lithosphere results in a nearly linear travel-time curve for these oceanic phases and an onset velocity near the material velocity of the uppermost mantle. The confinement of Pn/Sn to the lithosphere imposes a constraint on the maximum range at which a normally refracted wave can be observed. The rapid disappearance of Sn and the discontinuous drop in Pn/Sn group velocity beyond a critical distance, dependent upon the local thickness of the lithosphere, are interpreted as a shadowing effect of the low-Q asthenosphere. Wavenumber integration was used to compute complete synthetic seismograms for a model of oceanic lithosphere. The results were compared to data collected during the 1983 Ngendei Seismic Experiment in the southwest Pacific. The Pn/Sn coda is successfully modeled as a sum of leaky organ-pipe modes in the sediment layer and oceanic water column. While scattering is present to some degree, it is not required to explain the long duration and complicated nature of the Pn/Sn wave trains. The presence of extremely high frequencies in Pn/Sn phases and the greater propagation efficiency of Sn than Pn are interpreted in terms of an absorption-band rheology. A shorter high-frequency relaxation time for P waves than for S waves results in a rheology with the property that Qα > Qβ at low frequency while Qβ > Qα at high frequency, consistent with the teleseismic Pn/Sn observations. The absorption-band model is to be viewed as only an approximation to the true frequency dependence of Q in the oceanic lithosphere, for which analytic expressions for the material dispersion have been developed.

  3. Near-Source Scattering of Explosion-Generated Rg: Insight From Difference Spectrograms of NTS Explosions

    NASA Astrophysics Data System (ADS)

    Gupta, I.; Chan, W.; Wagner, R.

    2005-12-01

    Several recent studies of the generation of low-frequency Lg from explosions indicate that the Lg wavetrain from explosions contains significant contributions from (1) the scattering of explosion-generated Rg into S and (2) direct S waves from the non-spherical spall source associated with a buried explosion. The pronounced spectral nulls observed in Lg spectra of Yucca Flats (NTS) and Semipalatinsk explosions (Patton and Taylor, 1995; Gupta et al., 1997) are related to Rg excitation caused by spall-related block motions in a conical volume over the shot point, which may be approximately represented by a compensated linear vector dipole (CLVD) source (Patton et al., 2005). Frequency-dependent excitation of Rg waves should be imprinted on all scattered P, S and Lg waves. A spectrogram may be considered a three-dimensional matrix of numbers providing amplitude and frequency information for each point in the time series. We found difference spectrograms, derived from a normal explosion and a closely located over-buried shot recorded at the same common station, to be remarkably useful for understanding the origin and spectral content of various regional phases. This technique allows isolation of source characteristics, essentially free from path and recording site effects, since the overburied shot acts as an empirical Green's function. Application of this methodology to several pairs of closely located explosions shows that the scattering of explosion-generated Rg makes a significant contribution not only to Lg and its coda but also to two other regional phases: Pg (presumably by the scattering of Rg into P) and Sn. The scattered energy, identified by the presence of a spectral null at the appropriate frequency, generally appears to be more prominent in the somewhat later-arriving sections of Pg, Sn, and Lg than in the initial part. Difference spectrograms appear to provide a powerful new technique for understanding the mechanism of near-source scattering of explosion-generated Rg and its contribution to various regional phases.
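
    The construction of a difference spectrogram is simple to express in code. The fragment below is a hypothetical sketch, not the study's implementation: it assumes two traces of equal length and sampling, already cut to a common origin time, and differences their log-power spectrograms so that path and site terms common to both records cancel.

        import numpy as np
        from scipy.signal import spectrogram

        def difference_spectrogram(x_normal, x_overburied, fs, nperseg=256):
            """dB difference of the spectrograms of a normal shot and a nearby
            overburied shot recorded at the same station; relative source
            effects (e.g., an Rg-related spectral null) remain."""
            f, t, S1 = spectrogram(x_normal, fs=fs, nperseg=nperseg)
            _, _, S2 = spectrogram(x_overburied, fs=fs, nperseg=nperseg)
            D = 10.0 * (np.log10(S1 + 1e-20) - np.log10(S2 + 1e-20))
            return f, t, D

        # Usage (hypothetical traces): a persistent trough in D at a fixed
        # frequency across the Pg, Sn and Lg windows would mark the null.
        # f, t, D = difference_spectrogram(trace_normal, trace_overburied, fs=40.0)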

  4. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    PubMed Central

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time while preserving the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increasing the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option for normalizing datasets from untargeted LC-MS experiments. PMID:23808607

  5. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from near-field waves to far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We analyze seismograms in the 1.0-2.0 Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite-difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of the USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
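
    The grid-search step can be sketched compactly. The Python fragment below is a conceptual skeleton under stated assumptions, not the authors' code: synthetics for every candidate grid point are assumed precomputed (e.g., from a strain Green's tensor database via source-receiver reciprocity), traces are assumed aligned and filtered, and the optional weights stand in for the three-component weighting mentioned above.

        import numpy as np

        def grid_search_location(observed, synthetics, weights=None):
            """Pick the grid point minimizing the summed least-squares
            waveform misfit.

            observed:   {station: np.ndarray trace}
            synthetics: {grid_point: {station: np.ndarray trace}}
            weights:    optional {station: float}
            """
            best_point, best_misfit = None, np.inf
            for point, synth in synthetics.items():
                misfit = 0.0
                for sta, obs in observed.items():
                    w = 1.0 if weights is None else weights[sta]
                    r = obs - synth[sta]
                    misfit += w * np.dot(r, r)
                if misfit < best_misfit:
                    best_point, best_misfit = point, misfit
            return best_point, best_misfit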

  6. High-resolution lithospheric imaging with seismic interferometry

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Campman, Xander; Draganov, Deyan; Wapenaar, Kees

    2010-10-01

    In recent years, there has been an increase in the deployment of relatively dense arrays of seismic stations. The availability of spatially densely sampled global and regional seismic data has stimulated the adoption of industry-style imaging algorithms applied to converted- and scattered-wave energy from distant earthquakes, leading to relatively high-resolution images of the lower crust and upper mantle. We use seismic interferometry to extract reflection responses from the coda of transmitted energy from distant earthquakes. In theory, higher-resolution images can be obtained when migrating reflections obtained with seismic interferometry rather than the conversions traditionally used in lithospheric imaging methods. Moreover, reflection data allow the straightforward application of algorithms previously developed in exploration seismology. In particular, the availability of reflection data allows us to extract from it a velocity model using standard multichannel data-processing methods. However, the success of our approach relies mainly on a favourable distribution of earthquakes. In this paper, we investigate how the quality of the reflection response obtained with interferometry is influenced by the distribution of earthquakes and the complexity of the transmitted wavefields. Our analysis shows that a reasonable reflection response can be extracted if (1) the array is approximately aligned with an active zone of earthquakes, (2) different phase responses are used to gather adequate angular illumination of the array and (3) the illumination directions are properly accounted for during processing. We illustrate our analysis using a synthetic data set with illumination and source-side reverberation characteristics similar to those of field data recorded during the 2000-2001 Laramie broad-band experiment. Finally, we apply our method to the Laramie data, retrieving reflection data. We extract a 2-D velocity model from the reflections and use this model to migrate the data. On the final reflectivity image, we observe a discontinuity in the reflections. We interpret this discontinuity as the Cheyenne Belt, a suture zone between Archean and Proterozoic terranes.

  7. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    PubMed

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on the correlation and covariance between features in a data set, is proposed to provide predictive guidance for the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of the anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-based normalization. Performance was assessed using prevalent performance measures: the mean square error and mean absolute error as mathematical measures, together with the statistical correlation factor R2 and the average deviation. The results show that the CCSNM was the best of the examined normalization methods for estimating the effect of the trainer.

  8. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on the analysis of FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods (scaling, standardizing using the z-score, and vector normalization) by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
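
    The three normalizations and the two strategies (by samples or by features) are easy to state precisely. The sketch below is a hypothetical illustration consistent with the description above, not the study's code; axis=1 corresponds to normalizing samples (files) and axis=0 to normalizing features (genes).

        import numpy as np

        def normalize(X, method="zscore", axis=0):
            """Normalize a matrix X (rows = samples/files, columns = features/genes).

            method: "scale"  -- min-max scaling to [0, 1]
                    "zscore" -- zero mean, unit variance
                    "vector" -- division by the Euclidean norm (unit length)
            axis:   0 normalizes each feature, 1 normalizes each sample.
            """
            keep = {"axis": axis, "keepdims": True}
            if method == "scale":
                lo, hi = X.min(**keep), X.max(**keep)
                return (X - lo) / (hi - lo + 1e-12)
            if method == "zscore":
                return (X - X.mean(**keep)) / (X.std(**keep) + 1e-12)
            if method == "vector":
                return X / (np.linalg.norm(X, axis=axis, keepdims=True) + 1e-12)
            raise ValueError(method)

        # Hypothetical usage on an expression matrix (patients x genes):
        # X_files    = normalize(X, "vector", axis=1)  # files to unit length
        # X_features = normalize(X, "zscore", axis=0)  # standardize each gene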

  9. Complex frequency analysis Tornillo earthquake Lokon Volcano in North Sulawesi period 1 January-17 March 2016

    NASA Astrophysics Data System (ADS)

    Hasanah, Intan; Syahbana, Devy Kamil; Santoso, Agus; Palupi, Indriati Retno

    2017-07-01

    Indonesia has 127 active volcanoes, which makes its seismic activity very high. In this study we analyze the temporal variation in the complex frequencies of Tornillo earthquakes at Lokon Volcano, North Sulawesi, during the period from January 1 to March 17, 2016. The analysis uses the Sompi method, based on a homogeneous autoregressive (AR) equation, whose complex-frequency parameters are the oscillation frequency (f) and the decay character of the coda (Q factor). The purpose of this research was to understand the dynamics of the fluids inside Lokon Volcano during this period. The results allow us to estimate the fluid dynamics inside Lokon Volcano and to identify the fluid content and the dimensions of the resonating body in the crust. The Tornillo earthquakes in this period have Q values (wave decay) distributed below 200 and frequencies distributed between 3-4 Hz. They occurred at shallow depths of less than 2 km, aligned toward the Tompaluan Crater. From the complex-frequency analysis we estimate that, had an eruption occurred at Lokon Volcano in this period, it would have been a phreatic eruption, with an estimated fluid composition of misty gas with a gas mass fraction ranging between 0-100%. Another possible fluid contained in Lokon Volcano is water vapor with a gas volume fraction in the range 10-90%.
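
    The Sompi method estimates complex frequencies from a homogeneous AR model of the coda. As a rough generic analogue (not the Sompi algorithm itself), one can fit linear-prediction coefficients by least squares and read frequency f and quality factor Q from the roots of the characteristic polynomial, using the convention that amplitude decays as exp(-pi*f*t/Q). All names and parameters below are illustrative.

        import numpy as np

        def ar_complex_frequencies(x, order, dt):
            """Fit x[n] = sum_k a_k x[n-k] by least squares, then convert the
            roots z of the characteristic polynomial to (f, Q) via
            f = arg(z)/(2*pi*dt), gamma = -ln|z|/dt, Q = pi*f/gamma."""
            N = len(x)
            A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
            b = x[order:]
            a, *_ = np.linalg.lstsq(A, b, rcond=None)
            modes = []
            for z in np.roots(np.concatenate(([1.0], -a))):
                f = np.angle(z) / (2 * np.pi * dt)
                gamma = -np.log(np.abs(z)) / dt
                if f > 0 and gamma > 0:   # keep decaying, positive-frequency modes
                    modes.append((f, np.pi * f / gamma))
            return modes

        # Synthetic check: a 3.5 Hz oscillation with Q = 100, roughly Tornillo-like;
        # with noisy data a higher AR order would be used.
        dt = 0.01
        t = np.arange(0, 10, dt)
        x = np.exp(-np.pi * 3.5 * t / 100.0) * np.cos(2 * np.pi * 3.5 * t)
        print(ar_complex_frequencies(x, order=2, dt=dt))  # approximately (3.5, 100)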

  10. Retrieving robust noise-based seismic velocity changes from sparse data sets: synthetic tests and application to Klyuchevskoy volcanic group (Kamchatka)

    NASA Astrophysics Data System (ADS)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N. M.; Droznin, D. V.; Droznina, S. Ya; Senyukov, S. L.; Gordeev, E. I.

    2018-05-01

    Continuous noise-based monitoring of seismic velocity changes provides insights into volcanic unrest, earthquake mechanisms and fluid injection in the sub-surface. The standard monitoring approach relies on measuring travel time changes of late coda arrivals between daily and reference noise cross-correlations, usually chosen as stacks of daily cross-correlations. The main assumption of this method is that the shape of the noise correlations does not change over time or, in other terms, that the ambient-noise sources are stationary through time. These conditions are not fulfilled when a strong episodic source of noise, such as a volcanic tremor for example, perturbs the reconstructed Green's function. In this paper we propose a general formulation for retrieving continuous time series of noise-based seismic velocity changes without the requirement of any arbitrary reference cross-correlation function. Instead, we measure the changes between all possible pairs of daily cross-correlations and invert them using different smoothing parameters to obtain the final velocity change curve. We perform synthetic tests in order to establish a general framework for future applications of this technique. In particular, we study the reliability of velocity change measurements versus the stability of noise cross-correlation functions. We apply this approach to a complex dataset of noise cross-correlations at Klyuchevskoy volcanic group (Kamchatka), hampered by loss of data and the presence of highly non-stationary seismic tremors.
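
    The reference-free formulation can be sketched as a small linear inverse problem: each measurement between days i and j constrains the difference of the daily values, and smoothing plus a datum constraint close the system. This is a hypothetical illustration of that idea, not the authors' code; the first-difference smoothing and the zero-mean constraint are assumptions of the example.

        import numpy as np

        def invert_pairwise_dvv(pairs, n_days, smoothing=1.0):
            """Invert pairwise velocity changes dv_ij ~ m_j - m_i into a daily
            series m, with a roughness penalty and mean(m) = 0 (the absolute
            level is unresolved by differences alone)."""
            rows, d = [], []
            for i, j, dv in pairs:
                g = np.zeros(n_days)
                g[i], g[j] = -1.0, 1.0
                rows.append(g); d.append(dv)
            for k in range(n_days - 1):          # day-to-day smoothing rows
                g = np.zeros(n_days)
                g[k], g[k + 1] = -smoothing, smoothing
                rows.append(g); d.append(0.0)
            g = np.ones(n_days) / n_days         # datum: zero-mean series
            rows.append(g); d.append(0.0)
            m, *_ = np.linalg.lstsq(np.array(rows), np.array(d), rcond=None)
            return m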

  11. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    PubMed

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data-analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are frequently ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers that were screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta were compared across the differently normalized data sets. The results indicated that the choice of normalization method is difficult because the commonly accepted rules are easy to fulfill yet different normalization methods have unforeseen influences on both the kind and the number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
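
    Of the five methods compared above, probabilistic quotient normalization (PQN) is especially compact to state. The sketch below is a minimal generic implementation, not the study's code; in practice a total-signal normalization is often applied first, and the reference spectrum here is simply the median spectrum (both assumptions of the example).

        import numpy as np

        def pqn(X, reference=None):
            """Probabilistic quotient normalization of X (rows = samples,
            columns = features): divide each sample by the median of its
            feature-wise quotients against a reference spectrum, removing
            dilution-like effects."""
            if reference is None:
                reference = np.median(X, axis=0)
            quotients = X / (reference + 1e-12)
            factors = np.median(quotients, axis=1, keepdims=True)
            return X / factors

        # X_norm = pqn(X)   # then proceed to biomarker screening as above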

  12. Temporal change in shallow subsurface P- and S-wave velocities and S-wave anisotropy inferred from coda wave interferometry

    NASA Astrophysics Data System (ADS)

    Yamamoto, M.; Nishida, K.; Takeda, T.

    2012-12-01

    Recent progress in theoretical and observational research on seismic interferometry has revealed the possibility of detecting subtle changes in subsurface seismic structure. This high sensitivity of seismic interferometry to medium properties may thus be one of the most important ways to directly observe the time-lapse behavior of shallow crustal structure. Here, using coda wave interferometry, we show the co-seismic and post-seismic changes in P- and S-wave velocities and S-wave anisotropy associated with the 2011 off the Pacific coast of Tohoku earthquake (M9.0). In this study, we use the acceleration data recorded at KiK-net stations operated by NIED, Japan. Each KiK-net station has a borehole whose typical depth is about 100 m, and two three-component accelerometers are installed at the top and bottom of the borehole. To estimate the shallow subsurface P- and S-wave velocities and S-wave anisotropy between the two sensors and their temporal changes, we select about 1000 earthquakes that occurred between 2004 and 2012, and extract body waves propagating between the borehole sensors by computing the cross-correlation functions (CCFs) of the 3 x 3 component pairs. We use frequency bands of 2-4, 4-8, and 8-16 Hz in our analysis. Each averaged CCF shows clear wave packets traveling between the borehole sensors, and their travel times are almost consistent with those of P- and S-waves calculated from the borehole log data. Until the occurrence of the 2011 Tohoku earthquake, the estimated travel time at each station is rather stable with time, except for weak seasonal/annual variation. On the other hand, the 2011 Tohoku earthquake and its aftershocks caused a sudden decrease in the S-wave velocity at most of the KiK-net stations in eastern Japan. The typical S-wave velocity change, measured by the time-stretching method, is about 5-15%. After this co-seismic change, the S-wave velocity gradually recovers with time, and the recovery continues for over one year, following the logarithm of the lapse time. At some stations, the estimated P-wave velocity also shows a co-seismic decrease and subsequent gradual recovery. However, the magnitude of the estimated P-wave velocity change is much smaller than that of the S-wave, and at the other stations the P-wave velocity change is smaller than the resolution of our analysis. Using the CCFs computed from horizontal components, we also determine the seismic anisotropy in the subsurface structure and examine its temporal change. The estimated strength of anisotropy shows a co-seismic increase at most of the stations where a co-seismic velocity change is detected. Nevertheless, the direction of anisotropy after the 2011 Tohoku earthquake stays about the same as before. These results suggest that, in addition to changes in pore pressure and the corresponding decrease in rigidity, changes in the aspect ratio of pre-existing subsurface fractures/micro-cracks may be another key mechanism causing the co-seismic velocity change in shallow subsurface structures.

  13. A study of ground motion attenuation in the Southern Great Basin, Nevada-California, using several techniques for estimates of Qs , log A 0, and coda Q

    NASA Astrophysics Data System (ADS)

    Rogers, A. M.; Harmsen, S. C.; Herrmann, R. B.; Meremonte, M. E.

    1987-04-01

    As a first step in the assessment of the earthquake hazard in the southern Great Basin of Nevada-California, this study evaluates the attenuation of peak vertical ground motions using a number of different regression models applied to unfiltered and band-pass-filtered ground-motion data. These data are concentrated in the distance range 10-250 km. The regression models include parameters to account for geometric spreading, anelastic attenuation with a power-law frequency dependence, source size, and station site effects. We find that the data are most consistent with an essentially frequency-independent Q and a geometric spreading coefficient less than 1.0. Regressions are also performed on vertical-component peak amplitudes reexpressed as pseudo-Wood-Anderson peak amplitude estimates (PWA), permitting comparison with earlier work that used Wood-Anderson (WA) data from California. Both of these results show that Q values in this region are high relative to California, having values in the range 700-900 over the frequency band 1-10 Hz. Comparison of ML magnitudes from stations BRK and PAS for earthquakes in the southern Great Basin shows that these two stations report magnitudes with differences that are distance dependent. This bias suggests that the Richter log A0 curve appropriate to California is too steep for earthquakes occurring in southern Nevada, a result implicitly supporting our finding that Q values are higher than those in California. The PWA attenuation functions derived from our data also indicate that local magnitudes reported by California observatories for earthquakes in this region may be overestimated by as much as 0.8 magnitude units in some cases. Both of these results will have an effect on the assessment of the earthquake hazard in this region. The robustness of our regression technique to extract the correct geometric spreading coefficient n and anelastic attenuation Q is tested by applying the technique to simulated data computed with given n and Q values. Using a stochastic modeling technique, we generate suites of seismograms for the distance range 10-200 km and for both WA and short-period vertical-component seismometers. Regressions on the peak amplitudes from these records show that our regression model extracts values of n and Q approximately equal to the input values for either low-Q California attenuation or high-Q southern Nevada attenuation. Regressions on stochastically modeled WA and PWA amplitudes also provide a method of evaluating differences in magnitudes from WA and PWA amplitudes due to recording instrument response characteristics alone. These results indicate a difference between ML(WA) and ML(PWA) equal to 0.15 magnitude units, which we term the residual instrument correction. In contrast to the peak amplitude results, coda Q determinations using the single-scatterer theory indicate that Qc values are dependent on source type and are proportional to f^p, where p = 0.8 to 1.0. This result suggests that a difference exists between attenuation mechanisms for direct waves and backscattered waves in this region.
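
    A regression model of the kind described above is commonly parameterized as

        a(r, f) = A_0(f)\, r^{-n} \exp\!\left( -\frac{\pi f^{\,1-p} r}{Q_0 v} \right), \qquad Q(f) = Q_0 f^{\,p},

    so that log a is linear in log r and r, and the regression recovers the geometric spreading coefficient n together with Q_0 and p (v is the wave velocity along the path). This generic form is given for orientation only; the study's exact parameterization is not reproduced in the abstract. Its direct-wave results correspond to p near zero (frequency-independent Q), whereas the coda results give Qc proportional to f^p with p = 0.8 to 1.0.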

  14. Comparison of Arterial Spin-labeling Perfusion Images at Different Spatial Normalization Methods Based on Voxel-based Statistical Analysis.

    PubMed

    Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi

    2017-01-01

    Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated by dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.

  15. Experimental studies of breaking of elastic tired wheel under variable normal load

    NASA Astrophysics Data System (ADS)

    Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.

    2017-10-01

    The paper analyzes the braking of a vehicle wheel subjected to disturbances in the form of normal load variations. Experimental tests employed test modes with sinusoidal force disturbances of the normal wheel load, together with measurement methods for digital and analogue signals. Stabilizing the braking of a vehicle wheel under such disturbances is a topical issue. The paper proposes a method for analyzing wheel braking processes under normal load variations, and a method to control these braking processes was also developed.

  16. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack; Galal, Ken

    1992-01-01

    Methods are presented to normalize the attitude quaternion in two extended Kalman filters (EKF), namely the multiplicative EKF (MEKF) and the additive EKF (AEKF). It is concluded that all the normalization methods work well and yield comparable results. In the AEKF, normalization is not essential, since the data chosen for the test do not have a rapidly varying attitude. In the MEKF, normalization is necessary to avoid divergence of the attitude estimate. All of the methods behave similarly when the spacecraft experiences low angular rates.
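
    The normalization step itself is one line; the methods compared in the paper differ mainly in where it is applied and how the covariance is treated. The sketch below shows the brute-force variant generically (a hypothetical illustration; the covariance adjustment that a consistent filter requires is deliberately omitted).

        import numpy as np

        def normalize_quaternion(q):
            """Project an estimated attitude quaternion back onto the unit
            sphere; the rotation it encodes is unchanged."""
            return q / np.linalg.norm(q)

        # In an additive EKF the update q = q_pred + K @ innovation generally
        # leaves ||q|| != 1; renormalizing after each update restores the
        # unit-norm constraint.
        # q = normalize_quaternion(q)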

  17. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis.

    PubMed

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-12-13

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental and biological variations and technical errors) may hamper the identification of differential metabolic features, which requires data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing LC/MS-based metabolomics data. However, the performance and the sample-size dependence of those methods have not yet been exhaustively compared, and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison of these methods was conducted. As a result, the 16 methods were categorized into three groups based on their normalization performance across various sample sizes. The VSN, the Log Transformation and the PQN were identified as the methods with the best normalization performance, while the Contrast method consistently underperformed across all sub-datasets of the different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of the 16 methods specifically for normalizing LC/MS-based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study can serve as useful guidance for the selection of suitable normalization methods in analyzing LC/MS-based metabolomics data.

  19. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background: Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods: We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results: The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and can detect more differentially expressed genes than the LOWESS method. Conclusions: Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
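
    For orientation, the LOWESS baseline against which the TW-SLM is compared can be sketched in a few lines. This is a generic illustration, not the paper's code: it removes the intensity-dependent dye bias by regressing M = log2(red/green) on A = mean log2 intensity with statsmodels' lowess and subtracting the trend (the smoothing fraction and variable names are assumptions).

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def lowess_normalize(red, green, frac=0.3):
            """Intensity-dependent LOWESS normalization of two-channel
            microarray intensities; valid under the assumptions noted above
            (few differentially expressed genes, balanced up/down regulation)."""
            M = np.log2(red) - np.log2(green)
            A = 0.5 * (np.log2(red) + np.log2(green))
            trend = lowess(M, A, frac=frac, return_sorted=False)
            return M - trend

        # M_norm = lowess_normalize(red_intensities, green_intensities)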

  20. New Data Pre-processing on Assessing of Obstructive Sleep Apnea Syndrome: Line Based Normalization Method (LBNM)

    NASA Astrophysics Data System (ADS)

    Akdemir, Bayram; Güneş, Salih; Yosunkaya, Şebnem

    Sleep disorders are very common but widely unrecognized illnesses among the public. Obstructive Sleep Apnea Syndrome (OSAS) is characterized by a decreased oxygen saturation level and repetitive upper respiratory tract obstruction episodes during a full night's sleep. In the present study, we propose a novel data normalization method called the Line Based Normalization Method (LBNM) to evaluate OSAS, using a real data set obtained from a polysomnography device as a diagnostic tool in patients clinically suspected of suffering from OSAS. Here, we combine LBNM with classification methods comprising the C4.5 decision tree classifier and an Artificial Neural Network (ANN) to diagnose OSAS. Firstly, each clinical feature in the OSAS dataset is scaled by the LBNM method into the range [0,1]. Secondly, the normalized OSAS dataset is classified using the different classifier algorithms, namely the C4.5 decision tree classifier and the ANN, respectively. The proposed normalization method was compared with the min-max normalization, z-score normalization, and decimal scaling methods existing in the literature on the diagnosis of OSAS. LBNM produced very promising results in the assessment of OSAS. This method could also be applied to other biomedical datasets.

  1. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
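
    In one common parameterization (an assumption here, since the abstract does not spell out the density), the p-normal (generalized normal) distribution and the least power estimation it induces are

        f_p(x) = \frac{p^{\,1-1/p}}{2\,\sigma\,\Gamma(1/p)} \exp\!\left( -\frac{|x-\mu|^{p}}{p\,\sigma^{p}} \right),
        \qquad
        \hat{\mu}_{\mathrm{LPE}} = \arg\min_{\mu} \sum_{i=1}^{n} |x_i - \mu|^{p},

    so that p = 2 recovers the normal distribution and least squares (the arithmetic average), p = 1 gives the Laplace distribution and the median, and intermediate values of p trade efficiency against robustness.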

  2. "We communicated that way for a reason": language practices and language ideologies among hearing adults whose parents are deaf.

    PubMed

    Pizer, Ginger; Walters, Keith; Meier, Richard P

    2013-01-01

    Families with deaf parents and hearing children are often bilingual and bimodal, with both a spoken language and a signed one in regular use among family members. When interviewed, 13 American hearing adults with deaf parents reported widely varying language practices, sign language abilities, and social affiliations with Deaf and Hearing communities. Despite this variation, the interviewees' moral judgments of their own and others' communicative behavior suggest that these adults share a language ideology concerning the obligation of all family members to expend effort to overcome potential communication barriers. To our knowledge, such a language ideology is not similarly pervasive among spoken-language bilingual families, raising the question of whether there is something unique about family bimodal bilingualism that imposes different rights and responsibilities on family members than spoken-language family bilingualism does. This ideology unites an otherwise diverse group of interviewees, where each one preemptively denied being a "typical CODA [child of deaf adults]."

  3. High-resolution probing of inner core structure with seismic interferometry

    NASA Astrophysics Data System (ADS)

    Huang, Hsin-Hua; Lin, Fan-Chi; Tsai, Victor C.; Koper, Keith D.

    2015-12-01

    Increasing complexity of Earth's inner core has been revealed in recent decades as the global distribution of seismic stations has improved. The uneven distribution of earthquakes, however, still causes a biased geographical sampling of the inner core. Recent developments in seismic interferometry, which allow for the retrieval of core-sensitive body waves propagating between two receivers, can significantly improve ray path coverage of the inner core. In this study, we apply such earthquake coda interferometry to 1846 USArray stations deployed across the U.S. from 2004 through 2013. Clear inner core phases PKIKP2 and PKIIKP2 are observed across the entire array. Spatial analysis of the differential travel time residuals between the two phases reveals significant short-wavelength variation and implies the existence of strong structural variability in the deep Earth. A linear N-S trending anomaly across the middle of the U.S. may reflect an asymmetric quasi-hemispherical structure deep within the inner core with boundaries of 99°W and 88°E.

  4. Glacier microseismicity

    USGS Publications Warehouse

    West, Michael E.; Larsen, Christopher F.; Truffer, Martin; O'Neel, Shad; LeBlanc, Laura

    2010-01-01

    We present a framework for interpreting small glacier seismic events based on data collected near the center of Bering Glacier, Alaska, in spring 2007. We find extremely high microseismicity rates (as many as tens of events per minute) occurring largely within a few kilometers of the receivers. A high-frequency class of seismicity is distinguished by dominant frequencies of 20–35 Hz and impulsive arrivals. A low-frequency class has dominant frequencies of 6–15 Hz, emergent onsets, and longer, more monotonic codas. A bimodal distribution of 160,000 seismic events over two months demonstrates that the classes represent two distinct populations. This is further supported by the presence of hybrid waveforms that contain elements of both event types. The high-low-hybrid paradigm is well established in volcano seismology and is demonstrated by a comparison to earthquakes from Augustine Volcano. We build on these parallels to suggest that fluid-induced resonance is likely responsible for the low-frequency glacier events and that the hybrid glacier events may be caused by the rush of water into newly opening pathways.

  5. A dynamic evolution model of human opinion as affected by advertising

    NASA Astrophysics Data System (ADS)

    Luo, Gui-Xun; Liu, Yun; Zeng, Qing-An; Diao, Su-Meng; Xiong, Fei

    2014-11-01

    We propose a new model to investigate the dynamics of human opinion as affected by advertising, based on the main idea of the CODA model and taking into account two practical factors: one is that the marginal influence of an additional friend decreases with an increasing number of friends; the other is the decline of memory over time. Simulations yield several significant conclusions for both advertising agencies and the general public. A small difference in advertising’s influence on individuals, or in advertising coverage, can result in significantly different advertising effectiveness within a certain range of values. Compared to the strength of advertising’s influence on individuals, advertising coverage plays a more important role due to the exponential decay of memory. Meanwhile, some of the obtained results are in accordance with people’s everyday intuitions about advertising: the real key factor in determining the success of advertising is the intensity with which opinions are exchanged, and people’s external actions always follow their internal opinions. Negative opinions also play an important role.
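
    As a rough illustration of the ingredients named above (diminishing marginal influence of additional friends and exponential memory decay), the toy simulation below updates continuous internal opinions that drive discrete external actions. All functional forms, coupling constants, and parameter values here are assumptions made for the sketch; the paper's actual update rules differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N, steps = 500, 200
    coverage = 0.3        # fraction of agents reached by the ad each step (assumed)
    ad_strength = 0.05    # advertising influence on internal opinion (assumed)
    memory_decay = 0.02   # exponential decay of accumulated opinion (assumed)

    opinion = rng.uniform(-1, 1, N)   # internal, continuous opinion
    friends = [rng.choice(N, rng.integers(2, 20), replace=False) for _ in range(N)]

    for _ in range(steps):
        # diminishing marginal influence: mean neighbor action damped by 1/log(2+k)
        neigh = np.array([np.sign(opinion[f]).mean() / np.log(2 + len(f))
                          for f in friends])
        seen = rng.random(N) < coverage
        opinion = (1 - memory_decay) * opinion + 0.1 * neigh + ad_strength * seen

    actions = np.sign(opinion)        # external, discrete action
    print("fraction positive:", (actions > 0).mean())
    ```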

  6. Wideband acoustic records of explosive volcanic eruptions at Stromboli: New insights on the explosive process and the acoustic source

    NASA Astrophysics Data System (ADS)

    Goto, A.; Ripepe, M.; Lacanna, G.

    2014-06-01

    Wideband acoustic waves, both inaudible infrasound (<20 Hz) and audible components (>20 Hz), generated by strombolian eruptions were recorded at 5 kHz and correlated with video images. The high sample rate revealed that, in addition to the known initial infrasound, the acoustic signal includes an energetic high-frequency (typically >100 Hz) coda. This audible signal begins before the positive infrasound onset reverses to negative. We suggest that the infrasonic onset is due to magma doming at the free surface, whereas the immediately following high-frequency signal reflects the ensuing explosive discharge flow. During strong gas-rich eruptions, positively skewed, shockwave-like components with sharp compression and gradual rarefaction appeared. We suggest that successive bursting of overpressurized small bubbles and the resultant volcanic jets sustain the highly gas-rich explosions and emit the audible sound. When the jet is supersonic, microexplosions of ambient air entrained in the hot jet emit the skewed waveforms.

  7. Stability of monitoring weak changes in multiply scattering media with ambient noise correlation: laboratory experiments.

    PubMed

    Hadziioannou, Céline; Larose, Eric; Coutant, Olivier; Roux, Philippe; Campillo, Michel

    2009-06-01

    Previous studies have shown that small changes can be monitored in a scattering medium by observing phase shifts in the coda. Passive monitoring of weak changes through ambient noise correlation has already been applied in seismology, acoustics, and engineering. Usually, this is done under the assumption that a properly reconstructed Green function (GF), as well as stable background noise sources, is necessary. In order to further develop this monitoring technique, a laboratory experiment was performed in the 2.5 MHz range in a gel with scattering inclusions, comparing an active (pulse-echo) form of monitoring to a passive (correlation) one. The present results show that temperature changes in the medium can be observed even if the GF of the medium is not reconstructed. Moreover, this article establishes that GF reconstruction in the correlations is not a necessary condition: the only requirement for monitoring with correlations (the passive experiment) is the relative stability of the background noise structure.
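
    One standard way to quantify such coda phase shifts is the stretching technique: the perturbed coda is modeled as a time-stretched copy of a reference trace, and the stretch factor that maximizes the correlation is the apparent relative velocity change. Below is a minimal grid-search sketch; the sign convention and search range are assumptions, and this is not necessarily the processing used in the experiment.

    ```python
    import numpy as np

    def stretch_dv_v(ref, cur, fs, eps_max=0.01, n_eps=101):
        """Estimate a homogeneous relative velocity change dv/v by the
        stretching technique: model the current coda as a time-stretched
        copy of the reference, u_cur(t) ~ u_ref(t * (1 - eps))."""
        t = np.arange(len(ref)) / fs
        best_eps, best_cc = 0.0, -np.inf
        for eps in np.linspace(-eps_max, eps_max, n_eps):
            stretched = np.interp(t * (1 - eps), t, cur)
            cc = np.corrcoef(ref, stretched)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return best_eps, best_cc   # dv/v estimate and its correlation
    ```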

  8. The Crust Structure of Northwest Mexico Through Multipath Surface Wave Analysis

    NASA Astrophysics Data System (ADS)

    Hincapie, J.; Doser, D. I.; Ortega, R.

    2005-12-01

    The location of the crystalline basement and other crustal features in Northwestern Mexico (Sonora and Chihuahua) is not well defined. This information is required to better understand the region's tectonic setting. Several preliminary studies have been carried out, but their results show great uncertainty about the velocity structure of the region. The only conclusion those studies agree upon is that the region has remarkable similarities with the southwestern U.S. Our study uses earthquakes originating in the Gulf of California and recorded at broadband stations in the U.S. (Arizona, New Mexico, Texas) to determine the velocity structure of the region. Because earthquake sources occur along a 1200 km-long zone within the gulf, we are able to sample a variety of travel paths within Northwest Mexico. We will analyze Pnl waveforms, coda decay, and surface waves to build a regional velocity and attenuation model. The results are compared to regional gravity and magnetic maps.

  9. Sleep underpins the plasticity of language production.

    PubMed

    Gaskell, M Gareth; Warker, Jill; Lindsay, Shane; Frost, Rebecca; Guest, James; Snowdon, Reza; Stackhouse, Abigail

    2014-07-01

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep. © The Author(s) 2014.

  10. Cultural Hitchhiking in the Matrilineal Whales.

    PubMed

    Whitehead, Hal; Vachon, Felicia; Frasier, Timothy R

    2017-05-01

    Five species of whale with matrilineal social systems (daughters remain with their mothers) have remarkably low levels of mitochondrial DNA diversity. Non-heritable matriline-level demography could reduce genetic diversity, but the required conditions are not consistent with the natural histories of the matrilineal whales. The diversity of nuclear microsatellites is little reduced in the matrilineal whales, arguing against bottlenecks. Selective sweeps of the mitochondrial genome are feasible causes, but it is not clear why these only occurred in the matrilineal species. Cultural hitchhiking (cultural selection reducing diversity at neutral genetic loci transmitted in parallel to the culture) is supported in sperm whales, which possess suitable matrilineal socio-cultural groups (coda clans). Killer whales are delineated into ecotypes that likely originated culturally. Culture, bottlenecks, and selection, as well as their interactions, operating between or within ecotypes, may have reduced their mitochondrial diversity. The societies, cultures, and genetics of false killer whales and two pilot whale species are insufficiently known to assess drivers of low mitochondrial diversity.

  11. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, outperforming the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
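
    A minimal sketch of the dictionary lookup with query expansion idea follows. The dictionary entries, the abbreviation table, and the expansion rules are hypothetical placeholders, not the resources or the exact pipeline used in the paper.

    ```python
    # Hypothetical toy dictionary mapping surface forms to MeSH-style IDs.
    DICT = {
        "breast cancer": "D001943",
        "breast carcinoma": "D001943",
        "hypertension": "D006973",
    }

    ABBREV = {"bc": "breast cancer"}   # assumed abbreviation table for expansion

    def normalize(mention):
        """Exact lookup, then query expansion: lowercasing, abbreviation
        expansion, and a simple morphological variant."""
        queries = [mention.lower()]
        if queries[0] in ABBREV:
            queries.append(ABBREV[queries[0]])
        queries += [q.replace("carcinoma", "cancer") for q in list(queries)]
        for q in queries:
            if q in DICT:
                return DICT[q]
        return None   # unresolved; a real system would fall back to fuzzy matching

    print(normalize("Breast Carcinoma"))   # -> D001943
    ```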

  12. A Comparison of a Maximum Exertion Method and a Model-Based, Sub-Maximum Exertion Method for Normalizing Trunk EMG

    PubMed Central

    Cholewicki, Jacek; van Dieën, Jaap; Lee, Angela S.; Reeves, N. Peter

    2011-01-01

    The problem with normalizing EMG data from patients with painful symptoms (e.g. low back pain) is that such patients may be unwilling or unable to perform maximum exertions. Furthermore, the normalization to a reference signal, obtained from a maximal or sub-maximal task, tends to mask differences that might exist as a result of pathology. Therefore, we presented a novel method (GAIN method) for normalizing trunk EMG data that overcomes both problems. The GAIN method does not require maximal exertions (MVC) and tends to preserve distinct features in the muscle recruitment patterns for various tasks. Ten healthy subjects performed various isometric trunk exertions, while EMG data from 10 muscles were recorded and later normalized using the GAIN and MVC methods. The MVC method resulted in smaller variation between subjects when tasks were executed at the three relative force levels (10%, 20%, and 30% MVC), while the GAIN method resulted in smaller variation between subjects when the tasks were executed at the three absolute force levels (50 N, 100 N, and 145 N). This outcome implies that the MVC method provides a relative measure of muscle effort, while the GAIN-normalized EMG data gives an estimate of the absolute muscle force. Therefore, the GAIN-normalized EMG data tends to preserve the EMG differences between subjects in the way they recruit their muscles to execute various tasks, while the MVC-normalized data will tend to suppress such differences. The appropriate choice of the EMG normalization method will depend on the specific question that an experimenter is attempting to answer. PMID:21665489
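
    The contrast between the two normalizations can be sketched as follows: the MVC version divides by a maximal reference amplitude, while the gain-style version rescales EMG into force-like units using a scale factor fitted from submaximal trials. The linear fit below is only a stand-in for the paper's model-based GAIN estimation, which the abstract does not specify.

    ```python
    import numpy as np

    def mvc_normalize(emg, mvc_emg):
        """Classic normalization: express activity as a fraction of the
        EMG amplitude recorded during a maximal voluntary contraction."""
        return emg / mvc_emg

    def gain_normalize(emg_trials, force_trials, emg):
        """A rough stand-in for a model-based gain: fit one scale factor
        relating EMG amplitude to measured force in submaximal trials,
        then express new EMG in force-like (newton) units."""
        gain = np.polyfit(emg_trials, force_trials, 1)[0]   # N per EMG unit
        return gain * emg
    ```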

  13. EMG normalization method based on grade 3 of manual muscle testing: Within- and between-day reliability of normalization tasks and application to gait analysis.

    PubMed

    Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane

    2018-02-01

    Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA) and is generally interpreted with timing of activation. EMG amplitude comparisons between individuals, muscles, or days require normalization, and there is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalization method during gait compared with the conventional MVIC method. EMG of lower limb muscles (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7 ± 6.2 years, BMI 22.7 ± 3.3 kg·m⁻²), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method requiring no special equipment and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. [Primary culture of human normal epithelial cells].

    PubMed

    Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun

    2017-11-28

    The traditional primary culture methods for human normal epithelial cells suffer from low activity of cultured cells, low cultivation success rates, and complicated operation. To solve these problems, researchers have studied the culture process of human normal primary epithelial cells extensively. In this paper, we mainly introduce methods used in the separation and purification of human normal epithelial cells, such as tissue separation, enzyme digestion, mechanical brushing, red blood cell lysis, and Percoll density gradient separation. We also review methods used in culture and subculture, including serum-free medium combined with low mass fraction serum culture, mouse tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of human normal epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting aseptic operation, the conditions of the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives are summarized.

  15. Methodological study of affine transformations of gene expression data with proposed robust non-parametric multi-dimensional normalization method.

    PubMed

    Bengtsson, Henrik; Hössjer, Ola

    2006-03-01

    Low-level processing and normalization of microarray data are among the most important steps in microarray analysis, with profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is best. It is therefore important to further study the different normalization methods in detail, and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. The focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization, are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result of this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels, either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.
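
    A one-dimensional caricature of the affine idea for two-channel data is sketched below: fit the affine relation between channels (assuming most genes are non-differentially expressed), then remove the offset and scale before forming log-ratios. The published estimator is multi-dimensional and robust; this ordinary least-squares version is illustrative only.

    ```python
    import numpy as np

    def affine_normalize(red, green):
        """Minimal affine normalization sketch under the model
        green ~ alpha + beta * red for non-differentially expressed genes:
        remove the estimated offset and scale so log-ratios no longer
        curve with intensity."""
        beta, alpha = np.polyfit(red, green, 1)   # OLS fit (robust fit preferable)
        green0 = (green - alpha) / beta           # put green on red's scale
        return np.log2(green0 / red)              # normalized log-ratio per gene
    ```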

  16. Comparing cancer vs normal gene expression profiles identifies new disease entities and common transcriptional programs in AML patients.

    PubMed

    Rapin, Nicolas; Bagger, Frederik Otzen; Jendholm, Johan; Mora-Jensen, Helena; Krogh, Anders; Kohlmann, Alexander; Thiede, Christian; Borregaard, Niels; Bullinger, Lars; Winther, Ole; Theilgaard-Mönch, Kim; Porse, Bo T

    2014-02-06

    Gene expression profiling has been used extensively to characterize cancer, identify novel subtypes, and improve patient stratification. However, it has largely failed to identify transcriptional programs that differ between cancer and corresponding normal cells and has not been efficient in identifying expression changes fundamental to disease etiology. Here we present a method that facilitates the comparison of any cancer sample to its nearest normal cellular counterpart, using acute myeloid leukemia (AML) as a model. We first generated a gene expression-based landscape of the normal hematopoietic hierarchy, using expression profiles from normal stem/progenitor cells, and next mapped the AML patient samples to this landscape. This allowed us to identify the closest normal counterpart of individual AML samples and determine gene expression changes between cancer and normal. We find the cancer vs normal method (CvN method) to be superior to conventional methods in stratifying AML patients with aberrant karyotype and in identifying common aberrant transcriptional programs with potential importance for AML etiology. Moreover, the CvN method uncovered a novel poor-outcome subtype of normal-karyotype AML, which allowed for the generation of a highly prognostic survival signature. Collectively, our CvN method holds great potential as a tool for the analysis of gene expression profiles of cancer patients.
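
    A drastically simplified sketch of the comparison step: pick the nearest normal profile by correlation and difference against it. The published CvN method instead maps samples onto a gene expression landscape built from normal stem/progenitor profiles; the correlation-based matching below is an assumption made for illustration.

    ```python
    import numpy as np

    def nearest_normal_counterpart(sample, normal_profiles):
        """Return the name of the closest normal profile (by Pearson
        correlation) and the per-gene difference against it. Assumes log2
        expression vectors over a shared gene set."""
        names = list(normal_profiles)
        corrs = [np.corrcoef(sample, normal_profiles[n])[0, 1] for n in names]
        best = names[int(np.argmax(corrs))]
        return best, sample - normal_profiles[best]
    ```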

  17. Towards Simulating a Realistic Planetary Seismic Wavefield: The Contribution of the Megaregolith and Low-Velocity Waveguides

    NASA Technical Reports Server (NTRS)

    Schmerr, Nicholas C.; Weber, Renee C.; Lin, Pei-Ying Patty; Thorne, Michael Scott; Garnero, Ed J.

    2011-01-01

    Lunar seismograms are distinctly different from their terrestrial counterparts. The Apollo lunar seismometers recorded moonquakes without distinct P- or S-wave arrivals; instead, waves arrive as a diffuse coda that decays over several hours, making the identification of body waves difficult. The unusual character of the lunar seismic wavefield is generally tied to properties of the megaregolith: it consists of highly fractured and broken crustal rock, the result of extensive bombardment of the Moon. The megaregolith extends several kilometers into the lunar crust, possibly into the mantle in some regions, and is covered by a thin coating of fine-scale dust. These materials possess very low seismic velocities that strongly scatter the seismic wavefield at high frequencies. Directly modeling the effects of the megaregolith to simulate an accurate lunar seismic wavefield is a challenging computational problem, owing to the inherent 3-D nature of the problem and the high frequencies (greater than 1 Hz) required. Here we focus on modeling the long-duration coda, studying the effects of the low velocities found in the megaregolith. We produce synthetic seismograms using 1-D slowness integration methodologies, GEMINI and reflectivity, and a 3-D Cartesian finite difference code, the Wave Propagation Program, to study the effect of thin layers of low velocity on the surface of a planet. These codes allow us to generate seismograms with dominant frequencies of approximately 1 Hz. For background lunar seismic structure we explore several models, including the recent model of Weber et al., Science, 2011. We also investigate variations in megaregolith thickness, velocity, attenuation, and seismogram frequency content. Our results are compared to the Apollo seismic dataset, using both a cross-correlation technique and an integrated envelope approach to investigate coda decay. We find our new high-frequency results strongly support the hypothesis that the long duration of the lunar seismic coda is generated by the presence of the low-velocity megaregolith, and that the diffuse arrivals are a combination of scattered energy and multiple reverberations within this layer. The 3-D modeling indicates the extreme surface topography of the Moon adds only a small contribution to scattering effects, though local geology may play a larger role. We also study the effects of the megaregolith on core-reflected and converted phases and other body waves. Our analysis indicates detection of core-interacting arrivals with a polarization filter technique is robust and lends the possibility of detecting other body waves from the Moon.
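
    One simple form of the integrated-envelope approach to coda decay mentioned above is sketched here: smooth the Hilbert envelope of a trace and measure how long it takes to fall to 1/e of its peak. The smoothing window and the 1/e criterion are assumptions, not the study's exact procedure.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def coda_decay_time(trace, fs, smooth_s=10.0):
        """Return the time (s) for the smoothed Hilbert envelope to fall
        to 1/e of its peak, a crude coda-duration measure."""
        env = np.abs(hilbert(trace))
        win = max(1, int(smooth_s * fs))
        env = np.convolve(env, np.ones(win) / win, mode="same")
        peak = np.argmax(env)
        below = np.nonzero(env[peak:] < env[peak] / np.e)[0]
        return below[0] / fs if below.size else np.inf
    ```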

  18. Towards harmonized seismic analysis across Europe using supervised machine learning approaches

    NASA Astrophysics Data System (ADS)

    Zaccarelli, Riccardo; Bindi, Dino; Cotton, Fabrice; Strollo, Angelo

    2017-04-01

    In the framework of the Thematic Core Services for Seismology of EPOS-IP (European Plate Observing System-Implementation Phase), a service for disseminating a regionalized logic tree of ground motion models for Europe is under development. While for the Mediterranean area the wide availability of strong motion data, qualified and disseminated through the Engineering Strong Motion database (ESM-EPOS), supports the development of both selection criteria and ground motion models, for the low-to-moderate seismicity regions of continental Europe the development of ad-hoc models using weak motion recordings of moderate earthquakes is unavoidable. The aim of this work is to present a platform for creating application-oriented earthquake databases by retrieving information from EIDA (European Integrated Data Archive) and applying supervised learning models for earthquake record selection and processing suitable for any specific application of interest. Supervised learning, i.e. the task of inferring a function from labelled training data, has been used extensively in fields such as spam detection, speech and image recognition, and pattern recognition in general. Its suitability for detecting anomalies and performing semi- to fully-automated filtering of large waveform datasets, easing the burden on (or replacing) human expertise, is therefore straightforward. Because supervised learning algorithms are capable of learning from a relatively small training set to predict and categorize unseen data, their advantage when processing large amounts of data is crucial. Moreover, their intrinsic ability to make data-driven predictions makes them suitable (and preferable) in those cases where explicit detection algorithms might be unfeasible or too heuristic. In this study, we consider relatively simple statistical classifiers (e.g., Naive Bayes, Logistic Regression, Random Forest, SVMs) where labels are assigned to waveform data based on the "recognized classes" needed for our use case. These classes might form a simple binary case (e.g., "good for analysis" vs "bad") or a more complex one (e.g., "good for analysis" vs "low SNR", "multi-event", "bad coda envelope"). It is important to stress that our approach can be generalized to any use case by providing, as in any supervised approach, an adequate training set of labelled data, a feature set, a statistical classifier, and finally model validation and evaluation. Examples of use cases considered to develop the system prototype are the characterization of ground motion in low-seismicity areas; harmonized spectral analysis across Europe for source and attenuation studies; magnitude calibration; and coda analysis for attenuation studies.
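
    A minimal sketch of the classification step, using scikit-learn with synthetic stand-in features. Real features would be derived from the waveforms (e.g. signal-to-noise ratio, spectral centroid, or a coda-envelope misfit); the feature set, labels, and classifier settings below are assumptions, not the project's configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical stand-in data: one row per waveform, three features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 3))
    y = rng.integers(0, 3, 600)   # e.g. 0="good", 1="low SNR", 2="multi-event"

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    clf.fit(X, y)                 # final model for labeling unseen records
    ```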

  19. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for the need of an automatic approach in the recognition and normalization of disease mentions in order to increase the precision and effectivity of disease based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and dictionary lookup method are widely used for named entity recognition and normalization respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13.Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Previous studies have proposed several methods for integrating characterized environmental impacts into a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study of five elementary school buildings. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts, whereas the normalization had little influence. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration method. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.
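
    The internal/external distinction can be made concrete with a small numeric sketch: internal normalization rescales each impact category relative to the alternatives being compared, whereas external normalization divides by reference-system totals, so the two weighted indices (and hence rankings) can diverge. All numbers below are invented for illustration.

    ```python
    import numpy as np

    # impacts[i, j]: characterized impact of building i in category j (assumed units)
    impacts = np.array([[2.1e5, 3.4e2, 1.2e1],
                        [1.8e5, 4.0e2, 0.9e1]])
    weights = np.array([0.5, 0.3, 0.2])           # panel-derived weights (assumed)
    reference = np.array([5.0e9, 2.0e7, 4.0e5])   # external reference totals (assumed)

    internal = impacts / impacts.sum(axis=0)      # internal: relative to the compared set
    external = impacts / reference                # external: relative to a reference system

    print("internal index:", internal @ weights)
    print("external index:", external @ weights)
    ```

    Because internal normalization equalizes the magnitude of every category, the panel weights dominate the resulting index; with external normalization the reference totals set the relative magnitudes, so the normalization itself dominates, mirroring the case-study finding above.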

  1. A humanistic environment for dental schools: what are dental students experiencing?

    PubMed

    Quick, Karin K

    2014-12-01

    A Commission on Dental Accreditation (CODA) standard now requires that dental schools commit to establishing a "humanistic culture and learning environment" for all members of the academic environment. The aim of this study was to identify students' perceptions of factors that affect the dental school environment and to test differences in their experiences in terms of gender and year. This picture of the existing environment was meant to serve as a first step toward creating and supporting a more humanistic academic environment. A mixed-methods approach was used for data collection during the 2009-10 and 2010-11 academic years at one U.S. dental school. Four focus groups were first conducted to explore challenges and conflicts faced by students during their dental education. A written survey informed by the focus group results was then used to obtain quantitative data. The survey response rate was 47 percent (N=188). Faculty inconsistency, cheating, and belittlement/disrespect were experienced by many of the responding dental students during their education, similar to what has been documented in medicine. These students also reported experiencing both constructive communication (90 percent) and destructive communication (up to 32 percent). The female students reported more gender discrimination and sexual harassment than their male peers, and the clinical students reported more experience with belittlement and destructive communication than the preclinical students. The results suggest that greater effort should be directed toward creating a more humanistic environment in dental schools. Based on the issues identified, steps academic institutions can take to improve these environments and student skills are outlined.

  2. Mojibake - The rehearsal of word fragments in verbal recall.

    PubMed

    Lange-Küttner, Christiane; Sykorova, Eva

    2015-01-01

    Theories of verbal rehearsal usually assume that whole words are being rehearsed. However, words consist of letter sequences, or syllables, or word onset-vowel-coda, amongst many other conceptualizations of word structure. A more general term is the 'grain size' of word units (Ziegler and Goswami, 2005). In the current study, a new method measured the percentage of correctly remembered word structure: the number of letters in the correct letter sequence, as a percentage of word length, was calculated, disregarding missing or added letters. Forced rehearsal was tested by repeating each memory list four times. We tested low frequency (LF) English words versus geographical (UK) town names to control for content. We also tested unfamiliar international (INT) non-words and names of international (INT) European towns to control for familiarity. Immediate versus distributed repetition was tested with a between-subject design. Participants responded with word fragments in their written recall, especially when they had to remember unfamiliar words. While memory of whole words was sensitive to content, presentation distribution, and individual sex and language differences, recall of word fragments was not. There was no trade-off between memory of word fragments and whole-word recall during repetition; instead, word fragments also increased significantly. Moreover, while whole-word responses correlated with each other during repetition, and word-fragment responses correlated with each other during repetition, these two types of recall response were not correlated with each other. Thus there may be a lower layer consisting of free, sparse word fragments and an upper layer consisting of language-specific, orthographically and semantically constrained words.
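
    A scoring rule in the spirit of the one described, percent of target letters recovered in the correct order while ignoring insertions and omissions, can be sketched with Python's difflib; the paper's exact scoring rules may differ.

    ```python
    from difflib import SequenceMatcher

    def percent_correct_sequence(target, response):
        """Letters matched in order, as a percentage of target length,
        disregarding missing or added letters."""
        blocks = SequenceMatcher(None, target.lower(),
                                 response.lower()).get_matching_blocks()
        matched = sum(b.size for b in blocks)
        return 100.0 * matched / len(target)

    print(percent_correct_sequence("mojibake", "mojbk"))  # 62.5
    ```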

  3. Relative seismic velocity variations correlate with deformation at Kīlauea volcano.

    PubMed

    Donaldson, Clare; Caudron, Corentin; Green, Robert G; Thelen, Weston A; White, Robert S

    2017-06-01

    Seismic noise interferometry allows the continuous and real-time measurement of relative seismic velocity through a volcanic edifice. Because seismic velocity is sensitive to the pressurization state of the system, this method is an exciting new monitoring tool at active volcanoes. Despite the potential of this tool, no studies have yet comprehensively compared velocity to other geophysical observables on a short-term time scale at a volcano over a significant length of time. We use volcanic tremor (~0.3 to 1.0 Hz) at Kīlauea as a passive source for interferometry to measure relative velocity changes with time. By cross-correlating the vertical component of day-long seismic records between ~230 station pairs, we extract coherent and temporally consistent coda wave signals with time lags of up to 120 s. Our resulting time series of relative velocity shows a remarkable correlation with the radial tilt record measured at the Kīlauea summit, consistent on a time scale of days to weeks for almost the entire study period (June 2011 to November 2015). As the summit continually deforms in deflation-inflation events, the velocity decreases and increases, respectively. Modeling of strain at Kīlauea suggests that, during inflation of the shallow magma reservoir (1 to 2 km below the surface), most of the edifice is dominated by compression (hence closing cracks and producing faster velocities) and vice versa. The excellent correlation between relative velocity and deformation in this study provides an opportunity to better understand the mechanisms causing seismic velocity changes at volcanoes, and therefore to realize the potential of passive interferometry as a monitoring tool.

  4. Near-surface versus fault zone damage following the 1999 Chi-Chi earthquake: Observation and simulation of repeating earthquakes

    USGS Publications Warehouse

    Chen, Kate Huihsuan; Furumura, Takashi; Rubinstein, Justin L.

    2015-01-01

    We observe crustal damage and its subsequent recovery caused by the 1999 M7.6 Chi-Chi earthquake in central Taiwan. Analysis of repeating earthquakes in the Hualien region, ~70 km east of the Chi-Chi earthquake, shows a remarkable change in wave propagation beginning in the year 2000, revealing damage within the fault zone and distributed across the near surface. We use moving-window cross correlation to identify a dramatic decrease in waveform similarity and delays in the S-wave coda. The maximum delay is up to 59 ms, corresponding to a 7.6% velocity decrease averaged over the wave propagation path. The waveform changes on either side of the fault are distinct: they occur in different parts of the waveforms, affect different frequencies, and the size of the velocity reductions differs. Using a finite difference method, we simulate the effect of postseismic changes in the wavefield by introducing S-wave velocity anomalies in the fault zone and near the surface. The models that best fit the observations point to pervasive damage in the near surface and deep, along-fault damage at the time of the Chi-Chi earthquake. The footwall stations show the combined effect of near-surface and fault zone damage, where the velocity reduction (2–7%) is twofold to threefold greater than the fault zone damage observed at the hanging wall stations. The physical models obtained here allow us to monitor the temporal evolution and recovery of the Chi-Chi fault zone damage.
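
    The delay measurement referred to above can be sketched as a moving-window cross-correlation between a pre-event reference trace and a post-event trace. Window length, step, and the peak-picking rule below are assumptions made for illustration, not the study's parameters.

    ```python
    import numpy as np

    def moving_window_delays(ref, cur, fs, win_s=2.0, step_s=0.5):
        """Delay of `cur` relative to `ref` in sliding windows, read from
        the peak of the windowed cross-correlation (positive lag means
        `cur` arrives later than `ref`)."""
        win, step = int(win_s * fs), int(step_s * fs)
        delays, times = [], []
        for i0 in range(0, len(ref) - win, step):
            a = ref[i0:i0 + win] - ref[i0:i0 + win].mean()
            b = cur[i0:i0 + win] - cur[i0:i0 + win].mean()
            cc = np.correlate(b, a, mode="full")
            lag = np.argmax(cc) - (win - 1)
            delays.append(lag / fs)
            times.append((i0 + win / 2) / fs)
        return np.array(times), np.array(delays)
    ```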

  5. Waveform tomography of crustal structure in the south San Francisco Bay region

    USGS Publications Warehouse

    Pollitz, F.F.; Fletcher, J.P.

    2005-01-01

    We utilize a scattering-based seismic tomography technique to constrain crustal structure around the southern San Francisco Bay region (SFBR). This technique is based on coupled traveling-wave scattering theory, which has usually been applied to the interpretation of surface waves in large regional-scale studies. Using fully three-dimensional kernels, the technique is here applied to observed P, S, and surface waves of intermediate period (3–4 s dominant period) following eight selected regional events. We use a total of 73 seismograms recorded by a U.S. Geological Survey short-period seismic array in the western Santa Clara Valley, the Berkeley Digital Seismic Network, and the Northern California Seismic Network. Modifications of observed waveforms due to scattering from crustal structure include (positive or negative) amplification, delay, and generation of coda waves. The derived crustal structure explains many of the observed signals that cannot be explained with a simple layered structure. There is sufficient sensitivity to both deep and shallow crustal structure that, even with the few sources employed in the present study, we obtain a shallow velocity structure reasonably consistent with previous P-wave tomography results. We find a depth-dependent lateral velocity contrast across the San Andreas fault (SAF), with higher velocities southwest of the SAF in the shallow crust and higher velocities northeast of the SAF in the midcrust. The method does not have the resolution to identify very slow sediment velocities in the upper ~3 km, since the tomographic models are smooth at a vertical scale of about 5 km. Copyright 2005 by the American Geophysical Union.

  6. Design and Outcomes of a Comprehensive Care Experience Level System to Evaluate and Monitor Dental Students' Clinical Progress.

    PubMed

    Teich, Sorin T; Roperto, Renato; Alonso, Aurelio A; Lang, Lisa A

    2016-06-01

    A Comprehensive Care Experience Level (CCEL) system that is aligned with Commission on Dental Accreditation (CODA) standards, promotes comprehensive care and prevention, and addresses flaws observed in previous Relative Value Units (RVU)-based programs has been implemented at the School of Dental Medicine, Case Western Reserve University since 2011. The purpose of this article is to report on the design, implementation, and preliminary outcomes of this novel clinical evaluation system. In developing the CCEL concept, it was decided not to award points for procedures performed on competency exams, since exams are not learning opportunities and are evaluated with summative tools. To determine reasonable alternative requirements, production data from previous classes were gathered and translated into CCEL points. These RVU points had been granted selectively only for restorative procedures completed after the initial preparation stage of the treatment plan, and achievement of the required levels was checked at multiple points during the clinical curriculum. Results of the CCEL system showed that low-performing students increased their productivity, overall production at graduation increased significantly, and fluoride utilization to prevent caries rose by an order of magnitude over the RVU system. The CCEL program also allowed early identification and remediation of students having difficulty in the clinic. This successful implementation suggests that the CCEL concept has the potential for widespread adoption by dental schools. The method can also be used as a behavior modification tool to achieve specific patient care or clinical educational goals, as illustrated by the way caries prevention was promoted through the program.

  7. Frequency Domain Full-Waveform Inversion in Imaging Thrust Related Features

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; Zelt, C. A.

    2010-12-01

    Seismic acquisition in rough terrain such as mountain belts suffers from problems related to near-surface conditions: statics, inconsistent energy penetration, rapid signal decay, and imperfect receiver coupling. Moreover, in the presence of weakly compacted soil, strong ground roll may obscure reflection arrivals at near offsets, further diminishing the prospects of estimating a reliable near-surface image through conventional processing. Traveltime and waveform inversion not only overcome simplistic assumptions inherent in conventional processing, such as hyperbolic moveout and the convolutional model, but also use parts of the seismic coda, such as the direct arrival and refractions, that are discarded by the latter. Traveltime and waveform inversion are model-based methods that honour the physics of wave propagation. Given a suitably preconditioned dataset and starting model, waveform inversion in particular has proven to be a powerful tool for velocity model building. This paper examines two case studies on waveform inversion using real data from the Naga Thrust Belt in Northeast India. Waveform inversion here is performed in the frequency domain and is multiscale in nature, i.e., the inversion progressively ascends from the lower to the higher end of the frequency spectrum, increasing the wavenumber content of the recovered model. Since the real data are band limited, the success of waveform inversion depends on how well the starting model can account for the missing low wavenumbers. In this paper it is observed that the required starting model can be prepared using regularized inversion of direct and reflected arrival times.

  8. Outcomes mapping: a method for dental schools to coordinate learning and assessment based on desired characteristics of a graduate.

    PubMed

    Schneider, Galen B; Cunningham-Ford, Marsha A; Johnsen, David C; Eckert, Mary Lynn; Mulder, Michael

    2014-09-01

    This project, utilizing a seldom-used approach to dental education, was designed to define the desired characteristics of a graduating dental student; convert those characteristics to educational outcomes; and use those outcomes to map a dental school's learning and assessment programs, based on outcomes rather than courses and disciplines. A detailed rubric of the outcomes expected of a graduating dental student from this school was developed, building on Commission on Dental Accreditation (CODA) standards and the school's competencies. The presence of each characteristic in the rubric was mapped within and across courses and disciplines. To assess implementation of the rubric, members of two faculty committees and all fourth-year students were asked to use it to rate 1) the importance of each characteristic, 2) the extent to which the school teaches and assesses each, and 3) the extent to which each counts toward overall assessment of competence. All thirty-three faculty members (100 percent) on the committees participated, as did forty-six of the fifty-five students (84 percent). The groups gave high scores to the importance of each characteristic, especially for knowledge and technical competence (then separate categories but merged in the final rubric) and for self-assessment, as well as the extent to which they are being taught and assessed. Respondents most commonly named critical thinking as the area that should be emphasized more. Mapping the curriculum and creating its related database allow the faculty and administration to more systematically coordinate learning and assessment than was possible with a course-based approach.

  9. Imaging a Fault Boundary System Using Controlled-Source Data Recorded on a Large-N Seismic Array

    NASA Astrophysics Data System (ADS)

    Paschall, O. C.; Chen, T.; Snelson, C. M.; Ralston, M. D.; Rowe, C. A.

    2016-12-01

    The Source Physics Experiment (SPE) is a series of chemical explosions conducted in southern Nevada with the objective of improving nuclear explosion monitoring. Five chemical explosions have occurred thus far in granite, the most recent being SPE-5 on April 26, 2016. The SPE series will improve our understanding of seismic wave propagation (primarily S-waves) due to explosions, and allow better discrimination between background seismicity, such as earthquakes, and explosions. The Large-N portion of the project consists of 996 receiver stations, half vertical-component and half three-component geophones. All receivers were deployed for 30 days and recorded the SPE-5 shot, earthquakes, noise, and an additional controlled source: a large weight drop from a 13,000 kg modified industrial pile driver. In this study, we undertake reflection processing of waveforms from the weight drop, as recorded by a line of sensors extracted from the Large-N array. The profile is 1.2 km in length with 25 m station spacing and 100 m shot point spacing. This profile crosses the Boundary Fault, which separates a granite body from an alluvial basin, a strong acoustic impedance boundary that scatters seismic energy into S-waves and coda. The data were processed with traditional seismic reflection methods, including filtering, deconvolution, and stacking. The stack will be used to extract the location of the splays of the Boundary Fault and provide geologic constraints to the modeling and simulation teams within the SPE project.

  10. Hearing children of Deaf parents: Gender and birth order in the delegation of the interpreter role in culturally Deaf families

    PubMed Central

    de Andrade, Victor

    2018-01-01

    Background Culturally, hearing children born to Deaf parents may have to mediate two different positions within the hearing and Deaf cultures. However, there appears to be little written about the experiences of hearing children born to Deaf parents in the South African context. Objective This study sought to investigate the roles of children of Deaf adults (CODAs) as interpreters in Deaf-parented families, more specifically, the influence of gender and birth order in language brokering. Method Two male and eight female participants between the ages of 21 and 40 years were recruited through purposive and snowball sampling strategies. A qualitative design was employed and data were collected using a semi-structured, open-ended interview format. Themes which emerged were analysed using thematic analysis. Results The findings indicated that there was no formal assignment of the interpreter role; however, female children tended to assume the role of interpreter more often than the male children. Also, it appeared as though the older children shifted the responsibility for interpreting to younger siblings. The participants in this study indicated that they interpreted in situations where they felt they were not developmentally or emotionally ready, or in situations which they felt were better suited for older siblings or for siblings of another gender. Conclusion This study highlights a need for the formalisation of interpreting services for Deaf people in South Africa in the form of professional interpreters rather than the reliance on hearing children as interpreters in order to mediate between Deaf and hearing cultures. PMID:29850437

  11. A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with high-reflectivity foil and high-emissivity tape such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters such as emissivity of the object, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
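
    Definitions of normalized contrast vary; one common illustrative form subtracts the pre-flash frame and ratios a defect pixel against a sound-area pixel. The sketch below is not necessarily the contrast definition developed in the paper, which also makes use of the reference foil and tape imaged alongside the part.

    ```python
    import numpy as np

    def normalized_contrast(frames, defect_px, sound_px, pre_flash):
        """One common flash-thermography contrast: subtract the pre-flash
        (cold) image, then ratio a defect pixel against a sound-area pixel.
        frames: (time, H, W) array; pre_flash: (H, W) array."""
        hot = frames - pre_flash                   # remove pre-flash offset
        i_def = hot[:, defect_px[0], defect_px[1]]
        i_snd = hot[:, sound_px[0], sound_px[1]]
        return (i_def - i_snd) / i_snd             # dimensionless contrast vs time
    ```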

  12. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    PubMed

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
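
    A direct implementation of the idea as described, with the median count ratio of each sample pair computed only over taxa observed in both samples, might look like the following sketch (the published GMPR may differ in details such as minimum-overlap filters):

    ```python
    import numpy as np

    def gmpr_size_factors(counts):
        """Geometric mean of pairwise ratios (GMPR) size factors for a
        taxa-by-sample count matrix: for each sample pair, take the median
        ratio over taxa nonzero in both samples, then geometrically average
        over partners."""
        n = counts.shape[1]
        log_medians = np.full((n, n), np.nan)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                shared = (counts[:, i] > 0) & (counts[:, j] > 0)
                if shared.any():
                    r = counts[shared, i] / counts[shared, j]
                    log_medians[i, j] = np.log(np.median(r))
        return np.exp(np.nanmean(log_medians, axis=1))   # one factor per sample
    ```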

  13. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is acknowledged by many that the normality assumption is often not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they become suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter lambda = -1 leads to the best results.
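
    Assuming that "parameter lambda = -1" refers to the reciprocal member of the usual power-transform family, the effect on skewness is easy to demonstrate on synthetic Ex-Gaussian reaction times; the constants below are arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Ex-Gaussian RTs (ms): normal component convolved with an exponential tail
    rt = rng.normal(400, 40, 5000) + rng.exponential(150, 5000)

    inv = -1000.0 / rt   # lambda = -1 (reciprocal) transform, sign-flipped so
                         # larger values still mean slower responses
    print("skew raw:", stats.skew(rt).round(2),
          " skew transformed:", stats.skew(inv).round(2))
    ```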

  14. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    PubMed

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Different from existing normalization methods that either address only part of the causes of color variation or lump them together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method with respect to robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only one that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.

  16. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
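
    A basic least-squares depth-normal fusion, representative of the family of methods reviewed, can be sketched as follows: the fused depth stays close to the measured depth while its gradients match the slopes implied by the normals (gx = -nx/nz, gy = -ny/nz). This plain quadratic formulation is illustrative only; it is neither the paper's weighted variant nor the TGV method.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def fuse_depth_normals(d0, normals, lam=10.0):
        """Fuse a noisy depth map d0 (H x W) with unit surface normals
        (H x W x 3, nz assumed nonzero) by solving
        min ||d - d0||^2 + lam * ||grad(d) - g||^2."""
        H, W = d0.shape
        gx = -normals[..., 0] / normals[..., 2]   # target d(depth)/dx
        gy = -normals[..., 1] / normals[..., 2]   # target d(depth)/dy

        def fwd_diff(n):
            # (n-1) x n forward-difference operator
            return sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                            shape=(n - 1, n))

        Dx = sp.kron(sp.identity(H), fwd_diff(W))   # x-derivative, flattened image
        Dy = sp.kron(fwd_diff(H), sp.identity(W))   # y-derivative, flattened image

        A = sp.identity(H * W) + lam * (Dx.T @ Dx + Dy.T @ Dy)
        b = d0.ravel() + lam * (Dx.T @ gx[:, :-1].ravel()
                                + Dy.T @ gy[:-1, :].ravel())
        return spsolve(A.tocsr(), b).reshape(H, W)
    ```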

  17. Normalized Temperature Contrast Processing in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with high-reflectivity foil and high-emissivity tape such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters such as emissivity of the object, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.

  18. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions.

    PubMed

    Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng

    2015-07-28

    Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built without normalization. We have proposed a histogram-based MRI intensity normalization method that can normalize scans acquired on different MRI units. We have validated that the method greatly improves image analysis performance and demonstrated that, with the help of our normalization method, a higher-quality Chinese brain template can be created.
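    A minimal sketch of the two-step scheme described above, assuming single-channel images as NumPy arrays: step 1 linearly rescales the reference to a chosen [LIR, HIR] range, and step 2 matches the input histogram to the reference by CDF mapping (a standard construction; the paper's exact implementation may differ).

```python
import numpy as np

def rescale(img, low, high):
    """Step 1 (IS): linearly rescale intensities to [low, high]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, 1e-9) * (high - low) + low

def histogram_match(source, reference):
    """Step 2 (HN): map the source histogram onto the reference histogram
    via CDF matching, so the normalized image shares the reference range."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return mapped[s_idx].reshape(source.shape)
```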

  19. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive-relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method; the variant applied to a hydrological relief model derived from a multiple-direction flow-routing algorithm performed best. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  20. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    PubMed

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decreases in sequencing costs have made RNA-Seq a widely used technique for quantifying gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for selecting the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance-estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq data for 35- and 76-nucleotide sequences produced in the MAQC project, as well as simulated reads. Reads were mapped to the human genome obtained from the UCSC Genome Browser Database. For precise evaluation, we investigated the Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that, of the eight non-abundance-estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of the 35-nucleotide sequences, RPKM showed the highest correlation, but for RNA-Seq of the 76-nucleotide sequences it showed the lowest correlation among the methods. ERPKM did not improve on RPKM. Between the two abundance-estimation normalization methods, for RNA-Seq of the 35-nucleotide sequences, higher correlation was obtained with Sailfish than with RSEM, which in turn was better than without abundance estimation. However, for RNA-Seq of the 76-nucleotide sequences, the results achieved by RSEM were similar to those obtained without abundance estimation, and much better than those with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers but did not improve normalization results. Spearman correlation analysis revealed that RC, UQ, Med, TMM, DESeq, and Q did not noticeably improve gene expression normalization, regardless of read length. Other normalization methods were more efficient when alignment accuracy was low; Sailfish with RPKM gave the best normalization results. When alignment accuracy was high, RC was sufficient for gene expression calculation. We also suggest ignoring the poly-A tail during differential gene expression analysis.
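    For reference, two of the simpler non-abundance methods compared above can be written down directly. The sketch below assumes a genes-by-samples count matrix and gene lengths in base pairs, and renders RC-style total-count scaling as counts per million.

```python
import numpy as np

def rc_normalize(counts):
    """Total read-count (RC-style) scaling: counts per million mapped
    reads, per sample. counts: (genes, samples) array."""
    return counts / counts.sum(axis=0, keepdims=True) * 1e6

def rpkm(counts, gene_lengths_bp):
    """RPKM: reads per kilobase of transcript per million mapped reads.
    gene_lengths_bp: (genes,) array of transcript lengths in base pairs."""
    per_million = counts.sum(axis=0, keepdims=True) / 1e6
    return counts / per_million / (gene_lengths_bp[:, None] / 1e3)
```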

  1. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images captured under different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it assumes that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimating the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by the photodiode of a single pixel among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera-lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
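    The classical (static) method that this work extends reduces, per pixel, to a small least-squares problem: with k >= 3 known lighting directions L and observed intensities I, solve L g = I for the scaled normal g, then read the unit normal and albedo off g. A minimal NumPy sketch of that classical step (not the multi-tap acquisition pipeline):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classical Lambertian photometric stereo: for each pixel, solve
    light_dirs @ g = I in the least-squares sense; the surface normal is
    g/|g| and the albedo is |g|. images: (k, H, W); light_dirs: (k, 3)."""
    k, H, W = images.shape
    I = images.reshape(k, -1)                           # (k, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-9)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```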

  2. Normalization of white matter intensity on T1-weighted images of patients with acquired central nervous system demyelination.

    PubMed

    Ghassemi, Rezwan; Brown, Robert; Narayanan, Sridar; Banwell, Brenda; Nakamura, Kunio; Arnold, Douglas L

    2015-01-01

    Intensity variation between magnetic resonance images (MRI) hinders comparison of tissue intensity distributions in multicenter MRI studies of brain diseases. The available intensity normalization techniques generally work well in healthy subjects but not in the presence of pathologies that affect tissue intensity. One such disease is multiple sclerosis (MS), which is associated with lesions that prominently affect white matter (WM). Our aim was to develop a T1-weighted (T1w) image intensity normalization method that is independent of WM intensity and to quantitatively evaluate its performance. We calculated the median intensities of grey matter and intraconal orbital fat on T1w images. Using these two reference-tissue intensities, we calculated a linear normalization function and applied it to the T1w images to produce normalized T1w (NT1) images. We assessed the performance of our normalization method with respect to interscanner, interprotocol, and longitudinal variability, and evaluated the utility of the method for lesion analyses in clinical trials. Statistical modeling showed marked decreases in T1w intensity differences after normalization (P < .0001). We developed a WM-independent T1w MRI normalization method and tested its performance. This method is suitable for longitudinal multicenter clinical studies assessing the recovery or progression of disease affecting WM.
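    A minimal sketch of a two-point linear normalization of the kind described, assuming binary masks for the two reference tissues are available; the target intensities below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def two_point_normalize(img, gm_mask, fat_mask,
                        gm_target=100.0, fat_target=200.0):
    """Linear normalization anchored on two WM-independent reference
    tissues: map the median grey-matter and intraconal-orbital-fat
    intensities to fixed targets (placeholders here). Assumes fat is
    brighter than grey matter on T1w, so the slope is positive."""
    m_gm = np.median(img[gm_mask])
    m_fat = np.median(img[fat_mask])
    a = (fat_target - gm_target) / (m_fat - m_gm)  # slope of linear map
    b = gm_target - a * m_gm                       # intercept
    return a * img + b
```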

  3. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    PubMed

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image into different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, normalization significantly increased the average Jaccard overlap from the 0.72±0.30 and 0.87±0.11 obtained with the two reference methods. The second experiment was aimed at segmentation of the clavicles; the reference methods yielded average Jaccard indices of 0.57±0.26 and 0.53±0.26, which normalization significantly increased. The third experiment was detection of tuberculosis-related abnormalities in the lung fields, where normalization significantly increased the average area under the receiver operating characteristic curve from the 0.72±0.14 and 0.79±0.06 obtained with the reference methods. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
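    A global, single-pass sketch of the band-energy idea, assuming a difference-of-Gaussians decomposition and given reference energies; the paper applies the scaling locally and iteratively within the lung fields, which this sketch omits.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def energy_normalize(img, ref_energies, sigmas=(1, 2, 4, 8)):
    """Decompose the image into difference-of-Gaussian bands, scale each
    band's global energy (std) to a reference value, and reconstruct.
    ref_energies is assumed given (e.g., from a reference image set)."""
    img = img.astype(np.float64)
    bands, prev = [], img
    for s in sigmas:
        smooth = gaussian_filter(img, s)
        bands.append(prev - smooth)       # band between successive scales
        prev = smooth
    out = prev                            # residual low-pass component
    for band, ref_e in zip(bands, ref_energies):
        e = band.std() + 1e-9
        out = out + band * (ref_e / e)    # rescale band energy
    return out
```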

  4. Mapping crustal heterogeneity using Lg propagation efficiency throughout the Middle East, Mediterranean, Southern Europe and Northern Africa

    USGS Publications Warehouse

    McNamara, D.E.; Walter, W.R.

    2001-01-01

    In this paper we describe a technique for mapping the lateral variation of Lg characteristics such as Lg blockage, efficient Lg propagation, and regions of very high attenuation in the Middle East, North Africa, Europe and the Mediterranean regions. Lg is used in a variety of seismological applications, from magnitude estimation to identification of nuclear explosions for monitoring compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT). These applications can give significantly biased results if the Lg phase is reduced or blocked by discontinuous structure or thin crust. Mapping these structures using quantitative techniques for determining Lg amplitude attenuation can break down when the phase is below background noise. In such cases, Lg blockage and inefficient propagation zones are often mapped out by hand. With our approach, we attempt to visually simplify this information by imaging crustal structure anomalies that significantly diminish the amplitude of Lg. The visualization of such anomalies is achieved by defining a grid of cells that covers the entire region of interest. We trace Lg rays for each event/station pair, which is simply the great circle path, and attribute to each cell a value equal to the maximum value of the Lg/P-coda amplitude ratio over all paths traversing that particular cell. The resulting map, from this empirical approach, is easily interpreted in terms of crustal structure and can successfully image small blockage features often missed by analysis of raypaths alone. This map can then be used to screen out events with blocked Lg prior to performing Q tomography, and to avoid using Lg-based methods of event identification for the CTBT in regions where they cannot work. For this study we applied our technique to one of the most tectonically complex regions on Earth. Nearly 9000 earthquake/station raypaths, traversing the vast region comprising the Middle East, Mediterranean, Southern Europe and Northern Africa, have been analyzed. We measured the amplitude of Lg relative to the P-coda and mapped the lateral variation of Lg propagation efficiency. With the relatively dense coverage provided by the numerous crossing paths, we are able to map out the pattern of crustal heterogeneity that gives rise to the observed character of Lg propagation. We observe that the propagation characteristics of Lg within the region of interest are very complicated but are readily correlated with the different tectonic environments within the region. For example, clear strong Lg arrivals are observed for paths crossing the stable continental interiors of Northern Africa and the Arabian Shield. In contrast, weakened to absent Lg is observed for paths crossing much of the Middle East, and Lg is absent for paths traversing the Mediterranean. Regions that block Lg transmission within the Middle East are very localized and include the Caspian Sea, the Iranian Plateau and the Red Sea. Resolution is variable throughout the region and strongly depends on the distribution of seismicity and recording stations. Lg propagation is best resolved within the Middle East, where regions of crustal heterogeneity on the order of 100 km are imaged (e.g., the South Caspian Sea and Red Sea). Crustal heterogeneity is resolvable throughout, but resolution is poorest in seismically quiescent Northern Africa.
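    The cell-assignment step lends itself to a compact sketch: sample each event-station path, bin the samples into grid cells, and keep the maximum Lg/P-coda ratio seen in each cell. The sketch below approximates paths as straight lines in longitude/latitude for brevity, whereas the study traces great-circle paths.

```python
import numpy as np

def map_lg_efficiency(paths, ratios, lon_grid, lat_grid, nsamp=200):
    """Assign to each grid cell the maximum Lg/P-coda amplitude ratio over
    all paths crossing it. paths: iterable of (lon0, lat0, lon1, lat1);
    ratios: one amplitude ratio per path; lon_grid/lat_grid: cell edges."""
    nx, ny = len(lon_grid) - 1, len(lat_grid) - 1
    cell_max = np.full((ny, nx), np.nan)
    for (lon0, lat0, lon1, lat1), r in zip(paths, ratios):
        lons = np.linspace(lon0, lon1, nsamp)   # straight-line sampling
        lats = np.linspace(lat0, lat1, nsamp)
        ix = np.clip(np.searchsorted(lon_grid, lons) - 1, 0, nx - 1)
        iy = np.clip(np.searchsorted(lat_grid, lats) - 1, 0, ny - 1)
        for i, j in set(zip(iy, ix)):           # unique cells on this path
            cell_max[i, j] = np.nanmax([cell_max[i, j], r])
    return cell_max
```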

  5. A computed tomography-based spatial normalization for the analysis of [18F] fluorodeoxyglucose positron emission tomography of the brain.

    PubMed

    Cho, Hanna; Kim, Jin Su; Choi, Jae Yong; Ryu, Young Hoon; Lyoo, Chul Hyoung

    2014-01-01

    We developed a new computed tomography (CT)-based spatial normalization method and a CT template, and demonstrated their usefulness for spatial normalization of positron emission tomography (PET) images in [(18)F] fluorodeoxyglucose (FDG) PET studies of healthy controls. Seventy healthy controls underwent brain CT scans (120 keV, 180 mAs, 3-mm slice thickness) and [(18)F] FDG PET scans using a PET/CT scanner. T1-weighted magnetic resonance (MR) images were acquired for all subjects. By averaging skull-stripped and spatially normalized MR and CT images, we created skull-stripped MR and CT templates for spatial normalization. The skull-stripped MR and CT images were spatially normalized to each structural template. PET images were spatially normalized by applying the spatial transformation parameters used to normalize the skull-stripped MR and CT images. A conventional perfusion PET template was used for PET-based spatial normalization. Regional standardized uptake values (SUV) measured by overlaying the template volume of interest (VOI) were compared to those measured with FreeSurfer-generated VOIs (FSVOI). All three spatial normalization methods underestimated regional SUV values by 0.3-20% compared to those measured with FSVOI. The CT-based method showed a slightly greater underestimation bias. Regional SUV values derived from all three spatial normalization methods were significantly correlated (p < 0.0001) with those measured with FSVOI. CT-based spatial normalization may be an alternative method for structure-based spatial normalization of [(18)F] FDG PET when MR imaging is unavailable. It is therefore useful for PET/CT studies with various radiotracers whose uptake is expected to be limited to specific brain regions or to be highly variable within the study population.

  6. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is widely acknowledged that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested for the level of normality they provide. The results suggest that the transformation methods are better than elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the power transformation with parameter lambda = -1 leads to the best results. PMID:25709588
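    The best-performing transformation reported above (power parameter lambda = -1) is simply a reciprocal; negating it preserves the ordering of the original reaction times. A quick self-contained check on simulated Ex-Gaussian data (normal plus exponential components, with illustrative parameters):

```python
import numpy as np
from scipy import stats

def reciprocal_transform(rt):
    """Power transform with lambda = -1, i.e. the (negated) reciprocal;
    negation keeps fast RTs small and slow RTs large."""
    return -1.0 / np.asarray(rt, dtype=float)

# Simulated Ex-Gaussian RTs in ms: skew drops sharply after the transform.
rng = np.random.default_rng(0)
rt = rng.normal(400, 40, 5000) + rng.exponential(200, 5000)
print(stats.skew(rt), stats.skew(reciprocal_transform(rt)))
```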

  7. Rhythm-based heartbeat duration normalization for atrial fibrillation detection.

    PubMed

    Islam, Md Saiful; Ammour, Nassim; Alajlan, Naif; Aboalsamh, Hatim

    2016-05-01

    Screening of atrial fibrillation (AF) in high-risk patients, including all patients aged 65 years and older, is important for prevention of the risk of stroke. Different technologies, such as modified blood pressure monitors, single-lead ECG-based finger probes, and smartphones using plethysmogram signals, have been emerging for this purpose. All these technologies use irregularity of heartbeat duration as a feature for AF detection. We investigated a normalization method for heartbeat duration for improved AF detection. AF is an arrhythmia in which heartbeat duration generally becomes irregularly irregular. From a window of heartbeat durations, we estimate the rhythm of the majority of heartbeats and normalize the duration of all heartbeats in the window based on that rhythm, so that the irregularity of heartbeats can be measured on the same scale for both AF and non-AF rhythms. Irregularity is measured by the entropy of the distribution of the normalized durations. We then classify a window of heartbeats as AF or non-AF by thresholding the measured irregularity. The effect of this normalization was evaluated by comparing AF detection performance using durations with the normalization, without normalization, and with other existing normalizations. Sensitivity and specificity of AF detection using normalized heartbeat duration were tested on two landmark databases available online and compared with the results of other methods (with/without normalization) by receiver operating characteristic (ROC) curves. ROC analysis showed that the normalization improved the performance of AF detection and was consistent across a wide range of sensitivity and specificity for different thresholds. Detection accuracy was also computed at equal rates of sensitivity and specificity for the different methods. Using normalized heartbeat duration, we obtained 96.38% accuracy, a more than 4% improvement over AF detection without normalization. The proposed normalization method was found useful for improving the performance and robustness of AF detection. Incorporation of this method in a screening device could be crucial to reducing the risk of AF-related stroke. In general, incorporation of the rhythm-based normalization in an AF detection method seems important for developing a robust AF screening device.
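    A simplified sketch of the pipeline described above, assuming RR intervals in seconds: estimate the dominant rhythm of a beat window (here the window median, a simplification of the paper's estimator), normalize all durations by it, and threshold the entropy of the normalized durations. Bin range and decision threshold below are hypothetical.

```python
import numpy as np

def af_irregularity(rr_intervals, nbins=16, thresh=2.0):
    """Rhythm-based normalization for AF detection (sketch): normalize a
    window of RR intervals by its estimated dominant rhythm and measure
    irregularity as the entropy of the normalized-duration histogram."""
    rr = np.asarray(rr_intervals, dtype=float)
    rhythm = np.median(rr)                  # majority-rhythm estimate
    norm = rr / rhythm
    p, _ = np.histogram(norm, bins=nbins, range=(0.5, 1.5))
    p = p / max(p.sum(), 1)
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return entropy, entropy > thresh        # (irregularity, flagged as AF)
```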

  8. Data-driven identification of intensity normalization region based on longitudinal coherency of 18F-FDG metabolism in the healthy brain.

    PubMed

    Zhang, Huiwei; Wu, Ping; Ziegler, Sibylle I; Guan, Yihui; Wang, Yuetao; Ge, Jingjie; Schwaiger, Markus; Huang, Sung-Cheng; Zuo, Chuantao; Förster, Stefan; Shi, Kuangyu

    2017-02-01

    In brain 18F-FDG PET data, intensity normalization is usually applied to control for unwanted factors confounding brain metabolism. However, it can be difficult to determine a proper intensity normalization region as a reference for the identification of abnormal metabolism in diseased brains. In neurodegenerative disorders, differentiating disease-related changes in brain metabolism from age-associated natural changes remains challenging. This study proposes a new data-driven method to identify proper intensity normalization regions in order to improve the separation of age-associated natural changes from disease-related changes in brain metabolism. 127 female and 128 male healthy subjects (age: 20 to 79) with brain 18F-FDG PET/CT in the course of whole-body cancer screening were included. Brain PET images were processed using SPM8 and were parcellated into 116 anatomical regions according to the AAL template. It is assumed that normal brain 18F-FDG metabolism has longitudinal coherency and that this coherency leads to better model fitting. The coefficient of determination R2 was proposed as the coherence coefficient, and the total coherence coefficient (overall fitting quality) was employed as an index to assess proper intensity normalization strategies on single subjects and age-cohort averaged data. Age-associated longitudinal changes of normal subjects were derived using the identified intensity normalization method correspondingly. In addition, 15 subjects with clinically diagnosed Parkinson's disease were assessed to evaluate the clinical potential of the proposed new method. Intensity normalizations by the paracentral lobule and cerebellar tonsil, both regions derived from the new data-driven coherency method, showed significantly better coherence coefficients than other intensity normalization regions, and in particular better than the most widely used global mean normalization. Intensity normalization by the paracentral lobule was the most consistent method across both analysis strategies (subject-based and age-cohort averaging). In addition, the proposed new intensity normalization method using the paracentral lobule generates significantly greater differentiation from the age-associated changes than other intensity normalization methods. Proper intensity normalization can enhance the longitudinal coherency of normal brain glucose metabolism. The paracentral lobule, followed by the cerebellar tonsil, is shown to be the most stable intensity normalization region with respect to age-dependent brain metabolism. This may provide the potential to better differentiate disease-related changes from age-related changes in brain metabolism, which is of relevance in the diagnosis of neurodegenerative disorders.
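    The coherence criterion can be sketched compactly: for each candidate reference region, normalize the regional uptake, regress each region on age, and sum the R2 values; the candidate that maximizes the total is preferred. The polynomial age model below is an assumption for illustration, not necessarily the paper's model.

```python
import numpy as np

def total_coherence(uptake, ages, ref_region, order=2):
    """Total coherence coefficient for one candidate normalization region:
    normalize each region's uptake by the reference region, fit a
    polynomial in age, and sum the R^2 values over regions.
    uptake: (n_subjects, n_regions); ages: (n_subjects,)."""
    norm = uptake / uptake[:, [ref_region]]
    total = 0.0
    for r in range(uptake.shape[1]):
        if r == ref_region:
            continue
        coeffs = np.polyfit(ages, norm[:, r], order)
        fit = np.polyval(coeffs, ages)
        ss_res = np.sum((norm[:, r] - fit) ** 2)
        ss_tot = np.sum((norm[:, r] - norm[:, r].mean()) ** 2)
        total += 1.0 - ss_res / max(ss_tot, 1e-12)   # per-region R^2
    return total
```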

  9. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression.

    PubMed

    Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia

    2014-05-17

    High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository.
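    ExiMiR's actual adjustment is more elaborate, but the core spike-in idea can be sketched as aligning each array's spike-in level to the across-array consensus in log space; probe and array layout below are assumptions for illustration.

```python
import numpy as np

def spikein_normalize(log_intensities, spike_rows):
    """Spike-in-based multi-array normalization (sketch): shift each array
    in log space so its mean spike-in intensity matches the across-array
    mean of the spike-ins. log_intensities: (probes, arrays);
    spike_rows: indices of the spike-in control probes."""
    spike_means = log_intensities[spike_rows].mean(axis=0)  # per array
    target = spike_means.mean()                             # consensus
    return log_intensities - (spike_means - target)[None, :]
```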

  10. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression

    PubMed Central

    2014-01-01

    Background High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Results Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the “common reference design” and processed as “pseudo-single-channel”. They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription–polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Conclusions Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository. PMID:24886675

  11. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    PubMed

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

    Motivation: Gene-expression data obtained from high-throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before formal analysis. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems, such as the cell cycle, the circadian clock, etc., the choice of normalization method may substantially impact the determination of a gene to be rhythmic. Thus, rhythmicity of a gene can be purely an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate it using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. This suggests that the proposed measure is robust to the choice of normalization method, and consequently that the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used to simulate data for genes participating in an oscillatory system using a reference dataset. Availability: User-friendly code implemented in the R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html.

  12. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data

    PubMed Central

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A.; Peddada, Shyamal D.

    2018-01-01

    Motivation: Gene-expression data obtained from high-throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before formal analysis. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems, such as the cell cycle, the circadian clock, etc., the choice of normalization method may substantially impact the determination of a gene to be rhythmic. Thus, rhythmicity of a gene can be purely an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate it using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. This suggests that the proposed measure is robust to the choice of normalization method, and consequently that the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used to simulate data for genes participating in an oscillatory system using a reference dataset. Availability: User-friendly code implemented in the R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html PMID:29456555

  13. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has rightly been criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure, and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
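    The distributional idea is to read proportions off a fitted distribution rather than counting dichotomised cases. A minimal sketch for a clinical cutpoint, fitting both a normal (Peacock-style) and a skew-normal model; the standard-error machinery for comparing groups is omitted.

```python
import numpy as np
from scipy import stats

def distributional_proportion(x, cutpoint):
    """Distributional estimate of the proportion below a cutpoint: fit a
    normal and a skew-normal to the sample and evaluate each fitted CDF
    at the cutpoint, instead of counting dichotomised observations."""
    mu, sd = np.mean(x), np.std(x, ddof=1)
    p_normal = stats.norm.cdf(cutpoint, mu, sd)
    a, loc, scale = stats.skewnorm.fit(x)        # for skewed outcomes
    p_skew = stats.skewnorm.cdf(cutpoint, a, loc, scale)
    return p_normal, p_skew
```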

  14. Matrix Approach of Seismic Wave Imaging: Application to Erebus Volcano

    NASA Astrophysics Data System (ADS)

    Blondel, T.; Chaput, J.; Derode, A.; Campillo, M.; Aubry, A.

    2017-12-01

    This work aims at extending to seismic imaging a matrix approach to wave propagation in heterogeneous media, previously developed in acoustics and optics. More specifically, we apply this approach to imaging of the Erebus volcano in Antarctica. Volcanoes are among the most challenging media to explore seismically, in light of highly localized and abrupt variations in density and wave velocity, extreme topography, extensive fractures, and the presence of magma. In this strongly scattering regime, conventional imaging methods suffer from the multiple scattering of waves. Our approach relies experimentally on the measurement of a reflection matrix associated with an array of geophones located at the surface of the volcano. Although these sensors are purely passive, a set of Green's functions can be measured between all pairs of geophones from ice-quake coda cross-correlations (1-10 Hz), and this set forms the reflection matrix. A set of matrix operations can then be applied for imaging purposes. First, the reflection matrix is projected, at each time of flight, into the ballistic focal plane by applying adaptive focusing at emission and reception. This yields a response matrix associated with an array of virtual geophones located at the ballistic depth. This basis allows us to remove most of the multiple-scattering contribution by applying a confocal filter to the seismic data. Iterative time reversal is then applied to detect and image the strongest scatterers. Mathematically, it consists in performing a singular value decomposition of the reflection matrix. The presence of a potential target is assessed from a statistical analysis of the singular values, while the corresponding eigenvectors yield the target images. When stacked, the results obtained at each depth give a three-dimensional image of the volcano. While conventional imaging methods lead to a speckle image with no connection to the actual medium's reflectivity, our method highlights a chimney-shaped structure inside Erebus volcano with true-positive rates ranging from 80% to 95%. Although computed independently, the results at each depth are spatially consistent, substantiating their physical reliability. The identified structure is therefore likely to describe accurately the internal structure of the Erebus volcano.
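    The detection step (iterative time reversal) amounts to a singular value decomposition of the reflection matrix with a test on the singular values. The sketch below assumes a real-valued matrix and uses a Monte Carlo random-matrix null as a simple surrogate threshold, which is an assumption for illustration, not the authors' statistical analysis.

```python
import numpy as np

def detect_scatterers(R, n_noise_trials=200, alpha=0.99, rng=None):
    """SVD-based scatterer detection (sketch): singular values of the
    reflection matrix R (receivers x sources at one depth) that exceed
    the range expected for comparable random matrices flag dominant
    scatterers; the associated singular vectors give their images."""
    rng = rng or np.random.default_rng(0)
    U, s, Vh = np.linalg.svd(R)
    # Empirical null: largest singular value of same-size noise matrices.
    null = [np.linalg.svd(rng.standard_normal(R.shape) * R.std(),
                          compute_uv=False)[0]
            for _ in range(n_noise_trials)]
    thresh = np.quantile(null, alpha)
    keep = s > thresh
    images = [np.abs(U[:, i]) for i in np.nonzero(keep)[0]]
    return keep.sum(), images
```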

  15. Feasibility of Computed Tomography-Guided Methods for Spatial Normalization of Dopamine Transporter Positron Emission Tomography Image

    PubMed Central

    Kim, Jin Su; Cho, Hanna; Choi, Jae Yong; Lee, Seung Ha; Ryu, Young Hoon; Lyoo, Chul Hyoung; Lee, Myung Sik

    2015-01-01

    Background Spatial normalization is a prerequisite step for analyzing positron emission tomography (PET) images both by using volume-of-interest (VOI) template and voxel-based analysis. Magnetic resonance (MR) or ligand-specific PET templates are currently used for spatial normalization of PET images. We used computed tomography (CT) images acquired with PET/CT scanner for the spatial normalization for [18F]-N-3-fluoropropyl-2-betacarboxymethoxy-3-beta-(4-iodophenyl) nortropane (FP-CIT) PET images and compared target-to-cerebellar standardized uptake value ratio (SUVR) values with those obtained from MR- or PET-guided spatial normalization method in healthy controls and patients with Parkinson’s disease (PD). Methods We included 71 healthy controls and 56 patients with PD who underwent [18F]-FP-CIT PET scans with a PET/CT scanner and T1-weighted MR scans. Spatial normalization of MR images was done with a conventional spatial normalization tool (cvMR) and with DARTEL toolbox (dtMR) in statistical parametric mapping software. The CT images were modified in two ways, skull-stripping (ssCT) and intensity transformation (itCT). We normalized PET images with cvMR-, dtMR-, ssCT-, itCT-, and PET-guided methods by using specific templates for each modality and measured striatal SUVR with a VOI template. The SUVR values measured with FreeSurfer-generated VOIs (FSVOI) overlaid on original PET images were also used as a gold standard for comparison. Results The SUVR values derived from all four structure-guided spatial normalization methods were highly correlated with those measured with FSVOI (P < 0.0001). Putaminal SUVR values were highly effective for discriminating PD patients from controls. However, the PET-guided method excessively overestimated striatal SUVR values in the PD patients by more than 30% in caudate and putamen, and thereby spoiled the linearity between the striatal SUVR values in all subjects and showed lower disease discrimination ability. Two CT-guided methods showed comparable capability with the MR-guided methods in separating PD patients from controls and showed better correlation between putaminal SUVR values and the parkinsonian motor severity than the PET-guided method. Conclusion CT-guided spatial normalization methods provided reliable striatal SUVR values comparable to those obtained with MR-guided methods. CT-guided methods can be useful for analyzing dopamine transporter PET images when MR images are unavailable. PMID:26147749

  16. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images captured under different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it assumes that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimating the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by the photodiode of a single pixel among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera-lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599

  17. The use of normal forms for analysing nonlinear mechanical vibrations

    PubMed Central

    Neild, Simon A.; Champneys, Alan R.; Wagg, David J.; Hill, Thomas L.; Cammarano, Andrea

    2015-01-01

    A historical introduction is given of the theory of normal forms for simplifying nonlinear dynamical systems close to resonances or bifurcation points. The specific focus is on mechanical vibration problems, described by finite degree-of-freedom second-order-in-time differential equations. A recent variant of the normal form method, that respects the specific structure of such models, is recalled. It is shown how this method can be placed within the context of the general theory of normal forms provided the damping and forcing terms are treated as unfolding parameters. The approach is contrasted to the alternative theory of nonlinear normal modes (NNMs) which is argued to be problematic in the presence of damping. The efficacy of the normal form method is illustrated on a model of the vibration of a taut cable, which is geometrically nonlinear. It is shown how the method is able to accurately predict NNM shapes and their bifurcations. PMID:26303917

  18. Feasibility of Computed Tomography-Guided Methods for Spatial Normalization of Dopamine Transporter Positron Emission Tomography Image.

    PubMed

    Kim, Jin Su; Cho, Hanna; Choi, Jae Yong; Lee, Seung Ha; Ryu, Young Hoon; Lyoo, Chul Hyoung; Lee, Myung Sik

    2015-01-01

    Spatial normalization is a prerequisite step for analyzing positron emission tomography (PET) images both by using volume-of-interest (VOI) template and voxel-based analysis. Magnetic resonance (MR) or ligand-specific PET templates are currently used for spatial normalization of PET images. We used computed tomography (CT) images acquired with PET/CT scanner for the spatial normalization for [18F]-N-3-fluoropropyl-2-betacarboxymethoxy-3-beta-(4-iodophenyl) nortropane (FP-CIT) PET images and compared target-to-cerebellar standardized uptake value ratio (SUVR) values with those obtained from MR- or PET-guided spatial normalization method in healthy controls and patients with Parkinson's disease (PD). We included 71 healthy controls and 56 patients with PD who underwent [18F]-FP-CIT PET scans with a PET/CT scanner and T1-weighted MR scans. Spatial normalization of MR images was done with a conventional spatial normalization tool (cvMR) and with DARTEL toolbox (dtMR) in statistical parametric mapping software. The CT images were modified in two ways, skull-stripping (ssCT) and intensity transformation (itCT). We normalized PET images with cvMR-, dtMR-, ssCT-, itCT-, and PET-guided methods by using specific templates for each modality and measured striatal SUVR with a VOI template. The SUVR values measured with FreeSurfer-generated VOIs (FSVOI) overlaid on original PET images were also used as a gold standard for comparison. The SUVR values derived from all four structure-guided spatial normalization methods were highly correlated with those measured with FSVOI (P < 0.0001). Putaminal SUVR values were highly effective for discriminating PD patients from controls. However, the PET-guided method excessively overestimated striatal SUVR values in the PD patients by more than 30% in caudate and putamen, and thereby spoiled the linearity between the striatal SUVR values in all subjects and showed lower disease discrimination ability. Two CT-guided methods showed comparable capability with the MR-guided methods in separating PD patients from controls and showed better correlation between putaminal SUVR values and the parkinsonian motor severity than the PET-guided method. CT-guided spatial normalization methods provided reliable striatal SUVR values comparable to those obtained with MR-guided methods. CT-guided methods can be useful for analyzing dopamine transporter PET images when MR images are unavailable.

  19. Empirical evaluation of data normalization methods for molecular classification

    PubMed Central

    Huang, Huei-Chung

    2018-01-01

    Background Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers—an increasingly important application of microarrays in the era of personalized medicine. Methods In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. Results In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Conclusion Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy. PMID:29666754

  20. Empirical evaluation of data normalization methods for molecular classification.

    PubMed

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.

  1. Syllable-constituent perception by hearing-aid users: Common factors in quiet and noise

    PubMed Central

    Miller, James D.; Watson, Charles S.; Leek, Marjorie R.; Dubno, Judy R.; Wark, David J.; Souza, Pamela E.; Gordon-Salant, Sandra; Ahlstrom, Jayne B.

    2017-01-01

    The abilities of 59 adult hearing-aid users to hear phonetic details were assessed by measuring their abilities to identify syllable constituents in quiet and in differing levels of noise (12-talker babble) while wearing their aids. The set of sounds consisted of 109 frequently occurring syllable constituents (45 onsets, 28 nuclei, and 36 codas) spoken in varied phonetic contexts by eight talkers. In nominal quiet (a speech-to-noise ratio, SNR, of 40 dB), scores of individual listeners ranged from about 23% to 85% correct. Averaged over the range of SNRs commonly encountered in noisy situations, scores of individual listeners ranged from about 10% to 71% correct. The scores in quiet and in noise were very strongly correlated, R = 0.96. This high correlation implies that common factors play primary roles in the perception of phonetic details in quiet and in noise. In other words, hearing-aid users' problems perceiving phonetic details in noise appear to be tied to their problems perceiving phonetic details in quiet and vice versa. PMID:28464618

  2. Measuring the economic costs of discrimination experienced by people with mental health problems: development of the Costs of Discrimination Assessment (CODA).

    PubMed

    Wright, Steve; Henderson, Claire; Thornicroft, Graham; Sharac, Jessica; McCrone, Paul

    2015-05-01

    Stigma and discrimination are faced by many people with mental health problems, and this may affect the uptake of services and engagement in leisure and recreational activities. The aims of this study were to develop a schedule to measure the impact of stigma and discrimination on service use, employment, and leisure activities, and to estimate the economic value of the resulting reductions. A questionnaire, the Costs of Discrimination Assessment, was developed and piloted in a sample of people with mental health problems. Costs were calculated and test-retest reliability assessed. Test-retest reliability was good for most items. A substantial proportion of the sample had experienced negative impacts on employment as a result of stigma and discrimination. Around one-fifth had reduced contacts with general practitioners in the previous 6 months due to stigma and discrimination, and the leisure activity most affected was visiting pubs/restaurants/cafés. In conclusion, stigma and discrimination result in reduced use of services and reduced engagement in leisure activities. This represents a welfare loss to individuals.

  3. Improved Microseismicity Detection During Newberry EGS Stimulations

    DOE Data Explorer

    Templeton, Dennise

    2013-10-01

    Effective enhanced geothermal systems (EGS) require optimal fracture networks for efficient heat transfer between hot rock and fluid. Microseismic mapping is a key tool used to infer the subsurface fracture geometry. Traditional earthquake detection and location techniques are often employed to identify microearthquakes in geothermal regions. However, most commonly used algorithms may miss events if the seismic signal of an earthquake is small relative to the background noise level or if a microearthquake occurs within the coda of a larger event. Consequently, we have developed a set of algorithms that provide improved microearthquake detection. Our objective is to investigate the microseismicity at the DOE Newberry EGS site to better image the active regions of the underground fracture network during and immediately after the EGS stimulation. Detection of more microearthquakes during EGS stimulations will allow for better seismic delineation of the active regions of the underground fracture system. This improved knowledge of the reservoir network will improve our understanding of subsurface conditions, and allow improvement of the stimulation strategy that will optimize heat extraction and maximize economic return.

  4. Improved Microseismicity Detection During Newberry EGS Stimulations

    DOE Data Explorer

    Templeton, Dennise

    2013-11-01

    Effective enhanced geothermal systems (EGS) require optimal fracture networks for efficient heat transfer between hot rock and fluid. Microseismic mapping is a key tool used to infer the subsurface fracture geometry. Traditional earthquake detection and location techniques are often employed to identify microearthquakes in geothermal regions. However, most commonly used algorithms may miss events if the seismic signal of an earthquake is small relative to the background noise level or if a microearthquake occurs within the coda of a larger event. Consequently, we have developed a set of algorithms that provide improved microearthquake detection. Our objective is to investigate the microseismicity at the DOE Newberry EGS site to better image the active regions of the underground fracture network during and immediately after the EGS stimulation. Detection of more microearthquakes during EGS stimulations will allow for better seismic delineation of the active regions of the underground fracture system. This improved knowledge of the reservoir network will improve our understanding of subsurface conditions, and allow improvement of the stimulation strategy that will optimize heat extraction and maximize economic return.

  5. The trigger mechanism of low-frequency earthquakes on Montserrat

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.; Tuffen, H.; Collier, L.; Green, D.; Powell, T.; Dingwell, D.

    2006-05-01

    A careful analysis of low-frequency seismic events on Soufrière Hills volcano, Montserrat, points to a source mechanism that is non-destructive, repetitive, and has a stationary source location. By combining these seismological clues with new field evidence and numerical magma flow modelling, we propose a seismic trigger model based on brittle failure of magma at the glass transition. Loss of heat and gas from the magma results in a strong viscosity gradient across a dyke or conduit. This leads to a build-up of shear stress near the conduit wall where magma can rupture in a brittle manner, as field evidence from a rhyolitic dyke demonstrates. This brittle failure provides seismic energy, the majority of which is trapped in the conduit or dyke, forming the low-frequency coda of the observed seismic signal. The trigger source location marks the transition from ductile conduit flow to friction-controlled magma ascent. As the trigger mechanism is governed by the depth-dependent magma parameters, the source location remains fixed at a depth where the conditions allow brittle failure. This is reflected in the fixed seismic source locations.

  6. Using the Seismic Amplitude Decay of Low-Frequency Events to Constrain Magma Properties.

    NASA Astrophysics Data System (ADS)

    Smith, P. J.; Neuberg, J. W.

    2007-12-01

    Low-frequency events are considered a key part of volcanic monitoring, as they are one of the few tools available that can link surface observations directly to internal volcanic processes and properties. Our model for their generation on the Soufrière Hills Volcano, Montserrat, is brittle fracturing of the magma at the conduit walls, providing the seismic trigger mechanism, followed by conduit resonance. The attenuation of seismic waves in a viscous magma is highly dependent on the properties of the attenuating material, in particular the viscous friction, controlled by the melt viscosity, gas content and diffusivity. Therefore we can use the seismicity to gain information on these magma properties. This research uses a two-dimensional viscoelastic finite-difference model, with the attenuative behaviour of the magma parameterised by an array of Standard Linear Solids. By examining the relationship between the amplitude decay of the synthetic low-frequency events, the intrinsic attenuation and the elastic parameter contrast, this research aims to link observables such as amplitude decay of the coda directly to properties such as the magma viscosity.
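
    For readers unfamiliar with the parameterisation: an array of Standard Linear Solids approximates a target Q over a frequency band, each mechanism contributing one Debye peak in 1/Q. A small sketch of the resulting quality factor, assuming the tau-formulation commonly used in viscoelastic finite-difference codes (the relaxation times and tau level below are illustrative choices, not values from the study):

```python
import numpy as np

def q_of_f(freqs, tau_sigma, tau):
    """Quality factor of a generalized Standard-Linear-Solid (Zener) medium.

    freqs     : frequencies in Hz
    tau_sigma : stress relaxation times of the L mechanisms (s)
    tau       : tau_epsilon/tau_sigma - 1 per mechanism (sets the 1/Q level)
    """
    w = 2 * np.pi * np.asarray(freqs)[:, None]
    ts = np.asarray(tau_sigma)[None, :]
    m = 1 + np.sum(1j * w * ts * np.asarray(tau)[None, :] / (1 + 1j * w * ts), axis=1)
    return m.real / m.imag          # Q(omega) = Re(M) / Im(M)

# three mechanisms spread across 0.2-5 Hz give a roughly constant Q (~30)
print(q_of_f([0.2, 1.0, 5.0], tau_sigma=[1.6, 0.16, 0.016], tau=[0.05, 0.05, 0.05]))
```

    Spacing the relaxation times logarithmically across the band of interest and tuning tau is how such codes mimic frequency-independent attenuation; the melt viscosity and gas content enter through the Q level that the fit targets.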

  7. Looking Back to Move Ahead: Interprofessional Education in Dental Education.

    PubMed

    Hamil, Lindsey M

    2017-08-01

    Interprofessional education (IPE) is a widely recognized and critical component of dental and health professions education and is included in two of the predoctoral education standards required by the Commission on Dental Accreditation (CODA). Following a review of the literature on the state of IPE in U.S. dental education programs, this article revisits six institutions identified in previous research as exemplars successfully implementing IPE on their campuses. Interviews were conducted with leaders at the following programs: Columbia University, Medical University of South Carolina, University of Colorado Anschutz Medical Campus, University of Florida, University of Minnesota, and Western University of Health Sciences. Strengths and weaknesses of IPE in dental education are discussed, along with opportunities for the future, including reducing barriers to scheduling, increasing intraprofessional education, and consistent outcomes assessment. The article concludes with lessons learned by administrators and suggestions for improving incorporation of these requirements into predoctoral dental education programs by emphasizing the importance of IPE and dentistry's role in overall health. This article was written as part of the project "Advancing Dental Education in the 21st Century."

  8. Effects of horizontal plyometric training volume on soccer players' performance.

    PubMed

    Yanci, Javier; Los Arcos, Asier; Camara, Jesús; Castillo, Daniel; García, Alberto; Castagna, Carlo

    2016-01-01

    The aim of this study was to examine the dose-response effect of strength and conditioning programmes, involving horizontally oriented plyometric exercises, on relevant soccer performance variables. Sixteen soccer players were randomly allocated to two 6-week plyometric training groups (G1 and G2) differing in imposed (twice a week) training volume. Post-training, G1 (4.13%; d = 0.43) and G2 (2.45%; d = 0.53) moderately improved their horizontal countermovement jump performance. Significant between-group differences (p < 0.01) in the vertical countermovement jump for force production time (T2) were detected post-training. No significant and practical (p > 0.05, d = trivial or small) post-training improvements in sprint, change of direction ability (CODA) and horizontal arm swing countermovement jump were reported in either group. Horizontal plyometric training was effective in promoting improvement in injury prevention variables. Doubling the volume of a horizontal plyometric training protocol was shown to have no additional effect on functional aspects of soccer players' performance.

  9. Comparing Effects of Cluster-Coupled Patterns on Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Si, Xia-Meng; Zhang, Yan-Chao

    2012-07-01

    Community structure is another important feature of complex networks besides the small-world and scale-free properties. Communities can be coupled through specific fixed links between nodes, or through occasional encounter behavior. We introduce a model for opinion evolution with multiple cluster-coupled patterns, in which the interconnectivity denotes the degree of community coupling by fixed links, and the encounter frequency controls the degree of community coupling by encounter behaviors. Considering the complicated cognitive system of people, the CODA (continuous opinions and discrete actions) update rules are used to mimic how people update their decisions after interacting with someone. It is shown that large interconnectivity and encounter frequency both promote consensus, reduce competition between communities and allow an opinion to propagate successfully across the whole population. Encounter frequency is better than interconnectivity at facilitating consensus of decisions. When the degree of social cohesion is the same, small interconnectivity is more effective at lessening the competition between communities than small encounter frequency, while large encounter frequency produces a greater degree of agreement across the whole population than large interconnectivity can.
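
    The CODA rule itself is compact: each agent holds a continuous probability that one choice is better, while neighbors observe only the discrete action it implies. A minimal sketch of the Bayesian update in the spirit of that rule (the cluster-coupling and encounter machinery of the paper is not reproduced, and the likelihood a = 0.7 is an illustrative assumption):

```python
import numpy as np

def coda_update(p, neighbor_shows_A, a=0.7):
    """Bayesian CODA update of the continuous opinion p = P(A is the better
    choice) after observing a neighbor's discrete action; a > 0.5 is the
    assumed probability that an agent displays A when A really is better."""
    if neighbor_shows_A:
        return p * a / (p * a + (1 - p) * (1 - a))
    return p * (1 - a) / (p * (1 - a) + (1 - p) * a)

def action(p):
    """The discrete action others can see: display A iff p favours A."""
    return p > 0.5

# one encounter: agent i observes agent j's action and updates
rng = np.random.default_rng(0)
p = rng.uniform(size=50)               # continuous opinions of 50 agents
i, j = rng.integers(0, 50, size=2)
p[i] = coda_update(p[i], action(p[j]))
```

    Because only the sign of p - 0.5 is visible to others, repeated reinforcement tends to drive opinions toward extreme values, which is why the inter-community links and encounter behaviors studied in the paper matter so much for consensus.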

  10. Chapter 3 – Phenomenology of Tsunamis: Statistical Properties from Generation to Runup

    USGS Publications Warehouse

    Geist, Eric L.

    2015-01-01

    Observations related to tsunami generation, propagation, and runup are reviewed and described in a phenomenological framework. In the three coastal regimes considered (near-field broadside, near-field oblique, and far field), the observed maximum wave amplitude is associated with different parts of the tsunami wavefield. The maximum amplitude in the near-field broadside regime is most often associated with the direct arrival from the source, whereas in the near-field oblique regime, the maximum amplitude is most often associated with the propagation of edge waves. In the far field, the maximum amplitude is most often caused by the interaction of the tsunami coda that develops during basin-wide propagation and the nearshore response, including the excitation of edge waves, shelf modes, and resonance. Statistical distributions that describe tsunami observations are also reviewed, both in terms of spatial distributions, such as coseismic slip on the fault plane and near-field runup, and temporal distributions, such as wave amplitudes in the far field. In each case, fundamental theories of tsunami physics are heuristically used to explain the observations.

  11. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging

    PubMed Central

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2017-01-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters, from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each line of response (LOR) based on the measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time. PMID:29270539
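
    The per-LOR least-squares step can be pictured as a one-parameter fit repeated for every line of response across the rotation views. A sketch under the assumed model measured ≈ n × expected + scattered (the factorization into separate scanner and collimator terms is not reproduced here):

```python
import numpy as np

def lor_norm_coeffs(measured, expected, scattered):
    """Per-LOR normalization coefficients by least squares over repeated views.

    measured, expected, scattered : arrays of shape (n_views, n_lors).
    Assumed model: measured ~= n * expected + scattered, solved independently
    for each LOR's coefficient n.
    """
    y = measured - scattered                     # remove the modeled scatter
    num = np.sum(expected * y, axis=0)
    den = np.sum(expected ** 2, axis=0)
    # LORs never illuminated keep a coefficient of 1
    return np.divide(num, den, out=np.ones_like(num, dtype=float), where=den > 0)
```

    Dividing the measured sinograms by these coefficients flattens detector- and collimator-induced sensitivity variations before reconstruction.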

  12. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters, from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each line of response (LOR) based on the measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time.

  13. Comparison of normalization methods for the analysis of metagenomic gene abundance data.

    PubMed

    Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik

    2018-04-20

    In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
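
    Of the recommended methods, RLE (the DESeq-style median-of-ratios) is the most compact to write down; a sketch for a genes × samples count matrix (TMM differs by trimming extreme log-fold-changes and intensities before averaging against a reference sample):

```python
import numpy as np

def rle_size_factors(counts):
    """RLE (median-of-ratios) size factors for a (genes, samples) count matrix.
    Genes with a zero count in any sample are excluded from the reference."""
    counts = np.asarray(counts, dtype=float)
    ok = np.all(counts > 0, axis=1)               # genes observed everywhere
    logc = np.log(counts[ok])
    ref = logc.mean(axis=1, keepdims=True)        # log geometric mean per gene
    return np.exp(np.median(logc - ref, axis=0))  # one factor per sample
```

    Dividing each sample's counts by its factor (counts / rle_size_factors(counts)) puts all samples on a common scale before testing for DAGs.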

  14. Statistical Traffic Anomaly Detection in Time-Varying Communication Networks

    DTIC Science & Technology

    2015-02-01

    methods perform better than their vanilla counterparts, which assume that normal traffic is stationary. Statistical Traffic Anomaly Detection in Time...our methods perform better than their vanilla counterparts, which assume that normal traffic is stationary. Index Terms—Statistical anomaly detection...anomaly detection but also for understanding the normal traffic in time-varying networks. C. Comparison with vanilla stochastic methods For both types

  15. Statistical Traffic Anomaly Detection in Time Varying Communication Networks

    DTIC Science & Technology

    2015-02-01

    methods perform better than their vanilla counterparts, which assume that normal traffic is stationary. Statistical Traffic Anomaly Detection in Time...our methods perform better than their vanilla counterparts, which assume that normal traffic is stationary. Index Terms—Statistical anomaly detection...anomaly detection but also for understanding the normal traffic in time-varying networks. C. Comparison with vanilla stochastic methods For both types

  16. A new EEG synchronization strength analysis method: S-estimator based normalized weighted-permutation mutual information.

    PubMed

    Cui, Dong; Pu, Weiting; Liu, Jing; Bian, Zhijie; Li, Qiuli; Wang, Lei; Gu, Guanghua

    2016-10-01

    Synchronization is an important mechanism for understanding information processing in normal or abnormal brains. In this paper, we propose a new method called normalized weighted-permutation mutual information (NWPMI) for two-variable signal synchronization analysis and combine NWPMI with the S-estimator measure to generate a new method named S-estimator based normalized weighted-permutation mutual information (SNWPMI) for analyzing multi-channel electroencephalographic (EEG) synchronization strength. The performance of the NWPMI, including the effects of time delay, embedding dimension, coupling coefficients, signal-to-noise ratios (SNRs) and data length, is evaluated using a coupled Hénon mapping model. The results show that the NWPMI is superior in describing synchronization compared with the normalized permutation mutual information (NPMI). Furthermore, the proposed SNWPMI method is applied to analyze scalp EEG data from 26 amnestic mild cognitive impairment (aMCI) subjects and 20 age-matched controls with normal cognitive function, all of whom suffer from type 2 diabetes mellitus (T2DM). The proposed methods NWPMI and SNWPMI are suggested to be effective indices for estimating synchronization strength.
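
    The weighting and S-estimator layers are specific to the paper, but the core quantity, permutation mutual information normalized to [0, 1], can be sketched directly (embedding dimension m = 3 and delay tau = 1 are illustrative defaults):

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_patterns(x, m=3, tau=1):
    """Map a signal onto ordinal-pattern indices (embedding m, delay tau)."""
    idx = {p: k for k, p in enumerate(permutations(range(m)))}
    n = len(x) - (m - 1) * tau
    return np.array([idx[tuple(np.argsort(x[i:i + m * tau:tau]))] for i in range(n)])

def permutation_mi(x, y, m=3, tau=1):
    """Mutual information of the ordinal-pattern sequences of x and y,
    normalized by log(m!) so the result lies in [0, 1]."""
    sx, sy = ordinal_patterns(x, m, tau), ordinal_patterns(y, m, tau)
    k = factorial(m)
    pxy, _, _ = np.histogram2d(sx, sy, bins=k, range=[[0, k], [0, k]])
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])) / log(k))
```

    The weighted variant additionally weights each pattern by the variance of its embedding vector, so that large-amplitude structures dominate the symbol statistics.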

  17. Laser-induced differential normalized fluorescence method for cancer diagnosis

    DOEpatents

    Vo-Dinh, Tuan; Panjehpour, Masoud; Overholt, Bergein F.

    1996-01-01

    An apparatus and method for cancer diagnosis are disclosed. The diagnostic method includes the steps of irradiating a tissue sample with monochromatic excitation light, producing a laser-induced fluorescence spectrum from emission radiation generated by interaction of the excitation light with the tissue sample, and dividing the intensity at each wavelength of the laser-induced fluorescence spectrum by the integrated area under the laser-induced fluorescence spectrum to produce a normalized spectrum. A mathematical difference between the normalized spectrum and an average value of a reference set of normalized spectra which correspond to normal tissues is calculated, which provides for amplifying small changes in weak signals from malignant tissues for improved analysis. The calculated differential normalized spectrum is correlated to a specific condition of a tissue sample.
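
    The normalization and differencing steps read almost like pseudocode already; a direct sketch (array names are placeholders):

```python
import numpy as np

def differential_normalized_spectrum(intensity, wavelengths, reference_set):
    """Area-normalize a laser-induced fluorescence spectrum, then subtract the
    mean of a set of area-normalized spectra from normal tissue, amplifying
    small differences carried by weak signals."""
    normalized = intensity / np.trapz(intensity, wavelengths)
    ref = np.array([s / np.trapz(s, wavelengths) for s in reference_set])
    return normalized - ref.mean(axis=0)
```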

  18. Laser-induced differential normalized fluorescence method for cancer diagnosis

    DOEpatents

    Vo-Dinh, T.; Panjehpour, M.; Overholt, B.F.

    1996-12-03

    An apparatus and method for cancer diagnosis are disclosed. The diagnostic method includes the steps of irradiating a tissue sample with monochromatic excitation light, producing a laser-induced fluorescence spectrum from emission radiation generated by interaction of the excitation light with the tissue sample, and dividing the intensity at each wavelength of the laser-induced fluorescence spectrum by the integrated area under the laser-induced fluorescence spectrum to produce a normalized spectrum. A mathematical difference between the normalized spectrum and an average value of a reference set of normalized spectra which correspond to normal tissues is calculated, which provides for amplifying small changes in weak signals from malignant tissues for improved analysis. The calculated differential normalized spectrum is correlated to a specific condition of a tissue sample. 5 figs.

  19. First LOCSMITH locations of deep moonquakes

    NASA Astrophysics Data System (ADS)

    Hempel, S.; Knapmeyer, M.; Sens-Schönfelder, C.; Oberst, J.

    2008-09-01

    Introduction: Several thousand seismic events were recorded by the Apollo seismic network from 1969 to 1977. Different types of events can be distinguished: meteoroid impacts, thermal quakes and internally caused moonquakes. The latter subdivide into shallow (100 to 300 km) and deep moonquakes (700 to 1100 km), which are by far the most common events. The deep quakes would pose no immediate danger to inhabited stations on the Earth's Moon because of their relatively low magnitude and great depth. However, they bear important information on lunar structure and evolution, and their distribution probably reflects their source mechanism. In this study, we reinvestigate location patterns of deep lunar quakes.

    LOCSMITH: The core of this study is a new location method (LOCSMITH, [1]). This algorithm uses time intervals rather than time instants as input, chosen to contain the relevant arrival with probability 1. LOCSMITH models and compares theoretical and actual travel times on a global scale and uses an adaptive grid to search for source locations compatible with all observations. The output is the set of all possible hypocenters for the considered cluster, i.e. region of repeating, tidally triggered moonquake activity. The shape and size of these sets give a better estimate of the location uncertainty than the formal standard deviations returned by classical methods, and are used to grade deep moonquake clusters according to the currently available data quality.

    Classification of deep moonquakes: As a first step, we establish a reciprocal dependence of the size and shape of LOCSMITH location clouds on the number of arrivals. Four different shapes are recognized, listed here in order of decreasing spatial resolution: 1. "Balls", well-defined and relatively small sets resembling the commonly assumed error ellipsoid; these are found in the best cases with many observations, e.g. clusters 1, 18 or 33, which were already well located by earlier works [2,3]. 2. "Bananas", as found for clusters 5, 39 or 53 [Fig. 1]; here only limited depth resolution is available and the solution spreads over a large volume. The size of a "banana" could be reduced either by finding a not yet discovered shear-wave arrival or by estimating an S-arrival time interval from the coda instead of a clear S arrival. 3. "Cones" are formed by clusters for which no compressional-wave arrivals but three S arrivals were picked, as for clusters 35, 201 or 218 [Fig. 2]; a depth limit is given only by the surface of the Moon's far side. In previous works, these clusters were usually located with a fixed depth, thus neglecting all depth uncertainty [2]. 4. The fourth and worst class shows a "disc"-like shape with no depth resolution and almost no latitude resolution; clusters of this class, like 4, 23 or 43, have not been located so far. From class 1 ("ball") to class 4 ("disc") the number of possible hypocenters increases, so the size and shape of the solution volumes are correlated.

    Aim: We classified all clusters according to this scheme, using the arrival times of [2] with an estimated error of ±10 s as input for LOCSMITH. We reprocess selected clusters of each class to meet the special requirements and exploit the possibilities of this new location method. As noted above, LOCSMITH requires a time interval rather than a time instant as input, and an interesting option, lacking a clear S arrival, is to use an S-arrival time interval estimated from the coda and a scattering model. We try to find fully automated methods for each processing step, dependent on the quality of the data.

    Methods: For despiking we merged the methods of [4] and [5] and achieve very good results even in the worst cases, as presented in [6]. Prior to stacking, we developed a multiparameter correlation algorithm to calculate the optimum time shift.

    Results: We present relocations of selected deep moonquakes in the context of data availability and quality. Previous locations are often contained in our location clouds, but realistic location uncertainties allow large deviations from the best-fitting solutions, including locations on the far side of the Moon.

    Perspective: By developing new methods for data processing and using the LOCSMITH location algorithm, we hope to reduce the location uncertainty sufficiently to make sure that all sources are on the near side, or to prove a far-side origin for some of them. This would answer questions about the hemispheric symmetry of lunar deep seismicity and the Moon's internal structure.

    References: [1] Knapmeyer (2008), accepted to GJI. [2] Nakamura (2005) JGR, 110, E01001. [3] Lognonné (2003) EPSL, 211, 27-44. [4] Bulow (2005) JGR, 110, E10003. [5] Sonnemann (2005) EGU05-A-07960. [6] Hempel, Knapmeyer, Oberst (2008) EGU2008-A-07989.
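
    The defining trick of LOCSMITH, arrival-time intervals instead of instants, reduces to a feasibility test per candidate hypocenter: some common origin time must place every predicted arrival inside its picked window. A schematic grid-search version (the adaptive grid refinement and the lunar travel-time model are left as assumptions of the caller):

```python
import numpy as np

def feasible(candidates, stations, intervals, travel_time):
    """Keep candidate hypocenters whose predicted arrivals can fall inside every
    picked arrival-time window for some common origin time.

    candidates : (n, 3) trial hypocenters; stations : (m, 3) receiver positions
    intervals  : (m, 2) array of [earliest, latest] picked arrival times
    travel_time: callable(candidate, station) -> predicted travel time (model assumed)
    """
    keep = []
    for c in candidates:
        tt = np.array([travel_time(c, s) for s in stations])
        t0_lo = np.max(intervals[:, 0] - tt)   # earliest admissible origin time
        t0_hi = np.min(intervals[:, 1] - tt)   # latest admissible origin time
        if t0_lo <= t0_hi:                     # a common origin time exists
            keep.append(c)
    return np.array(keep)
```

    The "balls", "bananas", "cones" and "discs" of the classification are then simply the shapes of the returned point sets as the number and type of constraining arrivals varies.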

  20. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure by using the time-evolving particle velocity as the input, which provides a non-contact way to gain an overall picture of the instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure; the integrals of the equivalent source strengths are then solved by an iterative process and used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method: the time-evolving normal particle velocity and pressure on the hologram surface, measured by a Microflown pressure-velocity probe, are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains, and that it obtains more accurate results than the method based on pressure measurements.
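
    The interpolated time-domain scheme and its iterative solver are specific to the paper, but the underlying equivalent-source step can be sketched in the frequency domain: fit source strengths to the hologram-plane velocities, then propagate to the surface (the transfer matrices G_holo and G_surf are assumed precomputed from, e.g., monopole Green's functions):

```python
import numpy as np

def esm_surface_velocity(G_holo, v_holo, G_surf, lam=1e-6):
    """Equivalent source method, frequency-domain sketch.

    G_holo : (m_holo, n_src) complex transfer matrix, sources -> hologram velocities
    v_holo : (m_holo,) measured normal particle velocity on the hologram surface
    G_surf : (m_surf, n_src) transfer matrix, sources -> surface normal velocity
    lam    : Tikhonov regularization weight (illustrative value)
    """
    # regularized least squares for the equivalent source strengths q
    A = G_holo.conj().T @ G_holo + lam * np.eye(G_holo.shape[1])
    q = np.linalg.solve(A, G_holo.conj().T @ v_holo)
    return G_surf @ q                  # reconstructed surface normal velocity
```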

  1. Mojibake – The rehearsal of word fragments in verbal recall

    PubMed Central

    Lange-Küttner, Christiane; Sykorova, Eva

    2015-01-01

    Theories of verbal rehearsal usually assume that whole words are being rehearsed. However, words consist of letter sequences, or syllables, or word onset-vowel-coda, amongst many other conceptualizations of word structure. A more general term is the ‘grain size’ of word units (Ziegler and Goswami, 2005). In the current study, a new method measured the quantitative percentage of correctly remembered word structure: the number of letters in the correct letter sequence as a percentage of word length was calculated, disregarding missing or added letters. A forced rehearsal was tested by repeating each memory list four times. We tested low-frequency (LF) English words versus geographical (UK) town names to control for content. We also tested unfamiliar international (INT) non-words and names of international (INT) European towns to control for familiarity. Immediate versus distributed repetition was tested with a between-subject design. Participants responded with word fragments in their written recall, especially when they had to remember unfamiliar words. While memory of whole words was sensitive to content, presentation distribution, and individual sex and language differences, recall of word fragments was not. There was no trade-off between memory of word fragments and whole-word recall during the repetition; instead, word fragments also increased significantly. Moreover, while whole-word responses correlated with each other during repetition, and word-fragment responses correlated with each other during repetition, these two types of word recall responses were not correlated with each other. Thus, there may be a lower layer consisting of free, sparse word fragments and an upper layer that consists of language-specific, orthographically and semantically constrained words. PMID:25941500
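
    One plausible implementation of the fragment score, the per cent of the target's letters recovered in the correct order while disregarding missing or added letters, is a longest-common-subsequence ratio (the authors' exact scoring rules may differ):

```python
def fragment_score(response, target):
    """Per cent of the target word recovered as letters in the correct order
    (longest common subsequence), ignoring added or missing letters."""
    m, n = len(response), len(target)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if response[i] == target[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    return 100.0 * dp[m][n] / n

print(fragment_score("mojbake", "mojibake"))  # 87.5: 7 of 8 letters in order
```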

  2. Relative seismic velocity variations correlate with deformation at Kīlauea volcano

    PubMed Central

    Donaldson, Clare; Caudron, Corentin; Green, Robert G.; Thelen, Weston A.; White, Robert S.

    2017-01-01

    Seismic noise interferometry allows the continuous and real-time measurement of relative seismic velocity through a volcanic edifice. Because seismic velocity is sensitive to the pressurization state of the system, this method is an exciting new monitoring tool at active volcanoes. Despite the potential of this tool, no studies have yet comprehensively compared velocity to other geophysical observables on a short-term time scale at a volcano over a significant length of time. We use volcanic tremor (~0.3 to 1.0 Hz) at Kīlauea as a passive source for interferometry to measure relative velocity changes with time. By cross-correlating the vertical component of day-long seismic records between ~230 station pairs, we extract coherent and temporally consistent coda wave signals with time lags of up to 120 s. Our resulting time series of relative velocity shows a remarkable correlation between relative velocity and the radial tilt record measured at Kīlauea summit, consistently correlating on a time scale of days to weeks for almost the entire study period (June 2011 to November 2015). As the summit continually deforms in deflation-inflation events, the velocity decreases and increases, respectively. Modeling of strain at Kīlauea suggests that, during inflation of the shallow magma reservoir (1 to 2 km below the surface), most of the edifice is dominated by compression—hence closing cracks and producing faster velocities—and vice versa. The excellent correlation between relative velocity and deformation in this study provides an opportunity to understand better the mechanisms causing seismic velocity changes at volcanoes, and therefore realize the potential of passive interferometry as a monitoring tool. PMID:28782009
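
    A standard way to turn such coda correlations into a relative-velocity time series is the stretching method; the study's exact measurement scheme may differ, but a sketch conveys the idea (the ±1% search range is an illustrative choice):

```python
import numpy as np

def stretching_dvv(ref, cur, dt, trials=np.linspace(-0.01, 0.01, 201)):
    """Estimate dv/v by the stretching method: resample the reference coda at
    t * (1 + eps) and take the eps maximizing the correlation with the current
    coda as dv/v (a homogeneous change gives dt/t = -dv/v)."""
    t = np.arange(len(ref)) * dt
    cc = []
    for eps in trials:
        stretched = np.interp(t * (1 + eps), t, ref)  # reference at perturbed times
        cc.append(np.corrcoef(stretched, cur)[0, 1])
    best = int(np.argmax(cc))
    return trials[best], cc[best]
```

    Repeating this for each day's correlation function yields the dv/v time series that is compared against the tilt record.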

  3. Another Look at Strong Ground Motion Accelerations and Stress Drop

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Prieto, G.; Ide, S.; Hanks, T. C.; Beroza, G. C.

    2010-12-01

    The relationship between earthquake stress drop and ground motion acceleration is central to seismic hazard analysis. We revisit measurements of root-mean-square (RMS) acceleration, a_rms, using KiK-net accelerometer data from Japan. We directly measure RMS and peak acceleration, and estimate both apparent stress and corner frequencies using the empirical Green's function (eGf) coda method of Baltay et al. [2010]. We predict a_rms from corner frequency and stress drop following McGuire and Hanks [1980] to compare with measurements. The theoretical relationship does a good job of predicting observed a_rms. We use four earthquake sequences in Japan to investigate the source parameters and accelerations: the 2008 Iwate-Miyagi earthquake; the off-Kamaishi repeating sequence; and the 2004 and 2007 Niigata events. In each data set, we choose events that are nearly co-located so that the path term to any station is constant. Small events are used as empirical Green's functions to correct for propagation effects. For all sequences, we find that the apparent stress averages ~1 MPa for most events. Corner frequencies are consistent with M0^(-1/3) scaling. We find the ratio of stress drop to apparent stress to be 5, consistent with the theoretical derivation of Singh and Ordaz [1994] using a Brune [1970] spectrum. a_rms is theoretically proportional to stress drop and to the inverse square root of the corner frequency. We show that this calculation can be used as a proxy for a_rms observations from strong motion records, using recent data from the four earthquake sequences mentioned above. Even for the Iwate-Miyagi mainshock, which produced over 4 g of acceleration, we find that apparent stress, stress drop and corner frequency follow expected scaling laws and support self-similarity.

  4. Moment tensor inversion of ground motion from mining-induced earthquakes, Trail Mountain, Utah

    USGS Publications Warehouse

    Fletcher, Joe B.; McGarr, A.

    2005-01-01

    A seismic network was operated in the vicinity of the Trail Mountain mine, central Utah, from the summer of 2000 to the spring of 2001 to investigate the seismic hazard to a local dam from mining-induced events that we expect to be triggered by future coal mining in this area. In support of efforts to develop ground-motion prediction relations for this situation, we inverted ground-motion recordings for six mining-induced events to determine seismic moment tensors and then to estimate moment magnitudes M for comparison with the network coda magnitudes Mc. Six components of the tensor were determined, for an assumed point source, following the inversion method of McGarr (1992a), which uses key measurements of amplitude from obvious features of the displacement waveforms. When the resulting moment tensors were decomposed into implosive and deviatoric components, we found that four of the six events showed a substantial volume reduction, presumably due to coseismic closure of the adjacent mine openings. For these four events, the volume reduction ranges from 27% to 55% of the shear component (fault area times average slip). Radiated seismic energy, computed from attenuation-corrected body-wave spectra, ranged from 2.4 × 10^5 to 2.4 × 10^6 J for events with M from 1.3 to 1.8, yielding apparent stresses from 0.02 to 0.06 MPa. The energy released for each event, approximated as the product of volume reduction and overburden stress, when compared with the corresponding seismic energies, revealed seismic efficiencies ranging from 0.5% to 7%. The low apparent stresses are consistent with the shallow focal depths of 0.2 to 0.6 km and rupture in a low-stress/low-strength regime compared with typical earthquake source regions at midcrustal depths.
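
    The decomposition behind the quoted implosive percentages is elementary; a sketch under one common convention (McGarr's own amplitude measurements and inversion are not reproduced):

```python
import numpy as np

def decompose(m):
    """Split a symmetric 3x3 moment tensor into isotropic and deviatoric parts
    and return the implosive/deviatoric moment ratio; a negative isotropic
    part flags net coseismic volume reduction."""
    m = np.asarray(m, dtype=float)
    m_iso = np.trace(m) / 3.0
    dev = m - m_iso * np.eye(3)
    # scalar deviatoric moment from the largest-magnitude eigenvalue
    m_dev = np.max(np.abs(np.linalg.eigvalsh(dev)))
    return m_iso, dev, m_iso / m_dev
```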

  5. Quantitative computed tomography features and clinical manifestations associated with the extent of bronchiectasis in patients with moderate-to-severe COPD

    PubMed Central

    Bak, So Hyeon; Kim, Soohyun; Hong, Yoonki; Heo, Jeongwon; Lim, Myoung-Nam; Kim, Woo Jin

    2018-01-01

    Background Few studies have investigated the quantitative computed tomography (CT) features associated with the severity of bronchiectasis in COPD patients. The purpose of this study was to identify the quantitative CT features and clinical values to determine the extent of bronchiectasis in moderate-to-severe COPD patients. Methods A total of 127 moderate-to-severe COPD patients were selected from the cohort of COPD in Dusty Areas (CODA). The study subjects were classified into three groups according to the extent of bronchiectasis on CT: no bronchiectasis, mild bronchiectasis, and moderate-to-severe bronchiectasis. The three groups were compared with respect to demographic data, symptoms, medical history, serum inflammatory markers, pulmonary function, and quantitative CT values. Results Among 127 moderate-to-severe COPD subjects, 73 patients (57.5%) were detected to have bronchiectasis, 51 patients (40.2%) to have mild bronchiectasis, and 22 patients (17.3%) to have moderate-to-severe bronchiectasis. Compared with COPD patients without bronchiectasis, those with bronchiectasis were older and had higher frequency of prior tuberculosis, lower prevalence of bronchodilator reversibility (BDR), and more severe air trapping (P < 0.05). Moderate-to-severe bronchiectasis patients had lower body mass index (BMI), higher frequency of prior tuberculosis, lower prevalence of BDR, worse pulmonary function, and more severe air trapping (P < 0.05) than those in the mild bronchiectasis group. Conclusion Moderate-to-severe bronchiectasis was associated with a history of pulmonary tuberculosis, lower BMI, severe airflow obstruction, and lower BDR in moderate-to-severe COPD patients. Quantitative analysis of CT showed that severe air trapping was associated with the extent of bronchiectasis in these patients. PMID:29750028

  6. Systematic detection of seismic events at Mount St. Helens with an ultra-dense array

    NASA Astrophysics Data System (ADS)

    Meng, X.; Hartog, J. R.; Schmandt, B.; Hotovec-Ellis, A. J.; Hansen, S. M.; Vidale, J. E.; Vanderplas, J.

    2016-12-01

    During the summer of 2014, an ultra-dense array of 900 geophones was deployed around the crater of Mount St. Helens and continuously operated for 15 days. This dataset provides us an unprecedented opportunity to systematically detect seismic events around an active volcano and study their underlying mechanisms. We use a waveform-based matched filter technique to detect seismic events from this dataset. Due to the large volume of continuous data (~1 TB), we performed the detection on the GPU cluster Stampede (https://www.tacc.utexas.edu/systems/stampede). We build a suite of template events from three catalogs: 1) the standard Pacific Northwest Seismic Network (PNSN) catalog (45 events); 2) the catalog from Hansen & Schmandt (2015) obtained with a reverse-time imaging method (212 events); and 3) the catalog identified with a matched filter technique using the PNSN permanent stations (190 events). By searching for template matches in the ultra-dense array, we find 2237 events. We then calibrate precise relative magnitudes for template and detected events, using a principal component fit to measure waveform amplitude ratios. The magnitude of completeness and b-value of the detected catalog are -0.5 and 1.1, respectively. Our detected catalog shows several intensive swarms, which are likely driven by fluid pressure transients in conduits or slip transients on faults underneath the volcano. We are currently relocating the detected catalog with HypoDD and measuring seismic velocity changes at Mount St. Helens using coda wave interferometry of detected repeating earthquakes. The accurate temporal-spatial migration pattern of seismicity and seismic property changes should shed light on the physical processes beneath Mount St. Helens.

  7. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods like PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distance of the pixel to pixels in the neighborhood graph. Using color matrix transfer with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
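
    As a baseline for what GSNMF refines, plain sparse NMF stain separation in optical density space can be written in a few lines with scikit-learn (parameter names follow recent scikit-learn releases; the graph-Laplacian regularizer that distinguishes GSNMF is not included):

```python
import numpy as np
from sklearn.decomposition import NMF

def stain_separate(rgb, n_stains=2):
    """Sparse-NMF stain separation of an (h, w, 3) uint8 H&E image: convert to
    Beer-Lambert optical density, then factor OD = concentrations x stain basis."""
    h, w, _ = rgb.shape
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1) / 256.0)  # nonnegative OD
    model = NMF(n_components=n_stains, init="nndsvda",
                l1_ratio=0.5, alpha_W=1e-3, max_iter=500)          # sparsity on W
    conc = model.fit_transform(od)        # (pixels, stains) concentrations
    basis = model.components_             # (stains, 3) stain OD colors
    return basis, conc.reshape(h, w, n_stains)
```

    Color normalization then follows by recombining the concentration maps with a target image's stain basis, which is the "color matrix transfer" step the abstract mentions.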

  8. Normalized Legal Drafting and the Query Method.

    ERIC Educational Resources Information Center

    Allen, Layman E.; Engholm, C. Rudy

    1978-01-01

    Normalized legal drafting, a mode of expressing ideas in legal documents so that the syntax that relates the constituent propositions is simplified and standardized, and the query method, a question-asking activity that teaches normalized drafting and provides practice, are examined. Some examples are presented. (JMD)

  9. Stress Drops of Earthquakes on the Subducting Pacific Plate in the South-East off Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Saito, Y.; Yamada, T.

    2013-12-01

    Large earthquakes have occurred repeatedly in the region south-east of Hokkaido, Japan, where the Pacific Plate subducts beneath the Okhotsk Plate in the north-west direction. For example, the 2003 Tokachi-oki earthquake (Mw 8.3, as determined by USGS) took place in the region on September 26, 2003. Yamanaka and Kikuchi (2003) analyzed the slip distribution of the earthquake and concluded that the 2003 earthquake had ruptured the deeper half of the fault plane of the 1952 Tokachi-oki earthquake. Miyazaki et al. (2004) reported that a notable afterslip was observed in areas adjacent to the coseismic rupture zone of the 2003 earthquake, which suggests that there would be significant heterogeneities of strength, stress and frictional properties on the surface of the Pacific Plate in the region. In addition, some previous studies suggest that regions of large slip in large earthquakes permanently have a large difference between the strength and the dynamic frictional stress level, and that the spatial pattern of slip in the next large earthquake could be predicted by analyzing the stress drops of small earthquakes (e.g. Allmann and Shearer, 2007; Yamada et al., 2010). We estimated stress drops of 150 earthquakes (4.2 ≤ M ≤ 5.0), using S-coda waves, i.e. the waveforms from 4.00 to 9.11 seconds after the S-wave arrivals, of Hi-net data. The 150 earthquakes occurred from June 2002 to December 2010 south-east of Hokkaido, Japan, from 40.5N to 43.5N and from 141.0E to 146.5E. We first selected, as empirical Green's functions (EGFs), the waveforms of the earthquakes with magnitudes between 3.0 and 3.2 closest to each of the 150 target earthquakes. We then calculated source spectral ratios for the 150 pairs of target earthquakes and EGFs by deconvolving the individual S-coda waves. Finally, we estimated corner frequencies from the spectral ratios by assuming the omega-squared model of Boatwright (1978) and calculated stress drops using the model of Madariaga (1976). The estimated values of stress drop range from 1 to 10 MPa with a small number of outliers (Fig. (a)). Fig. (b) shows the spatial distribution of stress drops south-east of Hokkaido, Japan. We found that earthquakes occurring around 42N 145E had larger stress drops. We plan to analyze smaller earthquakes and investigate the spatial pattern of stress drop in future work. Fig. (a) Estimated values of stress drop with respect to seismic moments of earthquakes. (b) Spatial distribution of stress drops.
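
    The processing chain (eGf spectral ratio, Boatwright-model fit, Madariaga stress drop) is compact enough to sketch; the shear velocity beta and the constant k = 0.21 are standard assumed values, not parameters quoted by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

def boatwright_ratio(f, ln_m, fc1, fc2, n=2.0):
    """Spectral ratio of two Boatwright (1978) omega-square sources:
    fc1 is the target event's corner frequency, fc2 the eGf's."""
    return np.exp(ln_m) * np.sqrt((1 + (f / fc2) ** (2 * n)) /
                                  (1 + (f / fc1) ** (2 * n)))

def stress_drop(m0, fc, beta=4000.0, k=0.21):
    """Madariaga (1976) circular-crack stress drop from the S-wave corner
    frequency; beta (m/s) and k are assumed values for the source region."""
    radius = k * beta / fc
    return 7.0 * m0 / (16.0 * radius ** 3)

# fit the measured eGf spectral ratio (freqs, ratio assumed given) for fc1:
# popt, _ = curve_fit(boatwright_ratio, freqs, ratio,
#                     p0=[np.log(ratio[0]), 1.0, 8.0])
```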

  10. Gene length corrected trimmed mean of M-values (GeTMM) processing of RNA-seq data performs similarly in intersample analyses while improving intrasample comparisons.

    PubMed

    Smid, Marcel; Coebergh van den Braak, Robert R J; van de Werken, Harmen J G; van Riet, Job; van Galen, Anne; de Weerd, Vanja; van der Vlugt-Daane, Michelle; Bril, Sandra I; Lalmahomed, Zarina S; Kloosterman, Wigard P; Wilting, Saskia M; Foekens, John A; IJzermans, Jan N M; Martens, John W M; Sieuwerts, Anieta M

    2018-06-22

    Current normalization methods for RNA-sequencing data allow either for intersample comparison to identify differentially expressed (DE) genes or for intrasample comparison for the discovery and validation of gene signatures. Most studies on optimization of normalization methods typically use simulated data to validate methodologies. We describe a new method, GeTMM, which allows for both inter- and intrasample analyses with the same normalized data set. We used actual (i.e. not simulated) RNA-seq data from 263 colon cancers (no biological replicates) and used the same read count data to compare GeTMM with the most commonly used normalization methods (i.e. TMM (used by edgeR), RLE (used by DESeq2) and TPM) with respect to distributions, effect of RNA quality, subtype-classification, recurrence score, recall of DE genes and correlation to RT-qPCR data. We observed a clear benefit for GeTMM and TPM with regard to intrasample comparison, while GeTMM performed similarly to TMM- and RLE-normalized data in intersample comparisons. Regarding DE genes, recall was comparable among the normalization methods, while GeTMM showed the lowest number of false-positive DE genes. Remarkably, we observed limited detrimental effects in samples with low RNA quality. We show that GeTMM outperforms established methods with regard to intrasample comparison while performing equivalently with regard to intersample normalization using the same normalized data. These combined properties enhance the general usefulness of RNA-seq but also the comparability to the many array-based gene expression data in the public domain.
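
    GeTMM's core idea, correcting for gene length before the TMM step, can be sketched as reads-per-kilobase (RPK) followed by a simplified trimmed mean of M-values (edgeR's precision weighting and A-value trimming are omitted here):

```python
import numpy as np

def getmm_factors(counts, gene_length_kb, ref_col=0, trim=0.3):
    """Gene-length corrected TMM, simplified sketch.

    counts         : (genes, samples) raw read counts
    gene_length_kb : (genes,) gene lengths in kilobases
    """
    rpk = counts / gene_length_kb[:, None]         # length correction first
    p = rpk / rpk.sum(axis=0)                      # within-sample proportions
    factors = np.ones(p.shape[1])
    for j in range(p.shape[1]):
        ok = (p[:, j] > 0) & (p[:, ref_col] > 0)
        m = np.log2(p[ok, j] / p[ok, ref_col])     # M-values vs reference sample
        lo, hi = np.quantile(m, [trim, 1 - trim])
        factors[j] = 2 ** m[(m >= lo) & (m <= hi)].mean()
    return factors / np.exp(np.mean(np.log(factors)))  # center factors around 1
```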

  11. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell-time method with constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheres. Normal contour errors change the ribbon's shape and the consistency of the removal characteristics in MRF. Based on continuously scanning the normal spacing between the workpiece and the laser range finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing a complex surface. The normal contour errors are measured dynamically, providing verification and a security check of the MRF process: the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 µm, and the PV value of the polished surface accuracy improved from 0.95λ to 0.09λ under the same process parameters. The technology in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in national large-scale optical engineering projects for processing ultra-precision optical parts.

  12. Informative graphing of continuous safety variables relative to normal reference limits.

    PubMed

    Breder, Christopher D

    2018-05-16

    Interpreting graphs of continuous safety variables can be complicated because differences in age, gender, and testing-site methodology may give rise to multiple reference limits. Furthermore, data below the lower limit of normal are compressed relative to points above the upper limit of normal. The objective of this study is to develop a graphing technique that addresses these issues and is visually intuitive. A mock dataset with multiple reference ranges is initially used to develop the graphing technique. Formulas are developed for conditions where data are above the upper limit of normal, normal, below the lower limit of normal, and below the lower limit of normal when the data value equals zero. After the formulae are developed, an anonymized dataset from an actual set of trials for an approved drug is used to compare the technique developed in this study with standard graphical methods. Formulas are derived for the novel graphing method based on multiples of the normal limits. The formula for values scaled between the upper and lower limits of normal is a novel application of a readily available scaling formula. The formula for the lower limit of normal is novel and addresses the issue of this value potentially being indeterminate when the result to be scaled as a multiple is zero. The formulae and graphing method described in this study provide a visually intuitive way to graph continuous safety data, including laboratory values and vital sign data.
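
    The abstract withholds the formulas, but one plausible scheme in the same spirit maps every value onto a multiples-of-the-limits axis so that sub-LLN values are no longer compressed (this is an illustrative reconstruction, not the paper's exact method):

```python
def scale_to_limits(v, lln, uln):
    """Map a lab value onto a symmetric 'multiples of the reference limits'
    axis: the normal range [lln, uln] becomes [-1, 1], values above the ULN
    plot as multiples of the ULN, and values below the LLN as (negative)
    multiples of the LLN."""
    if v > uln:
        return v / uln                            # 2x the ULN plots at 2
    if v >= lln:
        return -1 + 2 * (v - lln) / (uln - lln)   # inside the normal range
    if v > 0:
        return -lln / v                           # half the LLN plots at -2
    return float("-inf")  # v == 0: the indeterminate case the paper treats specially
```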

  13. CGHnormaliter: an iterative strategy to enhance normalization of array CGH data with imbalanced aberrations

    PubMed Central

    van Houte, Bart PP; Binsl, Thomas W; Hettling, Hannes; Pirovano, Walter; Heringa, Jaap

    2009-01-01

    Background Array comparative genomic hybridization (aCGH) is a popular technique for detection of genomic copy number imbalances. These play a critical role in the onset of various types of cancer. In the analysis of aCGH data, normalization is deemed a critical pre-processing step. In general, aCGH normalization approaches are similar to those used for gene expression data, although the two data types differ inherently. A particular problem with aCGH data is that imbalanced copy numbers lead to improper normalization using conventional methods. Results In this study we present a novel method, called CGHnormaliter, which addresses this issue by means of an iterative normalization procedure. First, provisory balanced copy numbers are identified and subsequently used for normalization. These two steps are then iterated to refine the normalization. We tested our method on three well-studied tumor-related aCGH datasets with experimentally confirmed copy numbers. Results were compared to a conventional normalization approach and two more recent state-of-the-art aCGH normalization strategies. Our findings show that, compared to these three methods, CGHnormaliter yields a higher specificity and precision in terms of identifying the 'true' copy numbers. Conclusion We demonstrate that the normalization of aCGH data can be significantly enhanced using an iterative procedure that effectively eliminates the effect of imbalanced copy numbers. This also leads to a more reliable assessment of aberrations. An R-package containing the implementation of CGHnormaliter is available at . PMID:19709427

  14. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807

  15. THE REASONING METHODS AND REASONING ABILITY IN NORMAL AND MENTALLY RETARDED GIRLS AND THE REASONING ABILITY OF NORMAL AND MENTALLY RETARDED BOYS AND GIRLS.

    ERIC Educational Resources Information Center

    CAPOBIANCO, RUDOLPH J.; AND OTHERS

    A STUDY WAS MADE TO ESTABLISH AND ANALYZE THE METHODS OF SOLVING INDUCTIVE REASONING PROBLEMS BY MENTALLY RETARDED CHILDREN. THE MAJOR OBJECTIVES WERE--(1) TO EXPLORE AND DESCRIBE REASONING IN MENTALLY RETARDED CHILDREN, (2) TO COMPARE THEIR METHODS WITH THOSE UTILIZED BY NORMAL CHILDREN OF APPROXIMATELY THE SAME MENTAL AGE, (3) TO EXPLORE THE…

  16. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques of the Mahalanobis-Taguchi System developed specifically for multivariate data prediction. Prediction using the T-Method is always possible, even with a very limited sample size. Users of the T-Method must clearly understand the trend of the population data, since the method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and classical methods then break down entirely. Robust parameter estimates exist that provide satisfactory results when the data contain outliers as well as when the data are free of them; among them are the robust location and scale estimators known as Hodges-Lehmann (HL) and Shamos-Bickel (SB), which serve as counterparts of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method may enhance its accuracy and allows the robustness of the T-Method itself to be analysed. In the higher-sample-size case study, however, the T-Method had the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB had the lowest error percentage (4.67%) on data without extreme outliers, with only a minimal difference from the T-Method. The trend in prediction error is reversed in the lower-sample-size case study. The results show that with a minimal sample size, where outliers pose little risk, the T-Method performs better, and with a higher sample size containing extreme outliers the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, the T-Method's own normalization gives satisfactory results, and it is not worthwhile to adapt HL and SB (or the normal mean and standard deviation) into it, since doing so changes the error percentages only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to the effect of outliers.
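
    Both estimators are pairwise statistics and fit in a few lines; a sketch (the 1.0483 consistency factor makes SB comparable to the standard deviation under normality):

```python
import numpy as np

def hodges_lehmann(x):
    """HL location estimate: median of all pairwise means (Walsh averages)."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))            # pairs with i <= j
    return np.median((x[i] + x[j]) / 2.0)

def shamos_bickel(x):
    """SB scale estimate: median of pairwise absolute differences, rescaled to
    be consistent with the standard deviation under normality."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x), k=1)       # pairs with i < j
    return 1.0483 * np.median(np.abs(x[i] - x[j]))
```

    In the T-Method normalization stage these would stand in for the classical mean and standard deviation, e.g. z = (x - hodges_lehmann(x)) / shamos_bickel(x).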

  17. Challenges in microarray class discovery: a comprehensive examination of normalization, gene selection and clustering

    PubMed Central

    2010-01-01

    Background Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can affect the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Results We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. Each cluster analysis method differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19) or clustering method (11). The cluster analyses are evaluated using known classes, such as cancer types, and the adjusted Rand index. The performances of the different analyses vary between the data sets and it is difficult to give general recommendations. However, normalization, gene selection and clustering method are all variables that have a significant impact on the performance. In particular, gene selection is important and it is generally necessary to include a relatively large number of genes in order to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further in order to draw any general conclusions. Conclusions The choice of cluster analysis, and in particular gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial and that much is still unexplored in what is considered to be the most basic analysis of genomic data. PMID:20937082

  18. Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.

    PubMed

    Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira

    2012-07-15

    Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com

  19. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    USGS Publications Warehouse

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH for juvenile rainbow (Salmo gairdneri, Shasta strain) trout held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.

  20. Normal modes of the shallow water system on the cubed sphere

    NASA Astrophysics Data System (ADS)

    Kang, H. G.; Cheong, H. B.; Lee, C. H.

    2017-12-01

    Spherical harmonics expressed as the Rossby-Haurwitz waves are the normal modes of non-divergent barotropic model. Among the normal modes in the numerical models, the most unstable mode will contaminate the numerical results, and therefore the investigation of normal mode for a given grid system and a discretiztaion method is important. The cubed-sphere grid which consists of six identical faces has been widely adopted in many atmospheric models. This grid system is non-orthogonal grid so that calculation of the normal mode is quiet challenge problem. In the present study, the normal modes of the shallow water system on the cubed sphere discretized by the spectral element method employing the Gauss-Lobatto Lagrange interpolating polynomials as orthogonal basis functions is investigated. The algebraic equations for the shallow water equation on the cubed sphere are derived, and the huge global matrix is constructed. The linear system representing the eigenvalue-eigenvector relations is solved by numerical libraries. The normal mode calculated for the several horizontal resolution and lamb parameters will be discussed and compared to the normal mode from the spherical harmonics spectral method.

  1. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    PubMed

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  2. CNN-based ranking for biomedical entity normalization.

    PubMed

    Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong

    2017-10-03

    Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms traditional rule-based method with state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.

  3. Normal mode analysis and applications in biological physics.

    PubMed

    Dykeman, Eric C; Sankey, Otto F

    2010-10-27

    Normal mode analysis has become a popular and often used theoretical tool in the study of functional motions in enzymes, viruses, and large protein assemblies. The use of normal modes in the study of these motions is often extremely fruitful since many of the functional motions of large proteins can be described using just a few normal modes which are intimately related to the overall structure of the protein. In this review, we present a broad overview of several popular methods used in the study of normal modes in biological physics including continuum elastic theory, the elastic network model, and a new all-atom method, recently developed, which is capable of computing a subset of the low frequency vibrational modes exactly. After a review of the various methods, we present several examples of applications of normal modes in the study of functional motions, with an emphasis on viral capsids.

  4. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric, as it has a propensity to trim down the consequences of iris distortion. To indemnify the variation in size of the iris owing to the action of stretching or enlarging the pupil in iris acquisition process and camera to eyeball distance, two normalization schemes has been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into the variable size rectangular model in order to avoid the under samples near the limbus border. In the second method, the iris region of interest is normalized by converting the iris region into a fixed size rectangular model in order to avoid the dimensional discrepancies between the eye images. The performance of the proposed normalization methods is evaluated with orthogonal polynomials based iris recognition in terms of FAR, FRR, GAR, CRR and EER.

  5. A method for estimating direct normal solar irradiation from satellite data for a tropical environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjai, Serm

    In order to investigate a potential use of concentrating solar power technologies and select an optimum site for these technologies, it is necessary to obtain information on the geographical distribution of direct normal solar irradiation over an area of interest. In this work, we have developed a method for estimating direct normal irradiation from satellite data for a tropical environment. The method starts with the estimation of global irradiation on a horizontal surface from MTSAT-1R satellite data and other ground-based ancillary data. Then a satellite-based diffuse fraction model was developed and used to estimate the diffuse component of the satellite-derivedmore » global irradiation. Based on this estimated global and diffuse irradiation and the solar radiation incident angle, the direct normal irradiation was finally calculated. To evaluate its performance, the method was used to estimate the monthly average hourly direct normal irradiation at seven pyrheliometer stations in Thailand. It was found that values of monthly average hourly direct normal irradiation from the measurements and those estimated from the proposed method are in reasonable agreement, with a root mean square difference of 16% and a mean bias of -1.6%, with respect to mean measured values. After the validation, this method was used to estimate the monthly average hourly direct normal irradiation over Thailand by using MTSAT-1R satellite data for the period from June 2005 to December 2008. Results from the calculation were displayed as hourly and yearly irradiation maps. These maps reveal that the direct normal irradiation in Thailand was strongly affected by the tropical monsoons and local topography of the country. (author)« less

  6. A comparison of per sample global scaling and per gene normalization methods for differential expression analysis of RNA-seq data.

    PubMed

    Li, Xiaohong; Brock, Guy N; Rouchka, Eric C; Cooper, Nigel G F; Wu, Dongfeng; O'Toole, Timothy E; Gill, Ryan S; Eteleeb, Abdallah M; O'Brien, Liz; Rai, Shesh N

    2017-01-01

    Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level.

  7. A comparison of per sample global scaling and per gene normalization methods for differential expression analysis of RNA-seq data

    PubMed Central

    Li, Xiaohong; Brock, Guy N.; Rouchka, Eric C.; Cooper, Nigel G. F.; Wu, Dongfeng; O’Toole, Timothy E.; Gill, Ryan S.; Eteleeb, Abdallah M.; O’Brien, Liz

    2017-01-01

    Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level. PMID:28459823

  8. Valuation of Normal Range of Ankle Systolic Blood Pressure in Subjects with Normal Arm Systolic Blood Pressure.

    PubMed

    Gong, Yi; Cao, Kai-wu; Xu, Jin-song; Li, Ju-xiang; Hong, Kui; Cheng, Xiao-shu; Su, Hai

    2015-01-01

    This study aimed to establish a normal range for ankle systolic blood pressure (SBP). A total of 948 subjects who had normal brachial SBP (90-139 mmHg) at investigation were enrolled. Supine BP of four limbs was simultaneously measured using four automatic BP measurement devices. The ankle-arm difference (An-a) on SBP of both sides was calculated. Two methods were used for establishing normal range of ankle SBP: the 99% method was decided on the 99% reference range of actual ankle BP, and the An-a method was the sum of An-a and the low or up limits of normal arm SBP (90-139 mmHg). Whether in the right or left side, the ankle SBP was significantly higher than the arm SBP (right: 137.1 ± 16.9 vs 119.7 ± 11.4 mmHg, P<0.05). Based on the 99% method, the normal range of ankle SBP was 94~181 mmHg for the total population, 84~166 mmHg for the young (18-44 y), 107~176 mmHg for the middle-aged(45-59 y) and 113~179 mmHg for the elderly (≥ 60 y) group. As the An-a on SBP was 13 mmHg in the young group and 20 mmHg in both middle-aged and elderly groups, the normal range of ankle SBP on the An-a method was 103-153 mmHg for young and 110-160 mmHg for middle-elderly subjects. A primary reference for normal ankle SBP was suggested as 100-165 mmHg in the young and 110-170 mmHg in the middle-elderly subjects.

  9. Optimal consistency in microRNA expression analysis using reference-gene-based normalization.

    PubMed

    Wang, Xi; Gardiner, Erin J; Cairns, Murray J

    2015-05-01

    Normalization of high-throughput molecular expression profiles secures differential expression analysis between samples of different phenotypes or biological conditions, and facilitates comparison between experimental batches. While the same general principles apply to microRNA (miRNA) normalization, there is mounting evidence that global shifts in their expression patterns occur in specific circumstances, which pose a challenge for normalizing miRNA expression data. As an alternative to global normalization, which has the propensity to flatten large trends, normalization against constitutively expressed reference genes presents an advantage through their relative independence. Here we investigated the performance of reference-gene-based (RGB) normalization for differential miRNA expression analysis of microarray expression data, and compared the results with other normalization methods, including: quantile, variance stabilization, robust spline, simple scaling, rank invariant, and Loess regression. The comparative analyses were executed using miRNA expression in tissue samples derived from subjects with schizophrenia and non-psychiatric controls. We proposed a consistency criterion for evaluating methods by examining the overlapping of differentially expressed miRNAs detected using different partitions of the whole data. Based on this criterion, we found that RGB normalization generally outperformed global normalization methods. Thus we recommend the application of RGB normalization for miRNA expression data sets, and believe that this will yield a more consistent and useful readout of differentially expressed miRNAs, particularly in biological conditions characterized by large shifts in miRNA expression.

  10. Calculation of grain boundary normals directly from 3D microstructure images

    DOE PAGES

    Lieberman, E. J.; Rollett, A. D.; Lebensohn, R. A.; ...

    2015-03-11

    The determination of grain boundary normals is an integral part of the characterization of grain boundaries in polycrystalline materials. These normal vectors are difficult to quantify due to the discretized nature of available microstructure characterization techniques. The most common method to determine grain boundary normals is by generating a surface mesh from an image of the microstructure, but this process can be slow, and is subject to smoothing issues. A new technique is proposed, utilizing first order Cartesian moments of binary indicator functions, to determine grain boundary normals directly from a voxelized microstructure image. In order to validate the accuracymore » of this technique, the surface normals obtained by the proposed method are compared to those generated by a surface meshing algorithm. Specifically, the local divergence between the surface normals obtained by different variants of the proposed technique and those generated from a surface mesh of a synthetic microstructure constructed using a marching cubes algorithm followed by Laplacian smoothing is quantified. Next, surface normals obtained with the proposed method from a measured 3D microstructure image of a Ni polycrystal are used to generate grain boundary character distributions (GBCD) for Σ3 and Σ9 boundaries, and compared to the GBCD generated using a surface mesh obtained from the same image. Finally, the results show that the proposed technique is an efficient and accurate method to determine voxelized fields of grain boundary normals.« less

  11. Method for construction of normalized cDNA libraries

    DOEpatents

    Soares, Marcelo B.; Efstratiadis, Argiris

    1998-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.

  12. Method for construction of normalized cDNA libraries

    DOEpatents

    Soares, M.B.; Efstratiadis, A.

    1998-11-03

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3` noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries. 19 figs.

  13. Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments

    PubMed Central

    Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed

    2013-01-01

    In recent years, RNA-Seq technologies became a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to a great variability in results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Real RNA-Seq data sets analyses, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data sets analyses give similar results showing 3 different groups of procedures having the same behavior. The group including the novel method named “Median Ratio Normalization” (MRN) gives the lower number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with intrinsic bias resulting from relative size of studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135

  14. Smooth quantile normalization.

    PubMed

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.

  15. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.

  16. The Impact of Normalization Methods on RNA-Seq Data Analysis

    PubMed Central

    Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.

    2015-01-01

    High-throughput sequencing technologies, such as the Illumina Hi-seq, are powerful new tools for investigating a wide range of biological and medical problems. Massive and complex data sets produced by the sequencers create a need for development of statistical and computational methods that can tackle the analysis and management of data. The data normalization is one of the most crucial steps of data processing and this process must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors as well as generation of the diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014

  17. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    PubMed

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p < 0.05). In a real dataset, SAN had the lowest SDM and Kolmogorov-Smirnov values for blood urea nitrogen, hematocrit, hemoglobin, and serum potassium, and the lowest SDM for serum creatinine (p < 0.05). Subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.

  18. [Application of the life table method to the estimation of late complications of normal tissues after radiotherapy].

    PubMed

    Morita, K; Uchiyama, Y; Tominaga, S

    1987-06-01

    In order to evaluate the treatment results of radiotherapy it is important to estimate the degree of complications of the surrounding normal tissues as well as the frequency of tumor control. In this report, the cumulative incidence rate of the late radiation injuries of the normal tissues was calculated using the modified actuarial method (Cutler-Ederer's method) or Kaplan-Meier's method, which is usually applied to the calculation of the survival rate. By the use of this method of calculation, an accurate cumulative incidence rate over time can be easily obtained and applied to the statistical evaluation of the late radiation injuries.

  19. A comparison of three methods to evaluate the position of an artificial ear on the deficient side of the face from a three-dimensional surface scan of patients with hemifacial microsomia.

    PubMed

    Coward, Trevor J; Watson, Roger M; Richards, Robin; Scott, Brendan J J

    2012-01-01

    Patients with hemifacial microsomia may have a missing ear on the deficient side of the face. The fabrication of an ear for such individuals usually has been accomplished by directly measuring the ear on the normal side to construct a prosthesis based on these dimensions, and the positioning has been, to a large extent, primarily operator-dependent. The aim of the present study was to compare three methods, developed from the identification of landmarks plotted on three-dimensional surface scans, to evaluate the position of an artificial ear on the deficient side of the face compared with the position of the natural ear on the normally developed side. Laser scans were undertaken of the faces of 14 subjects with hemifacial microsomia. Landmarks on the ear and face on the normal side were identified. Three methods of mirroring the normal ear on the deficient side of the face were investigated, which used either facial landmarks from the orbital area or a zero reference point generated from the intersection of three orthogonal planes on a frame of reference. To assess the methods, landmarks were identified on the ear situated on the normal side as well as on the face. These landmarks yielded paired dimensional measurements that could be compared between the normal and deficient sides. Mean differences and 95% confidence intervals were calculated. It was possible to mirror the normal ear image on to the deficient side of the face using all three methods. Generally only small differences between the dimensional measurements on the normal and deficient sides were observed. However, two-way analysis of variance revealed statistically significant differences between the three methods (P = .005). The method of mirroring using the outer canthi was found to result in the smallest dimensional differences between the anthropometric points on the ear and face between the normally developed and deficient sides. However, the effects of the deformity can result in limitations in relation to achieving a precise alignment of the ear to the facial tissues. This requires further study.

  20. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    Use of multispectral magnetic resonance imaging has received a great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain the information from training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and the segmentation performance suffers since the variations across patients are ignored. To conquer this difficulty, we propose a new iterative normalization method based on relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, and was found to have specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than alternative methods.

  1. A systematic assessment of normalization approaches for the Infinium 450K methylation platform.

    PubMed

    Wu, Michael C; Joubert, Bonnie R; Kuan, Pei-fen; Håberg, Siri E; Nystad, Wenche; Peddada, Shyamal D; London, Stephanie J

    2014-02-01

    The Illumina Infinium HumanMethylation450 BeadChip has emerged as one of the most popular platforms for genome wide profiling of DNA methylation. While the technology is wide-spread, systematic technical biases are believed to be present in the data. For example, this array incorporates two different chemical assays, i.e., Type I and Type II probes, which exhibit different technical characteristics and potentially complicate the computational and statistical analysis. Several normalization methods have been introduced recently to adjust for possible biases. However, there is considerable debate within the field on which normalization procedure should be used and indeed whether normalization is even necessary. Yet despite the importance of the question, there has been little comprehensive comparison of normalization methods. We sought to systematically compare several popular normalization approaches using the Norwegian Mother and Child Cohort Study (MoBa) methylation data set and the technical replicates analyzed with it as a case study. We assessed both the reproducibility between technical replicates following normalization and the effect of normalization on association analysis. Results indicate that the raw data are already highly reproducible, some normalization approaches can slightly improve reproducibility, but other normalization approaches may introduce more variability into the data. Results also suggest that differences in association analysis after applying different normalizations are not large when the signal is strong, but when the signal is more modest, different normalizations can yield very different numbers of findings that meet a weaker statistical significance threshold. Overall, our work provides useful, objective assessment of the effectiveness of key normalization methods.

  2. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.

  3. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    PubMed Central

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456

  4. Normalized Temperature Contrast Processing in Flash Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development in normalized contrast processing of flash infrared thermography method by the author given in US 8,577,120 B1. The method of computing normalized image or pixel intensity contrast, and normalized temperature contrast are provided, including converting one from the other. Methods of assessing emissivity of the object, afterglow heat flux, reflection temperature change and temperature video imaging during flash thermography are provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over pixel intensity normalized contrast processing by reducing effect of reflected energy in images and measurements, providing better quantitative data. The subject matter for this paper mostly comes from US 9,066,028 B1 by the author. Examples of normalized image processing video images and normalized temperature processing video images are provided. Examples of surface temperature video images, surface temperature rise video images and simple contrast video images area also provided. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software which provides temperature video as the output. Temperature imaging also allows easy comparison of surface temperature change to camera temperature sensitivity or noise equivalent temperature difference (NETD) to assess probability of detecting (POD) anomalies.

  5. A stage-normalized function for the synthesis of stage-discharge relations for the Colorado River in Grand Canyon, Arizona

    USGS Publications Warehouse

    Wiele, Stephen M.; Torizzo, Margaret

    2003-01-01

    A method was developed to construct stage-discharge rating curves for the Colorado River in Grand Canyon, Arizona, using two stage-discharge pairs and a stage-normalized rating curve. Stage-discharge rating curves formulated with the stage-normalized curve method are compared to (1) stage-discharge rating curves for six temporary stage gages and two streamflow-gaging stations developed by combining stage records with modeled unsteady flow; (2) stage-discharge rating curves developed from stage records and discharge measurements at three streamflow-gaging stations; and (3) stages surveyed at known discharges at the Northern Arizona Sand Bar Studies sites. The stage-normalized curve method shows good agreement with field data when the discharges used in the construction of the rating curves are at least 200 cubic meters per second apart. Predictions of stage using the stage-normalized curve method are also compared to predictions of stage from a steady-flow model.

  6. A Method for Approximating the Bivariate Normal Correlation Coefficient.

    ERIC Educational Resources Information Center

    Kirk, David B.

    Improvements of the Gaussian quadrature in conjunction with the Newton-Raphson iteration technique (TM 000 789) are discussed as effective methods of calculating the bivariate normal correlation coefficient. (CK)

  7. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the the MCMC procedure of SAS are provided.

  8. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    PubMed

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for them. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  9. 3D surface voxel tracing corrector for accurate bone segmentation.

    PubMed

    Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi

    2018-06-18

    For extremely close bones, their boundaries are weak and diffused due to strong interaction between adjacent surfaces. These factors prevent the accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on a consideration of the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we consider a surface tracing corrector combined with Gaussian standard deviation [Formula: see text] to improve the estimation of normal direction. Secondly, we determine an optimal value of [Formula: see text] for each surface point during this normal direction correction. Thirdly, we construct the 1D signal and refining the rough boundary along the corrected normal direction. The value of [Formula: see text] is used in the first directional derivative of the Gaussian to refine the location of the edge point along accurate normal direction. Because the normal direction is corrected and the value of [Formula: see text] is optimized, our method is robust to noise images and narrow joint space caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods. Forty hip joints were used for training in two state-of-the-art methods, 10 hip joints were used for testing and performing comparisons. The DOCs of [Formula: see text], [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. The results demonstrate that our approach achieved a superior accuracy over two state-of-the-art methods.

  10. Normalization to specific gravity prior to analysis improves information recovery from high resolution mass spectrometry metabolomic profiles of human urine.

    PubMed

    Edmands, William M B; Ferrari, Pietro; Scalbert, Augustin

    2014-11-04

    Extraction of meaningful biological information from urinary metabolomic profiles obtained by liquid-chromatography coupled to mass spectrometry (MS) necessitates the control of unwanted sources of variability associated with large differences in urine sample concentrations. Different methods of normalization either before analysis (preacquisition normalization) through dilution of urine samples to the lowest specific gravity measured by refractometry, or after analysis (postacquisition normalization) to urine volume, specific gravity and median fold change are compared for their capacity to recover lead metabolites for a potential future use as dietary biomarkers. Twenty-four urine samples of 19 subjects from the European Prospective Investigation into Cancer and nutrition (EPIC) cohort were selected based on their high and low/nonconsumption of six polyphenol-rich foods as assessed with a 24 h dietary recall. MS features selected on the basis of minimum discriminant selection criteria were related to each dietary item by means of orthogonal partial least-squares discriminant analysis models. Normalization methods ranked in the following decreasing order when comparing the number of total discriminant MS features recovered to that obtained in the absence of normalization: preacquisition normalization to specific gravity (4.2-fold), postacquisition normalization to specific gravity (2.3-fold), postacquisition median fold change normalization (1.8-fold increase), postacquisition normalization to urinary volume (0.79-fold). A preventative preacquisition normalization based on urine specific gravity was found to be superior to all curative postacquisition normalization methods tested for discovery of MS features discriminant of dietary intake in these urinary metabolomic datasets.

  11. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ 2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ 2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  12. Verification of three-microphone impedance tube method for measurement of transmission loss in aerogels

    NASA Astrophysics Data System (ADS)

    Connick, Robert J.

    Accurate measurement of normal incident transmission loss is essential for the acoustic characterization of building materials. In this research, a method of measuring normal incidence sound transmission loss proposed by Salissou et al. as a complement to standard E2611-09 of the American Society for Testing and Materials [Standard Test Method for Measurement of Normal Incidence Sound Transmission of Acoustical Materials Based on the Transfer Matrix Method (American Society for Testing and Materials, New York, 2009)] is verified. Two sam- ples from the original literature are used to verify the method as well as a Filtros RTM sample. Following the verification, several nano-material Aerogel samples are measured.

  13. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for Shapiro-Wilk test and 0.18 for D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution or adjusting, the significance level of normality tests depending on sample size would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.

  14. The perception of syllable affiliation of singleton stops in repetitive speech.

    PubMed

    de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko

    2004-01-01

    Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with a following vowel. The current study documents the perception of rate-induced resyllabification, as well as the temporal properties that give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast-rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results suggest a model of syllabic affiliation in which specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.
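
    Identification data of this kind lend themselves to a simple psychometric-function fit. As a hedged illustration (Python; the response proportions below are invented, not the paper's data), a logistic curve links one cue, the duration of the acoustic hiatus between syllables, to the probability of a "coda" response:

        # Fit a logistic psychometric function to forced-choice proportions.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, x0, k):
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        hiatus_ms = np.array([0, 10, 20, 30, 40, 60, 80, 100], float)  # cue values
        p_coda = np.array([0.08, 0.15, 0.35, 0.55, 0.75, 0.90, 0.95, 0.97])

        (x0, k), _ = curve_fit(logistic, hiatus_ms, p_coda, p0=[30.0, 0.1])
        print(f"category boundary near {x0:.0f} ms of hiatus (slope {k:.2f}/ms)")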

  15. Communication failure: basic components, contributing factors, and the call for structure.

    PubMed

    Dayton, Elizabeth; Henriksen, Kerm

    2007-01-01

    Communication is a taken-for-granted human activity that is recognized as important once it has failed. Communication failures are a major contributor to adverse events in health care. The components and processes of communication converge in an intricate manner, creating opportunities for misunderstanding along the way. When a patient's safety is at risk, providers should speak up (that is, initiate a message) to draw attention to the situation before harm is caused. They should also clearly explain (encode) and understand (decode) each other's diagnosis and recommendations to ensure well coordinated delivery of care. Beyond basic dyadic communication exchanges, an intricate web of individual, group, and organizational factors--more specifically, cognitive workload, implicit assumptions, authority gradients, diffusion of responsibility, and transitions of care--complicate communication. More structured and explicitly designed forms of communication have been recommended to reduce ambiguity, enhance clarity, and send an unequivocal signal, when needed, that a different action is required. Read-backs, Situation-Background-Assessment-Recommendation, critical assertions, briefings, and debriefings are seeing increasing use in health care. CODA: Although structured forms of communication have good potential to enhance clarity, they are not fail-safe. Providers need to be sensitive to unexpected consequences regarding their use.

  16. Application of deconvolution interferometry with both Hi-net and KiK-net data

    NASA Astrophysics Data System (ADS)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocity caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data alone, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
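
    The core computation is a regularized spectral division of one record by the other. A minimal sketch (Python; synthetic pulses stand in for the Hi-net and KiK-net records, and the 200 m sensor depth is hypothetical) is:

        # Water-level deconvolution of a surface record by a borehole record;
        # the peak lag of the result estimates the borehole-to-surface travel time.
        import numpy as np

        def deconvolve(u_surf, u_bore, eps=0.01):
            U1, U2 = np.fft.rfft(u_surf), np.fft.rfft(u_bore)
            denom = np.abs(U2) ** 2
            denom = np.maximum(denom, eps * denom.max())  # stabilize small spectra
            return np.fft.irfft(U1 * np.conj(U2) / denom, n=len(u_surf))

        fs, n = 100.0, 1024
        t = np.arange(n) / fs
        bore = np.exp(-((t - 2.0) / 0.05) ** 2)   # pulse at the borehole sensor
        surf = np.roll(bore, int(0.25 * fs))      # 0.25 s assumed travel time
        surf += 0.01 * np.random.default_rng(2).normal(size=n)

        d = deconvolve(surf, bore)
        tt = np.argmax(d) / fs
        print(f"travel time ~{tt:.2f} s; Vs ~{200.0 / tt:.0f} m/s if 200 m deep")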

  17. The function of male sperm whale slow clicks in a high latitude habitat: communication, echolocation, or prey debilitation?

    PubMed

    Oliveira, Cláudia; Wahlberg, Magnus; Johnson, Mark; Miller, Patrick J O; Madsen, Peter T

    2013-05-01

    Sperm whales produce different click types for echolocation and communication. Usual clicks and buzzes appear to be used primarily in foraging, while codas are thought to function in social communication. The function of slow clicks is less clear, but they appear to be produced by males at higher latitudes, where they primarily forage solitarily, and on the breeding grounds, where they roam between groups of females. Here the behavioral context in which these vocalizations are produced and the function they may serve were investigated. Ninety-nine hours of acoustic and diving data were analyzed from sound-recording tags on six male sperm whales in Northern Norway. The 755 slow clicks detected were produced by tagged animals at the surface (52%), ascending from a dive (37%), and during the bottom phase (11%), but never during the descent. Slow clicks were not associated with the production of buzzes, other echolocation clicks, or fast maneuvering that would indicate foraging. Some slow clicks were emitted in seemingly repetitive temporal patterns, supporting the hypothesis that the function of slow clicks on the feeding grounds is long-range communication between males, possibly relaying information about individual identity or behavioral states.
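
    One way to quantify such patterns (Python; the click times below are invented, not the tag data) is through the inter-click intervals and their coefficient of variation, a low CV indicating a regular, possibly communicative pattern:

        # Regularity check on a hypothetical slow-click train.
        import numpy as np

        click_times = np.array([12.0, 17.1, 22.0, 26.9, 32.1, 37.0])  # seconds
        ici = np.diff(click_times)              # inter-click intervals
        cv = ici.std() / ici.mean()             # coefficient of variation
        print(f"mean ICI {ici.mean():.1f} s, CV {cv:.2f}:",
              "regular" if cv < 0.1 else "irregular")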

  18. Diffuse ultrasound monitoring of stress and damage development on a 15-ton concrete beam.

    PubMed

    Zhang, Yuxiang; Planès, Thomas; Larose, Eric; Obermann, Anne; Rospars, Claude; Moreau, Gautier

    2016-04-01

    This paper describes the use of an ultrasonic imaging technique (Locadiff) for the non-destructive testing and evaluation of a concrete structure. By combining coda wave interferometry and a sensitivity kernel for diffuse waves, Locadiff can monitor the elastic and structural properties of a heterogeneous material with high sensitivity, and can map changes in these properties over time when a perturbation occurs in the bulk of the material. The applicability of the technique to life-size concrete structures is demonstrated through the monitoring of a 15-ton reinforced concrete beam subjected to a four-point bending test that causes cracking. The experimental results show that Locadiff was able to (1) detect and locate the cracking zones in the core of the concrete beam at an early stage by mapping the changes in the concrete's micro-structure, and (2) monitor the internal stress level in both the temporal and spatial domains by mapping the variation in velocity caused by the acousto-elastic effect. The mechanical behavior of the concrete structure is also studied using conventional techniques such as acoustic emission, vibrating-wire extensometers, and digital image correlation. The performance of the Locadiff technique in detecting early-stage cracking is assessed and discussed.
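
    The coda-wave-interferometry ingredient of such monitoring is often the stretching technique: find the relative velocity change dv/v whose time-axis stretch best aligns a perturbed coda with a reference coda. A self-contained sketch (Python; synthetic codas and an assumed 0.2% velocity change, not the paper's data) is:

        # Stretching technique: grid-search dv/v maximizing ref/cur correlation.
        import numpy as np

        def stretching_dvv(ref, cur, t, trials=np.linspace(-0.01, 0.01, 201)):
            cc = [np.corrcoef(ref, np.interp(t, t * (1 + e), cur))[0, 1]
                  for e in trials]             # resample cur on a stretched axis
            return trials[int(np.argmax(cc))]

        fs = 1000.0
        t = np.arange(0, 2.0, 1 / fs)
        rng = np.random.default_rng(3)
        ref = np.convolve(rng.normal(size=t.size), np.hanning(51),
                          mode="same") * np.exp(-t)     # synthetic decaying coda
        true_dvv = 0.002                                # 0.2% velocity increase
        cur = np.interp(t * (1 + true_dvv), t, ref)     # arrivals shift earlier
        print("estimated dv/v:", stretching_dvv(ref, cur, t))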

  19. Seismic Characterization of the Newberry and Cooper Basin EGS Sites

    NASA Astrophysics Data System (ADS)

    Templeton, D. C.; Wang, J.; Goebel, M.; Johannesson, G.; Myers, S. C.; Harris, D.; Cladouhos, T. T.

    2015-12-01

    To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance traditional microearthquake detection and location methodologies at two EGS systems: the Newberry EGS site and the Habanero EGS site in the Cooper Basin of South Australia. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP typically have smaller magnitudes or occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining whether a seismic lineation is real or simply within the anticipated error range. At the Newberry EGS site, 235 events were reported in the original catalog. MFP identified 164 additional events (an increase of over 70%). For the relocated events in the Newberry catalog, we can distinguish two distinct seismic swarms that fall outside of one another's 95% probability error ellipsoids. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
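
    Full MFP correlates network-wide wavefield templates against continuous data; as a simplified single-channel stand-in (Python; all signals synthetic), a normalized cross-correlation detector built from one known event already illustrates how smaller repeats hidden in noise are recovered:

        # Scan a noisy record with a normalized cross-correlation template.
        import numpy as np

        rng = np.random.default_rng(4)
        template = np.sin(2 * np.pi * 5 * np.arange(0, 1, 0.01)) * np.hanning(100)
        trace = 0.3 * rng.normal(size=3000)
        trace[500:600] += template            # the known catalog event
        trace[2200:2300] += 0.4 * template    # smaller repeat buried in noise

        tpl = (template - template.mean()) / template.std()
        nt = len(tpl)
        cc = np.array([np.dot(tpl, trace[i:i + nt] - trace[i:i + nt].mean())
                       / (nt * trace[i:i + nt].std() + 1e-12)
                       for i in range(len(trace) - nt)])

        hits = np.where(cc > 0.4)[0]          # threshold chosen for this noise level
        groups = np.split(hits, np.where(np.diff(hits) > nt // 2)[0] + 1)
        for g in (g for g in groups if len(g)):
            best = g[np.argmax(cc[g])]
            print(f"detection at sample {best}, cc = {cc[best]:.2f}")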

  20. Numerical optimization of the ramp-down phase with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, Anna; Sauter, Olivier; Felici, Federico; The Tcv Team; The ASDEX-Upgrade Team; The Eurofusion Mst1 Team

    2017-10-01

    The ramp-down optimization goal in this work is defined as the fastest possible decrease of the plasma current while avoiding any disruptions caused by reaching physical or technical limits. Numerical simulations and preliminary experiments on TCV and AUG have shown that a fast decrease of plasma elongation and an adequate timing of the H-L transition during the current ramp-down can help to avoid reaching high values of the plasma internal inductance. The RAPTOR code (F. Felici et al., 2012 PPCF 54; F. Felici, 2011 EPFL PhD thesis), developed for real-time plasma control, has been used to solve this optimization problem. Recently the transport model has been extended to include the ion temperature and electron density transport equations in addition to the electron temperature and current density transport equations, broadening the physical applications of the code. The gradient-based models for the transport coefficients (O. Sauter et al., 2014 PPCF 21; D. Kim et al., 2016 PPCF 58) have been implemented in RAPTOR and tested during this work. Simulations of entire AUG and TCV plasma discharges will be presented. See the author list of S. Coda et al., Nucl. Fusion 57 2017 102011.
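
    The structure of such an optimization can be shown with a deliberately crude stand-in. In the toy below (Python; the 0-D internal-inductance dynamics are invented purely for illustration and are not RAPTOR's transport model), the task is to find the shortest linear current ramp whose surrogate li stays below an assumed limit:

        # Toy ramp-down optimization: bisect for the fastest admissible ramp.
        import numpy as np

        def li_peak(T, Ip0=1.0, Ipf=0.2, li0=0.85, k=0.6, tau=2.0, steps=400):
            """Peak of an invented li surrogate during a linear Ip ramp of
            duration T (s): faster ramps drive li higher; tau is a relaxation
            time back toward li0."""
            dt = T / steps
            li = peak = li0
            for i in range(steps):
                Ip = Ip0 + (Ipf - Ip0) * (i + 0.5) / steps
                li += (-(li - li0) / tau + k * ((Ip0 - Ipf) / T) / Ip) * dt
                peak = max(peak, li)
            return peak

        li_max = 1.3                   # assumed operational limit on li
        lo, hi = 0.5, 40.0             # bracket: too fast (violates) .. safe
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if li_peak(mid) > li_max else (lo, mid)
        print(f"shortest admissible ramp-down: about {hi:.2f} s (toy model)")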
