Haney, Matthew M.; Mikesell, T. Dylan; van Wijk, Kasper; Nakahara, Hisashi
2012-01-01
Using ambient seismic noise for imaging subsurface structure dates back to the development of the spatial autocorrelation (SPAC) method in the 1950s. We present a theoretical analysis of the SPAC method for multicomponent recordings of surface waves to determine the complete 3 × 3 matrix of correlations between all pairs of three-component motions, called the correlation matrix. In the case of isotropic incidence, when either Rayleigh or Love waves arrive from all directions with equal power, the only non-zero off-diagonal terms in the matrix are the vertical–radial (ZR) and radial–vertical (RZ) correlations in the presence of Rayleigh waves. Such combinations were not considered in the development of the SPAC method. The method originally addressed the vertical–vertical (ZZ), RR and TT correlations, hence the name spatial autocorrelation. The theoretical expressions we derive for the ZR and RZ correlations offer additional ways to measure Rayleigh wave dispersion within the SPAC framework. Expanding on the results for isotropic incidence, we derive the complete correlation matrix in the case of generally anisotropic incidence. We show that the ZR and RZ correlations have advantageous properties in the presence of an out-of-plane directional wavefield compared to ZZ and RR correlations. We apply the results for mixed-component correlations to a data set from Akutan Volcano, Alaska and find consistent estimates of Rayleigh wave phase velocity from ZR compared to ZZ correlations. This work, together with the recently discovered connections between the SPAC method and time-domain correlations of ambient noise, provides further insights into the retrieval of surface wave Green’s functions from seismic noise.
On the reliability and limitations of the SPAC method with a directional wavefield
NASA Astrophysics Data System (ADS)
Luo, Song; Luo, Yinhe; Zhu, Lupei; Xu, Yixian
2016-03-01
The spatial autocorrelation (SPAC) method is one of the most efficient ways to extract phase velocities of surface waves from ambient seismic noise. Most studies apply the method based on the assumption that the wavefield of ambient noise is diffuse. However, the actual distribution of sources is neither diffuse nor stationary. In this study, we examined the reliability and limitations of the SPAC method with a directional wavefield. We calculated the SPAC coefficients and phase velocities from a directional wavefield for a four-layer model and characterized the limitations of the SPAC method. We then applied the SPAC method to real data in Karamay, China. Our results show that: 1) the SPAC method can accurately measure surface wave phase velocities from a square array with a directional wavefield down to a wavelength of twice the shortest interstation distance; and 2) phase velocities obtained from real data by the SPAC method are stable and reliable, which demonstrates that this method can be applied to measure phase velocities in a square array with a directional wavefield.
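A minimal synthetic sketch of the idea discussed above (not the authors' processing code): for waves arriving from many azimuths, the azimuthally averaged, normalized cross-spectrum between stations separated by a distance r tends toward the Bessel function J0(2πfr/c(f)); the station geometry, separation, and dispersion curve below are assumptions for illustration, and a directional wavefield simply samples only part of this azimuthal average.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)
r = 50.0                                     # interstation distance (m), hypothetical
freqs = np.linspace(1.0, 10.0, 40)           # Hz
c = 400.0 + 10.0 * freqs                     # assumed dispersive phase velocity (m/s)
ring = np.deg2rad(np.arange(0, 360, 30))     # 12 stations on a ring around a centre station
n_win = 400                                  # time windows, each dominated by one plane wave

rho = np.zeros_like(freqs)
for i, (f, cf) in enumerate(zip(freqs, c)):
    k = 2 * np.pi * f / cf
    az = rng.uniform(0, 2 * np.pi, n_win)    # arrival azimuth per window (isotropic here)
    # normalized cross-spectrum centre->ring for a single plane wave is exp(-i k r cos(az - psi));
    # SPAC averages its real part over ring azimuth psi and over windows
    rho[i] = np.mean(np.cos(k * r * np.cos(az[:, None] - ring[None, :])))

rho_theory = j0(2 * np.pi * freqs * r / c)   # isotropic-incidence prediction
print(np.max(np.abs(rho - rho_theory)))      # small for sufficiently many windows and azimuths
```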
NASA Astrophysics Data System (ADS)
Asten, M. W.; Hayashi, K.
2018-07-01
Ambient seismic noise or microtremor observations used in spatial auto-correlation (SPAC) array methods consist of a wide frequency range of surface waves from about 0.1 Hz to several tens of Hz. The wavelengths (and hence depth sensitivity) of such surface waves allow determination of the site S-wave velocity model from a depth of 1 or 2 m down to a maximum of several kilometres; it is a passive seismic method using only ambient noise as the energy source. The method usually uses a 2D seismic array with a small number of seismometers (generally between 2 and 15) to estimate the phase velocity dispersion curve and hence the S-wave velocity depth profile for the site. A large number of methods have been proposed and used to estimate the dispersion curve; SPAC is one of the oldest and most commonly used methods due to its versatility and minimal instrumentation requirements. We show that direct fitting of observed and model SPAC spectra generally gives a wider bandwidth of usable data than does the more common approach of inversion after the intermediate step of constructing an observed dispersion curve. Current case histories demonstrate the method with a range of array types including two-station arrays, L-shaped multi-station arrays, triangular and circular arrays. Array sizes from a few metres to several km in diameter have been successfully deployed in sites ranging from downtown urban settings to rural and remote desert sites. A fundamental requirement of the method is the ability to average wave propagation over a range of azimuths; this can be achieved with either or both of the wave sources being widely distributed in azimuth, and the use of a 2D array sampling the wave field over a range of azimuths. Several variants of the method extend its applicability to under-sampled data from sparse arrays, the complexity of multiple-mode propagation of energy, and the problem of precise estimation where array geometry departs from an ideal regular array. We find that sparse nested triangular arrays are generally sufficient, and the use of high-density circular arrays is unlikely to be cost-effective in routine applications. We recommend that passive seismic arrays should be the method of first choice when characterizing average S-wave velocity to a depth of 30 m (Vs30) and deeper, with active seismic methods such as multichannel analysis of surface waves (MASW) being a complementary method for use if and when conditions so require. The use of computer inversion methodology allows estimation of not only the S-wave velocity profile but also parameter uncertainties in terms of layer thickness and velocity. The coupling of SPAC methods with horizontal/vertical particle motion spectral ratio analysis generally allows use of lower frequency data, with consequent resolution of deeper layers than is possible with SPAC alone. Considering its non-invasive methodology, logistical flexibility, simplicity, applicability, and stability, the SPAC method and its various modified extensions will play an increasingly important role in site effect evaluation. The paper summarizes the fundamental theory of the SPAC method, reviews recent developments, and offers recommendations for future blind studies.
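As a rough illustration of the direct SPAC-spectrum fitting mentioned above (as opposed to first picking a dispersion curve), the sketch below fits a full coherency spectrum to J0(2πfr/c(f)) by grid search over a toy three-parameter dispersion model; the model form, parameter values and noise level are assumptions, not the authors' inversion scheme.

```python
import numpy as np
from scipy.special import j0

freqs = np.linspace(1.0, 15.0, 60)                  # Hz
r = 30.0                                            # station separation (m), hypothetical

def model_dispersion(f, v0, v_inf, f0):
    """Toy phase-velocity curve: fast at low frequency (deep), slow at high frequency (shallow)."""
    return v_inf + (v0 - v_inf) * np.exp(-f / f0)

def model_spac(f, r, params):
    return j0(2 * np.pi * f * r / model_dispersion(f, *params))

# "observed" spectrum generated from a known model plus noise, standing in for data
true_params = (800.0, 250.0, 4.0)
rng = np.random.default_rng(1)
rho_obs = model_spac(freqs, r, true_params) + 0.05 * rng.standard_normal(freqs.size)

# grid search: misfit of the full SPAC spectrum, no intermediate dispersion-curve picking
candidates = [(v0, vi, f0) for v0 in (600, 800, 1000)
                           for vi in (200, 250, 300)
                           for f0 in (2.0, 4.0, 6.0)]
best = min(candidates, key=lambda p: np.sum((rho_obs - model_spac(freqs, r, p))**2))
print(best)   # recovers parameters close to true_params when the noise is modest
```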
Surface‐wave Green’s tensors in the near field
Haney, Matt; Nakahara, Hisashi
2014-01-01
We demonstrate the connection between theoretical expressions for the correlation of ambient noise Rayleigh and Love waves and the exact surface‐wave Green’s tensors for a point force. The surface‐wave Green’s tensors are well known in the far‐field limit. On the other hand, the imaginary part of the exact Green’s tensors, including near‐field effects, arises in correlation techniques such as the spatial autocorrelation (SPAC) method. Using the imaginary part of the exact Green’s tensors from the SPAC method, we find the associated real part using the Kramers–Kronig relations. The application of the Kramers–Kronig relations is not straightforward, however, because the causality properties of the different tensor components vary. In addition to the Green’s tensors for a point force, we also derive expressions for a general point moment tensor source.
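The Kramers–Kronig link mentioned above can be illustrated numerically on a function whose causality is known; the damped-oscillator response below is an assumed stand-in (not the surface-wave Green's tensors), and, as the abstract notes, sign and causality conventions must be handled per component in the real application.

```python
import numpy as np
from scipy.signal import hilbert

w = np.linspace(-200.0, 200.0, 4001)           # angular frequency grid (rad/s)
w0, gamma = 50.0, 5.0                          # assumed resonance and damping
G = 1.0 / (w0**2 - w**2 - 1j * gamma * w)      # causal damped-oscillator response

# For a causal response, Re G equals minus the Hilbert transform of Im G;
# scipy's hilbert() returns the analytic signal, whose imaginary part is H[Im G].
re_from_kk = -np.imag(hilbert(np.imag(G)))

interior = slice(500, -500)                    # FFT-based transform is inexact near the grid edges
print(np.max(np.abs(re_from_kk[interior] - G.real[interior])))   # small residual
```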
Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m
NASA Astrophysics Data System (ADS)
Czarnota, Karol; Gorbatov, Alexei
2016-04-01
In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
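A minimal HVSR sketch follows, with assumed window length and smoothing choices (this is not the geopsy workflow used in the study): the H/V curve is the ratio of the combined horizontal to the vertical amplitude spectrum, averaged over noise windows, and its low-frequency peak is read as the fundamental site resonance.

```python
import numpy as np

def hvsr(z, n, e, fs, nperseg=4096):
    """Average H/V over non-overlapping windows of a three-component noise record."""
    nwin = len(z) // nperseg
    f = np.fft.rfftfreq(nperseg, d=1 / fs)
    taper = np.hanning(nperseg)
    ratios = []
    for w in range(nwin):
        s = slice(w * nperseg, (w + 1) * nperseg)
        Z = np.abs(np.fft.rfft(z[s] * taper))
        N = np.abs(np.fft.rfft(n[s] * taper))
        E = np.abs(np.fft.rfft(e[s] * taper))
        H = np.sqrt((N**2 + E**2) / 2.0)        # quadratic mean of the horizontals
        ratios.append(H / (Z + 1e-20))
    return f, np.mean(ratios, axis=0)

# usage sketch: f, hv = hvsr(z_trace, n_trace, e_trace, fs=100.0); f0 = f[np.argmax(hv[1:]) + 1]
```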
NASA Astrophysics Data System (ADS)
Flores-Estrella, H.; Aguirre, J.; Boore, D.; Yussim, S.
2001-12-01
Microtremor recordings have become a useful tool for microzonation studies in countries with low to moderate seismicity and also in countries where there are few seismographs or the recurrence time for an earthquake is quite long. Microtremor recordings can be made at almost any time and any place without needing to wait for an earthquake. The measurements can be made using one station or an array of stations. Microtremor recordings can be used to estimate site response directly (e.g. by using Nakamura's technique), or they can be used to estimate shear-wave velocities, from which site response can be calculated. A number of studies have found that the direct estimation of site response may be unreliable, except for identifying the fundamental resonant period of a site. Obtaining shear-wave velocities requires inverting measurements of Rayleigh wave phase velocities from microtremors, which are obtained by using the Spatial Autocorrelation (SPAC) (Aki, 1957) or the Frequency-Wave Number (F-K) (Horike, 1985) methods. Estimating shear-wave velocities from microtremor recordings is a cheaper alternative than direct methods, such as the logging of boreholes. In this work we use simultaneous microtremor recordings from triangular arrays located at two sites in Mexico City, Mexico, one ("Texcoco") with a lacustrine sediment layer of about 200 m depth, and the other one ("Ciudad Universitaria") underlain by 2,000 year old basaltic flows from Xitle volcano. The data are analyzed using both the SPAC method and the standard F-K method. The results obtained from the SPAC method are more consistent with expectations from the geological conditions and an empirical transfer function (Montalvo et al., 2001) than those from the F-K method. We also analyze data from the Hollister Municipal Airport in California. The triangular array at this site is located near a borehole from which seismic velocities have been obtained using a downhole logging method (Liu et al., 2000). We compare results from the microtremor recordings analyzed using both the SPAC and F-K methods with those obtained from the downhole logging.
Microtremors study applying the SPAC method in Colima state, Mexico.
NASA Astrophysics Data System (ADS)
Vázquez Rosas, R.; Aguirre González, J.; Mijares Arellano, H.
2007-05-01
One of the main components of seismic risk studies is determining the site effect. This can be estimated by means of microtremor measurements. From the H/V spectral ratio (Nakamura, 1989), the predominant period of the site can be estimated, although the predominant period by itself cannot represent the site effect over a wide range of frequencies and does not provide information on the stratigraphy. The SPAC method (Spatial Auto-Correlation Method, Aki 1957), on the other hand, is useful for estimating the stratigraphy of the site. It is based on the simultaneous recording of microtremors at several stations deployed in an instrumental array. Through computation of the spatial autocorrelation coefficient, the Rayleigh wave dispersion curve can be derived. Finally, the stratigraphic model (thickness, S- and P-wave velocity, and density of each layer) is estimated by fitting the theoretical dispersion curve to the observed one. The theoretical dispersion curve is initially computed using a proposed model, which is modified several times until the theoretical curve fits the observations. This method requires a minimum of three stations at which microtremors are recorded simultaneously. We applied the SPAC method to six sites in Colima state, Mexico: Santa Barbara, Cerro de Ortega, Tecoman, Manzanillo and two in Colima city. In total, 16 arrays were deployed using equilateral triangles with apertures ranging from 5 m to 60 m. For recording microtremors we used short-period (5 s) velocity-type vertical sensors connected to a K2 (Kinemetrics) acquisition system. We could estimate the velocities of the most superficial layers, reaching different depths at each site. For the Santa Bárbara site the exploration depth was about 30 m, for Tecoman 12 m, for Manzanillo 35 m, for Cerro de Ortega 68 m, and the deepest exploration was obtained in Colima city, at around 73 m. The S-wave velocities fluctuate between 230 m/s and 420 m/s for the most superficial layer, meaning that, in general, the most superficial layers are quite competent. The lowest superficial S-wave velocity was observed in Tecoman, while the highest was observed in Cerro de Ortega. Our estimates are consistent with downhole velocity records obtained in Santa Barbara by previous studies.
NASA Astrophysics Data System (ADS)
Morales, L. E. A. P.; Aguirre, J.; Vazquez Rosas, R.; Suarez, G.; Contreras Ruiz-Esparza, M. G.; Farraz, I.
2014-12-01
Methods that use seismic noise or microtremors have become very useful tools worldwide due to their low cost, the relative simplicity of data collection, the fact that they are non-invasive (there is no need to alter or even perforate the study site), and their relatively simple analysis procedures. Nevertheless, the geological structures estimated by these methods are assumed to consist of parallel, isotropic and homogeneous layers. Consequently, the precision of the estimated structure is lower than that of conventional seismic methods. In light of these facts, this study aimed to find a new way to interpret the results obtained from seismic noise methods. In this study, seven triangular SPAC (Aki, 1957) arrays were deployed in the city of Coatzacoalcos, Veracruz, varying in size from 10 to 100 meters. From the autocorrelation between the stations of each array, a Rayleigh wave phase velocity dispersion curve was calculated. This dispersion curve was used to obtain a parallel-layer S-wave velocity (VS) structure for the study site. Subsequently, the horizontal-to-vertical spectral ratio of microtremors (H/V; Nogoshi and Igarashi, 1971; Nakamura, 1989, 2000) was calculated for each vertex of the SPAC triangular arrays, and the fundamental frequency was estimated for each vertex from the H/V spectrum. By interpreting the H/V spectral ratio curves as a proxy for the Rayleigh wave ellipticity curve, a VS structure was inverted for each vertex of the SPAC arrays. Lastly, the VS structures were combined into a 3D velocity model with an exploration depth of approximately 100 meters and velocities ranging from 206 m/s to 920 m/s. The 3D model revealed a thinning of the low velocity layers. This proved to be in good agreement with the variation of the fundamental frequencies observed at each vertex. With this kind of analysis, a preliminary model can be obtained as a first approximation, so that more careful studies can then be conducted for a detailed geological characterization of a specific site. The continuous development of microtremor methods creates many areas of interest in earthquake engineering, which is one reason these methods have gained presence all over the globe.
Analysis of shear wave velocity structure obtained from surface wave methods in Bornova, Izmir
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr; Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr
2016-04-18
Describing the properties of the soil column above bedrock accurately and reliably is necessary for the reduction of earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance difference between soil and bedrock. Detecting this change first requires shear wave velocity and thickness information for the layers above bedrock. Shear wave velocity can be obtained by inverting Rayleigh wave dispersion curves obtained from surface wave methods (MASW, Multichannel Analysis of Surface Waves; ReMi, Refraction Microtremor; SPAC, Spatial Autocorrelation). While the investigation depth of active-source surveys is limited, passive-source methods reach the greater depths that active-source methods cannot. The ReMi method is used to determine layer thickness and velocity down to about 100 m using seismic refraction measurement systems. SPAC allows investigation to the desired depth, depending on array radius, and is easily applied in urban conditions that restrict the use of active seismic surveys. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh wave dispersion curves from active- and passive-source methods. In this study, surface wave data were collected with MASW, ReMi and SPAC measurements in the Bornova region of İzmir. The dispersion curves obtained from the surface wave methods were combined over a wide frequency band and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was assessed by comparing the theoretical transfer function computed from the soil parameters with the observed soil transfer function from the Nakamura technique and examining the fit between these functions. Vs values range between 200 and 830 m/s, and the depth to engineering bedrock (Vs > 760 m/s) is approximately 150 m.
Comparing approaches to screening for angle closure in older Chinese adults
Andrews, J; Chang, D S; Jiang, Y; He, M; Foster, P J; Munoz, B; Kashiwagi, K; Friedman, D S
2012-01-01
Aims Primary angle-closure glaucoma is expected to account for nearly 50% of bilateral glaucoma blindness by 2020. This study was conducted to assess the performance of the scanning peripheral anterior chamber depth analyzer (SPAC) and limbal anterior chamber depth (LACD) as screening methods for angle closure. Methods This study assessed two clinical populations to compare SPAC, LACD, and gonioscopy: the Zhongshan Angle-closure Prevention Trial, from which 370 patients were eligible as closed-angle participants and the Liwan Eye Study, from which 72 patients were selected as open-angle controls. Eligible participants were assessed by SPAC, LACD, and gonioscopy. Results Angle status was defined by gonioscopy. Area under the receiver operating characteristic curve (AUROC) for SPAC was 0.92 (0.89–0.95) whereas AUROC for LACD was 0.94 (0.92–0.97). Using conventional cutoff points, sensitivity/specificity was 93.0%/70.8% for SPAC and 94.1%/87.5% for LACD. Sequential testing using both SPAC and LACD increased the specificity to 94.4% and decreased the sensitivity to 87.0%. Conclusion SPAC has significantly lower specificity than LACD measurement using conventional cutoffs but interpretation of the findings can be performed by modestly trained personnel. PMID:21997356
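As a reading aid for the screening statistics quoted above, the sketch below computes sensitivity and specificity from a 2x2 table and shows how a serial (both-positive) combination of two tests trades sensitivity for specificity; the counts are invented for illustration, not the study data, and the serial formula assumes the two tests err independently.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts for two index tests against a gonioscopy reference
sens_a, spec_a = sens_spec(tp=344, fn=26, tn=51, fp=21)     # test A alone
sens_b, spec_b = sens_spec(tp=348, fn=22, tn=63, fp=9)      # test B alone

# serial ("both positive") rule under an independence assumption:
# sensitivity multiplies down while specificity climbs toward 1
sens_serial = sens_a * sens_b
spec_serial = 1 - (1 - spec_a) * (1 - spec_b)
print(round(sens_serial, 3), round(spec_serial, 3))
```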
Estimation of Vs30 Soil Profile Structure of Singapore from Microtremor Records
NASA Astrophysics Data System (ADS)
Walling, M. Y.; Megawati, K.; Zhu, C.
2012-04-01
Singapore lies at the southern tip of the Malay Peninsula, covering a land area of 600 km2 and with a population exceeding 5 million. Array microtremor recordings were carried out at 40 sites in Singapore, encompassing all the major geological formations. The Spatial Autocorrelation (SPAC) method is employed to determine the phase velocity dispersion curves, which are subsequently inverted to determine the shallow shear-wave velocity (Vs) and soil stratigraphy. The depth of penetration is generally about 30 m - 40 m for most of the sites. For the present study, the Vs estimation is restricted to the upper 30 m of the soil (Vs30), conforming to the IBC (2006). The Reclaimed Land and the young Quaternary soft soil deposits of the Kallang Formation show low Vs30 values ranging from 207 m/s to 247 m/s, corresponding to site class E or the boundary of site classes E and D. The Old Alluvium formation shows higher Vs30 values ranging from 362 m/s to 563 m/s and can be classified as site class C. The estimated Vs30 for the sedimentary sequence of the Jurong Formation also indicates site class C, with Vs30 ranging from 317 m/s to 712 m/s. On the other hand, the Bukit Timah Granite body shows low Vs30 ranging from 225 m/s to 387 m/s, with most sites falling in site class D and a few at the boundary of classes D and C for the upper 30 m. This low Vs30 of the granitic body can be explained by the intense weathering that the granite has undergone in its upper layer, which is also supported by borehole records. The SPAC results are compared with nearby borehole data and show a good correlation for sites with soft soil formations and for the weathered granite body. The correlation confirms the reliability of the SPAC method, which can be applied in highly populated, urbanized places like Singapore. The present research findings will be useful for further studies of site response analysis, site characterization and ground motion simulation.
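For reference, the time-averaged Vs30 used for the site classes above is the travel-time (harmonic) average of the layer velocities over the top 30 m; the layer model below is an assumed example, not a Singapore profile from the study.

```python
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear-wave velocity over the top 30 m: 30 / sum(h_i / v_i)."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        use = min(h, 30.0 - depth)          # only the part of the layer above 30 m counts
        travel_time += use / v
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# hypothetical soft-soil profile: 5 m at 180 m/s, 12 m at 240 m/s, then 400 m/s below
print(round(vs30([5.0, 12.0, 1.0e3], [180.0, 240.0, 400.0])))   # ~272 m/s, NEHRP/IBC site class D
```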
Nakazawa, Yoshifumi; Matsui, Yoshihiko; Hanamura, Yusuke; Shinno, Koki; Shirasaki, Nobutaka; Matsushita, Taku
2018-07-01
Superfine powdered activated carbon (SPAC; particle diameter ∼1 μm) has greater adsorptivity for organic molecules than conventionally sized powdered activated carbon (PAC). Although SPAC is currently used in the pretreatment to membrane filtration at drinking water purification plants, it is not used in conventional water treatment consisting of coagulation-flocculation, sedimentation, and rapid sand filtration (CSF), because it is unclear whether CSF can adequately remove SPAC from the water. In this study, we therefore investigated the residual SPAC particles in water after CSF treatment. First, we developed a method to detect and quantify trace concentration of carbon particles in the sand filtrate. This method consisted of 1) sampling particles with a membrane filter and then 2) using image analysis software to manipulate a photomicrograph of the filter so that black spots with a diameter >0.2 μm (considered to be carbon particles) could be visualized. Use of this method revealed that CSF removed a very high percentage of SPAC: approximately 5-log in terms of particle number concentrations and approximately 6-log in terms of particle volume concentrations. When waters containing 7.5-mg/L SPAC and 30-mg/L PAC, concentrations that achieved the same adsorption performance, were treated, the removal rate of SPAC was somewhat superior to that of PAC, and the residual particle number concentrations for SPAC and PAC were at the same low level (100-200 particles/mL). Together, these results suggest that SPAC can be used in place of PAC in CSF treatment without compromising the quality of the filtered water in terms of particulate matter contamination. However, it should be noted that the activated carbon particles after sand filtration were smaller in terms of particle size and were charge-neutralized to a lesser extent than the activated carbon particles before sand filtration. Therefore, the tendency of small particles to escape in the filtrate would appear to be related to the fact that their small size leads to a low destabilization rate during the coagulation process and a low collision rate during the flocculation and filtration processes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Amaral, Pauline; Partlan, Erin; Li, Mengfei; Lapolli, Flavio; Mefford, O Thompson; Karanfil, Tanju; Ladner, David A
2016-09-01
In microfiltration processes for drinking water treatment, one method of removing trace contaminants is to add powdered activated carbon (PAC). Recently, a version of PAC called superfine PAC (S-PAC) has been under development. S-PAC has a smaller particle size and thus faster adsorption kinetics than conventionally sized PAC. Membrane coating performance of various S-PAC samples was evaluated by measuring adsorption of atrazine, a model micropollutant. S-PACs were created in-house from PACs of three different materials: coal, wood, and coconut shell. Milling time was varied to produce S-PACs pulverized with different amounts of energy. These had different particle sizes, but other properties (e.g. oxygen content) also differed. In pure water the coal-based S-PACs showed superior atrazine adsorption; all milled carbons had over 90% removal while the PAC had only 45% removal. With addition of calcium and/or NOM, removal rates decreased, but milled carbons still removed more atrazine than PAC. Oxygen content and specific external surface area (both of which increased with longer milling times) were the most significant predictors of atrazine removal. S-PAC coatings resulted in loss of filtration flux compared to an uncoated membrane and smaller particles caused more flux decline than larger particles; however, the data suggest that NOM fouling is still more of a concern than S-PAC fouling. The addition of calcium improved the flux, especially for the longer-milled carbons. Overall the data show that when milling S-PAC with different levels of energy there is a tradeoff: smaller particles adsorb contaminants better, but cause greater flux decline. Fortunately, an acceptable balance may be possible; for example, in these experiments the coal-based S-PAC after 30 min of milling achieved a fairly high atrazine removal (overall 80%) with a fairly low flux reduction (under 30%) even in the presence of NOM. This suggests that relatively short duration (low energy) milling is viable for creating useful S-PAC materials applied in tandem with microfiltration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Use of Microtremor Array Recordings for Mapping Subsurface Soil Structure, Singapore
NASA Astrophysics Data System (ADS)
Walling, M.
2012-12-01
Microtremor array recordings were carried out in Singapore over different geological formations to study the influence of each site in modelling the subsurface structure. The Spatial Autocorrelation (SPAC) method is utilized to compute the soil profiles. The array configuration consists of 7 seismometers recording the vertical component of ground motion, and the recording at each site was carried out for 30 minutes. The results from the analysis show that the soil structure modelled for the young alluvium of the Kallang Formation (KF), in terms of shear wave velocity (Vs), correlates well with borehole information, while for the older geological formations, the Jurong Formation (JF) (a sedimentary rock sequence) and the Old Alluvium (OA) (a dense alluvium formation), the correlation is less clear owing to the lack of impedance contrast. The Bukit Timah Granite (BTG) shows contrasting results within the formation, with the northern BTG suggesting a low-Vs upper layer of about 20 m - 30 m while the southern BTG reveals a dense formation. This discrepancy within the BTG is confirmed by borehole data, which reveal that the northern BTG has undergone intense weathering while the southern BTG has not undergone noticeable weathering. A few sites with poor recording quality could not resolve the soil structure. Microtremor array recording is good for mapping sites with soft soil formations and weathered rock formations, but can be limited in the absence of a subsurface velocity contrast or by poor-quality microtremor records. [Figure: correlation between the Vs30 estimated from the SPAC method and borehole data for the four major geological formations of Singapore; encircled sites are those with recording errors.]
Kamioka, Hiroharu; Tsutani, Kiichiro; Maeda, Masaharu; Hayasaka, Shinya; Okuizum, Hiroyasu; Goto, Yasuaki; Okada, Shinpei; Kitayuguchi, Jun; Abe, Takafumi
2014-11-01
The purpose of this study was to assess the quality of reports of randomized controlled trials of spa therapy using the spa therapy and balneotherapy checklist (SPAC), and to examine the relationship between SPAC score and publication characteristics. We searched the following databases from 1990 up to September 30, 2013: MEDLINE via PubMed, CINAHL, Web of Science, Ichushi Web, Global Health Library, the Western Pacific Region Index Medicus, PsycINFO, and the Cochrane Database of Systematic Reviews. We used the SPAC, which was developed using the Delphi consensus method, to assess the quality of reports on spa therapy and balneotherapy trials. Fifty-one studies met all inclusion criteria. Forty studies (78%) concerned "Diseases of the musculoskeletal system and connective tissue". The total SPAC score (maximum 19 pts) was 10.8 ± 2.3 pts (mean ± SD). The items for which a description was lacking (very poor; <50%) in many studies were as follows: "locations of spa facility where the data were collected"; "pH"; "scale of bathtub"; "presence of facilities and exposures other than bathing (sauna, steam bath, etc.)"; "qualification and experience of care provider"; "instructions about daily life"; and "adherence". We found no relationship between publication period, language, or impact factor (IF) and the SPAC score. To prevent flawed reporting, the SPAC can provide indispensable information for researchers designing a research protocol for each disease. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deng, Zijuan; Guan, Huade; Hutson, John; Forster, Michael A.; Wang, Yunquan; Simmons, Craig T.
2017-06-01
A novel simple soil-plant-atmospheric continuum model that emphasizes the vegetation's role in controlling water transfer (v-SPAC) has been developed in this study. The v-SPAC model aims to incorporate both plant and soil hydrological measurements into plant water transfer modeling. The model is different from previous SPAC models in which v-SPAC uses (1) a dynamic plant resistance system in the form of a vulnerability curve that can be easily obtained from sap flow and stem xylem water potential time series and (2) a plant capacitance parameter to buffer the effects of transpiration on root water uptake. The unique representation of root resistance and capacitance allows the model to embrace SPAC hydraulic pathway from bulk soil, to soil-root interface, to root xylem, and finally to stem xylem where the xylem water potential is measured. The v-SPAC model was tested on a native tree species in Australia, Eucalyptus crenulata saplings, with controlled drought treatment. To further validate the robustness of the v-SPAC model, it was compared against a soil-focused SPAC model, LEACHM. The v-SPAC model simulation results closely matched the observed sap flow and stem water potential time series, as well as the soil moisture variation of the experiment. The v-SPAC model was found to be more accurate in predicting measured data than the LEACHM model, underscoring the importance of incorporating root resistance into SPAC models and the benefit of integrating plant measurements to constrain SPAC modeling.
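A toy sketch of the plant-resistance idea described above, assuming a sigmoidal vulnerability curve and a single stem capacitance; the functional form and parameter values are illustrative, not those fitted to the Eucalyptus crenulata data in the study.

```python
# assumed vulnerability curve: xylem conductance declines sigmoidally with stem water potential
k_max, psi50, shape = 2.0e-4, -2.0, 4.0      # kg s-1 MPa-1, MPa, dimensionless

def conductance(psi_stem):
    return k_max / (1.0 + (psi_stem / psi50) ** shape)

def sap_flow(psi_soil, psi_stem):
    """Flow through the soil-root-stem pathway driven by the potential difference."""
    return conductance(psi_stem) * (psi_soil - psi_stem)

# capacitance buffers transpiration: stem potential relaxes toward the supply/demand balance
capacitance = 0.5          # kg MPa-1, assumed
dt = 600.0                 # s
psi_soil, psi_stem, E = -0.4, -1.2, 1.5e-4   # MPa, MPa, transpiration in kg s-1
for _ in range(36):        # six hours at constant transpiration
    dpsi = (sap_flow(psi_soil, psi_stem) - E) / capacitance
    psi_stem += dpsi * dt

print(round(psi_stem, 2))  # settles near the potential where root uptake balances transpiration
```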
Asten, M.W.; Stephenson, William J.; Hartzell, Stephen
2015-01-01
The SPAC method of processing microtremor noise observations for estimation of Vs profiles has the limitation that the array must have circular or triangular symmetry in order to allow spatial (azimuthal) averaging of inter-station coherencies over a constant station separation. Common processing methods allow station separations to vary by typically ±10% in the azimuthal averaging before degradation of the SPAC spectrum is excessive. A limitation on the use of high wavenumbers in inversions of SPAC spectra to Vs profiles has been the requirement for exact array symmetry to avoid loss of information in the azimuthal averaging step. In this paper we develop a new wavenumber-normalised SPAC method (KRSPAC) where, instead of averaging sets of coherency versus frequency spectra and then fitting to a model SPAC spectrum, we interpolate each spectrum to coherency versus kr, where k and r are the wavenumber and station separation, respectively, and r may differ for each pair of stations. For fundamental-mode Rayleigh-wave energy the model SPAC spectrum to be fitted reduces to J0(kr). The normalization process changes with each iteration, since k is a function of frequency and phase velocity and hence is updated at each iteration. The method proves robust and is demonstrated on data acquired in the Santa Clara Valley, CA (site STGA), where an asymmetric array having station separations varying by a factor of 2 is compared with a conventional triangular array; a 300-m-deep borehole with a downhole Vs log provides nearby ground truth. The method is also demonstrated on data from the Pleasanton array, CA, where station spacings are irregular and vary from 400 to 1200 m. The KRSPAC method allows inversion of data using kr (unitless) values routinely up to 30, and occasionally up to 60. Thus despite the large and irregular station spacings, this array permits resolution of Vs as fine as 15 m for the near-surface sediments, and down to a maximum depth of 2.5 km.
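A schematic sketch of the kr-normalization step described above (assumed array geometry and dispersion model, not the authors' code): each station pair's coherency spectrum is mapped onto a common kr axis using the current phase-velocity estimate, so pairs with different separations can be stacked and compared against J0(kr); in a full inversion the velocity model, and hence the mapping, is updated at each iteration.

```python
import numpy as np
from scipy.special import j0

freqs = np.linspace(0.5, 10.0, 50)                       # Hz
c_est = 900.0 - 40.0 * np.log(freqs + 1.0)               # current phase-velocity model (m/s), assumed

# irregular station separations (m), one coherency spectrum per pair (synthetic stand-in data)
separations = np.array([420.0, 760.0, 1150.0])
coh = np.array([j0(2 * np.pi * freqs * r / c_est) for r in separations])

# map every spectrum onto a common kr axis using the current model, then stack
k = 2 * np.pi * freqs / c_est                             # wavenumber under the current model
kr_axis = np.linspace(5.0, 30.0, 250)                     # restricted to the range covered by all pairs
stacked = np.mean([np.interp(kr_axis, k * r, c_r) for r, c_r in zip(separations, coh)], axis=0)

# after normalization, all pairs should collapse onto the single model curve J0(kr)
misfit = np.sum((stacked - j0(kr_axis)) ** 2)
print(misfit)
```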
Ganguli, Kriston; Collado, Maria Carmen; Rautava, Jaana; Lu, Lei; Satokari, Reetta; von Ossowski, Ingemar; Reunanen, Justus; de Vos, Willem M.; Palva, Airi; Isolauri, Erika; Salminen, Seppo; Walker, W. Allan; Rautava, Samuli
2015-01-01
Background Bacterial contact in utero modulates fetal and neonatal immune responses. Maternal probiotic supplementation reduces the risk of immune-mediated disease in the infant. We investigated the immunomodulatory properties of live Lactobacillus rhamnosus GG and its SpaC pilus adhesin in human fetal intestinal models. Methods TNF-α mRNA expression was measured by qPCR in a human fetal intestinal organ culture model exposed to live L. rhamnosus GG and proinflammatory stimuli. Binding of recombinant SpaC pilus protein to intestinal epithelial cells was assessed in human fetal intestinal organ culture and the human fetal intestinal epithelial cell line H4 by immunohistochemistry and immunofluorescence, respectively. TLR-related gene expression in fetal ileal organ culture after exposure to recombinant SpaC was assessed by qPCR. Results Live L. rhamnosus GG significantly attenuates pathogen-induced TNF-α mRNA expression in the human fetal gut. Recombinant SpaC protein was found to adhere to the fetal gut and to modulate varying levels of TLR-related gene expression. Conclusion The human fetal gut is responsive to luminal microbes. L. rhamnosus GG significantly attenuates fetal intestinal inflammatory responses to pathogenic bacteria. The L. rhamnosus GG pilus adhesin SpaC binds to immature human intestinal epithelial cells and directly modulates intestinal epithelial cell innate immune gene expression. PMID:25580735
Wong, Hon-Tym; Chua, Jocelyn L L; Sakata, Lisandro M; Wong, Melissa H Y; Aung, Han T; Aung, Tin
2009-05-01
To evaluate the effectiveness of slitlamp optical coherence tomography (SL-OCT) and Scanning Peripheral Anterior Chamber depth analyzer (SPAC) in detecting angle closure, using gonioscopy as the reference standard. A total of 153 subjects underwent gonioscopy, SL-OCT, and SPAC. The anterior chamber angle (ACA) was classified as closed on gonioscopy if the posterior trabecular meshwork could not be seen; with SL-OCT, closure was determined by contact between the iris and angle wall anterior to the scleral spur; and with SPAC by a numerical grade of 5 or fewer and/or a categorical grade of suspect or potential. A closed ACA was identified in 51 eyes with gonioscopy, 86 eyes with SL-OCT, and 61 eyes with SPAC (gonioscopy vs SL-OCT, P < .001; gonioscopy vs SPAC, P = .10; SL-OCT vs SPAC, P < .001; McNemar test). Of the 51 eyes with a closed ACA on gonioscopy, SL-OCT detected a closed ACA in 43, whereas SPAC identified 41 (P = .79). An open angle in all 4 quadrants was observed in 102 eyes with gonioscopy, but SL-OCT and SPAC identified 43 and 20 of these eyes, respectively, as having angle closure. The overall sensitivity and specificity for SL-OCT were 84% and 58% vs 80% and 80% for SPAC. Using gonioscopy as the reference, SL-OCT and SPAC showed good sensitivity for detecting eyes at risk of angle closure.
Korb, Werner; Bohn, Stefan; Burgert, Oliver; Dietz, Andreas; Jacobs, Stephan; Falk, Volkmar; Meixensberger, Jürgen; Strauss, Gero; Trantakis, Christos; Lemke, Heinz U
2006-01-01
For better integration of surgical assist systems into the operating room, a common communication and processing platform that is based on the users' needs is required. The development of such a system, a Surgical Picture Acquisition and Communication System (S-PACS), according to the systems engineering cycle is outlined in this paper. The first two steps (concept and specification) for the engineering of the S-PACS are discussed. A method for the systematic integration of the users' needs, Quality Function Deployment (QFD), is presented. The properties of QFD for the underlying problem and first results are discussed. Finally, this leads to a first definition of an S-PACS system.
Volti, Theodora; Burbidge, David; Collins, Clive; Asten, Michael W.; Odum, Jackson K.; Stephenson, William J.; Pascal, Chris; Holzschuh, Josef
2016-01-01
Although the time‐averaged shear‐wave velocity down to 30 m depth (VS30) can be a proxy for estimating earthquake ground‐motion amplification, significant controversy exists about its limitations when used as a single parameter for the prediction of amplification. To examine this question in absence of relevant strong‐motion records, we use a range of different methods to measure the shear‐wave velocity profiles and the resulting theoretical site amplification factors (AFs) for 30 sites in the Newcastle area, Australia, in a series of blind comparison studies. The multimethod approach used here combines past seismic cone penetrometer and spectral analysis of surface‐wave data, with newly acquired horizontal‐to‐vertical spectral ratio, passive‐source surface‐wave spatial autocorrelation (SPAC), refraction microtremor (ReMi), and multichannel analysis of surface‐wave data. The various measurement techniques predicted a range of different AFs. The SPAC and ReMi techniques have the smallest overall deviation from the median AF for the majority of sites. We show that VS30 can be related to spectral response above a period T of 0.5 s but not necessarily with the maximum amplification according to the modeling done based on the measured shear‐wave velocity profiles. Both VS30 and AF values are influenced by the velocity ratio between bedrock and overlying sediments and the presence of surficial thin low‐velocity layers (<2 m thick and <150 m/s), but the velocity ratio is what mostly affects the AF. At 0.2
Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure
NASA Astrophysics Data System (ADS)
Jia, Tianxia
2011-12-01
This thesis includes three major parts: (1) body wave analysis of mantle structure under the Calabria slab, (2) Spatial Average Coherency (SPAC) analysis of microtremors to characterize the subsurface structure in urban areas, and (3) surface wave dispersion inversion for shear wave velocity structure. Although these three projects apply different techniques and investigate different parts of the Earth, their aim is the same: to better understand and characterize the subsurface Earth structure by analyzing complex seismic waveforms that are recorded on the Earth surface. My first project is body wave analysis of mantle structure under the Calabria slab. Its aim is to better understand the subduction structure of the Calabria slab by analyzing seismograms generated by natural earthquakes. The rollback and subduction of the Calabrian Arc beneath the southern Tyrrhenian Sea is a case study of slab morphology and slab-mantle interactions at short spatial scale. I analyzed seismograms traversing the Calabrian slab and upper mantle wedge under the southern Tyrrhenian Sea, recorded during the PASSCAL CAT/SCAN experiment, in terms of body wave dispersion, scattering and attenuation. Compressional body waves exhibit dispersion correlating with slab paths, with high-frequency components arriving delayed relative to low-frequency components. Body wave scattering and attenuation are also spatially correlated with slab paths. I used this correlation to estimate the positions of slab boundaries, and further suggested that the observed spatial variation in near-slab attenuation could be ascribed to mantle flow patterns around the slab. My second project is Spatial Average Coherency (SPAC) analysis of microtremors for subsurface structure characterization. Shear-wave velocity (Vs) information in soil and rock has been recognized as a critical parameter for site-specific ground motion prediction studies, which are highly necessary for urban areas located in seismically active zones. SPAC analysis of microtremors provides an efficient way to estimate Vs structure. Compared with other Vs estimation methods, SPAC is noninvasive and does not require any active sources; therefore, it is especially useful in big cities. I applied the SPAC method in two urban areas. The first is the historic city of Charleston, South Carolina, where high levels of seismic hazard lead to great public concern. Accurate Vs information, therefore, is critical for seismic site classification and site response studies. The second SPAC study is in Manhattan, New York City, where the depths to a strong velocity contrast and to bedrock vary along the island. The two experiments show that Vs structure can be estimated with good accuracy using the SPAC method, as compared with borehole and other techniques. SPAC proves to be an effective technique for Vs estimation in urban areas. One important issue in seismology is the inversion of subsurface structures from surface recordings of seismograms. My third project focuses on solving this complex geophysical inverse problem, specifically the inversion of surface wave phase velocity dispersion curves for shear wave velocity. In addition to standard linear inversion, I developed advanced inversion techniques including joint inversion using borehole data as constraints, and nonlinear inversion using Monte Carlo and Simulated Annealing algorithms. One innovative way of solving the inverse problem is to make inference from the ensemble of all acceptable models.
The statistical features of the ensemble provide a better way to characterize the Earth model.
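A toy sketch of that ensemble idea, using a deliberately simplified two-parameter forward model for the dispersion curve (an assumption, not the thesis code): random models are kept whenever their misfit is acceptable, and the spread of the accepted ensemble characterizes parameter uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(1.0, 10.0, 30)                     # Hz

def forward(v_shallow, v_deep):
    """Toy dispersion curve: phase velocity grades from the deep to the shallow control value."""
    return v_deep + (v_shallow - v_deep) / (1.0 + np.exp(-(freqs - 4.0)))

# synthetic "observed" dispersion curve with noise, standing in for measured picks
c_obs = forward(250.0, 900.0) + 10.0 * rng.standard_normal(freqs.size)

# Monte Carlo sampling: keep every model whose RMS misfit is below a threshold
accepted = []
for _ in range(20000):
    trial = (rng.uniform(150.0, 450.0), rng.uniform(600.0, 1200.0))
    rms = np.sqrt(np.mean((forward(*trial) - c_obs) ** 2))
    if rms < 12.0:
        accepted.append(trial)

ens = np.array(accepted)
print(len(ens), ens.mean(axis=0), ens.std(axis=0))     # ensemble statistics as an uncertainty estimate
```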
Weiss, Michael; Denger, Karin; Huhn, Thomas
2012-01-01
Complete biodegradation of the surfactant linear alkylbenzenesulfonate (LAS) is accomplished by complex bacterial communities in two steps. First, all LAS congeners are degraded into about 50 sulfophenylcarboxylates (SPC), one of which is 3-(4-sulfophenyl)butyrate (3-C4-SPC). Second, these SPCs are mineralized. 3-C4-SPC is mineralized by Comamonas testosteroni KF-1 in a process involving 4-sulfoacetophenone (SAP) as a metabolite and an unknown inducible Baeyer-Villiger monooxygenase (BVMO) to yield 4-sulfophenyl acetate (SPAc) from SAP (SAPMO enzyme); hydrolysis of SPAc to 4-sulfophenol and acetate is catalyzed by an unknown inducible esterase (SPAc esterase). Transcriptional analysis showed that one of four candidate genes for BVMOs in the genome of strain KF-1, as well as an SPAc esterase candidate gene directly upstream, was inducibly transcribed during growth with 3-C4-SPC. The same genes were identified by enzyme purification and peptide fingerprinting-mass spectrometry when SAPMO was enriched and SPAc esterase purified to homogeneity by protein chromatography. Heterologously overproduced pure SAPMO converted SAP to SPAc and was active with phenylacetone and 4-hydroxyacetophenone but not with cyclohexanone and progesterone. SAPMO showed the highest sequence homology to the archetypal phenylacetone BVMO (57%), followed by steroid BVMO (55%) and 4-hydroxyacetophenone BVMO (30%). Finally, the two pure enzymes added sequentially, SAPMO with NADPH and SAP, and then SPAc esterase, catalyzed the conversion of SAP via SPAc to 4-sulfophenol and acetate in a 1:1:1:1 molar ratio. Hence, the first two enzymes of a complete LAS degradation pathway were identified, giving evidence for the recruitment of members of the very versatile type I BVMO and carboxylester hydrolase enzyme families for the utilization of a xenobiotic compound by bacteria. PMID:23001656
Development of a 3D Soil-Plant-Atmosphere Continuum (SPAC) coupled to a Land Surface Model
NASA Astrophysics Data System (ADS)
Bisht, G.; Riley, W. J.; Lorenzetti, D.; Tang, J.
2015-12-01
Exchange of water between the atmosphere and biosphere via evapotranspiration (ET) influences global hydrological, energy, and biogeochemical cycles. Isotopic analysis has shown that evapotranspiration over the continents is largely dominated by transpiration. Water is taken up from soil by plant roots, transported through the plant's vascular system, and evaporated from the leaves. Yet current Land Surface Models (LSMs) integrated into Earth System Models (ESMs) treat plant roots as passive components. These models distribute the ET sink vertically over the soil column, neglect the vertical pressure distribution along the plant vascular system, and assume that leaves can directly access water from any soil layer within the root zone. Numerous studies have suggested that increased warming due to climate change will lead to drought- and heat-induced tree mortality. A more mechanistic treatment of water dynamics in the soil-plant-atmosphere continuum (SPAC) is essential for investigating the fate of ecosystems under a warmer climate. In this work, we describe a 3D SPAC model that can be coupled to a LSM. The SPAC model uses the variably saturated Richards equations to simulate water transport. The model uses individual governing equations and constitutive relationships for the various SPAC components (i.e., soil, root, and xylem). Finite volume spatial discretization and backward Euler temporal discretization is used to solve the SPAC model. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is used to numerically integrate the discretized system of equations. Furthermore, PETSc's multi-physics coupling capability (DMComposite) is used to solve the tightly coupled system of equations of the SPAC model. Numerical results are presented for multiple test problems.
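A minimal 1-D analogue of the discretization described above (finite volume in space, backward Euler in time), solved here with SciPy rather than PETSc; the van Genuchten soil parameters, boundary fluxes and step sizes are assumed for illustration, and the actual model is 3-D with coupled soil, root and xylem domains.

```python
import numpy as np
from scipy.optimize import fsolve

# assumed van Genuchten-Mualem soil parameters
theta_r, theta_s, alpha, n_vg, Ks = 0.05, 0.40, 2.0, 1.8, 1.0e-5   # -, -, 1/m, -, m/s
m_vg = 1.0 - 1.0 / n_vg

def Se(h):                      # effective saturation from pressure head h (m)
    return np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n_vg) ** (-m_vg), 1.0)

def theta(h):                   # volumetric water content
    return theta_r + Se(h) * (theta_s - theta_r)

def K(h):                       # unsaturated hydraulic conductivity (Mualem)
    s = Se(h)
    return Ks * np.sqrt(s) * (1.0 - (1.0 - s ** (1.0 / m_vg)) ** m_vg) ** 2

nz, dz, dt = 30, 0.05, 60.0     # 30 cells of 5 cm, 60 s implicit step
q_top = 1.0e-6                  # assumed steady infiltration at the surface (m/s, downward)
h = np.full(nz, -3.0)           # initially dry column (pressure head, m)

def residual(h_new, h_old):
    """Backward-Euler finite-volume mass balance per cell (index increases with depth)."""
    Ki = 0.5 * (K(h_new[:-1]) + K(h_new[1:]))                # interface conductivities
    q_int = Ki * (1.0 - (h_new[1:] - h_new[:-1]) / dz)       # downward Darcy flux between cells
    q = np.concatenate(([q_top], q_int, [K(h_new[-1])]))     # top flux, interior fluxes, free drainage
    return (theta(h_new) - theta(h_old)) / dt + (q[1:] - q[:-1]) / dz

for _ in range(60):             # one hour of infiltration
    h_old = h.copy()
    h = fsolve(residual, h_old, args=(h_old,))

print(theta(h)[:5])             # wetting front developing in the top cells
```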
Partlan, Erin; Davis, Kathleen; Ren, Yiran; Apul, Onur Guven; Mefford, O Thompson; Karanfil, Tanju; Ladner, David A
2016-02-01
Superfine powdered activated carbon (S-PAC) is an adsorbent material with particle size between roughly 0.1-1 μm. This is about an order of magnitude smaller than conventional powdered activated carbon (PAC), typically 10-50 μm. S-PAC has been shown to outperform PAC for adsorption of various drinking water contaminants. However, variation in S-PAC production methods and limited material characterization in prior studies lead to questions of how S-PAC characteristics deviate from that of its parent PAC. In this study, a wet mill filled with 0.3-0.5 mm yttrium-stabilized zirconium oxide grinding beads was used to produce S-PAC from seven commercially available activated carbons of various source materials, including two coal types, coconut shell, and wood. Particle sizes were varied by changing the milling time, keeping mill power, batch volume, and recirculation rate constant. As expected, mean particle size decreased with longer milling. A lignite coal-based carbon had the smallest mean particle diameter at 169 nm, while the wood-based carbon had the largest at 440 nm. The wood and coconut-shell based carbons had the highest resistance to milling. Specific surface area and pore volume distributions were generally unchanged with increased milling time. Changes in the point of zero charge (pH(PZC)) and oxygen content of the milled carbons were found to correlate with an increasing specific external surface area. However, the isoelectric point (pH(IEP)), which measures only external surfaces, was unchanged with milling and also much lower in value than pH(PZC). It is likely that the outer surface is easily oxidized while internal surfaces remain largely unchanged, which results in a lower average pH as measured by pH(PZC). Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tün, M.; Pekkan, E.; Özel, O.; Guney, Y.
2016-10-01
Amplification can occur in a graben as a result of strong earthquake-induced ground motion. Thus, in seismic hazard and seismic site response studies, it is of the utmost importance to determine the geometry of the bedrock depth. The main objectives of this study were to determine the bedrock depth and map the depth-to-bedrock ratio for use in land use planning in regard to the mitigation of earthquake hazards in the Eskişehir Basin. The fundamental resonance frequencies (fr) of 318 investigation sites in the Eskişehir Basin were determined through case studies, and the 2-D S-wave velocity structure down to the bedrock depth was explored. Single-station microtremor data were collected from the 318 sites, as well as microtremor array data from nine sites, seismic reflection data from six sites, deep-drilling log data from three sites and shallow drilling log data from ten sites in the Eskişehir Graben. The fundamental resonance frequencies of the Eskişehir Basin sites were obtained from the microtremor data using the horizontal-to-vertical (H/V) spectral ratio (HVSR) method. The phase velocities of the Rayleigh waves were estimated from the microtremor data using the spatial autocorrelation (SPAC) method. The fundamental resonance frequency range at the deepest point of the Eskişehir Basin was found to be 0.23-0.35 Hz. Based on the microtremor array measurements and the 2-D S-wave velocity profiles obtained using the SPAC method, a bedrock level with an average velocity of 1300 m/s was accepted as the bedrock depth limit in the region. The log data from a deep borehole and a seismic reflection cross-section of the basement rocks of the Eskişehir Basin were obtained and permitted a comparison of bedrock levels. Tests carried out using a multichannel walk-away technique permitted a seismic reflection cross-section to be obtained up to a depth of 1500-2000 m using an explosive energy source. The relationship between the fundamental resonance frequency in the Eskişehir Basin and the results of deep drilling, shallow drilling, shear wave velocity measurement and sedimentary cover depth measurement obtained from the seismic reflection section was expressed in the form of a nonlinear regression equation. An empirical relationship between fr, the thickness of sediments and the bedrock depth is suggested for use in future microzonation studies of sites in the region. The results revealed a maximum basin depth of 1000 m, located in the northeast of the Eskişehir Basin, and the SPAC and HVSR results indicated that within the study area the basin is characterized by a thin local sedimentary cover with low shear wave velocity overlying stiff materials, resulting in a sharp velocity contrast. The thicknesses of the old Quaternary and Tertiary fluvial sediments within the basin serve as the primary data sources in seismic hazard and seismic site response studies, and these results add to the body of available seismic hazard data contributing to a seismic microzonation of the Eskişehir Graben in advance of the severe earthquakes expected in the Anatolian Region.
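The nonlinear regression mentioned above can be illustrated with the commonly used power-law form h = a·fr^b between sediment thickness and fundamental resonance frequency; the functional form and the synthetic control points below are assumptions for illustration, not the coefficients derived in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def thickness(fr, a, b):
    """Assumed power-law relation between resonance frequency (Hz) and cover thickness (m)."""
    return a * fr ** b

# synthetic (fr, depth) pairs standing in for drill-hole and reflection control points
fr_obs = np.array([0.25, 0.35, 0.6, 1.0, 1.8, 3.0, 5.0])
h_obs = np.array([950.0, 640.0, 330.0, 180.0, 90.0, 48.0, 26.0])

(a, b), _ = curve_fit(thickness, fr_obs, h_obs, p0=(100.0, -1.0))
print(round(a, 1), round(b, 2))          # fitted coefficients of the empirical relation
print(round(thickness(0.3, a, b)))       # predicted cover thickness at fr = 0.3 Hz
```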
- spac0117 TIROS I satellite on test stand during preliminary test stage. In: "Weather Analysis from Satellite Observations, " U.S. Navy Research Facility, December 1960. Figure 1.4. Image ID: spac0117 , NOAA In Space Collection Photo Date: 1960 Circa Category: Space/Satellite/Vehicle/ * High Resolution
- spac0114: Securing cover for TIROS V satellite prior to launching. Lettering on nose cone reads: CAUTION. The cover protected the satellite during its ride through the atmosphere into space. Image ID: spac0114, NOAA In Space Collection.
Xu, Xiao Wu; Yu, Xin Xiao; Jia, Guo Dong; Li, Han Zhi; Lu, Wei Wei; Liu, Zi Qiang
2017-07-18
The soil-vegetation-atmosphere continuum (SPAC) is one of the important research objects in terrestrial hydrology, ecology and global change. The processes of water and carbon cycling, and their coupling mechanism, are frontier issues. With their capacity for tracing, integration and indication, stable isotope techniques contribute to estimating the relationship between carbon sequestration and water consumption in ecosystems. In this review, after a brief introduction to stable isotope principles and techniques, the applications of optical stable isotope techniques to water and carbon exchange in the SPAC are explained, including: partitioning of net carbon exchange into photosynthesis and respiration; partitioning of evapotranspiration into transpiration and evaporation; and coupling of the water and carbon cycles at the ecosystem scale. Advanced techniques and methods provide long-term, high-frequency measurements of isotope signals at the ecosystem scale, but issues such as measurement precision and accuracy, partitioning of ecosystem respiration, model adaptability under non-steady-state conditions, scaling up, and the coupling mechanism of the water and carbon cycles remain challenging. The main existing research findings, limitations and future research prospects are discussed, which may help guide new research and technology development in the field of stable isotope ecology.
Youth Needs Survey: Fall 1983.
ERIC Educational Resources Information Center
Austin Independent School District, TX. Office of Research and Evaluation.
In October 1983, 1,275 Austin Independent School District (AISD) secondary students completed a survey of their needs for social services. The survey was approved by the Board of Trustees in response to a request by the Social Policy Advisory Committee (SPAC). The purpose of the survey was to aid the SPAC in planning social services to meet the…
- spac0020: ESSA I, a TIROS cartwheel satellite launched on February 3, 1966. Image ID: spac0020, NOAA In Space Collection.
Cooling efficiency of a spot-type personalized air-conditioner
Zhu, Shengwei; Dalgo, Daniel; Srebric, Jelena; ...
2017-08-01
Here, this study defined the Cooling Efficiency (CE) of a Spot-type Personalized Air-Conditioning (SPAC) device as the ratio of the additional sensible heat removal from the human body induced by SPAC to the device's cooling capacity. CE enabled the investigation of SPAC performance on the occupant's sensible heat loss (Qs) and thermal sensation through its quantitative relation with the change of PMV level (ΔPMV). Three round nozzles with diameters of 0.08 m, 0.105 m, and 0.128 m, respectively, discharged air jets at airflow rates from 11.8 L s-1 to 59.0 L s-1 toward the chest of a seated or standing human body with clothing of 0.48 clo. This study developed a validated CFD model coupled with Fanger's thermoregulation model to calculate Qs in a room at 26 °C ventilated at a rate of 3 ACH. According to the results, Qs, CE and draft risk (DR) at the face had a significant positive linear correlation with the SPAC device's supply airflow rate (R2 > 0.96), and a negative linear correlation with ΔPMV. With DR = 20% at the face, CE was always under 0.3, and ΔPMV was around -1.0 to -1.1. Interestingly, both CE and ΔPMV had the least favorable values for the air jet produced by the 0.105 m nozzle, independent of body posture. In conclusion, although SPAC could lead to additional Qs by sending air at a higher airflow rate from a smaller nozzle, the improvement in cooling efficiency and thermal sensation had a limit due to draft risk.
Deep Shear Wave Velocity of Southern Bangkok and Vicinity
NASA Astrophysics Data System (ADS)
Wongpanit, T.; Hayashi, K.; Pananont, P.
2017-09-01
Bangkok is located on the soft marine clay of the Lower Chao Phraya Basin, which can amplify seismic waves and affect the shaking of buildings during an earthquake. Deep shear-wave velocities of the sediment in the basin are useful for studying the effect of the soft sediment on seismic waves and can be used for earthquake engineering design and ground-shaking estimation, especially for a deep basin. This study aims to measure deep shear-wave velocity and create a 2D shear-wave velocity profile down to bedrock in southern Bangkok from microtremor measurements with two seismographs using the two-station Spatial Autocorrelation (2-SPAC) technique. The data were collected during the daytime on linear array geometries with offsets varying between 5 and 2,000 m. Low-frequency natural tremor (0.2-0.6 Hz) was detected at many sites; however, the very deep shear-wave data at many sites are ambiguous due to man-made vibration noise in the city. The results show that the shear-wave velocity of the sediment in southern Bangkok is between 100 and 2,000 m/s and indicate that the bedrock depth is about 600-800 m, except at Bang Krachao, where the bedrock depth is unclear.
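For readers unfamiliar with how a phase velocity is read off two-station SPAC data, the sketch below models the azimuthally averaged coherency of vertical-component Rayleigh waves as a zeroth-order Bessel function, spac(f) = J0(2*pi*f*r/c), and scans candidate velocities to match an observed coherency value. This is a minimal illustration, not the authors' processing code; the velocity grid and example numbers are assumptions, and in practice the Bessel-branch ambiguity is resolved by fitting a band of frequencies or by using a starting model.

    import numpy as np
    from scipy.special import j0

    def model_spac(freq_hz, dist_m, phase_vel_m_s):
        # Azimuthally averaged SPAC coherency for vertical-component Rayleigh waves.
        return j0(2.0 * np.pi * freq_hz * dist_m / phase_vel_m_s)

    def fit_phase_velocity(freq_hz, dist_m, observed_coh,
                           c_grid=np.arange(100.0, 3000.0, 5.0)):
        # Grid search over candidate phase velocities (m/s); returns the best match.
        misfit = np.abs(model_spac(freq_hz, dist_m, c_grid) - observed_coh)
        return c_grid[np.argmin(misfit)]

    # Hypothetical example: coherency of 0.3 observed at 0.5 Hz over a 1000 m spacing.
    print(fit_phase_velocity(0.5, 1000.0, 0.3))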
NASA Astrophysics Data System (ADS)
Zhentao, Y.; Xiaofei, C.; Jiannan, W.
2016-12-01
The fundamental mode is the primary component of the surface waves derived from ambient noise. It is the basis of methods for structure imaging from ambient noise (e.g. SPAC, Aki 1957; F-K, Lacoss 1968; MUSIC, Schmidt 1986). It is well known, however, that if the higher modes of surface waves can be identified in the data and incorporated in the inversion of dispersion curves, the uncertainty in the inversion results is greatly reduced (e.g., Tokimatsu, 1997). The raw ambient noise data do, in fact, contain the higher modes as well. If we could extract the higher modes from ambient noise, structure inversion from ambient noise would be greatly improved. In the past decade, many studies have sought to improve SPAC and to analyse the relationship between the fundamental and higher modes (Ohori et al. 2002; Asten et al. 2006; Yokoi 2010; Ikeda 2012). In this study, we present a new method for identifying higher modes in ambient noise data by reprocessing the surface-wave phases derived from the ambient noise through cross-correlation analysis, and show a preliminary application to structure inversion.
Evapotranspiration flux partitioning using an Iso-SPAC model in a temperate grassland ecosystem
NASA Astrophysics Data System (ADS)
Wang, P.
2014-12-01
To partition evapotranspiration (ET) into soil evaporation and vegetation transpiration (T), a new numerical Iso-SPAC (coupled heat and water with isotopic tracers in the Soil-Plant-Atmosphere Continuum) model was developed and applied to a temperate-grassland ecosystem in central Japan. Several models of varying complexity were tested with the aim of obtaining values close to the true isotope composition of leaf water and the transpiration flux. The agreement between the model predictions and observations demonstrates that the Iso-SPAC model with a steady-state assumption for the transpiration flux can reproduce the seasonal variations of all the surface energy balance components, leaf and ground surface temperatures, as well as isotope data (canopy foliage and ET flux). This good performance was confirmed not only at the diurnal timescale but also at the seasonal timescale. Thus, although the non-steady-state behavior of the leaf isotope budget and isotopic diffusion between leaf and stem or root is certainly important, the steady-state assumption is practically acceptable at the seasonal timescale as a first-order approximation. Sensitivity analyses of both the physical flux and isotope components suggested that T/ET is relatively insensitive to uncertainties/errors in the assigned model parameters and measured input variables, which supports the validity of the partitioning. Transpiration fractions estimated from the isotope composition of the ET flux by the Iso-SPAC model and by the Keeling plot approach are generally in good agreement, further supporting the validity of both approaches. However, the Keeling plot approach tended to overestimate the fraction during the early stage of the growing season and in the period just after clear cutting. This overestimation is probably due to insufficient fetch and the influence of transpiration from the upwind forest. Consequently, the Iso-SPAC model is more reliable than the Keeling plot approach in most cases. T/ET increased with grass growth, and the sharp reduction caused by clear cutting was well captured. The transpiration fraction ranges from 0.02 to 0.99 during growing seasons, and the mean value was 0.75 with a standard deviation of 0.24.
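The partitioning logic referred to above can be summarized in a few lines: the Keeling-plot intercept estimates the isotopic composition of the ET flux, and a two-end-member mixing model gives the transpiration fraction T/ET = (delta_ET - delta_E)/(delta_T - delta_E). The sketch below is illustrative only; the variable names and example numbers are hypothetical and it is not the Iso-SPAC code.

    import numpy as np

    def keeling_intercept(concentration, delta_atm):
        # Intercept of delta_atm regressed on 1/concentration estimates delta of the ET flux.
        slope, intercept = np.polyfit(1.0 / np.asarray(concentration),
                                      np.asarray(delta_atm), 1)
        return intercept

    def transpiration_fraction(delta_et, delta_e, delta_t):
        # Two-end-member mixing: T/ET = (delta_ET - delta_E) / (delta_T - delta_E).
        return (delta_et - delta_e) / (delta_t - delta_e)

    # Hypothetical end-members: evaporation -20 permil, transpiration -5 permil,
    # ET flux -8 permil  ->  T/ET = 0.8
    print(transpiration_fraction(-8.0, -20.0, -5.0))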
Xu, Ping; Kang, Leilei; Mack, Nathan H.; ...
2013-10-21
We investigate surface plasmon assisted catalysis (SPAC) reactions of 4-aminothiophenol (4ATP) to and back from 4,4'-dimercaptoazobenzene (DMAB) by single-particle surface-enhanced Raman spectroscopy, using a self-designed gas flow cell to control the reductive/oxidative environment over the reactions. Conversion of 4ATP into DMAB is induced by energy transfer (plasmonic heating) from the surface plasmon resonance to 4ATP, where O2 (as an electron acceptor) is essential and H2O (as a base) can accelerate the reaction. In contrast, hot-electron (from surface plasmon decay) induction drives the reverse reaction of DMAB to 4ATP, where H2O (or H2) acts as the hydrogen source. More interestingly, cyclic redox between 4ATP and DMAB by the SPAC approach has been demonstrated. Finally, this SPAC methodology presents a unique platform for studying chemical reactions that are not possible under standard synthetic conditions.
Statistical Characteristic of Global Tropical Cyclone Looping Motion
NASA Astrophysics Data System (ADS)
Shen, W.; Song, J.; Wang, Y.
2016-12-01
Statistical characteristics of the looping motion of tropical cyclones (TCs) in the Western North Pacific (WPAC), North Atlantic (NATL), Eastern North Pacific (EPAC), Northern Indian Ocean (NIO), Southern Indian Ocean (SIO) and South Pacific (SPAC) basins are investigated using the IBTrACS archive maintained by NOAA. From a global perspective, about ten percent of TCs experience a looping motion in the above six basins. The southern hemisphere (SH), including the SIO and SPAC basins, has a higher looping percentage than the northern hemisphere (NH), while the EPAC basin has the lowest looping percentage. The interannual variation in the number of looping TCs is significantly correlated with that of total TCs in the NATL, SIO and SPAC basins, while no such correlation is found in the EPAC and NIO basins. Looping TCs account for a higher percentage in the early and late parts of the cyclone season in the NH rather than during its peak, while the SIO and SPAC basins have the higher looping percentages in the early and late cyclone season, respectively. The looping motion of TCs mainly occurs at intensities from tropical depression to category 2 and peaks at tropical storm intensity. The looping motion appears more frequently and has a higher percentage at the pre-mature stage than at the post-mature stage of TCs in most basins except the EPAC. Comparing the intensity and intensity variation associated with looping, weaker TCs tend to intensify after looping, while more intense ones weaken.
The Soil-Plant-Atmosphere Continuum of Mangroves: A Simple Ecohydrological model
NASA Astrophysics Data System (ADS)
Perri, Saverio; Viola, Francesco; Valerio Noto, Leonardo; Molini, Annalisa
2016-04-01
Mangroves represent the only forest able to grow at the interface between terrestrial and marine habitats. Although globally they have been estimated to account for only 1% of carbon sequestration by forests, as coastal ecosystems they account for about 14% of carbon sequestration by the global ocean. Despite the continuously increasing number of hydrological and ecological field observations, the ecohydrology of mangroves remains largely understudied. Modeling the mangrove response to variations in environmental conditions needs to take into account the effect of waterlogging and salinity on transpiration and CO2 assimilation. However, such ecohydrological models for halophytes are not yet documented in the literature. In this contribution we adapt a Soil-Plant-Atmosphere Continuum (SPAC) model to mangrove ecosystems. The SPAC model is based on a macroscopic approach, and the transpiration rate is obtained by solving the plant and leaf water balances and the leaf energy balance, taking explicitly into account the role of osmotic water potential and salinity in governing plant resistance to water fluxes. Exploiting the well-known coupling of transpiration and CO2 exchange through the stomatal conductance, we also estimate the CO2 assimilation rate. The SPAC model is then tested against experimental data obtained from the literature, showing the reliability and effectiveness of this minimalist approach in reproducing observed processes. Results show that the developed SPAC model is able to realistically simulate the main ecohydrological traits of mangroves, indicating salinity as a crucial limiting factor for mangrove transpiration and CO2 assimilation.
Modeling Radiation Effects on a Triple Junction Solar Cell using Silvaco ATLAS
2012-06-01
…circuit voltage can then be calculated from V_oc = V_t ln(I_L/I_S + 1) (4.3), where I_S is the reverse saturation current and V_t is the… …orbiting electronic equipment. The first orbit of interest is the low Earth orbit (LEO). LEO encompasses any orbit within 650 kilometers of the… …#Light Beams #Solving #Meshing; mesh width=200000; #X-Mesh: Surface=500 um2 = 1/200000 cm2; x.mesh loc=-250 spac=50; x.mesh loc=0 spac=10 (excerpt from the ATLAS input deck, where spac sets the local mesh spacing)
Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach
NASA Astrophysics Data System (ADS)
Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.
2018-04-01
The existence of autocorrelation has a significant effect on the performance and accuracy of process control if the problem is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is usually preferred because of its popularity. However, the computation involved in the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. A real case study of furnace temperature data is conducted to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and process-control monitoring. Yet GBM is superior to the Box-Jenkins method due to its simplicity and practicality, with a shorter computational time.
NASA Astrophysics Data System (ADS)
Upegui Botero, F. M.; Rojas Mercedes, N.; Huerta-Lopez, C.; Martinez-Cruzado, J. A.; Suárez, L.; Lopez, A. M.; Huerfano Moreno, V.
2013-12-01
Earthquake effects are frequently quantified by the energy liberated at the source and the degree of damage produced in urban areas. The damage in historic events such as the Mw = 8.3, September 19, 1985 Mexico City earthquake was dominated by the amplification of seismic waves due to local site conditions. The assessment of local site effects can be carried out with site response analyses in order to determine properties of the subsoil such as the dominant period and the Vs30. These properties are usually evaluated through the analysis of ground motion. However, in locations with low seismicity, the most convenient way to assess the site effect is the analysis of ambient vibration measurements. The Spatial Auto Correlation (SPAC) method can be used to determine a Vs30 model from ambient vibration measurements using a triangular array of sensors. Refraction Microtremor (ReMi) assumes that the phase velocity of the Rayleigh waves can be separated from apparent velocities; the aim of the ReMi method is likewise to obtain a Vs30 model. The HVSR technique, or Nakamura's method, has been adopted to obtain the resonant frequency of the site from the ratio between the Fourier amplitude spectra or PSD spectra of the horizontal and vertical components of ambient vibration. The aim of this work is to compare the results of different techniques for assessing local site conditions in the urban area of Santo Domingo, Dominican Republic. The data used were collected during the Pan-American Advanced Studies Institute (PASI) Workshop held in Santo Domingo, Dominican Republic, from July 14 to 25, 2013. The PASI was sponsored by the IRIS Consortium, NSF and DOE. Results obtained using SPAC and ReMi show comparable surface wave velocity models. In addition, the HVSR method is combined with the stiffness matrix method for layered soils to calculate a velocity model and the predominant period at each site. As part of this work, a comparison with geological and geotechnical data at the studied sites was carried out. The advantages and limitations of each procedure are discussed in detail.
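A bare-bones version of the HVSR computation mentioned above is sketched below: the ratio of smoothed horizontal to vertical Fourier amplitude spectra, whose peak is read as the site resonance frequency. The windowing, smoothing and averaging choices here are illustrative assumptions, not the processing parameters used in the study.

    import numpy as np

    def hvsr(north, east, vertical, fs, nfft=4096):
        """Frequencies and smoothed H/V spectral ratio for one window (traces need >= nfft samples)."""
        freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
        taper = np.hanning(nfft)
        amp = lambda x: np.abs(np.fft.rfft(np.asarray(x, dtype=float)[:nfft] * taper, nfft))
        h = np.sqrt(amp(north) ** 2 + amp(east) ** 2)   # combined horizontal amplitude
        v = amp(vertical)
        kernel = np.ones(11) / 11.0                     # crude moving-average smoothing
        ratio = np.convolve(h, kernel, "same") / (np.convolve(v, kernel, "same") + 1e-12)
        return freqs, ratio

    # The resonance frequency is then read as freqs[np.argmax(ratio)] within the band of interest.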
Pan, Long; Takagi, Yuichi; Matsui, Yoshihiko; Matsushita, Taku; Shirasaki, Nobutaka
2017-05-01
We milled granular activated carbons (GACs) that had been used for 0-9 years in water treatment plants and produced carbon particles with different sizes and ages: powdered activated carbons (PAC, median diameter 12-42 μm), superfine PAC (SPAC, 0.9-3.5 μm), and submicron-sized SPAC (SSPAC, 220-290 nm). The fact that SPAC produced from 1-year-old GAC and SSPAC from 2-year-old GAC removed 2-methylisoborneol (MIB) from water with an efficiency similar to that of virgin PAC after a carbon contact time of 30 min suggests that spent GAC could be reused for water treatment after being milled. This potential for reuse was created by increasing the equilibrium adsorption capacity via reduction of the carbon particle size and improving the adsorption kinetics. During long-term (>1 year) use in GAC beds, the volume of pores in the carbon, particularly pores with widths of 0.6-0.9 nm, was greatly reduced. The equilibrium adsorption capacities of the carbon for compounds with molecular sizes in this range could therefore decrease with increasing carbon age. Among these compounds, the decreases of capacities were prominent for hydrophobic compounds, including MIB. For hydrophobic compounds, however, the equilibrium adsorption capacities could be increased with decreasing carbon particle size. The iodine number, among other indices, was best correlated with the equilibrium adsorption capacity of the MIB and would be a good index to assess the remaining MIB adsorption capacity of spent carbon. Spent GAC can possibly be reused as SPAC or SSPAC if its iodine number is ≥ 600 mg/g. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of Microtremor Array Analysis to Estimate the Bedrock Depth in the Beijing Plain area
NASA Astrophysics Data System (ADS)
Xu, P.; Ling, S.; Liu, J.; Su, W.
2013-12-01
With the rapid expansion of large cities around the world, urban geological surveys provide key information regarding resource development and urban construction. China's capital city, Beijing, is among the largest cities in the world and possesses complex geological structures. The urban geological survey and study of Beijing involves the following aspects: (1) estimating the thickness of the Cenozoic deposits; (2) mapping the three-dimensional structure of the underlying bedrock, as well as its relations to faults and tectonic settings; and (3) assessing the capacity of the city's geological resources in order to support its urban development and operational safety. The geological study of Beijing in general was also intended to provide basic data regarding urban development and the appraisal of engineering and environmental geological conditions, as well as underground space resources. In this work, we utilized the microtremor exploration method to estimate the depth to bedrock, in order to delineate the geological interfaces and improve the accuracy of the bedrock depth map. The microtremor observation sites were located in the Beijing Plain area. Traditional geophysical or geological survey methods were not effective in these areas due to the heavy traffic and dense buildings in the highly populated urban area. The microtremor exploration method is a Rayleigh-wave inversion technique which extracts the phase velocity dispersion curve from the vertical component of the microtremor array records using the spatial autocorrelation (SPAC) method and then inverts for the shear-wave velocity structure. A triple-circular array was adopted for acquiring microtremor data, with the observation radius ranging from 40 to 300 m, properly adjusted depending on the geological conditions (depth of the bedrock). The collected microtremor data were used (1) to estimate the phase velocities of Rayleigh waves from the vertical components of the microtremor records using the SPAC method, and (2) to invert for the S-wave velocity structure. Our inversion results show thick Cenozoic sedimentation in the Fengtai Sag. The bedrock depth is 1510 m at C04-1 and 1575 m at D04-1. In contrast, the Cenozoic sediments are only 193 m thick at E12-1 and 236 m thick at E12-3, indicating very thin Cenozoic sedimentation in the Laiguangying High structural unit. The bedrock depth at the Houshayu Sag, 691 m at E16-1 and 875 m at F16-1, falls somewhere in between. The difference between the bedrock depth at the Fengtai Sag and that at the Laiguangying High is as much as 1300 m. This was interpreted as resulting from slip along the Taiyanggong fault. On the other hand, the Nankou-Sunhe faulting resulted in a bedrock depth difference of approximately 500 m between the Laiguangying High and the Houshayu Sag to the northeast. The bedrock depths and their differences across the various tectonic units of the Beijing Plain area outlined in this article are consistent with both the existing geological data and previous interpretations. This information is very useful for understanding the geological structures, regional tectonics and practical geotechnical problems involved in civil geological engineering in and around Beijing City.
Mandlik, Anjali; Swierczynski, Arlene; Das, Asis; Ton-That, Hung
2010-01-01
Summary Adherence to host tissues mediated by pili is pivotal in the establishment of infection by many bacterial pathogens. Corynebacterium diphtheriae assembles on its surface three distinct pilus structures. The function and the mechanism of how various pili mediate adherence, however, have remained poorly understood. Here we show that the SpaA-type pilus is sufficient for the specific adherence of corynebacteria to human pharyngeal epithelial cells. The deletion of the spaA gene, which encodes the major pilin forming the pilus shaft, abolishes pilus assembly but not adherence to pharyngeal cells. In contrast, adherence is greatly diminished when either minor pilin SpaB or SpaC is absent. Antibodies directed against either SpaB or SpaC block bacterial adherence. Consistent with a direct role of the minor pilins, latex beads coated with SpaB or SpaC protein bind specifically to pharyngeal cells. Therefore, tissue tropism of corynebacteria for pharyngeal cells is governed by specific minor pilins. Importantly, immunoelectron microscopy and immunofluorescence studies reveal clusters of minor pilins that are anchored to cell surface in the absence of a pilus shaft. Thus, the minor pilins may also be cell wall anchored in addition to their incorporation into pilus structures that could facilitate tight binding to host cells during bacterial infection. PMID:17376076
Li, Xiaochun; Yang, Fan; Wong, Jessica X H; Yu, Hua-Zhong
2017-09-05
We demonstrate herein an integrated, smartphone-app-chip (SPAC) system for on-site quantitation of food toxins, as demonstrated with aflatoxin B1 (AFB1), at parts-per-billion (ppb) level in food products. The detection is based on an indirect competitive immunoassay fabricated on a transparent plastic chip with the assistance of a microfluidic channel plate. A 3D-printed optical accessory attached to a smartphone is adapted to align the assay chip and to provide uniform illumination for imaging, with which high-quality images of the assay chip are captured by the smartphone camera and directly processed using a custom-developed Android app. The performance of this smartphone-based detection system was tested using both spiked and moldy corn samples; consistent results with conventional enzyme-linked immunosorbent assay (ELISA) kits were obtained. The achieved detection limit (3 ± 1 ppb, equivalent to μg/kg) and dynamic response range (0.5-250 ppb) meet the requested testing standards set by authorities in China and North America. We envision that the integrated SPAC system promises to be a simple and accurate method of food toxin quantitation, bringing much benefit for rapid on-site screening.
Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M
2013-09-01
Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in application to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct reflected time difference of arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections to illustrate that an autocorrelation may be used to extract reflection information from longer duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize the difference in behavior of the autocorrelation when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
Autocorrelation of the susceptible-infected-susceptible process on networks
NASA Astrophysics Data System (ADS)
Liu, Qiang; Van Mieghem, Piet
2018-06-01
In this paper, we focus on the autocorrelation of the susceptible-infected-susceptible (SIS) process on networks. The N-intertwined mean-field approximation (NIMFA) is applied to calculate the autocorrelation properties of the exact SIS process. We derive the autocorrelation of the infection state of each node and the fraction of infected nodes in both the steady and transient states as functions of the infection probabilities of nodes. Moreover, we show that the autocorrelation can be used to estimate the infection and curing rates of the SIS process. The theoretical results are compared with the simulation of the exact SIS process. Our work fully utilizes the potential of the mean-field method and shows that NIMFA can indeed capture the autocorrelation properties of the exact SIS process.
New approaches for calculating Moran's index of spatial autocorrelation.
Chen, Yanguang
2013-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
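For reference, the textbook global Moran's I that the paper reworks can be written compactly as I = (n/S0) * (z'Wz)/(z'z), with z the mean-centred values and S0 the sum of the spatial weights. The sketch below is a plain implementation of that standard formula, not the reformulation proposed in the paper; the tiny example weight matrix is hypothetical.

    import numpy as np

    def morans_i(x, w):
        # Global Moran's I for values x and spatial weight matrix w (standard formula).
        x = np.asarray(x, dtype=float)
        w = np.asarray(w, dtype=float)
        z = x - x.mean()
        s0 = w.sum()                      # sum of all spatial weights
        return (x.size / s0) * (z @ w @ z) / (z @ z)

    # Tiny example: a 4-site chain with smoothly varying values gives positive autocorrelation.
    w = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(morans_i([1.0, 1.0, 2.0, 2.0], w))   # -> 0.333...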
Patrick C. Tobin
2004-01-01
The estimation of spatial autocorrelation in spatially- and temporally-referenced data is fundamental to understanding an organism's population biology. I used four sets of census field data, and developed an idealized space-time dynamic system, to study the behavior of spatial autocorrelation estimates when a practical method of sampling is employed. Estimates...
The basis function approach for modeling autocorrelation in ecological data
Hefley, Trevor J.; Broms, Kristin M.; Brost, Brian M.; Buderman, Frances E.; Kay, Shannon L.; Scharf, Henry; Tipton, John; Williams, Perry J.; Hooten, Mevin B.
2017-01-01
Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data.
Site characterisation in north-western Turkey based on SPAC and HVSR analysis of microtremor noise
NASA Astrophysics Data System (ADS)
Asten, Michael W.; Askan, Aysegul; Ekincioglu, E. Ezgi; Sisman, F. Nurten; Ugurhan, Beliz
2014-02-01
The geology of north-western Anatolia (Turkey) ranges from hard Mesozoic bedrock in mountainous areas to large sediment-filled, pull-apart basins formed by the North Anatolian Fault zone system. Düzce and Bolu city centres are located in major alluvial basins in the region, and both suffered severe building damage during the 12 November 1999 Düzce earthquake (Mw = 7.2). In this study, a team consisting of geophysicists and civil engineers collected and interpreted passive array-based microtremor data in the cities of Bolu and Düzce, both of which are localities of urban development located on topographically flat, geologically young alluvial basins of Miocene age. Interpretation of the microtremor data under an assumption of dominant fundamental-mode Rayleigh-wave noise allowed derivation of the shear-wave velocity (Vs) profile. The depth of investigation was ~100 m from spatially averaged coherency (SPAC) data alone. High-frequency microtremor array data to 25 Hz allow resolution of a surface layer with Vs < 200 m/s and thickness 5 m (Bolu) and 6 m (Düzce). Subsequent inclusion of spectral ratios between horizontal and vertical components of microtremor data (HVSR) in the curve fitting process extends useful frequencies up to a decade lower than those for SPAC alone. This allows resolution of two interfaces of moderate Vs contrasts in soft Miocene and Eocene sediments, first, at a depth in the range 136-209 m, and second, at a depth in the range 2000 to 2200 m.
Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor
2014-06-01
We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pair of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking into account the autocorrelation. It uses Monte-Carlo methods, and focuses on permuting, and then smoothing and scaling one of the variables to destroy the correlation with the other, while maintaining at the same time the initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.
Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.
2007-01-01
This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≥ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≥ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≥96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
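The core of the image-autocorrelation idea is easy to state: the spatial autocorrelation of pixel intensity decays more slowly with offset for coarser sediment, so a decay length read from the correlogram can be calibrated against sieved samples. The sketch below is a simplified stand-in for illustration only, not Rubin's calibrated algorithm; the threshold and lag range are arbitrary assumptions.

    import numpy as np

    def horizontal_autocorrelation(image, max_lag=50):
        # Mean autocorrelation of pixel intensity versus horizontal offset (lags in pixels).
        img = np.asarray(image, dtype=float)
        img -= img.mean()
        var = (img ** 2).mean()
        return np.array([(img[:, :-lag] * img[:, lag:]).mean() / var
                         for lag in range(1, max_lag + 1)])

    def decay_length(corr, threshold=0.5):
        # First pixel offset at which the correlogram drops below `threshold`.
        below = np.nonzero(corr < threshold)[0]
        return int(below[0]) + 1 if below.size else len(corr)

    # A site-specific calibration (e.g. regressing sieve-derived grain size on decay
    # length) would then convert this pixel offset to a grain size in millimetres.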
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
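The surrogate idea can be illustrated with a much simpler, non-iterative Fourier construction: give a white-noise image the amplitude spectrum of the original image, which preserves its autocorrelation function while destroying any relationship with a second map. The paper itself matches dual-tree complex wavelet spectra; the sketch below only conveys the general principle and is not the published Java implementation.

    import numpy as np

    def fourier_surrogate(field, rng=None):
        # White-noise field carrying the amplitude spectrum (hence the autocorrelation)
        # of `field`; phases come from a real white-noise image, so the result is real.
        rng = np.random.default_rng() if rng is None else rng
        field = np.asarray(field, dtype=float)
        amp = np.abs(np.fft.fft2(field))
        noise_phase = np.angle(np.fft.fft2(rng.standard_normal(field.shape)))
        return np.real(np.fft.ifft2(amp * np.exp(1j * noise_phase)))

    # Recomputing, e.g., Pearson's r between many such surrogates of one map and the
    # other (fixed) map builds the null distribution of r under spatial autocorrelation.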
Kume, Kazunori; Hashimoto, Tomoyo; Suzuki, Masashi; Mizunuma, Masaki; Toda, Takashi; Hirata, Dai
2017-09-30
Cell polarity is coordinately regulated with the cell cycle. Growth polarity of the fission yeast Schizosaccharomyces pombe transits from monopolar to bipolar during G2 phase, termed NETO (new end take off). Upon perturbation of DNA replication, the checkpoint kinase Cds1/CHK2 induces NETO delay through activation of the Ca2+/calmodulin-dependent protein phosphatase calcineurin (CN). CN in turn regulates its downstream targets including the microtubule (MT) plus-end tracking CLIP170 homologue Tip1 and the Casein kinase 1γ Cki3. However, whether and which Ca2+ signaling molecules are involved in the NETO delay remains elusive. Here we show that 3 genes (trp1322, vcx1 and SPAC6c3.06c, encoding a TRP channel, antiporter and P-type ATPase, respectively) play vital roles in the NETO delay. Upon perturbation of DNA replication, these 3 genes are required not only for the NETO delay but also for the maintenance of cell viability. Trp1322 and Vcx1 act downstream of Cds1 and upstream of CN for the NETO delay, whereas SPAC6c3.06c acts downstream of CN. Consistently, Trp1322 and Vcx1, but not SPAC6c3.06c, are essential for activation of CN. Interestingly, we have found that elevated extracellular Ca2+ per se induces a NETO delay, which depends on CN and its downstream target genes. These findings imply that Ca2+-CN signaling plays a central role in cell polarity control by checkpoint activation. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Papuga, S. A.; Hamann, L.
2017-12-01
In semiarid regions, such as the desert southwest, water is a scarce resource that demands careful attention to its movement throughout the environment for accurate accounting in regional water budgets. Ephemeral snow pack in sky island ecosystems delivers a large fraction of the water resources to communities lower in the watershed. Because the major source of loss to those water resources is evapotranspiration (ET), any change in ET in these ecosystems will have major implications downstream. Climate scientists predict more intense and less frequent precipitation events in the desert southwest, which will alter the existing soil-plant-atmosphere continuum (SPAC). Therefore, understanding how water currently moves within that continuum is imperative in preparing for these predicted changes. This study used stable isotopes (δ18O and δD) to study the SPAC that exists in the Santa Catalina Mountain Critical Zone Observatory (SCM-CZO) to determine where the dominant tree species (Pseudotsuga menziesii, a.k.a. Douglas Fir) retrieves its water and whether that source varies with season. We hypothesize that the Douglas Fir uses shallow soil water (< 40 cm) during the summer monsoon season and deeper soil water (> 40 cm) during the snowmelt season. The findings of this work will help to better account for water losses due to ET and the movement of water throughout the environment. With a shift in the SPAC dynamics, the Douglas Fir may become increasingly water stressed, affecting its ability to survive in the desert southwest, which will have important consequences for water resources in this region.
Asymmetric multiscale multifractal analysis of wind speed signals
NASA Astrophysics Data System (ADS)
Zhang, Xiaonei; Zeng, Ming; Meng, Qinghao
We develop a new method called asymmetric multiscale multifractal analysis (A-MMA) to explore the multifractality and asymmetric autocorrelations of the signals with a variable scale range. Three numerical experiments are provided to demonstrate the effectiveness of our approach. Then, the proposed method is applied to investigate multifractality and asymmetric autocorrelations of difference sequences between wind speed fluctuations with uptrends or downtrends. The results show that these sequences appear to be far more complex and contain abundant fractal dynamics information. Through analyzing the Hurst surfaces of nine difference sequences, we found that all series exhibit multifractal properties and multiscale structures. Meanwhile, the asymmetric autocorrelations are observed in all variable scale ranges and the asymmetry results are of good consistency within a certain spatial range. The sources of multifractality and asymmetry in nine difference series are further discussed using the corresponding shuffled series and surrogate series. We conclude that the multifractality of these series is due to both long-range autocorrelation and broad probability density function, but the major source of multifractality is long-range autocorrelation, and the source of asymmetry is affected by the spatial distance.
Intrinsic autocorrelation time of picoseconds for thermal noise in water.
Zhu, Zhi; Sheng, Nan; Wan, Rongzheng; Fang, Haiping
2014-10-02
Whether thermal noise is colored or white is of fundamental importance. In conventional theory, thermal noise is usually treated as white noise so that there are no directional transportations in the asymmetrical systems without external inputs, since only the colored fluctuations with appropriate autocorrelation time length can lead to directional transportations in the asymmetrical systems. Here, on the basis of molecular dynamics simulations, we show that the autocorrelation time length of thermal noise in water is ~10 ps at room temperature, which indicates that thermal noise is not white in the molecular scale while thermal noise can be reasonably assumed as white in macro- and meso-scale systems. The autocorrelation time length of thermal noise is intrinsic, since the value is almost unchanged for different temperature coupling methods. Interestingly, the autocorrelation time of thermal noise is correlated with the lifetime of hydrogen bonds, suggesting that the finite autocorrelation time length of thermal noise mainly comes from the finite lifetime of the interactions between neighboring water molecules.
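One common way to turn a sampled noise record into a single correlation-time number is to integrate the normalized autocorrelation function up to its first zero crossing; the sketch below uses that convention for illustration (the paper's exact estimator may differ, and the example sampling interval is an assumption).

    import numpy as np

    def normalized_acf(x):
        x = np.asarray(x, dtype=float) - np.mean(x)
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        return acf / acf[0]

    def autocorrelation_time(x, dt):
        # Integral of the normalized ACF up to its first zero crossing, approximated by a sum.
        acf = normalized_acf(x)
        zero = np.nonzero(acf <= 0.0)[0]
        cutoff = zero[0] if zero.size else acf.size
        return dt * np.sum(acf[:cutoff])

    # For a record sampled every 1 fs, autocorrelation_time(x, 1e-15) returns seconds;
    # a value near 1e-11 s would correspond to the ~10 ps scale quoted above.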
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
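A useful limiting case of an autocorrelated binary process is a stationary two-state Markov chain, whose autocorrelation decays exponentially (the ACF at lag k equals rho**k). The sketch below simulates that special case only; it is not the paper's general spectrum-based algorithm, and the parameter choices in the comment are arbitrary.

    import numpy as np

    def binary_markov(n, p, rho, rng=None):
        # Stationary two-state Markov chain with mean p and lag-1 autocorrelation rho
        # (assumes 0 <= rho < 1 so that both transition probabilities lie in [0, 1]).
        rng = np.random.default_rng() if rng is None else rng
        p11 = p + rho * (1.0 - p)      # P(X_{t+1} = 1 | X_t = 1)
        p01 = p * (1.0 - rho)          # P(X_{t+1} = 1 | X_t = 0)
        x = np.empty(n, dtype=int)
        x[0] = rng.random() < p
        for t in range(1, n):
            x[t] = rng.random() < (p11 if x[t - 1] == 1 else p01)
        return x

    # e.g. a rainfall-occurrence-like sequence: binary_markov(10000, p=0.3, rho=0.6)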
Method and apparatus for in-situ characterization of energy storage and energy conversion devices
Christophersen, Jon P [Idaho Falls, ID; Motloch, Chester G [Idaho Falls, ID; Morrison, John L [Butte, MT; Albrecht, Weston [Layton, UT
2010-03-09
Disclosed are methods and apparatuses for determining an impedance of an energy-output device using a random noise stimulus applied to the energy-output device. A random noise signal is generated and converted to a random noise stimulus as a current source correlated to the random noise signal. A bias-reduced response of the energy-output device to the random noise stimulus is generated by comparing a voltage at the energy-output device terminal to an average voltage signal. The random noise stimulus and bias-reduced response may be periodically sampled to generate a time-varying current stimulus and a time-varying voltage response, which may be correlated to generate an autocorrelated stimulus, an autocorrelated response, and a cross-correlated response. Finally, the autocorrelated stimulus, the autocorrelated response, and the cross-correlated response may be combined to determine at least one of impedance amplitude, impedance phase, and complex impedance.
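The correlation/spectral idea behind such noise-stimulus measurements can be written in a few lines: with a random current stimulus i(t) and a measured voltage response v(t), the complex impedance spectrum is estimated as the ratio of the cross-spectral density S_iv to the autospectral density S_ii. The sketch below is a generic illustration of that estimator, not the patented apparatus or its signal chain.

    import numpy as np
    from scipy.signal import csd, welch

    def impedance_spectrum(current, voltage, fs, nperseg=1024):
        # Transfer-function (H1) estimate Z(f) = S_iv(f) / S_ii(f).
        f, s_ii = welch(current, fs=fs, nperseg=nperseg)          # autospectrum of the stimulus
        _, s_iv = csd(current, voltage, fs=fs, nperseg=nperseg)   # cross-spectrum stimulus -> response
        z = s_iv / s_ii
        return f, np.abs(z), np.angle(z)                          # impedance amplitude and phase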
Data Analysis Methods for Synthetic Polymer Mass Spectrometry: Autocorrelation
Wallace, William E.; Guttman, Charles M.
2002-01-01
Autocorrelation is shown to be useful in describing the periodic patterns found in high-resolution mass spectra of synthetic polymers. Examples of this usefulness are described for a simple linear homopolymer to demonstrate the method fundamentals, a condensation polymer to demonstrate its utility in understanding complex spectra with multiple repeating patterns on different mass scales, and a condensation copolymer to demonstrate how it can elegantly and efficiently reveal unexpected phenomena. It is shown that using autocorrelation to determine where the signal devolves into noise can be useful in determining molecular mass distributions of synthetic polymers, a primary focus of the NIST synthetic polymer mass spectrometry effort. The appendices describe some of the effects of transformation from time to mass space when time-of-flight mass separation is used, as well as the effects of non-trivial baselines on the autocorrelation function. PMID:27446716
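The repeat-unit periodicity described above shows up as peaks in the autocorrelation of a uniformly binned spectrum, at lags equal to multiples of the repeat-unit mass. The sketch below is a minimal illustration of that idea; the binning, the skipped low-lag region and the example repeat mass are assumptions, not the NIST implementation.

    import numpy as np

    def repeat_unit_spacing(intensity, bin_width_da, min_lag_da=10.0):
        # Lag (in Da) of the strongest autocorrelation peak beyond a skipped low-lag region.
        y = np.asarray(intensity, dtype=float) - np.mean(intensity)
        acf = np.correlate(y, y, mode="full")[y.size - 1:]
        acf /= acf[0]
        min_lag = int(min_lag_da / bin_width_da)          # skip the zero-lag peak region
        best = min_lag + int(np.argmax(acf[min_lag:]))
        return best * bin_width_da

    # e.g. for a spectrum binned at 0.1 Da, a polystyrene-like repeat of ~104 Da should
    # appear as the dominant autocorrelation peak beyond small lags.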
Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
2013-01-01
We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate this trade-off between the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve a near-optimum detection performance.
The Use of Time Series Analysis and t Tests with Serially Correlated Data Tests.
ERIC Educational Resources Information Center
Nicolich, Mark J.; Weinstein, Carol S.
1981-01-01
Results of three methods of analysis applied to simulated autocorrelated data sets with an intervention point (varying in autocorrelation degree, variance of error term, and magnitude of intervention effect) are compared and presented. The three methods are: t tests; maximum likelihood Box-Jenkins (ARIMA); and Bayesian Box-Jenkins. (Author/AEF)
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent. However, brain time series data can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting for temporal auto-correlation. We considered two types of time series models: (1) an auto-regressive-moving-average model and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We have verified the validity of our method and shown that it exhibits sufficient statistical power for detecting true correlation in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlation in numerical experiments, where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
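A classical way to see why ignoring temporal autocorrelation inflates significance is the Bartlett-type variance correction, in which the nominal sample size n is replaced by an effective n_eff = n / (1 + 2 * sum_k rho_x(k) * rho_y(k)). The sketch below implements that textbook correction purely for illustration; it is not the asymptotic distribution derived in the paper, and the lag cutoff is an arbitrary assumption.

    import numpy as np

    def sample_acf(x, max_lag):
        x = np.asarray(x, dtype=float) - np.mean(x)
        denom = np.sum(x * x)
        return np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])

    def effective_sample_size(x, y, max_lag=20):
        # n_eff such that var(r) ~ 1/n_eff rather than 1/n (the floor keeps n_eff <= n).
        n = len(x)
        correction = 1.0 + 2.0 * np.sum(sample_acf(x, max_lag) * sample_acf(y, max_lag))
        return n / max(correction, 1.0)

    # A significance test can then use n_eff - 2 degrees of freedom for the usual
    # statistic t = r * sqrt((n_eff - 2) / (1 - r**2)).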
The method of trend analysis of parameters time series of gas-turbine engine state
NASA Astrophysics Data System (ADS)
Hvozdeva, I.; Myrhorod, V.; Derenh, Y.
2017-10-01
This research substantiates an approach to interval estimation of the trend component of a time series. The well-known methods of spectral and trend analysis are used for multidimensional data arrays. Interval estimation of the trend component is proposed for time series whose autocorrelation matrix possesses a prevailing eigenvalue. The properties of the time series autocorrelation matrix are identified.
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated. In other words, they need to be compared with some sort of criterion to provide a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is selected to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity in stock market returns, a remarkable agreement with the fGn is achieved.
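In practice, the construction described above amounts to building a Toeplitz matrix from the sample autocorrelation function and examining its eigenvalue spectrum. The sketch below shows that construction with arbitrary illustrative parameters; comparing its output against matched fractional Gaussian noise (as the paper does) would require a separate fGn simulator.

    import numpy as np
    from scipy.linalg import toeplitz

    def autocorrelation_matrix(x, size):
        # size x size Toeplitz matrix with entries rho(|i - j|) from the sample ACF of x.
        x = np.asarray(x, dtype=float) - np.mean(x)
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        return toeplitz(acf[:size] / acf[0])

    def eigenvalue_spectrum(x, size=100):
        return np.sort(np.linalg.eigvalsh(autocorrelation_matrix(x, size)))

    # The empirical eigenvalue density of eigenvalue_spectrum(returns) can then be
    # compared with that of simulated fGn with a matched Hurst exponent.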
Analysis of genes involved in biosynthesis of the lantibiotic subtilin.
Klein, C; Kaletta, C; Schnell, N; Entian, K D
1992-01-01
Lantibiotics are peptide-derived antibiotics with high antimicrobial activity against pathogenic gram-positive bacteria. They are ribosomally synthesized and posttranslationally modified (N. Schnell, K.-D. Entian, U. Schneider, F. Götz, H. Zähner, R. Kellner, and G. Jung, Nature [London] 333:276-278, 1988). The most important lantibiotics are subtilin and the food preservative nisin, which both have a very similar structure. By using a hybridization probe specific for the structural gene of subtilin, spaS, the DNA region adjacent to spaS was isolated from Bacillus subtilis. Sequence analysis of a 4.9-kb fragment revealed several open reading frames with the same orientation as spaS. Downstream of spaS, no reading frames were present on the isolated XbaI fragment. Upstream of spaS, three reading frames, spaB, spaC, and spaT, were identified which showed strong homology to genes identified near the structural gene of the lantibiotic epidermin. The SpaT protein derived from the spaT sequence was homologous to hemolysin B of Escherichia coli, which indicated its possible function in subtilin transport. Gene deletions within spaB and spaC revealed subtilin-negative mutants, whereas spaT gene disruption mutants still produced subtilin. Remarkably, the spaT mutant colonies revealed a clumpy surface morphology on solid media. After growth on liquid media, spaT mutant cells agglutinated in the mid-logarithmic growth phase, forming longitudinal 3- to 10-fold-enlarged cells which aggregated. Aggregate formation preceded subtilin production and cells lost their viability, possibly as a result of intracellular subtilin accumulation. Our results clearly proved that reading frames spaB and spaC are essential for subtilin biosynthesis whereas spaT mutants are probably deficient in subtilin transport. Images PMID:1539969
Fujii, Takahide; Nakano, Masanao; Yamashita, Ken; Konishi, Toshihiro; Izumi, Shintaro; Kawaguchi, Hiroshi; Yoshimoto, Masahiko
2013-01-01
This paper describes a robust method of Instantaneous Heart Rate (IHR) and R-peak detection from noisy electrocardiogram (ECG) signals. Generally, the IHR is calculated from the R-wave interval. Then, the R-waves are extracted from the ECG using a threshold. However, in wearable bio-signal monitoring systems, noise increases the incidence of misdetection and false detection of R-peaks. To prevent incorrect detection, we introduce a short-term autocorrelation (STAC) technique and a small-window autocorrelation (SWAC) technique, which leverages the similarity of QRS complex waveforms. Simulation results show that the proposed method improves the noise tolerance of R-peak detection.
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function of a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test for the existence of correlations in a time series. The statistical test is verified by computer simulations, and an application to single-molecule fluorescence spectroscopy is discussed.
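A small sketch of the two estimators named above, applied to i.i.d. data so the true autocorrelation is zero at all nonzero lags; the normalization conventions used here are one common choice and are an assumption, not necessarily the ones analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, max_lag = 4096, 50
x = rng.normal(size=n)            # i.i.d. series: true autocorrelation is 0 for all lags > 0
x -= x.mean()

# "moving-average" style estimate: each lag normalized by its own overlap length
var = np.dot(x, x) / n
acf_ma = np.array([np.dot(x[:n - k], x[k:]) / ((n - k) * var) for k in range(max_lag)])

# Fourier-transform estimate: zero-padded FFT, single normalization by n (biased estimator)
f = np.fft.rfft(x, 2 * n)
acf_ft = np.fft.irfft(f * np.conj(f))[:max_lag] / (n * var)

print("lag-0 values           :", acf_ma[0], acf_ft[0])
print("std of lags 1..49 (MA) :", acf_ma[1:].std())   # remnant noise, roughly 1/sqrt(n)
print("std of lags 1..49 (FT) :", acf_ft[1:].std())
print("1/sqrt(n)              :", 1 / np.sqrt(n))
```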
Liu, Li-Min; Qi, Hua; Luo, Xin-Lan; Zhang, Xuan
2008-09-01
This paper discusses important phenomena and behaviors related to the coordination between water vapor loss through plant stomata and liquid water supply in the soil-plant-atmosphere continuum (SPAC). A large body of research results shows that plants exhibit isohydric behavior when hydraulic and chemical signals cooperate to promote stomatal regulation of leaf water potential. The feedback response of stomata to changes in environmental humidity can be used to explain the midday depression of stomatal conductance and photosynthesis under drought conditions, and also to interpret the correlation between stomatal conductance and hydraulic conductance. The feed-forward response of stomata to changes in environmental humidity can be used to explain the hysteresis in the response of stomatal conductance to the leaf-atmosphere vapor pressure deficit. Making the most of xylem transport requires rapid stomatal responses that avoid excess cavitation, together with mechanisms for reversing cavitation within a short time.
Employment Implications of Informal Cancer Caregiving
de Moor, Janet S.; Dowling, Emily C.; Ekwueme, Donatus U.; Guy, Gery P.; Rodriguez, Juan; Virgo, Katherine S.; Han, Xuesong; Kent, Erin E.; Li, Chunyu; Litzelman, Kristen; McNeel, Timothy S.; Liu, Benmei; Yabroff, K. Robin
2016-01-01
Purpose Previous research describing how informal cancer caregiving impacts employment has been conducted in small samples or a single disease site. This paper provides population-based estimates of the effect of cancer caregiving on employment and characterizes the employment changes made by caregivers. Methods The sample comprised cancer survivors with a friend or family caregiver, participating in either the Medical Expenditure Panel Survey Experiences with Cancer Survivorship Survey (ECSS) (n=458) or the LIVESTRONG 2012 Survey for People Affected by Cancer (SPAC) (n=4,706). Descriptive statistics characterized the sample of survivors and their caregivers’ employment changes. Multivariable logistic regression identified predictors of caregivers’ extended employment changes, comprising time off and changes to hours, duties or employment status. Results Among survivors with an informal caregiver, 25% from the ECSS and 29% from the SPAC reported their caregivers made extended employment changes. Approximately 8% of survivors had caregivers who took time off from work lasting ≥ 2 months. Caregivers who made extended employment changes were more likely to care for survivors treated with chemotherapy or transplant; closer to diagnosis or end of treatment; who experienced functional limitations; and made work changes due to cancer themselves compared to caregivers who did not make extended employment changes. Conclusions Many informal cancer caregivers make employment changes to provide care during survivors’ treatment and recovery. Implications for cancer survivors This study describes cancer caregiving in a prevalent sample of cancer survivors, thereby reflecting the experiences of individuals with many different cancer types and places in the cancer treatment trajectory. PMID:27423439
Microchannel plate cross-talk mitigation for spatial autocorrelation measurements
NASA Astrophysics Data System (ADS)
Lipka, Michał; Parniak, Michał; Wasilewski, Wojciech
2018-05-01
Microchannel plates (MCP) are the basis for many spatially resolved single-particle detectors such as ICCD or I-sCMOS cameras employing image intensifiers (II), MCPs with delay-line anodes for the detection of cold gas particles or Cherenkov radiation detectors. However, the spatial characterization provided by an MCP is severely limited by cross-talk between its microchannels, rendering MCP and II ill-suited for autocorrelation measurements. Here, we present a cross-talk subtraction method experimentally exemplified for an I-sCMOS based measurement of pseudo-thermal light second-order intensity autocorrelation function at the single-photon level. The method merely requires a dark counts measurement for calibration. A reference cross-correlation measurement certifies the cross-talk subtraction. While remaining universal for MCP applications, the presented cross-talk subtraction, in particular, simplifies quantum optical setups. With the possibility of autocorrelation measurements, the signal needs no longer to be divided into two camera regions for a cross-correlation measurement, reducing the experimental setup complexity and increasing at least twofold the simultaneously employable camera sensor region.
The Azimuth Structure of Nuclear Collisions — I
NASA Astrophysics Data System (ADS)
Trainor, Thomas A.; Kettler, David T.
We describe azimuth structure commonly associated with elliptic and directed flow in the context of 2D angular autocorrelations for the purpose of precise separation of so-called nonflow (mainly minijets) from flow. We extend the Fourier-transform description of azimuth structure to include power spectra and autocorrelations related by the Wiener-Khintchine theorem. We analyze several examples of conventional flow analysis in that context and question the relevance of reaction plane estimation to flow analysis. We introduce the 2D angular autocorrelation with examples from data analysis and describe a simulation exercise which demonstrates precise separation of flow and nonflow using the 2D autocorrelation method. We show that an alternative correlation measure based on Pearson's normalized covariance provides a more intuitive measure of azimuth structure.
Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.
2009-01-01
A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1.2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
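Not the calibrated procedure of the study above, just a sketch of the underlying idea on a synthetic image (the blur radius standing in for grain size is an arbitrary choice): the width of a cut through the 2D autocorrelation of a sediment photograph grows with the mean grain size.

```python
import numpy as np

def autocorr_width(image, threshold=0.5):
    """Lag (in pixels) at which the 2D autocorrelation, cut along +x from zero lag,
    first drops below `threshold` of its maximum (isotropy assumed)."""
    img = image - image.mean()
    f = np.fft.fft2(img)
    acf = np.fft.ifft2(f * np.conj(f)).real
    acf = np.fft.fftshift(acf) / acf.max()
    cy, cx = np.array(acf.shape) // 2
    profile = acf[cy, cx:]
    below = np.nonzero(profile < threshold)[0]
    return below[0] if below.size else profile.size

# synthetic "sediment" photo: random field blurred with a Gaussian whose width mimics grain size
rng = np.random.default_rng(2)
noise = rng.normal(size=(256, 256))
d = np.arange(-8, 9)
kernel = np.exp(-(d[:, None] ** 2 + d[None, :] ** 2) / (2 * 3.0 ** 2))
img = np.fft.ifft2(np.fft.fft2(noise, (256, 256)) * np.fft.fft2(kernel, (256, 256))).real

print("autocorrelation half-width (pixels):", autocorr_width(img))
```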
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
ERIC Educational Resources Information Center
Suen, Hoi K.; And Others
The applicability of the Bayesian random-effect analysis of variance (ANOVA) model developed by G. C. Tiao and W. Y. Tan (1966), and of a method suggested by H. K. Suen and P. S. Lee (1987), to the generalizability analysis of autocorrelated data is explored. According to Tiao and Tan, if time series data could be described as a first-order…
NASA Technical Reports Server (NTRS)
Craig, R. G. (Principal Investigator)
1983-01-01
Richmond, Virginia and Denver, Colorado were study sites in an effort to determine the effect of autocorrelation on the accuracy of a parallelepiped classifier of LANDSAT digital data. The autocorrelation was assumed to decay to insignificant levels when sampled at distances of at least ten pixels. Spectral themes were developed both from blocks of adjacent pixels and from groups of pixels spaced at least 10 pixels apart. Effects of geometric distortions were minimized by using only pixels from the interiors of land cover sections. Accuracy was evaluated for three classes: agriculture, residential, and "all other"; both type 1 and type 2 errors were evaluated by means of overall classification accuracy. All classes give comparable results. Accuracy is approximately the same for both techniques; however, the variance in accuracy is significantly higher for the themes developed from autocorrelated data. The vectors of mean spectral response were nearly identical regardless of the sampling method used. The estimated variances were much larger when using autocorrelated pixels.
- spac0118: Overhead view of a TIROS satellite showing the interior arrangement of satellite sensing packages, including TV cameras and infra-red sensors. In: "TIROS A Story of Achievement," RCA, February 28. Category: Graphic/Satellite. High Resolution Photo Available. Publication of the U.S. Department of Commerce.
- spac0116: Making adjustments to the TIROS II satellite prior to launch; the small square objects are 9,260 solar cells. TIROS II was the first meteorological satellite to have infra-red sensors as well as television cameras. Collection Photo Date: November 1960. Category: Space/Satellite/Vehicle. High Resolution Photo Available.
A new radial strain and strain rate estimation method using autocorrelation for carotid artery
NASA Astrophysics Data System (ADS)
Ye, Jihui; Kim, Hoonmin; Park, Jongho; Yeo, Sunmi; Shim, Hwan; Lim, Hyungjoon; Yoo, Yangmo
2014-03-01
Atherosclerosis is a leading cause of cardiovascular disease. The early diagnosis of atherosclerosis is of clinical interest since it can prevent any adverse effects of atherosclerotic vascular diseases. In this paper, a new carotid artery radial strain estimation method based on autocorrelation is presented. In the proposed method, the strain is first estimated by the autocorrelation of two complex signals from the consecutive frames. Then, the angular phase from autocorrelation is converted to strain and strain rate and they are analyzed over time. In addition, a 2D strain image over a region of interest in a carotid artery can be displayed. To evaluate the feasibility of the proposed radial strain estimation method, radiofrequency (RF) data of 408 frames in the carotid artery of a volunteer were acquired by a commercial ultrasound system equipped with a research package (V10, Samsung Medison, Korea) using an L5-13IS linear array transducer. From in vivo carotid artery data, the mean strain estimate was -0.1372 while its minimum and maximum values were -2.961 and 0.909, respectively. Moreover, the overall strain estimates are highly correlated with the reconstructed M-mode trace. Similar results were obtained from the estimation of the strain rate change over time. These results indicate that the proposed carotid artery radial strain estimation method is useful for assessing the arterial wall's stiffness noninvasively without increasing the computational complexity.
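A hedged sketch of the kind of autocorrelation phase processing the abstract describes, a Kasai-style estimator on synthetic IQ data; the transducer frequency, sampling rate, window length and 0.5% compression below are made-up numbers, and this is not the authors' implementation.

```python
import numpy as np

# Hypothetical acquisition parameters for the sketch (not taken from the paper)
f0 = 5e6          # transmit center frequency [Hz]
c = 1540.0        # speed of sound [m/s]
fs = 20e6         # axial sampling frequency [Hz]

def displacement_and_strain(iq_prev, iq_curr, window=16):
    """Phase of the inter-frame autocorrelation -> axial displacement -> strain."""
    corr = iq_curr * np.conj(iq_prev)                       # complex frame-to-frame correlation
    corr = np.convolve(corr, np.ones(window) / window, mode="same")
    phase = np.unwrap(np.angle(corr))
    disp = phase * c / (4 * np.pi * f0)                     # axial displacement [m]
    strain = np.gradient(disp) * (2 * fs / c)               # d(disp)/d(depth); depth step = c/(2*fs)
    return disp, strain

# Synthetic IQ lines: the same scatterers, compressed by 0.5% between consecutive frames
rng = np.random.default_rng(3)
samples = np.arange(2000)
scatterers = rng.normal(size=samples.size) + 1j * rng.normal(size=samples.size)
true_strain = 0.005
shift = true_strain * samples                               # cumulative displacement [samples]
iq_prev = scatterers
iq_curr = scatterers * np.exp(-1j * 2 * np.pi * f0 / fs * shift)

disp, strain = displacement_and_strain(iq_prev, iq_curr)
print("median estimated strain:", np.median(strain[100:-100]))  # compression shows up as negative strain
```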
Logistic regression for southern pine beetle outbreaks with spatial and temporal autocorrelation
M. L. Gumpertz; C.-T. Wu; John M. Pye
2000-01-01
Regional outbreaks of southern pine beetle (Dendroctonus frontalis Zimm.) show marked spatial and temporal patterns. While these patterns are of interest in themselves, we focus on statistical methods for estimating the effects of underlying environmental factors in the presence of spatial and temporal autocorrelation. The most comprehensive available information on...
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
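A minimal sketch of the simulation side of this workflow, under assumptions of my own (an AR(1) construction for the autocorrelated Gaussian process, and illustrative crossing level and duration): stronger autocorrelation makes long exceedances of a level far more likely.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_series(n, rho, sigma=1.0):
    """Stationary Gaussian process with autocorrelation rho**|k| (AR(1) construction)."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    innovation_sd = sigma * np.sqrt(1.0 - rho ** 2)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal(0.0, innovation_sd)
    return x

def prob_exceedance(rho, level, duration, n=5000, trials=200):
    """Monte Carlo probability that the process stays above `level` for at
    least `duration` consecutive samples somewhere in a record of length n."""
    hits = 0
    for _ in range(trials):
        run = 0
        for above in ar1_series(n, rho) > level:
            run = run + 1 if above else 0
            if run >= duration:
                hits += 1
                break
    return hits / trials

for rho in (0.5, 0.9):
    p = prob_exceedance(rho, level=1.5, duration=10)
    print(f"rho = {rho}: P(exceed level 1.5 for >= 10 samples) ~ {p:.2f}")
```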
Method to manage integration error in the Green-Kubo method.
Oliveira, Laura de Sousa; Greaney, P Alex
2017-02-01
The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
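A minimal sketch of the Green-Kubo workflow that the paper's uncertainty analysis builds on, using a synthetic AR(1) flux in place of a molecular-dynamics heat flux and dropping the physical prefactor; it shows the trade-off named above, truncation bias at short cutoffs versus random-walk noise at long ones.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "heat flux" with an exponentially decaying autocorrelation (AR(1) surrogate)
n, rho = 200_000, 0.98                     # correlation time of about 1/(1 - rho) = 50 steps
flux = np.empty(n)
flux[0] = rng.normal()
for i in range(1, n):
    flux[i] = rho * flux[i - 1] + np.sqrt(1 - rho ** 2) * rng.normal()

# Flux autocorrelation function (FFT based, zero padded, normalized by overlap length)
f = np.fft.rfft(flux - flux.mean(), 2 * n)
acf = np.fft.irfft(f * np.conj(f))[: n // 10]
acf /= np.arange(n, n - acf.size, -1)

# Green-Kubo style running integral (prefactor V/(kB*T^2) and the time step omitted)
running = np.cumsum(acf)
for cutoff in (100, 500, 2000, acf.size - 1):
    print(f"integral up to lag {cutoff:>5d}: {running[cutoff]:8.2f}")
print(f"exact value for this surrogate   : {1.0 / (1.0 - rho):8.2f}")
```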
User's guide to Monte Carlo methods for evaluating path integrals
NASA Astrophysics Data System (ADS)
Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan
2018-04-01
We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
Two-way QKD with single-photon-added coherent states
NASA Astrophysics Data System (ADS)
Miranda, Mario; Mundarain, Douglas
2017-12-01
In this work we present a two-way quantum key distribution (QKD) scheme that uses single-photon-added coherent states and displacement operations. The first party randomly sends coherent states (CS) or single-photon-added coherent states (SPACS) to the second party. The latter sends back the same state it received. Both parties decide which kind of state they are receiving by detecting, or failing to detect, a photon in the received signal after displacement operations. The first party must determine whether its sent and received states are equal; otherwise, the case must be discarded. We show that an eavesdropper equipped with a beam splitter gets the same information in any of the non-discarded cases. The key can be obtained by assigning 0 to CS and 1 to SPACS in the non-discarded cases. This protocol guarantees key security in the presence of a beam-splitter attack even for states with a high number of photons in the sent signal. It also works in a lossy quantum channel, making it a good candidate for improving long-distance QKD.
NASA Astrophysics Data System (ADS)
Valtierra, Robert Daniel
Passive acoustic localization has benefited from many major developments and has become an increasingly important focus point in marine mammal research. Several challenges still remain. This work seeks to address several of these challenges such as tracking the calling depths of baleen whales. In this work, data from an array of widely spaced Marine Acoustic Recording Units (MARUs) were used to achieve three dimensional localization by combining the methods Time Difference of Arrival (TDOA) and Direct-Reflected Time Difference of Arrival (DRTD) along with a newly developed autocorrelation technique. TDOA was applied to data for two dimensional (latitude and longitude) localization and depth was resolved using DRTD. Previously, DRTD had been limited to pulsed broadband signals, such as sperm whale or dolphin echolocation, where individual direct and reflected signals are separated in time. Due to the length of typical baleen whale vocalizations, individual multipath signal arrivals can overlap making time differences of arrival difficult to resolve. This problem can be solved using an autocorrelation, which can extract reflection information from overlapping signals. To establish this technique, a derivation was made to model the autocorrelation of a direct signal and its overlapping reflection. The model was exploited to derive performance limits allowing for prediction of the minimum resolvable direct-reflected time difference for a known signal type. The dependence on signal parameters (sweep rate, call duration) was also investigated. The model was then verified using both recorded and simulated data from two analysis cases for North Atlantic right whales (NARWs, Eubalaena glacialis) and humpback whales (Megaptera novaeangliae). The newly developed autocorrelation technique was then combined with DRTD and tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The combined DRTD-autocorrelation methods enabled calling depth and range estimations of a vocalizing NARW and humpback whale in two separate cases. The DRTD-autocorrelation method was then combined with TDOA to create a three dimensional track of a NARW in the Stellwagen Bank National Marine Sanctuary. Results from these experiments illustrated the potential of the combined methods to successfully resolve baleen calling depths in three dimensions.
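A small sketch of the idea the thesis exploits, with illustrative chirp parameters rather than actual whale-call data: even when a long FM call and its surface reflection overlap in time, the autocorrelation of the received signal shows a secondary peak at the direct-reflected time difference.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 2000.0                                      # sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
call = chirp(t, f0=100, f1=300, t1=2.0)          # long FM sweep, loosely like a baleen-whale call
call *= np.hanning(call.size)

delay_s = 0.15                                   # direct-reflected time difference [s]
delay_n = int(delay_s * fs)
reflection = np.zeros_like(call)
reflection[delay_n:] = -0.6 * call[:-delay_n]    # surface reflection: delayed, inverted, weaker
received = call + reflection                     # the two arrivals overlap in time

acf = correlate(received, received, mode="full")
lags = np.arange(-received.size + 1, received.size) / fs
positive = lags > 0.05                           # ignore the zero-lag main lobe
peak_lag = lags[positive][np.argmax(np.abs(acf[positive]))]
print(f"true delay: {delay_s:.3f} s, delay recovered from autocorrelation: {peak_lag:.3f} s")
```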
techniques is presented. Two methods for linearizing the data are given. An expression for the specular-to-scattered power ratio is derived, and the inverse ... transform of the autocorrelation function is discussed. The surface roughness of the reflector, the discrete fading rates, and fading frequencies
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the auto-regressive moving average model. Plots of means from the Bayesian model showed good fit to the observed data. The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
Awiszus, F; Feistner, H; Schäfer, S S
1991-01-01
The peri-stimulus-time histogram (PSTH) analysis of stimulus-related neuronal spike train data is usually regarded as a method to detect stimulus-induced excitations or inhibitions. However, for a fairly regularly discharging neuron such as the human alpha-motoneuron, long-latency modulations of a PSTH are difficult to interpret as PSTH modulations can also occur as a consequence of a modulated neuronal autocorrelation. The experiments reported here were made (i) to investigate the extent to which a PSTH of a human hand-muscle motoneuron may be contaminated by features of the autocorrelation and (ii) to develop methods that display the motoneuronal excitations and inhibitions without such contamination. Responses of 29 single motor units to electrical ulnar nerve stimulation below motor threshold were investigated in the first dorsal interosseous muscle of three healthy volunteers using an experimental protocol capable of demonstrating the presence of autocorrelative modulations in the neuronal response. It was found for all units that the PSTH as well as the cumulative sum (CUSUM) derived from these responses were severely affected by the presence of autocorrelative features. On the other hand, calculating the CUSUM in a slightly modified form yielded--for all units investigated--a neuronal output feature sensitive only to motoneuronal excitations and inhibitions induced by the afferent volley. The price that has to be paid to arrive at such a modified CUSUM (mCUSUM) was a high computational effort prohibiting the on-line availability of this output feature during the experiment. It was found, however, that an interspike interval superposition plot (IISP)--easily obtainable during the experiment--is also free of autocorrelative features.(ABSTRACT TRUNCATED AT 250 WORDS)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-30
.... Chamberlain, Bingham McCutchen LLP, dated November 22, 2010 (``Bingham Letter''); David Alan Miller, Managing..., the SPAC must complete one or more business combinations having an aggregative fair market value of at..., investor base, and trading interest to provide the depth and liquidity necessary to promote fair and...
Study of trabecular bone microstructure using spatial autocorrelation analysis
NASA Astrophysics Data System (ADS)
Wald, Michael J.; Vasilic, Branimir; Saha, Punam K.; Wehrli, Felix W.
2005-04-01
The spatial autocorrelation analysis method represents a powerful, new approach to quantitative characterization of structurally quasi-periodic anisotropic materials such as trabecular bone (TB). The method is applicable to grayscale images and thus does not require any preprocessing, such as segmentation which is difficult to achieve in the limited resolution regime of in vivo imaging. The 3D autocorrelation function (ACF) can be efficiently calculated using the Fourier transform. The resulting trabecular thickness and spacing measurements are robust to the presence of noise and produce values within the expected range as determined by other methods from μCT and μMRI datasets. TB features found from the ACF are shown to correlate well with those determined by the Fuzzy Distance transform (FDT) in the transverse plane, i.e. the plane orthogonal to bone's major axis. The method is further shown to be applicable to in-vivo μMRI data. Using the ACF, we examine data acquired in a previous study aimed at evaluating the structural implications of male hypogonadism characterized by testosterone deficiency and reduced bone mass. Specifically, we consider the hypothesis that eugonadal and hypogonadal men differ in the anisotropy of their trabecular networks. The analysis indicates a significant difference in trabecular bone thickness and longitudinal spacing between the control group and the testosterone deficient group. We conclude that spatial autocorrelation analysis is able to characterize the 3D structure and anisotropy of trabecular bone and provides new insight into the structural changes associated with osteoporotic trabecular bone loss.
Chin, Sang Hoon; Kim, Young Jae; Song, Ho Seong; Kim, Dug Young
2006-10-10
We propose a simple but powerful scheme for the complete analysis of the frequency chirp of a gain-switched optical pulse using a fringe-resolved interferometric two-photon absorption autocorrelator. The frequency chirp imposed on the gain-switched pulse from a laser diode was retrieved from both the intensity autocorrelation trace and the envelope of the second-harmonic interference fringe pattern. To verify the accuracy of the proposed phase retrieval method, we performed an optical pulse compression experiment using dispersion-compensating fibers of different lengths. The compressed pulse widths agreed with the numerically calculated pulse widths to within a 1% error.
Assessment of pulmonary arterial compliance evaluated using harmonic oscillator kinematics
Hayabuchi, Yasunobu; Ono, Akemi; Homma, Yukako; Kagami, Shoji
2017-01-01
We hypothesized that KPA, a harmonic oscillator kinematics-derived spring constant parameter of the pulmonary artery pressure (PAP) profile, reflects PA compliance in pediatric patients. In this prospective study of 33 children (age range = 0.5–20 years) with various cardiac diseases, we assessed the novel parameter designated as KPA calculated using the pressure phase plane and the equation KPA = (dP/dt_max)^2/((Pmax – Pmin)/2)^2, where dP/dt_max is the peak derivative of PAP, and Pmax – Pmin is the difference between the minimum and maximum PAP. PA compliance was also calculated using two conventional methods: systolic PA compliance (sPAC) was expressed as the stroke volume/(Pmax – Pmin); and diastolic PA compliance (dPAC) was determined according to a two-element Windkessel model of PA diastolic pressure decay. In addition, data were recorded during abdominal compression to determine the influence of preload on KPA. A significant correlation was observed between KPA and sPAC (r = 0.52, P = 0.0018), but not dPAC. Significant correlations were also seen with the time constant (τ) of diastolic PAP (r = −0.51, P = 0.0026) and the pulmonary vascular resistance index (r = −0.39, P = 0.0242). No significant difference in KPA was seen between before and after abdominal compression. KPA had a higher intraclass correlation coefficient than other compliance and resistance parameters for both intra-observer and inter-observer variability (0.998 and 0.997, respectively). These results suggest that KPA can provide insight into the underlying mechanisms and facilitate the quantification of PA compliance. PMID:28621582
A phase match based frequency estimation method for sinusoidal signals
NASA Astrophysics Data System (ADS)
Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao
2015-04-01
Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than Pisarenko harmonic decomposition, the modified covariance method, and TSA, which contributes to improving the precision of LFMCW radars effectively.
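This is not the authors' phase-match estimator; it is a minimal sketch of the autocorrelation-phase family of estimators the abstract compares against (TSA builds on the same quantity): the frequency of a noisy complex sinusoid read from the phase of the lag-k sample autocorrelation.

```python
import numpy as np

def freq_from_autocorrelation(x, fs, lag=1):
    """Estimate sinusoid frequency from the phase of the lag-`lag` autocorrelation.
    Expects a complex (analytic) signal; unambiguous only for |f| < fs / (2 * lag)."""
    r = np.vdot(x[:-lag], x[lag:])           # sum of conj(x[n]) * x[n + lag]
    return np.angle(r) * fs / (2 * np.pi * lag)

rng = np.random.default_rng(6)
fs, f_true, n = 1000.0, 87.3, 4096
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f_true * t) + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))

print("true frequency :", f_true)
print("lag-1 estimate :", freq_from_autocorrelation(x, fs, lag=1))
print("lag-4 estimate :", freq_from_autocorrelation(x, fs, lag=4))  # finer, but smaller unambiguous range
```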
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... Acquisition Companies (SPACs) That Have Indicated That Their Business Plan Is To Engage in a Merger or... criteria for listing companies that have indicated that their business plan is to engage in a merger or... criteria for listing companies that have indicated that their business plan is to engage in a merger or...
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples, taken from inflation rates and from air pressure data for 95 US cities.
Smith, Scott D; Singh, Kuldev; Lin, Shan C; Chen, Philip P; Chen, Teresa C; Francis, Brian A; Jampel, Henry D
2013-10-01
To assess the published literature pertaining to the association between anterior segment imaging and gonioscopy and to determine whether such imaging aids in the diagnosis of primary angle closure (PAC). Literature searches of the PubMed and Cochrane Library databases were last conducted on July 6, 2011. The searches yielded 371 unique citations. Members of the Ophthalmic Technology Assessment Committee Glaucoma Panel reviewed the titles and abstracts of these articles and selected 134 of possible clinical significance for further review. The panel reviewed the full text of these articles and identified 79 studies meeting the inclusion criteria, for which the panel methodologist assigned a level of evidence based on a standardized grading scheme adopted by the American Academy of Ophthalmology. Three, 70, and 6 studies were rated as providing level I, II, and III evidence, respectively. Quantitative and qualitative parameters defined from ultrasound biomicroscopy (UBM), anterior segment optical coherence tomography (OCT), Scheimpflug photography, and the scanning peripheral anterior chamber depth analyzer (SPAC) demonstrate a strong association with the results of gonioscopy. There is substantial variability in the type of information obtained from each imaging method. Imaging of structures posterior to the iris is possible only with UBM. Direct imaging of the anterior chamber angle (ACA) is possible using UBM and OCT. The ability to acquire OCT images in a completely dark environment allows greater sensitivity in detecting eyes with appositional angle closure. Noncontact imaging using OCT, Scheimpflug photography, or SPAC makes these methods more attractive for large-scale PAC screening than contact imaging using UBM. Although there is evidence suggesting that anterior segment imaging provides useful information in the evaluation of PAC, none of these imaging methods provides sufficient information about the ACA anatomy to be considered a substitute for gonioscopy. Longitudinal studies are needed to validate the diagnostic significance of the parameters measured by these instruments for prospectively identifying individuals at risk for PAC. Proprietary or commercial disclosure may be found after the references. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio; Contrafatto, Danilo; Galluzzo, Danilo; Rapisarda, Salvatore
2016-04-01
In a volcanic environment the propagation of seismic signals through the shallowest layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Therefore tracing a seismic ray from the recording site back to the source is a complex matter, with obvious implications for the source location. For this reason the knowledge of the shallow velocity structure may improve the location of shallow volcano-tectonic earthquakes and volcanic tremor, thus contributing to improve the monitoring of volcanic activity. This work focuses on the analysis of seismic noise and volcanic tremor recorded in 2014 by a temporary array installed around Pozzo Pitarrone, NE flank of Mt. Etna. Several methods permit a reliable estimation of the shear wave velocity in the shallowest layers through the analysis of a stationary random wavefield such as seismic noise. We have applied the single station HVSR method and the SPAC array method to seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear wave velocity model of the area reliable down to a depth of about 130 m. We also applied the Beam Forming array method in the 0.5 Hz - 4 Hz frequency range to both seismic noise and volcanic tremor. The apparent velocity of coherent tremor signals fits quite well the dispersion curve estimated from the analysis of seismic noise, thus giving a further constraint on the estimated velocity model. Moreover, taking advantage of a borehole station installed at 130 m depth in the same area of the array, we obtained a direct estimate of the P-wave velocity by comparing the borehole recordings of local earthquakes with the same events recorded at the surface. Further insight on the P-wave velocity in the upper 130 m layer comes from the surface reflected wave visible in some cases at the borehole station. From this analysis we obtained an average P-wave velocity of about 1.2 km/s, in good agreement with the shear wave velocity found from the analysis of seismic noise. To better constrain the inversion we used the HVSR computed at each array station, which also gives a lateral extension to the final 3D velocity model. The obtained results indicate that site effects in the investigated area are quite homogeneous among the array stations.
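As a pointer to the SPAC principle invoked above (a synthetic, single-mode sketch with made-up velocity and station spacing, not the Pozzo Pitarrone data): for noise arriving uniformly from all azimuths, the azimuth-averaged coherency between two stations a distance r apart equals J0(2*pi*f*r/c), and the first zero crossing already gives a velocity estimate.

```python
import numpy as np
from scipy.special import j0

c, r = 400.0, 50.0                         # phase velocity [m/s], station separation [m]
freqs = np.linspace(0.5, 12.0, 200)        # frequency band of interest [Hz]
azimuths = np.linspace(0, 2 * np.pi, 360, endpoint=False)

# For one plane wave from azimuth theta, the normalized cross-spectrum between two stations
# separated by r along x is exp(i*2*pi*f*r*cos(theta)/c); the SPAC coefficient is its
# azimuthal average, here taken over a uniform set of incidence directions.
spac = np.array([np.cos(2 * np.pi * f * r * np.cos(azimuths) / c).mean() for f in freqs])

print("max |SPAC - J0| over the band:", np.max(np.abs(spac - j0(2 * np.pi * freqs * r / c))))

# The first zero of J0 occurs at argument 2.405, so the frequency f0 where the SPAC
# coefficient first crosses zero gives a phase-velocity estimate c = 2*pi*f0*r/2.405.
f0 = freqs[np.nonzero(spac < 0)[0][0]]
print("velocity from first zero crossing:", 2 * np.pi * f0 * r / 2.405, "m/s (true 400)")
```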
Probabilistic density function method for nonlinear dynamical systems driven by colored noise.
Barajas-Solano, David A; Tartakovsky, Alexandre M
2016-05-01
We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integrodifferential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified large-eddy-diffusivity (LED) closure. In contrast to the classical LED closure, the proposed closure accounts for advective transport of the PDF in the approximate temporal deconvolution of the integrodifferential equation. In addition, we introduce the generalized local linearization approximation for deriving a computable PDF equation in the form of a second-order partial differential equation. We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary autocorrelation time. We apply the proposed PDF method to analyze a set of Kramers equations driven by exponentially autocorrelated Gaussian colored noise to study nonlinear oscillators and the dynamics and stability of a power grid. Numerical experiments show the PDF method is accurate when the noise autocorrelation time is either much shorter or longer than the system's relaxation time, while the accuracy decreases as the ratio of the two timescales approaches unity. Similarly, the PDF method accuracy decreases with increasing standard deviation of the noise.
NASA Astrophysics Data System (ADS)
Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho
2016-03-01
We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbine (WT). An autocorrelation function that represents physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy in fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.
NASA Astrophysics Data System (ADS)
Popkov, Artem
2016-01-01
The article analyzes acoustic emission signals using the autocorrelation function. Operational factors such as signal shape, onset time, and carrier frequency were analyzed. The purpose of the work is to estimate the validity of correlation methods for signal analysis. An acoustic emission signal consists of different types of waves that propagate along different trajectories in the object under inspection; it is an amplitude-, phase-, and frequency-modulated signal that can be described by its carrier frequency at a given point in time. For the analyzed signal, the period is 12.5 microseconds and the carrier frequency is 80 kHz. Using the autocorrelation function as an indicator of the onset time of an acoustic emission signal improves the validity of source localization.
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure that characterizes complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica) allowing to judge their consistency.
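A sketch in the spirit of the procedure summarized above; the self-consistent window below follows the common Madras-Sokal recipe with c = 6, which is an assumption here and not necessarily the exact windowing of the paper or its Matlab code.

```python
import numpy as np

def integrated_autocorr_time(x, c=6.0):
    """Integrated autocorrelation time with an automatic window W >= c * tau_int."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= acf[0]
    tau = 0.5
    for w in range(1, n):
        tau += acf[w]
        if w >= c * tau:                   # Madras-Sokal style self-consistent cutoff
            return max(tau, 0.5), w
    return tau, n - 1

# correlated Monte Carlo-like chain: AR(1) with known tau_int = (1 + rho) / (2 * (1 - rho))
rng = np.random.default_rng(8)
rho, n = 0.95, 100_000
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho ** 2) * rng.normal()

tau, window = integrated_autocorr_time(x)
naive_err = x.std(ddof=1) / np.sqrt(n)
print(f"tau_int ~ {tau:.1f} (theory {(1 + rho) / (2 * (1 - rho)):.1f}), window = {window}")
print(f"naive error {naive_err:.4f} -> autocorrelation-corrected error {naive_err * np.sqrt(2 * tau):.4f}")
```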
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The forecasting power of the autoregressive moving average (ARMA) model was used to forecast coconut production for the next eight years.
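The numbers below are synthetic placeholders and NOT the actual Philippine coconut production figures, but the snippet walks through the Box-Jenkins steps the abstract lists: inspect the ACF and PACF, fit an ARIMA model whose order those plots suggest, and forecast eight years ahead. It assumes the statsmodels package is available.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf

# Illustrative annual production figures for 1990-2012 (million tonnes); placeholder data only
rng = np.random.default_rng(9)
production = 14.0 + 0.08 * np.arange(23) + rng.normal(0, 0.4, 23)

print("ACF  (lags 1-5):", np.round(acf(production, nlags=5)[1:], 2))
print("PACF (lags 1-5):", np.round(pacf(production, nlags=5)[1:], 2))

fit = ARIMA(production, order=(1, 1, 1)).fit()   # order chosen by eye from the ACF/PACF
print("AIC:", round(fit.aic, 1))
print("Forecast for the next eight years:", np.round(fit.forecast(steps=8), 2))
```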
ERIC Educational Resources Information Center
Jansen, Wim
1992-01-01
In view of the opportunities made possible by the Framework Agreement between the European Space Agency and the Soviet Union, this article examines the linguistic aspects of the agreement and its implementation. Many communication problems are related to Western concepts of project management and control that are difficult to translate into…
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the consistency assumption underlying hydrological computation, and both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for evaluating the significance of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, the method divides the significance of dependence into five degrees: no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the autocorrelation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the autocorrelation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the autocorrelation coefficient. The method was then used to analyze three observed hydrological time series; the results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
NASA Astrophysics Data System (ADS)
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian, three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.
2002-01-01
We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656
Double-resolution electron holography with simple Fourier transform of fringe-shifted holograms.
Volkov, V V; Han, M G; Zhu, Y
2013-11-01
We propose a fringe-shifting holographic method with an appropriate image wave recovery algorithm leading to exact solution of holographic equations. With this new method the complex object image wave recovered from holograms appears to have much less traditional artifacts caused by the autocorrelation band present practically in all Fourier transformed holograms. The new analytical solutions make possible a double-resolution electron holography free from autocorrelation band artifacts and thus push the limits for phase resolution. The new image wave recovery algorithm uses a popular Fourier solution of the side band-pass filter technique, while the fringe-shifting holographic method is simple to implement in practice. Published by Elsevier B.V.
Bayesian Estimates of Autocorrelations in Single-Case Designs
ERIC Educational Resources Information Center
Shadish, William R.; Rindskopf, David M.; Hedges, Larry V.; Sullivan, Kristynn J.
2012-01-01
Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to…
NASA Astrophysics Data System (ADS)
Martins, Luis Gustavo Nogueira; Stefanello, Michel Baptistella; Degrazia, Gervásio Annes; Acevedo, Otávio Costa; Puhales, Franciano Scremin; Demarco, Giuliano; Mortarini, Luca; Anfossi, Domenico; Roberti, Débora Regina; Costa, Felipe Denardin; Maldaner, Silvana
2016-11-01
In this study we analyze natural complex signals employing Hilbert-Huang spectral analysis. Specifically, low-wind meandering meteorological data are decomposed into turbulent and non-turbulent components. These non-turbulent movements, responsible for the absence of a preferential direction of the horizontal wind, produce negative lobes in the meandering autocorrelation functions. The meandering characteristic time scales (meandering periods) are determined from the spectral peak provided by the Hilbert-Huang marginal spectrum. The magnitudes of the temperature and horizontal wind meandering periods obtained agree with the results found from the best fit of the heuristic meandering autocorrelation functions. Therefore, the new method represents a new procedure to evaluate meandering periods that does not employ mathematical expressions to represent observed meandering autocorrelation functions.
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for correlated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher stated that these charts are not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model when building the control chart. A classical control chart for independent variables is usually applied to the process residuals. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
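For reference, the sketch below shows a plain EWMA chart together with a simple AR(1)-residual pre-processing step, which is one standard way of charting autocorrelated observations; it is a minimal Python illustration under assumed example values of the smoothing constant `lam` and limit width `L`, not the modified chart derived in the paper.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA statistics z_t and control limits, assuming independent data."""
    x = np.asarray(x, dtype=float)
    mu0, sigma = x.mean(), x.std(ddof=1)
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1 - lam) * mu0
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    i = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 - width, mu0 + width

def ar1_residuals(x):
    """Residuals of a fitted AR(1) model; charting these is one standard way
    to apply a classical chart to autocorrelated observations."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    phi = xc[1:] @ xc[:-1] / (xc[:-1] @ xc[:-1])
    return xc[1:] - phi * xc[:-1]

# Usage: chart the residuals of a synthetic autocorrelated series.
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 5.0 + 0.7 * (x[t - 1] - 5.0) + rng.normal()
z, lcl, ucl = ewma_chart(ar1_residuals(x))
print(np.sum((z < lcl) | (z > ucl)), "points outside the limits")
```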
NASA Astrophysics Data System (ADS)
Cárdenas-Soto, M.; Valdes, J. E.; Escobedo-Zenil, D.
2013-05-01
In June 2006, the base of the artificial lake in Chapultepec Park collapsed. Twenty thousand liters of water drained into the ground through a crack, enlarging the initial gap. Studies indicated that the collapse was due to saturated material associated with a sudden and massive water infiltration process. Geological studies indicate that throughout this section the subsoil is composed of volcano-sedimentary materials that were economically exploited in the mid-20th century, leaving a series of underground mines that were later rehabilitated for the construction of the park. Currently, the lake is rehabilitated and in use for recreational activities. In this study we applied two seismic noise correlation methods, seismic interferometry (SI) in the time domain and spatial autocorrelation (SPAC) in the frequency domain, in order to explore the 3D subsoil velocity structure. The aim is to highlight major velocity variations that can be associated with irregularities in the subsoil that may pose a risk to the stability of the lake. For this purpose we used 96 vertical 4.5 Hz geophones with 5-m spacing, forming a semicircular array 480 m long around the lake zone. For both correlation methods, we extracted the phase velocity associated with the dispersion characteristics between each pair of stations in the frequency range from 4 to 12 Hz. In the SPAC method this was done through the dispersion curve, and in the SI method we used the time delay of the maximum amplitude of the correlation pulse, previously filtered in multiple frequency bands. The results of both processes were combined into 3D velocity volumes (in the SI case, traveltime tomography was applied). We observed that in the frequency range from 6 to 8 Hz, irregular structures appear with a high velocity contrast relative to the shear wave velocity of the surface layer (a roughly ten-meter-thick layer of saturated sediments). One of these anomalies is related to areas where the lake was rehabilitated, but the others are not reported in previous geophysical or geotechnical studies.
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. The DFA method detects long-range correlations in time series. The LSDFA method reveals more local properties by using local scaling exponents. The DCCA method is a development that quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) using DFA, LSDFA, and DCCA. The findings are that correlations of stocks are influenced by the economic systems of different countries and by the financial crisis. The results indicate that there are stronger auto-correlations in Chinese stocks than in western stocks in any period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng; the LSDFA shows more comprehensive and detailed features than the traditional DFA method, as well as the economic integration of China with the world after the global financial crisis; for cross-correlations, the six stock markets show different properties, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.
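A compact reference implementation of the basic DFA step is given below; it is a minimal Python sketch (linear detrending, non-overlapping windows), not the LSDFA or DCCA variants used in the study, and the white-noise test series is only a placeholder for market returns.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis; returns the fluctuation F(s) per scale."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                        # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        ms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

# Usage: the scaling exponent is the slope of log F(s) against log s
# (about 0.5 for an uncorrelated series, larger for persistent ones).
rng = np.random.default_rng(2)
x = rng.normal(size=10_000)
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(round(alpha, 2))
```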
Moho depth across the Trans-European Suture Zone from ambient vibration autocorrelations
NASA Astrophysics Data System (ADS)
Becker, Gesa; Knapmeyer-Endrun, Brigitte
2017-04-01
In 2018 the InSight mission to Mars will deploy a seismic station on the planet. This seismic station will consist of a three-component very broadband seismic sensor and a collocated three-component short-period seismometer. Single-station methods are therefore needed to extract information from the data and learn more about the interior structure of Mars. One potential method is the extraction of reflected phases from autocorrelations. Here autocorrelations are derived from ambient seismic noise to make the most of the data expected, as seismicity on Mars is likely less abundant than on Earth. These autocorrelations are calculated using a phase autocorrelation algorithm and time-frequency domain phase-weighted stacking as the main processing steps, in addition to smoothing the spectrum of the data with a short-term/long-term average algorithm. Afterwards, the obtained results are filtered and analyzed in the frequency range of 1-2 Hz. The developed processing scheme is applied to data from permanent seismic stations located in different geological provinces across Europe, i.e. the Upper Rhine Graben, Central European Platform, Bohemian Massif, Northern German and Polish Basin, and the East European Craton, with Moho depths varying between 25 and 50 km. These crustal thicknesses are comparable to various estimates for Mars, therefore providing a good reference and indication of resolvability for Moho depths that might be encountered at the landing site. Changes in reflectivity can be observed in the calculated autocorrelations. The lag times of these changes are converted into depths with the help of available velocity information (EPcrust and local models for Poland and the Czech Republic, respectively) and the results are compared to existing information on Moho depths, showing good agreement. The results are temporally stable, but show a clear correlation with the existence of cultural noise. Based on the closely located broadband and short-period stations of the GERESS array, it is shown that the processing scheme is also applicable to short-period stations. Subsequently it is applied to the mainly short-period and temporary stations of the PASSEQ network along the seismic profile POLONAISE P4, running from Eastern Germany to Lithuania and crossing the Trans-European Suture Zone.
Hotspot detection using image pattern recognition based on higher-order local auto-correlation
NASA Astrophysics Data System (ADS)
Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki
2011-04-01
Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stage of design. So far, litho-aware design using lithography simulation models has been widely applied to ensure that designs are printed on silicon without any error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot detection support method using flexible and intelligent image pattern recognition based on Higher-Order Local Autocorrelation. Our method learns the geometrical properties of given design data without any defects as normal patterns, and automatically detects design patterns with hotspots in the test data as abnormal patterns. The Higher-Order Local Autocorrelation method can extract features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach can reduce turnaround time (TAT) dramatically on a single CPU compared with the conventional simulation-based approach, and by distributed processing it has proven to deliver linear scalability with each additional CPU.
Low Streamflow Forecasting using Minimum Relative Entropy
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different priors, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
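As a baseline against which spectral forecasting methods like those above are usually compared, the autocorrelation structure can also be extended with a conventional autoregressive fit. The sketch below (Yule-Walker estimation plus iterated one-step forecasts) is a generic Python illustration, not the minimum relative entropy or Burg MESA procedure used in the study; the synthetic monthly flow series and the AR order are assumptions made for the example.

```python
import numpy as np

def yule_walker(x, order):
    """AR(p) coefficients from the sample autocorrelations (Yule-Walker)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

def ar_forecast(x, phi, steps):
    """Iterated one-step forecasts from an AR fit (mean added back)."""
    mu = float(np.mean(x))
    hist = list(np.asarray(x, dtype=float) - mu)
    out = []
    for _ in range(steps):
        lagged = hist[-1 : -len(phi) - 1 : -1]      # most recent value first
        nxt = float(np.dot(phi, lagged))
        out.append(nxt + mu)
        hist.append(nxt)
    return np.array(out)

# Usage on a synthetic seasonal low-flow series (placeholder data).
rng = np.random.default_rng(3)
t = np.arange(240)
flow = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=t.size)
phi = yule_walker(flow, order=13)
print(ar_forecast(flow, phi, steps=6))
```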
NASA Astrophysics Data System (ADS)
Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.
2015-12-01
Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as the autocorrelation length, angle, and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and its application in seismic inversion. Experiments and theoretical analysis indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is clearly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor, and so on. The error in calculating the autocorrelation for finite and discrete models also decreases the accuracy. In order to improve the precision of RMP estimation, we design two improved approaches. Firstly, we apply a region-growing algorithm, often used in image processing, to reduce the influence of noise on the autocorrelation calculated by the power spectrum method. Secondly, the orientation of the autocorrelation is used as a new constraint in the estimation algorithm. Numerical experiments prove that this is feasible. In addition, in post-stack seismic inversion of random media, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and using relatively accurate estimated RMP to construct the initial model yields better inversion results, which contain more details consistent with the actual subsurface medium.
Navy Personnel Survey (NPS) 1990 Survey Report, Statistical Tables. Volume 1. Enlisted Personnel.
1991-08-01
Respondents were asked to provide demographic data and to indicate their attitudes or opinions on rotation/permanent change of station (PCS) moves...and measured military members' attitudes and opinions in various areas, including rotation/permanent change of station moves, recruiting duty, pay and...about the Organizational Climate: Use the space below to make any comments you wish about the organizational climate, including EO issues and sexual harassment
A geostatistical state-space model of animal densities for stream networks.
Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H
2018-06-21
Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty under-estimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years from 1981 - 2014. We found the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures with low temporal autocorrelation and moderately-high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved.
Wilson, Lorna R M; Hopcraft, Keith I
2017-12-01
The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices and the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise, and incur little information loss. Simulation results show that, in LGA, the proposed method achieves improved performance compared to other methods in white or colored noise conditions.
Numerical analysis for finite-range multitype stochastic contact financial market dynamic systems
NASA Astrophysics Data System (ADS)
Yang, Ge; Wang, Jun; Fang, Wen
2015-04-01
In an attempt to reproduce and study the dynamics of financial markets, a random agent-based financial price model is developed and investigated by means of the finite-range multitype contact dynamic system, in which the interaction and dispersal of different types of investment attitudes in a stock market are imitated by the spreading of viruses. With different parameters of birth rates and finite range, the normalized return series are simulated by the Monte Carlo method and studied numerically by power-law distribution analysis and autocorrelation analysis. To better understand the nonlinear dynamics of the return series, a q-order autocorrelation function and a multi-autocorrelation function are also defined in this work. The comparisons of the statistical behaviors of return series from the agent-based model and the daily historical market returns of the Shanghai Composite Index and Shenzhen Component Index indicate that the proposed model is a reasonable qualitative explanation for the price formation process of stock market systems.
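The q-order autocorrelation diagnostic mentioned above can be read, in one plausible interpretation, as the autocorrelation of the q-th power of absolute returns; the paper's exact definition may differ. The sketch below implements that reading in Python on placeholder returns.

```python
import numpy as np

def q_order_autocorrelation(r, q, max_lag):
    """Autocorrelation of |r_t|**q up to max_lag.

    One plausible reading of a 'q-order autocorrelation function' for return
    series; the definition used in the paper may differ.
    """
    a = np.abs(np.asarray(r, dtype=float)) ** q
    a = a - a.mean()
    denom = a @ a
    return np.array([a[: len(a) - k] @ a[k:] / denom
                     for k in range(1, max_lag + 1)])

# Usage on placeholder heavy-tailed returns.
rng = np.random.default_rng(4)
r = rng.standard_t(df=4, size=5000) * 0.01
print(q_order_autocorrelation(r, q=1.0, max_lag=5))
```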
An investigation of turbulent transport in the extreme lower atmosphere
NASA Technical Reports Server (NTRS)
Koper, C. A., Jr.; Sadeh, W. Z.
1975-01-01
A model in which the Lagrangian autocorrelation is expressed by a domain integral over a set of usual Eulerian autocorrelations acquired concurrently at all points within a turbulence box is proposed along with a method for ascertaining the statistical stationarity of turbulent velocity by creating an equivalent ensemble to investigate the flow in the extreme lower atmosphere. Simultaneous measurements of turbulent velocity on a turbulence line along the wake axis were carried out utilizing a longitudinal array of five hot-wire anemometers remotely operated. The stationarity test revealed that the turbulent velocity is approximated as a realization of a weakly self-stationary random process. Based on the Lagrangian autocorrelation it is found that: (1) large diffusion time predominated; (2) ratios of Lagrangian to Eulerian time and spatial scales were smaller than unity; and, (3) short and long diffusion time scales and diffusion spatial scales were constrained within their Eulerian counterparts.
Shang, Jianyuan; Geva, Eitan
2007-04-26
The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.
Razifar, Pasha; Lubberink, Mark; Schneider, Harald; Långström, Bengt; Bengtsson, Ewert; Bergström, Mats
2005-05-13
BACKGROUND: Positron emission tomography (PET) is a powerful imaging technique with the potential of obtaining functional or biochemical information by measuring distribution and kinetics of radiolabelled molecules in a biological system, both in vitro and in vivo. PET images can be used directly or after kinetic modelling to extract quantitative values of a desired physiological, biochemical or pharmacological entity. Because such images are generally noisy, it is essential to understand how noise affects the derived quantitative values. A pre-requisite for this understanding is that the properties of noise such as variance (magnitude) and texture (correlation) are known. METHODS: In this paper we explored the pattern of noise correlation in experimentally generated PET images, with emphasis on the angular dependence of correlation, using the autocorrelation function (ACF). Experimental PET data were acquired in 2D and 3D acquisition mode and reconstructed by analytical filtered back projection (FBP) and iterative ordered subsets expectation maximisation (OSEM) methods. The 3D data was rebinned to a 2D dataset using FOurier REbinning (FORE) followed by 2D reconstruction using either FBP or OSEM. In synthetic images we compared the ACF results with those from covariance matrix. The results were illustrated as 1D profiles and also visualized as 2D ACF images. RESULTS: We found that the autocorrelation images from PET data obtained after FBP were not fully rotationally symmetric or isotropic if the object deviated from a uniform cylindrical radioactivity distribution. In contrast, similar autocorrelation images obtained after OSEM reconstruction were isotropic even when the phantom was not circular. Simulations indicated that the noise autocorrelation is non-isotropic in images created by FBP when the level of noise in projections is angularly variable. Comparison between 1D cross profiles on autocorrelation images obtained by FBP reconstruction and covariance matrices produced almost identical results in a simulation study. CONCLUSION: With asymmetric radioactivity distribution in PET, reconstruction using FBP, in contrast to OSEM, generates images in which the noise correlation is non-isotropic when the noise magnitude is angular dependent, such as in objects with asymmetric radioactivity distribution. In this respect, iterative reconstruction is superior since it creates isotropic noise correlations in the images.
Probabilistic density function method for nonlinear dynamical systems driven by colored noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
2016-05-01
We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integro-differential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified Large-Eddy-Diffusivity-type closure. Additionally, we introduce the generalized local linearization (LL) approximation for deriving a computable PDF equation in the form of a second-order partial differential equation (PDE). We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary auto-correlation time. We apply the proposed PDF method to the analysis of a set of Kramers equations driven by exponentially auto-correlated Gaussian colored noise to study the dynamics and stability of a power grid.
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only 3 control parameters, and a much discussed question is how to estimate them when a series for some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. Although these models show almost identical performance with respect to the final probability density function and to the time autocorrelation function, their scaling properties are very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits the FTSE scaling exponent better.
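For experimenting with such moment-based estimators, a Gaussian GARCH(1,1) series can be simulated in a few lines; the sketch below is a minimal Python illustration (the parameter values are arbitrary examples), not the authors' estimation code, and it only prints the sample variance and kurtosis that moment-matching schemes would use.

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, rng=None):
    """Gaussian GARCH(1,1): r_t = sigma_t * z_t with
    sigma_{t+1}^2 = omega + alpha * r_t^2 + beta * sigma_t^2."""
    rng = rng or np.random.default_rng()
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)     # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

# Moment-matching uses quantities like the sample variance, the kurtosis,
# and the decay of the squared-return autocorrelation.
r = simulate_garch11(0.05, 0.09, 0.89, 200_000, np.random.default_rng(5))
kurt = np.mean((r - r.mean()) ** 4) / r.var() ** 2
print(round(r.var(), 2), round(kurt, 2))   # theory: variance 2.5, kurtosis ~5.1
```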
Borycki, Dawid; Kholiqov, Oybek; Chong, Shau Poh; Srinivasan, Vivek J.
2016-01-01
We introduce and implement interferometric near-infrared spectroscopy (iNIRS), which simultaneously extracts optical and dynamical properties of turbid media through analysis of a spectral interference fringe pattern. The spectral interference fringe pattern is measured using a Mach-Zehnder interferometer with a frequency-swept narrow linewidth laser. Fourier analysis of the detected signal is used to determine time-of-flight (TOF)-resolved intensity, which is then analyzed over time to yield TOF-resolved intensity autocorrelations. This approach enables quantification of optical properties, which is not possible in conventional, continuous-wave near-infrared spectroscopy (NIRS). Furthermore, iNIRS quantifies scatterer motion based on TOF-resolved autocorrelations, which is a feature inaccessible by well-established diffuse correlation spectroscopy (DCS) techniques. We prove this by determining TOF-resolved intensity and temporal autocorrelations for light transmitted through diffusive fluid phantoms with optical thicknesses of up to 55 reduced mean free paths (approximately 120 scattering events). The TOF-resolved intensity is used to determine optical properties with time-resolved diffusion theory, while the TOF-resolved intensity autocorrelations are used to determine dynamics with diffusing wave spectroscopy. iNIRS advances the capabilities of diffuse optical methods and is suitable for in vivo tissue characterization. Moreover, iNIRS combines NIRS and DCS capabilities into a single modality. PMID:26832264
NASA Astrophysics Data System (ADS)
Luo, JunYan; Yan, Yiying; Huang, Yixiao; Yu, Li; He, Xiao-Ling; Jiao, HuJun
2017-01-01
We investigate the noise correlations of spin and charge currents through an electron spin resonance (ESR)-pumped quantum dot, which is tunnel coupled to three electrodes maintained at an equivalent chemical potential. A recursive scheme is employed with inclusion of the spin degrees of freedom to account for the spin-resolved counting statistics in the presence of non-Markovian effects due to coupling with a dissipative heat bath. For symmetric spin-up and spin-down tunneling rates, an ESR-induced spin flip mechanism generates a pure spin current without an accompanying net charge current. The stochastic tunneling of spin carriers, however, produces universal shot noises of both charge and spin currents, revealing the effective charge and spin units of quasiparticles in transport. In the case of very asymmetric tunneling rates for opposite spins, an anomalous relationship between noise autocorrelations and cross correlations is revealed, where super-Poissonian autocorrelation is observed in spite of a negative cross correlation. Remarkably, with strong dissipation strength, non-Markovian memory effects give rise to a positive cross correlation of the charge current in the absence of a super-Poissonian autocorrelation. These unique noise features may offer essential methods for exploiting internal spin dynamics and various quasiparticle tunneling processes in mesoscopic transport.
An evaluation of random analysis methods for the determination of panel damping
NASA Technical Reports Server (NTRS)
Bhat, W. V.; Wilby, J. F.
1972-01-01
An analysis is made of steady-state and non-steady-state methods for the measurement of panel damping. Particular emphasis is placed on the use of random process techniques in conjunction with digital data reduction methods. The steady-state methods considered use the response power spectral density, response autocorrelation, excitation-response crosspower spectral density, or single-sided Fourier transform (SSFT) of the response autocorrelation function. Non-steady-state methods are associated mainly with the use of rapid frequency sweep excitation. Problems associated with the practical application of each method are evaluated with specific reference to the case of a panel exposed to a turbulent airflow, and two methods, the power spectral density and the single-sided Fourier transform methods, are selected as being the most suitable. These two methods are demonstrated experimentally, and it is shown that the power spectral density method is satisfactory under most conditions, provided that appropriate corrections are applied to account for filter bandwidth and background noise errors. Thus, the response power spectral density method is recommended for the measurement of the damping of panels exposed to a moving airflow.
Inference for local autocorrelations in locally stationary models.
Zhao, Zhibiao
2015-04-01
For non-stationary processes, the time-varying correlation structure provides useful insights into the underlying model dynamics. We study estimation and inferences for local autocorrelation process in locally stationary time series. Our constructed simultaneous confidence band can be used to address important hypothesis testing problems, such as whether the local autocorrelation process is indeed time-varying and whether the local autocorrelation is zero. In particular, our result provides an important generalization of the R function acf() to locally stationary Gaussian processes. Simulation studies and two empirical applications are developed. For the global temperature series, we find that the local autocorrelations are time-varying and have a "V" shape during 1910-1960. For the S&P 500 index, we conclude that the returns satisfy the efficient-market hypothesis whereas the magnitudes of returns show significant local autocorrelations.
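A rough, point-estimate counterpart to the local autocorrelation process studied here is a rolling-window lag-1 autocorrelation; the sketch below is a minimal Python version with an arbitrary window length and none of the paper's simultaneous confidence band machinery.

```python
import numpy as np

def local_autocorrelation(x, window=101, lag=1):
    """Rolling-window estimate of the lag-`lag` autocorrelation."""
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)
    half = window // 2
    for t in range(half, len(x) - half):
        seg = x[t - half : t + half + 1]
        seg = seg - seg.mean()
        out[t] = (seg[:-lag] @ seg[lag:]) / (seg @ seg)
    return out

# Usage: a locally stationary series whose dependence strengthens over time.
rng = np.random.default_rng(6)
n = 2000
phi = np.linspace(0.0, 0.8, n)          # slowly varying AR(1) coefficient
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()
rho = local_autocorrelation(x)
print(np.nanmean(rho[:400]), np.nanmean(rho[-400:]))   # low early, higher late
```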
Estrogen Regulation of Messenger RNA Stability
1990-08-17
"Amphibian albumins as members of the albumin, alpha-fetoprotein, vitamin D-binding protein multigene family." J. Mol. Evol. 29: 344-354. Hagenbuchle, O...Related to alpha-Fetoproteins .... 48 5. Comparison Between the Sequence of Xenopus and Rat Serum Albumin 50 6. Effect of Estradiol on Cytoplasmic mRNA... alpha-actin clone pSPAC9 with Eco RI and Bam HI. The latter clone was a gift from Drs. Tom Sargent and Igor Dawid (NIH). The probe used for RNase H
1984-11-15
Coupling to Surface Plasma Waves 20 2.3 Theory of the Effect of Traps on the Spectral Characteristics of Diode Lasers 23 3. MATERIALS RESEARCH 27...Page 1-1(a) Schematic Cross Section of InGaAs PSIN Structure. Gap Spacing (d) Is 3, 5, 10, or 20 μm. 1-1(b) Curve Tracer I-V Characteristics of a...20-μm PSIN Device in Dark and Under Illumination 1-2 Pulse Response of a 3-μm PSIN Device, Under Forward and Reverse Bias, to a Comb-Generator
NASA Astrophysics Data System (ADS)
Laracuente, Nicholas; Grossman, Carl
2013-03-01
We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphics processing units (GPUs). Recent developments in hardware and software have allowed for general-purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications, as they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1996-01-01
Autocorrelation-based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function, along with either an assumed form of the autocorrelation function, e.g., Gaussian, or a generic complex form and applying properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and has expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series will produce a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates themselves are random variables and exhibit a bias and variance that is a function of the number of samples used in the estimates and the operational signal-to-noise ratio. This contributes to a degradation in performance of the moment estimators. This dissertation investigates the use of autocorrelation function estimates at higher-order lags to reduce the bias and standard deviation in spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher-order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation-based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns which are characterized by a Gaussian-shaped power spectrum. The performance of the variance estimators determined from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides a robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform as compared to the closed system.
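To make the least-squares idea concrete: for a Gaussian-shaped spectrum the normalized autocorrelation magnitude obeys ln|rho(m)| = -2*pi^2*w^2*m^2, so estimates at several lags form an overdetermined linear system in w^2. The sketch below is a minimal Python illustration of that fit on synthetic complex (I/Q-like) data, not the dissertation's full estimator framework; `acf` is assumed to hold autocorrelation estimates starting at lag 0.

```python
import numpy as np

def gaussian_width_ls(acf, lags):
    """Least-squares spectral-width estimate from several autocorrelation lags,
    assuming a Gaussian-shaped power spectrum: ln|rho(m)| = -2*pi^2*w^2*m^2.

    `acf[0]` is the lag-0 value used for normalization; `w` is returned in
    normalized-frequency units (cycles per sample).
    """
    acf = np.asarray(acf)
    rho = np.abs(acf[1:] / acf[0])
    m = np.asarray(lags[1:], dtype=float)
    y = np.log(np.clip(rho, 1e-12, None))
    # overdetermined linear model y = a * m^2 with a = -2*pi^2*w^2
    a = (y @ m**2) / (m**2 @ m**2)
    return np.sqrt(max(-a / (2.0 * np.pi**2), 0.0))

# Usage on a random-phase signal with a Gaussian spectrum of width w = 0.05.
rng = np.random.default_rng(7)
n, w = 4096, 0.05
freqs = np.fft.fftfreq(n)
spec = np.exp(-(freqs**2) / (2 * w**2))
signal = np.fft.ifft(np.sqrt(spec) * np.exp(2j * np.pi * rng.random(n)))
lags = np.arange(0, 6)
acf = np.array([np.vdot(signal[: n - k], signal[k:]) / (n - k) for k in lags])
print(round(gaussian_width_ls(acf, lags), 3))   # close to 0.05
```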
Old document image segmentation using the autocorrelation function and multiresolution analysis
NASA Astrophysics Data System (ADS)
Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy
2013-01-01
Recent progress in the digitization of heterogeneous collections of ancient documents has rekindled new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The method proposed in this article has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free since it automatically adapts to the image content. In this paper, firstly, we detail our proposal to characterize the content of old documents by extracting the autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images, which encompass six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We find a mean homogeneity accuracy of 85%. Those results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
NASA Astrophysics Data System (ADS)
Chen, Junbao; Xia, Wei; Wang, Ming
2017-06-01
Photodiodes that exhibit a two-photon absorption effect within the spectral communication band region can be useful for building an ultra-compact autocorrelator for the characteristic inspection of optical pulses. In this work, we develop an autocorrelator for measuring the temporal profile of pulses at 1550 nm from an erbium-doped fiber laser based on the two-photon photovoltaic (TPP) effect in a GaAs PIN photodiode. The temporal envelope of the autocorrelation function contains two symmetrical temporal side lobes due to the third order dispersion of the laser pulses. Moreover, the joint time-frequency distribution of the dispersive pulses and the dissimilar two-photon response spectrum of GaAs and Si result in different delays for the appearance of the temporal side lobes. Compared with Si, GaAs displays a greater sensitivity for pulse shape reconstruction at 1550 nm, benefiting from the higher signal-to-noise ratio of the side lobes and the more centralized waveform of the autocorrelation trace. We also measure the pulse width using the GaAs PIN photodiode, and the resolution of the measured full width at half maximum of the TPP autocorrelation trace is 0.89 fs, which is consistent with a conventional second-harmonic generation crystal autocorrelator. The GaAs PIN photodiode is shown to be highly suitable for real-time second-order autocorrelation measurements of femtosecond optical pulses. It is used both for the generation and detection of the autocorrelation signal, allowing the construction of a compact and inexpensive intensity autocorrelator.
Fault detection in reciprocating compressor valves under varying load conditions
NASA Astrophysics Data System (ADS)
Pichler, Kurt; Lughofer, Edwin; Pichler, Markus; Buchegger, Thomas; Klement, Erich Peter; Huschenbett, Matthias
2016-03-01
This paper presents a novel approach for detecting cracked or broken reciprocating compressor valves under varying load conditions. The main idea is that the time frequency representation of vibration measurement data will show typical patterns depending on the fault state. The problem is to detect these patterns reliably. For the detection task, we make a detour via the two dimensional autocorrelation. The autocorrelation emphasizes the patterns and reduces noise effects. This makes it easier to define appropriate features. After feature extraction, classification is done using logistic regression and support vector machines. The method's performance is validated by analyzing real world measurement data. The results will show a very high detection accuracy while keeping the false alarm rates at a very low level for different compressor loads, thus achieving a load-independent method. The proposed approach is, to our best knowledge, the first automated method for reciprocating compressor valve fault detection that can handle varying load conditions.
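The intermediate step described above, computing the two-dimensional autocorrelation of the time-frequency map to emphasize repetitive patterns before feature extraction, can be written compactly with FFTs. The sketch below is a generic Wiener-Khinchin style implementation in Python, not the authors' code; the toy spectrogram stands in for the vibration measurement data.

```python
import numpy as np

def autocorr2d(S):
    """Zero-padded 2D autocorrelation of a time-frequency map (Wiener-Khinchin);
    repetitive spectrogram patterns show up as strong off-center peaks."""
    S = np.asarray(S, dtype=float)
    S = S - S.mean()
    F = np.fft.fft2(S, s=(2 * S.shape[0], 2 * S.shape[1]))
    return np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)

# Usage on a toy "spectrogram" with a periodic pattern plus noise.
rng = np.random.default_rng(8)
t = np.arange(256)
spec = np.outer(np.ones(64), np.sin(2 * np.pi * t / 32))
spec += rng.normal(scale=0.5, size=spec.shape)
ac = autocorr2d(spec)
print(ac.shape)   # features for the classifier would be extracted from this map
```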
Cheng, Yezeng; Larin, Kirill V
2006-12-20
Fingerprint recognition is one of the most widely used methods of biometrics. This method relies on the surface topography of a finger and, thus, is potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.
Wang, Shaobin; Luo, Kunli
2018-01-01
The relation between life expectancy and energy utilization is of particular concern. Different viewpoints exist regarding the health impacts of heating policy in China. However, it remains unclear what kind of heating energy or what pattern of heating methods is most related to the differences in life expectancy across China. The aim of this paper is to comprehensively investigate the spatial relations between life expectancy at birth (LEB) and the utilization of different heating energies in China by using spatial autocorrelation models, including global spatial autocorrelation, local spatial autocorrelation and hot spot analysis. The results showed that: (1) Most heating energy types exhibit a distinct north-south difference, such as central heating supply, stalks and domestic coal, whereas the spatial distributions of domestic natural gas and electricity exhibit west-east differences. (2) Consumption of central heating, stalks and domestic coal shows obvious spatial dependence, whereas firewood, natural gas and electricity do not show significant spatial autocorrelation; heat supply, stalks and domestic coal, which were identified as showing significant positive spatial autocorrelation, exhibit a distinct south-north difference. (3) Central heating, residential boilers and natural gas did not show any significant correlations with LEB, while the utilization of domestic coal and biomass showed significant negative correlations with LEB, and household electricity showed positive correlations. The utilization of domestic coal in China showed a negative effect on LEB, rather than central heating. Improving solid fuel stoves and controlling the consumption of domestic coal and other low-quality solid fuels is imperative to improving the public health level in China in the future. Copyright © 2017 Elsevier B.V. All rights reserved.
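The global spatial autocorrelation statistic referred to above is typically Moran's I; the sketch below is a minimal Python version for regional values with a user-supplied spatial weights matrix. The tiny chain-neighbour example and the life-expectancy numbers are hypothetical, and significance testing (permutation or analytical) is left out.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for regional data with spatial weights matrix W
    (W[i, j] > 0 when regions i and j are neighbours, zero diagonal)."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()
    W = np.asarray(W, dtype=float)
    return len(x) * np.sum(W * np.outer(z, z)) / (W.sum() * (z @ z))

# Usage with a toy 4-region neighbour chain and placeholder values.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
leb = np.array([74.0, 75.0, 78.0, 79.0])   # hypothetical life expectancies
print(round(morans_i(leb, W), 3))          # positive: similar values cluster
```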
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, R. Jason
2016-08-15
Cardiac safety assays incorporating label-free detection of human stem-cell derived cardiomyocyte contractility provide human relevance and medium-throughput screening to assess compound-induced cardiotoxicity. In an effort to provide quantitative analysis of the large kinetic datasets resulting from these real-time studies, we applied bioinformatic approaches based on nonlinear dynamical system analysis, including limit cycle analysis and the autocorrelation function, to systematically assess beat irregularity. The algorithms were integrated into a software program to seamlessly generate results for 96-well impedance-based data. Our approach was validated by analyzing dose- and time-dependent changes in beat patterns induced by known proarrhythmic compounds and by screening a cardiotoxicity library to rank order compounds based on their proarrhythmic potential. We demonstrate a strong correlation between dose-dependent beat irregularity monitored by electrical impedance and quantified by autocorrelation analysis and traditional manual patch clamp potency values for hERG blockers. In addition, our platform identifies non-hERG blockers known to cause clinical arrhythmia. Our method provides a novel suite of medium-throughput quantitative tools for assessing compound effects on cardiac contractility and predicting compounds with potential proarrhythmia, and may be applied to in vitro paradigms for pre-clinical cardiac safety evaluation. Highlights: • Impedance-based monitoring of human iPSC-derived cardiomyocyte contractility • Limit cycle analysis of impedance data identifies aberrant oscillation patterns • Nonlinear autocorrelation function quantifies beat irregularity • Identification of hERG and non-hERG inhibitors with known risk of arrhythmia • Automated software processes limit cycle and autocorrelation analyses of 96-well data
NASA Astrophysics Data System (ADS)
Peters, S. T.; Schroeder, D. M.; Romero-Wolf, A.; Haynes, M.
2017-12-01
The Radar for Icy Moon Exploration (RIME) and the Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON) have been identified as potential candidates for the implementation of passive sounding as additional observing modes for the ESA and NASA missions to Ganymede and Europa. Recent work has shown the theoretical potential for Jupiter's decametric radiation to be used as a source for passive radio sounding of its icy moons. We are further developing and adapting this geophysical approach for use in terrestrial glaciology. Here, we present results from preliminary field testing of a prototype passive radio sounder from cliffs along the California coast. This includes both using a Lloyd's mirror to measure the Sun's direct path and its reflection off the ocean's surface and exploiting autocorrelation to detect the delay time of the echo. This is the first in-situ demonstration of the autocorrelation-based passive-sounding approach using an astronomical white noise signal. We also discuss preliminary field tests on rougher terrestrial and subglacial surfaces, including at Store Glacier in Greenland. Additionally, we present modeling and experimental results that demonstrate the feasibility of applying presumming approaches to the autocorrelations to achieve coherent gain from an inherently random signal. We note that while recording with wider bandwidths and greater delays places fundamental limits on the Lloyd's mirror approach, our new autocorrelation method has no such limitation. Furthermore, we show how achieving wide bandwidths via spectral-stitching methods allows us to obtain a finer range resolution than given by the receiver's instantaneous bandwidth. Finally, we discuss the potential for this technique to eliminate the need for active transmitters in certain types of ice sounding experiments, thereby reducing the complexity, power consumption, and cost of systems and observations.
NASA Astrophysics Data System (ADS)
Mei, Zhixiong; Wu, Hao; Li, Shiyun
2018-06-01
The Conversion of Land Use and its Effects at Small regional extent (CLUE-S), which is a widely used model for land-use simulation, utilizes logistic regression to estimate the relationships between land use and its drivers, and thus predict land-use change probabilities. However, logistic regression disregards possible spatial autocorrelation and self-organization in land-use data. Autologistic regression can depict spatial autocorrelation but cannot address self-organization, while logistic regression that considers only self-organization (NE-logistic regression) fails to capture spatial autocorrelation. Therefore, this study developed a regression (NE-autologistic regression) method, which incorporated both spatial autocorrelation and self-organization, to improve CLUE-S. The Zengcheng District of Guangzhou, China was selected as the study area. The land-use data of 2001, 2005, and 2009, as well as 10 typical driving factors, were used to validate the proposed regression method and the improved CLUE-S model. Then, three future land-use scenarios in 2020: the natural growth scenario, ecological protection scenario, and economic development scenario, were simulated using the improved model. Validation results showed that NE-autologistic regression performed better than logistic regression, autologistic regression, and NE-logistic regression in predicting land-use change probabilities. The spatial allocation accuracy and kappa values of NE-autologistic-CLUE-S were higher than those of logistic-CLUE-S, autologistic-CLUE-S, and NE-logistic-CLUE-S for the simulations of two periods, 2001-2009 and 2005-2009, which proved that the improved CLUE-S model achieved the best simulation and was thereby effective to a certain extent. The scenario simulation results indicated that under all three scenarios, traffic land and residential/industrial land would increase, whereas arable land and unused land would decrease during 2009-2020. Apparent differences also existed in the simulated change sizes and locations of each land-use type under different scenarios. The results not only demonstrate the validity of the improved model but also provide a valuable reference for relevant policy-makers.
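One common way to build the autocovariate that turns ordinary logistic regression into an (approximate) autologistic regression is a row-standardized neighbourhood average of the binary land-use map; the sketch below is a minimal Python illustration of that construction only, with a hypothetical weights matrix, not the NE-autologistic procedure or the CLUE-S coupling developed in the paper.

```python
import numpy as np

def autocovariate(y, W):
    """Row-standardized neighbourhood average of a binary land-use map.

    Appending this as an extra predictor to an ordinary logistic regression
    gives a simple autologistic-style model that captures spatial autocorrelation.
    """
    W = np.asarray(W, dtype=float)
    y = np.asarray(y, dtype=float).ravel()
    rs = W.sum(axis=1)
    rs[rs == 0] = 1.0        # avoid division by zero for isolated cells
    return (W @ y) / rs

# Usage with a tiny 4-cell neighbourhood structure and hypothetical labels.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
y = np.array([1, 1, 0, 0])       # 1 = built-up, 0 = not (placeholder)
print(autocovariate(y, W))       # would be added as a column of the design matrix
```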
[Community trajectories of mentally ill and intellectually disabled young people].
Fleury, Marie-Josée; Grenier, Guy
2013-01-01
In the context of reforms in the field of disability, this study documents the trajectories and mechanisms of support for young people with mental illness, intellectual disability or pervasive developmental disorders during the teen-to-adult transition period, and the factors fostering or impeding this transition and their maintenance in an everyday environment, particularly in SESSAD (special education and home care service) and SAMSAH/SPAC (medico-social support for adults with disabilities/support services in social life) programmes. This study was conducted in the French department of Seine-et-Marne. It drew on a mixed data collection in which 77 respondents (professionals, families and users) and 26 organizations were consulted. The study shows that few young adults in SAMSAH/SPAC programmes come from SESSAD, and that they encounter major difficulties living in an everyday environment, particularly during the transition period. Clinical or socio-economic factors related to the profiles of users or to healthcare service organization facilitate or hinder the inclusion of young people in an everyday environment. Support for users was also often limited to follow-up over a suboptimal period, and was hampered by insufficient networking within the regional healthcare system. On the other hand, empowerment of users and their optimal inclusion in an everyday environment, as founding principles of the reform, constitute major action priorities for healthcare structures. Strengthening services for young people (16-25 years), including integration strategies, is recommended in order to establish an integrated network of services in the field of disability.
Employment implications of informal cancer caregiving.
de Moor, Janet S; Dowling, Emily C; Ekwueme, Donatus U; Guy, Gery P; Rodriguez, Juan; Virgo, Katherine S; Han, Xuesong; Kent, Erin E; Li, Chunyu; Litzelman, Kristen; McNeel, Timothy S; Liu, Benmei; Yabroff, K Robin
2017-02-01
Previous research describing how informal cancer caregiving impacts employment has been conducted in small samples or a single disease site. This paper provides population-based estimates of the effect of informal cancer caregiving on employment and characterizes employment changes made by caregivers. The samples included cancer survivors with a friend or family caregiver, participating in either the Medical Expenditure Panel Survey Experiences with Cancer Survivorship Survey (ECSS) (n = 458) or the LIVESTRONG 2012 Survey for People Affected by Cancer (SPAC) (n = 4706). Descriptive statistics characterized the sample of survivors and their caregivers' employment changes. Multivariable logistic regression identified predictors of caregivers' extended employment changes, comprising time off and changes to hours, duties, or employment status. Among survivors with an informal caregiver, 25 % from the ECSS and 29 % from the SPAC reported that their caregivers made extended employment changes. Approximately 8 % of survivors had caregivers who took time off from work lasting ≥2 months. Compared with caregivers who did not make extended employment changes, caregivers who did were more likely to care for survivors who were treated with chemotherapy or transplant, were closer to diagnosis or the end of treatment, experienced functional limitations, or themselves made work changes due to cancer. Many informal cancer caregivers make employment changes to provide care during survivors' treatment and recovery. This study describes cancer caregiving in a prevalent sample of cancer survivors, thereby reflecting the experiences of individuals with many different cancer types and at different points in the cancer treatment trajectory.
Understanding the Ecohydrology of Mangroves: A Simple SPAC Model for Avicennia marina
NASA Astrophysics Data System (ADS)
Perri, Saverio; Viola, Francesco; Valerio Noto, Leonardo; Molini, Annalisa
2015-04-01
Mangroves represent one of the most carbon-rich ecosystems in the Tropics, noticeably impacting ecosystem services and the economy of these regions. While the ability of mangroves to exclude and tolerate salt has been extensively investigated in the literature, both from the structural and the functional point of view, their eco-hydrological characteristics remain largely understudied, despite the crucial link with productivity, efficient carbon storage and fluxes. In this contribution we develop a "first-order" Soil-Plant-Atmosphere Continuum (SPAC) model for Avicennia marina, a mangrove able to adapt to hyper-arid intertidal zones and characterized by complex morphological and eco-physiological traits. Among mangroves, Avicennia marina is one of the most tolerant to salinity and arid climatic conditions. Our model, based on a simple macroscopic approach, takes into account the specific characteristics of the mangrove ecosystem and, in particular, the salinity of the soil water and the levels of salt stress to which the plant may be subjected. Mangrove transpiration is then obtained by solving the plant and leaf water balance and the leaf energy balance, taking explicitly into account the role of osmotic water potential and salinity in governing plant resistance to water fluxes. The SPAC model of Avicennia is then tested against experimental data from the literature, showing the reliability and effectiveness of this minimalist model in reproducing observed transpiration fluxes. Finally, a sensitivity analysis is used to assess whether uncertainty in the adopted parameters could lead to significant errors in the transpiration assessment.
Impact of Autocorrelation on Functional Connectivity
Arbabshirani, Mohammad R.; Damaraju, Eswar; Phlypo, Ronald; Plis, Sergey; Allen, Elena; Ma, Sai; Mathalon, Daniel; Preda, Adrian; Vaidya, Jatin G.; Adali, Tülay; Calhoun, Vince D.
2014-01-01
Although the impact of serial correlation (autocorrelation) in residuals of general linear models for fMRI time-series has been studied extensively, the effect of autocorrelation on functional connectivity studies has been largely neglected until recently. Some recent studies based on results from economics have questioned the conventional estimation of functional connectivity and argue that not correcting for autocorrelation in fMRI time-series results in “spurious” correlation coefficients. In this paper, first we assess the effect of autocorrelation on Pearson correlation coefficient through theoretical approximation and simulation. Then we present this effect on real fMRI data. To our knowledge this is the first work comprehensively investigating the effect of autocorrelation on functional connectivity estimates. Our results show that although FC values are altered, even following correction for autocorrelation, results of hypothesis testing on FC values remain very similar to those before correction. In real data we show this is true for main effects and also for group difference testing between healthy controls and schizophrenia patients. We further discuss model order selection in the context of autoregressive processes, effects of frequency filtering and propose a preprocessing pipeline for connectivity studies. PMID:25072392
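The central claim can be reproduced with a small simulation; the following sketch (illustrative parameters, not the authors' code) draws pairs of independent AR(1) series and shows how their serial correlation widens the sampling distribution of the Pearson correlation coefficient:

```python
import numpy as np

# Simulation sketch: two *independent* AR(1) series have a true correlation of
# zero, yet serial correlation widens the sampling distribution of the Pearson
# correlation, so large "spurious" values occur more often than the white-noise
# null suggests. Series length and repetitions are illustrative.
rng = np.random.default_rng(2)

def ar1(phi, T, n_series):
    x = np.zeros((n_series, T))
    for t in range(1, T):
        x[:, t] = phi * x[:, t - 1] + rng.standard_normal(n_series)
    return x

T, reps = 150, 2000
for phi in (0.0, 0.4, 0.8):
    a = ar1(phi, T, reps)
    b = ar1(phi, T, reps)
    a -= a.mean(axis=1, keepdims=True)
    b -= b.mean(axis=1, keepdims=True)
    r = (a * b).sum(axis=1) / np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))
    print(f"phi = {phi:.1f}: std of Pearson r = {r.std():.3f} "
          f"(white-noise expectation ~ {1 / np.sqrt(T - 1):.3f})")
```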
ASSESSMENT OF SPATIAL AUTOCORRELATION IN EMPIRICAL MODELS IN ECOLOGY
Statistically assessing ecological models is inherently difficult because data are autocorrelated and this autocorrelation varies in an unknown fashion. At a simple level, the linking of a single species to a habitat type is a straightforward analysis. With some investigation int...
2014-06-17
[Figure panels: Wigner distribution, auto-correlation function, L-Wigner distribution, auto-correlation function.] ...bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with...
CADDIS Volume 4. Data Analysis: Basic Principles & Issues
Use of inferential statistics in causal analysis, introduction to data independence and autocorrelation, methods to identify and control for confounding variables, and references for the Basic Principles section of Data Analysis.
Model Identification of Integrated ARMA Processes
ERIC Educational Resources Information Center
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim
2008-01-01
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
Geostatistics for spatial genetic structures: study of wild populations of perennial ryegrass.
Monestiez, P; Goulard, M; Charmet, G
1994-04-01
Methods based on geostatistics were applied to quantitative traits of agricultural interest measured on a collection of 547 wild populations of perennial ryegrass in France. The mathematical background of these methods, which resembles spatial autocorrelation analysis, is briefly described. When a single variable is studied, the spatial structure analysis is similar to spatial autocorrelation analysis, and a spatial prediction method, called "kriging", gives a filtered map of the spatial pattern over all the sampled area. When complex interactions of agronomic traits with different evaluation sites define a multivariate structure for the spatial analysis, geostatistical methods allow the spatial variations to be broken down into two main spatial structures with ranges of 120 km and 300 km, respectively. The predicted maps that corresponded to each range were interpreted as a result of the isolation-by-distance model and as a consequence of selection by environmental factors. Practical collecting methodology for breeders may be derived from such spatial structures.
Scaling analysis and model estimation of solar corona index
NASA Astrophysics Data System (ADS)
Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik
2018-04-01
A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), has been analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising have been performed using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify the autoregressive (AR) process, and the partial autocorrelation function (PACF) is used to determine the order of the AR model. Finally, a best-fit model has been proposed using the Yule-Walker method, with supporting results from goodness-of-fit tests and the wavelet spectrum. The results reveal an anti-persistent, Short Range Dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
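The ACF/PACF identification and Yule-Walker fitting steps mentioned above can be sketched as follows; synthetic AR(2) data stand in for the coronal index series, and the implementation is a minimal illustration rather than the authors' procedure:

```python
import numpy as np

# Sketch of the AR modelling step: sample ACF, Yule-Walker solution for a chosen
# order, with the PACF (the last Yule-Walker coefficient at each order) used to
# suggest where the order can be truncated. Data here are synthetic.
rng = np.random.default_rng(3)
true_phi = [0.6, -0.3]
x = np.zeros(2000)
for t in range(2, x.size):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.standard_normal()

def acf(x, nlags):
    x = x - x.mean()
    c0 = np.dot(x, x) / x.size
    return np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(nlags + 1)]) / c0

def yule_walker(r, p):
    """Solve the Yule-Walker equations R * phi = r for an AR(p) model."""
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:p + 1])

r = acf(x, 10)
pacf = [yule_walker(r, p)[-1] for p in range(1, 6)]
print("PACF(1..5):", np.round(pacf, 3))          # drops toward zero beyond lag 2
print("AR(2) Yule-Walker estimate:", np.round(yule_walker(r, 2), 3))
```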
Yu, Xiang-xiang; Wang, Yu-hua
2014-01-13
Silver nanoparticles were synthesized in a synthetic sapphire matrix by ion implantation using a metal vapor vacuum arc ion source. The optical absorption spectrum of the Ag:Al2O3 composite material has been measured. Analysis of the supercontinuum spectrum revealed the nonlinear refractive property of this kind of sample. The nonlinear optical refractive index was determined at 800 nm excitation using the Kerr-lens autocorrelation (KLAC) technique. The measurements showed that the material possesses a self-defocusing property (n2 = -1.1 × 10⁻¹⁵ cm²/W). The mechanism of the nonlinear refraction has been discussed.
Stojanova, Daniela; Ceci, Michelangelo; Malerba, Donato; Dzeroski, Saso
2013-09-26
Ontologies and catalogs of gene functions, such as the Gene Ontology (GO) and MIPS-FUN, assume that functional classes are organized hierarchically, that is, general functions include more specific ones. This has recently motivated the development of several machine learning algorithms for gene function prediction that leverage this hierarchical organization and in which instances may belong to multiple classes. In addition, it is possible to exploit relationships among examples, since it is plausible that related genes tend to share functional annotations. Although these relationships have been identified and extensively studied in the area of protein-protein interaction (PPI) networks, they have not received much attention in hierarchical and multi-class gene function prediction. Relations between genes introduce autocorrelation in functional annotations and violate the assumption that instances are independently and identically distributed (i.i.d.), which underlies most machine learning algorithms. Although the explicit consideration of these relations brings additional complexity to the learning process, we expect substantial benefits in the predictive accuracy of learned classifiers. This article demonstrates the benefits (in terms of predictive accuracy) of considering autocorrelation in multi-class gene function prediction. We develop a tree-based algorithm for considering network autocorrelation in the setting of Hierarchical Multi-label Classification (HMC). We empirically evaluate the proposed algorithm, called NHMC (Network Hierarchical Multi-label Classification), on 12 yeast datasets using each of the MIPS-FUN and GO annotation schemes and exploiting 2 different PPI networks. The results clearly show that taking autocorrelation into account improves the predictive performance of the learned models for predicting gene function. Our newly developed method for HMC takes into account network information in the learning phase: when used for gene function prediction in the context of PPI networks, the explicit consideration of network autocorrelation increases the predictive performance of the learned models. Overall, we found that this holds for different gene features/descriptions, functional annotation schemes, and PPI networks: best results are achieved when the PPI network is dense and contains a large proportion of function-relevant interactions.
Isakozawa, Shigeto; Fuse, Taishi; Amano, Junpei; Baba, Norio
2018-04-01
As alternatives to the diffractogram-based method in high-resolution transmission electron microscopy, a spot auto-focusing (AF) method and a spot auto-stigmation (AS) method are presented together with a unique high-definition auto-correlation function (HD-ACF). The HD-ACF clearly resolves the ACF central peak region in small amorphous-thin-film images, reflecting the phase contrast transfer function. At a 300-k magnification for a 120-kV transmission electron microscope, the smallest areas used are 64 × 64 pixels (~3 nm²) for the AF and 256 × 256 pixels for the AS. A useful advantage of these methods is that the AF function retains acceptable accuracy even for a low-S/N (~1.0) image. A reference database of the defocus dependency of the HD-ACF, built by pre-acquiring through-focus amorphous-thin-film images, must be prepared in order to use these methods. This can be very beneficial because the specimens are not limited to approximations of weak phase objects but can be extended to objects outside such approximations.
Quantifying temporal change in biodiversity: challenges and opportunities
Dornelas, Maria; Magurran, Anne E.; Buckland, Stephen T.; Chao, Anne; Chazdon, Robin L.; Colwell, Robert K.; Curtis, Tom; Gaston, Kevin J.; Gotelli, Nicholas J.; Kosnik, Matthew A.; McGill, Brian; McCune, Jenny L.; Morlon, Hélène; Mumby, Peter J.; Øvreås, Lise; Studeny, Angelika; Vellend, Mark
2013-01-01
Growing concern about biodiversity loss underscores the need to quantify and understand temporal change. Here, we review the opportunities presented by biodiversity time series, and address three related issues: (i) recognizing the characteristics of temporal data; (ii) selecting appropriate statistical procedures for analysing temporal data; and (iii) inferring and forecasting biodiversity change. With regard to the first issue, we draw attention to defining characteristics of biodiversity time series—lack of physical boundaries, uni-dimensionality, autocorrelation and directionality—that inform the choice of analytic methods. Second, we explore methods of quantifying change in biodiversity at different timescales, noting that autocorrelation can be viewed as a feature that sheds light on the underlying structure of temporal change. Finally, we address the transition from inferring to forecasting biodiversity change, highlighting potential pitfalls associated with phase-shifts and novel conditions. PMID:23097514
An autocorrelation method to detect low frequency earthquakes within tremor
Brown, J.R.; Beroza, G.C.; Shelly, D.R.
2008-01-01
Recent studies have shown that deep tremor in the Nankai Trough under western Shikoku consists of a swarm of low frequency earthquakes (LFEs) that occur as slow shear slip on the down-dip extension of the primary seismogenic zone of the plate interface. The similarity of tremor in other locations suggests a similar mechanism, but the absence of cataloged low frequency earthquakes prevents a similar analysis. In this study, we develop a method for identifying LFEs within tremor. The method employs a matched-filter algorithm, similar to the technique used to infer that tremor in parts of Shikoku is comprised of LFEs; however, in this case we do not assume the origin times or locations of any LFEs a priori. We search for LFEs using the running autocorrelation of tremor waveforms for 6 Hi-Net stations in the vicinity of the tremor source. Time lags showing strong similarity in the autocorrelation represent either repeats, or near repeats, of LFEs within the tremor. We test the method on an hour of Hi-Net recordings of tremor and demonstrate that it extracts both known and previously unidentified LFEs. Once identified, we cross correlate waveforms to measure relative arrival times and locate the LFEs. The results are able to explain most of the tremor as a swarm of LFEs, and the locations of newly identified events appear to fill a gap in the spatial distribution of known LFEs. This method should allow us to extend the analysis of Shelly et al. (2007a) to parts of the Nankai Trough in Shikoku that have sparse LFE coverage, and may also allow us to extend our analysis to other regions that experience deep tremor, but where LFEs have not yet been identified. Copyright 2008 by the American Geophysical Union.
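The detection idea, correlating a short window of the continuous record against the rest of the same record and flagging lags of high similarity, can be sketched as follows on synthetic data (illustrative amplitudes and window lengths, not the authors' matched-filter code):

```python
import numpy as np

# Sketch of the running-autocorrelation idea: a short window of a continuous
# trace is correlated against every other window of the same trace; lags with
# high normalized correlation mark (near-)repeats of a buried event. Synthetic.
rng = np.random.default_rng(4)
fs, dur = 100.0, 600.0                       # Hz, seconds
trace = 0.3 * rng.standard_normal(int(fs * dur))

# bury three repeats of the same small wavelet ("LFE") at known times
t = np.arange(0, 2.0, 1 / fs)
wavelet = np.sin(2 * np.pi * 3.0 * t) * np.hanning(t.size)
for onset in (120.0, 250.0, 400.0):
    i = int(onset * fs)
    trace[i:i + wavelet.size] += wavelet

win = int(4.0 * fs)                          # 4 s template window
template = trace[int(120.0 * fs): int(120.0 * fs) + win]
template = (template - template.mean()) / template.std()

cc = np.empty(trace.size - win)
for j in range(cc.size):
    seg = trace[j:j + win]
    cc[j] = np.dot(template, (seg - seg.mean()) / seg.std()) / win

# clusters of adjacent lags above the threshold mark each (near-)repeat
detections = np.where(cc > 0.3)[0] / fs
print("candidate repeat times (s):", np.round(detections, 1))
```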
Imaging the Lower Crust and Moho Beneath Long Beach, CA Using Autocorrelations
NASA Astrophysics Data System (ADS)
Clayton, R. W.
2017-12-01
Three-dimensional images of the lower crust and Moho in a 10x10 km region beneath Long Beach, CA are constructed from autocorrelations of ambient noise. The results show the Moho at a depth of 15 km at the coast and dipping at 45 degrees inland to a depth of 25 km. The shape of the Moho interface is irregular in both the coast-perpendicular and coast-parallel directions. The lower crust appears as a zone of enhanced reflectivity with numerous small-scale structures. The autocorrelations are constructed from virtual source gathers computed from the dense Long Beach array used in the Lin et al. (2013) study. All near zero-offset traces within a 200 m disk are stacked to produce a single autocorrelation at that point. The stack typically comprises 50-60 traces. To convert the autocorrelation to reflectivity, as in Claerbout (1968), the noise-source autocorrelation, estimated as the average of all autocorrelations, is subtracted from each trace. The subsurface image is then constructed with a 0.1-2 Hz filter and AGC scaling. The main features of the image are confirmed with broadband receiver functions from the LASSIE survey (Ma et al, 2016). The use of stacked autocorrelations extends ambient noise imaging into the lower crust.
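A minimal sketch of the processing chain described, per-station autocorrelation, subtraction of the array-average autocorrelation as a source estimate (after Claerbout, 1968), and band-pass filtering, is given below on synthetic traces; the AGC step and the real array geometry are omitted, and all parameters are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of the autocorrelation-to-reflectivity processing: per-station
# autocorrelations are computed, the array-average autocorrelation is removed
# as an estimate of the noise-source term, and the result is band-passed.
rng = np.random.default_rng(5)
fs, nsec, nsta = 20.0, 3600, 60
n = int(fs * nsec)

def autocorr(x, maxlag):
    spec = np.fft.rfft(x, 2 * x.size)
    ac = np.fft.irfft(spec * np.conj(spec))[:maxlag]
    return ac / ac[0]

maxlag = int(15.0 * fs)                        # keep 15 s of lag (two-way time)
delays = rng.uniform(7.0, 10.0, nsta)          # laterally varying "Moho" two-way time
acs = []
for d in delays:
    src = rng.standard_normal(n)
    trace = src + 0.05 * np.roll(src, int(d * fs)) + 0.2 * rng.standard_normal(n)
    acs.append(autocorr(trace, maxlag))
acs = np.array(acs)

reflectivity = acs - acs.mean(axis=0)          # remove source-autocorrelation estimate
b, a = butter(2, [0.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")
reflectivity = filtfilt(b, a, reflectivity, axis=1)

picked = np.argmax(np.abs(reflectivity[:, int(2 * fs):]), axis=1) / fs + 2.0
print("true vs picked two-way time, first 3 stations:")
print(np.round(delays[:3], 2), np.round(picked[:3], 2))
```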
Spatial Analysis of China Province-level Perinatal Mortality
XIANG, Kun; SONG, Deyong
2016-01-01
Background: Spatial analysis tools are used to determine the spatial patterns of China's province-level perinatal mortality, and spatial econometric models are used to examine the impacts of health care resources and different socio-economic factors on perinatal mortality. Methods: The global Moran's I index is used to examine whether spatial autocorrelation exists in the selected regions, and the Moran's I scatter plot is used to examine spatial clustering among regions. Spatial econometric models are used to investigate the spatial relationships between perinatal mortality and contributing factors. Results: The overall Moran's I index indicates that perinatal mortality displays positive spatial autocorrelation. The Moran's I scatter plot analysis implies that there is significant clustering of mortality in both high-rate regions and low-rate regions. The spatial econometric model analyses confirm the existence of a direct link between perinatal mortality and health care resources and socio-economic factors. Conclusions: Since positive spatial autocorrelation has been detected in China's province-level perinatal mortality, upgrading regional economic development and medical service levels will affect mortality not only in a region itself but also in its adjacent regions. PMID:27398334
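A minimal sketch of the global Moran's I computation with a row-standardised weight matrix and a permutation test is given below; the toy grid and synthetic variable stand in for the provinces and mortality rates, and no real data are reproduced:

```python
import numpy as np

# Sketch: global Moran's I with a row-standardised spatial weight matrix and a
# permutation test of the null of no spatial autocorrelation. Synthetic data.
rng = np.random.default_rng(6)

def morans_i(x, W):
    x = x - x.mean()
    W = W / W.sum(axis=1, keepdims=True)           # row-standardise
    return (x.size / W.sum()) * (x @ W @ x) / (x @ x)

# toy "provinces": points on a grid, weights = inverse squared distance
side = 8
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = np.where(d > 0, 1.0 / d**2, 0.0)

# spatially smooth variable, so positive autocorrelation is expected
x = np.sin(coords[:, 0] / 2.0) + np.cos(coords[:, 1] / 2.0) \
    + 0.3 * rng.standard_normal(side**2)

I_obs = morans_i(x, W)
I_perm = np.array([morans_i(rng.permutation(x), W) for _ in range(999)])
p = (np.sum(I_perm >= I_obs) + 1) / 1000
print(f"Moran's I = {I_obs:.3f}, permutation p-value = {p:.3f}")
```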
Li, Jie; Fang, Xiangming
2010-01-01
Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a dataset of more than 6000 addresses in Carroll County, Iowa are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement-error modeling of geographic health data are discussed. PMID:20087879
A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation
NASA Astrophysics Data System (ADS)
Suryowati, K.; Bekti, R. D.; Faradila, A.
2018-04-01
Spatial autocorrelation is one of the spatial analysis methods used to identify patterns of relationship or correlation between locations. This method is very important for obtaining information on the dispersal pattern characteristics of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are represented by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. According to the spatial data, weighting matrices can be divided into two types: point type (distance) and neighbourhood area (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses queen contiguity weights based on first-order neighbours, queen contiguity weights based on second-order neighbours, and inverse-distance weights. Queen contiguity first-order and inverse-distance weights show significant spatial autocorrelation in DHF, but queen contiguity second-order weights do not. Queen contiguity first and second order yield neighbour lists with 68 and 86 links, respectively.
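The three weighting schemes compared above can be sketched as follows for a toy set of areal units on a regular grid (real sub-district polygons would define contiguity by shared borders); the counts printed are for the toy grid only:

```python
import numpy as np

# Sketch of the three weighting schemes: first-order queen contiguity (here
# approximated on a 4x4 grid of units), second-order contiguity (neighbours of
# neighbours, excluding self and first-order neighbours), and inverse distance.
side = 4
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
n = coords.shape[0]

# first-order queen contiguity: units whose row/col indices differ by at most 1
diff = np.abs(coords[:, None] - coords[None, :])
W1 = (diff.max(axis=2) == 1).astype(float)

# second-order contiguity: reachable in two queen steps, minus self and order 1
reach2 = (W1 @ W1 > 0)
W2 = (reach2 & ~W1.astype(bool) & ~np.eye(n, dtype=bool)).astype(float)

# inverse-distance weights
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
Wd = np.where(d > 0, 1.0 / d, 0.0)

print("first-order neighbour links :", int(W1.sum()) // 2)
print("second-order neighbour links:", int(W2.sum()) // 2)
```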
MATLAB-Based Program for Teaching Autocorrelation Function and Noise Concepts
ERIC Educational Resources Information Center
Jovanovic Dolecek, G.
2012-01-01
An attractive MATLAB-based tool for teaching the basics of autocorrelation function and noise concepts is presented in this paper. This tool enhances traditional in-classroom lecturing. The demonstrations of the tool described here highlight the description of the autocorrelation function (ACF) in a general case for wide-sense stationary (WSS)…
Spatial autocorrelation in growth of undisturbed natural pine stands across Georgia
Raymond L. Czaplewski; Robin M. Reich; William A. Bechtold
1994-01-01
Moran's I statistic measures the spatial autocorrelation in a random variable measured at discrete locations in space. Permutation procedures test the null hypothesis that the observed Moran's I value is no greater than that expected by chance. The spatial autocorrelation of gross basal area increment is analyzed for undisturbed, naturally regenerated stands...
Longitudinal and bulk viscosities of Lennard-Jones fluids
NASA Astrophysics Data System (ADS)
Tankeshwar, K.; Pathak, K. N.; Ranganathan, S.
1996-12-01
Expressions for the longitudinal and bulk viscosities have been derived using Green-Kubo formulae involving the time integral of the longitudinal and bulk stress autocorrelation functions. The time evolution of the stress autocorrelation functions is determined using the Mori formalism and a memory function obtained from the Mori equation of motion. The memory function is of hyperbolic-secant form and involves two parameters which are related to the microscopic sum rules of the respective autocorrelation function. We have derived expressions for the zeroth-, second- and fourth-order sum rules of the longitudinal and bulk stress autocorrelation functions. These involve static correlation functions of up to four particles. The final expressions have been put in a form suitable for numerical calculation using low-order decoupling approximations. Numerical results have been obtained for the sum rules of the longitudinal and bulk stress autocorrelation functions. These have been used to calculate the longitudinal and bulk viscosities and the time evolution of the longitudinal stress autocorrelation function of Lennard-Jones fluids over wide ranges of densities and temperatures. We have compared our results with the available computer simulation data and found reasonable agreement.
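The final Green-Kubo step, viscosity as the time integral of a stress autocorrelation function scaled by V/(k_B T), can be sketched as below; the exponentially decaying ACF and all numerical values are illustrative stand-ins, not the memory-function results of the paper:

```python
import numpy as np

# Green-Kubo sketch: eta = V/(k_B*T) * integral_0^inf <P(0)P(t)> dt, evaluated
# here by numerical quadrature of a stress autocorrelation function. The ACF is
# a synthetic exponential decay; in a simulation it would be estimated from the
# stress time series. All values are illustrative only.
kB = 1.380649e-23          # Boltzmann constant [J/K]
T = 120.0                  # temperature [K], argon-like Lennard-Jones state point
V = (3.0e-9) ** 3          # simulation box volume [m^3]

dt = 1.0e-14               # spacing of the ACF samples [s]
t = np.arange(0.0, 5.0e-12, dt)
C0 = 5.0e13                # zero-time value of the stress ACF [Pa^2]
tau = 3.0e-13              # decay time of the ACF [s]
acf = C0 * np.exp(-t / tau)

# trapezoidal quadrature of the ACF
integral = (acf.sum() - 0.5 * (acf[0] + acf[-1])) * dt
eta = V / (kB * T) * integral
print(f"viscosity from the Green-Kubo integral: {eta * 1e6:.0f} micro-Pa s")
```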
Kamioka, Hiroharu; Kawamura, Yoichi; Tsutani, Kiichiro; Maeda, Masaharu; Hayasaka, Shinya; Okuizum, Hiroyasu; Okada, Shinpei; Honda, Takuya; Iijima, Yuichi
2013-08-01
The purpose of this study was to develop a checklist of items that describes and measures the quality of reports of interventional trials assessing spa therapy. The Delphi consensus method was used to select the number of items in the checklist. A total of eight individuals participated, including an epidemiologist, a clinical research methodologist, clinical researchers, a medical journalist, and a health fitness programmer. Participants ranked on a 9-point Likert scale whether an item should be included in the checklist. Three rounds of the Delphi method were conducted to achieve consensus. The final checklist contained 19 items, with items related to title, place of implementation (specificity of spa), care provider influence, and additional measures to minimize the potential bias from withdrawals, loss to follow-up, and low treatment adherence. This checklist is simple and quick to complete, and should help clinicians and researchers critically appraise the medical and healthcare literature, reviewers assess the quality of reports included in systematic reviews, and researchers plan interventional trials of spa therapy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Map LineUps: Effects of spatial structure on graphical inference.
Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo
2017-01-01
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
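A minimal sketch of the autocorrelation method for time-averaging uncertainty is given below: the integrated autocorrelation time is estimated with a self-consistent window and converted into an effective sample size for the standard error of the mean; the AR(1) series and all parameters are synthetic stand-ins for TFM output, not the authors' implementation:

```python
import numpy as np

# Sketch of the autocorrelation method for the uncertainty of a time average:
# the variance of the mean is inflated by the integrated autocorrelation time,
# var(mean) = var(x) * 2*tau_int / N. Synthetic AR(1) data stand in for a TFM
# time series (e.g. a cross-section-averaged solids fraction).
rng = np.random.default_rng(7)
phi, N = 0.95, 5000
x = np.empty(N)
x[0] = 0.0
for t in range(1, N):
    x[t] = phi * x[t - 1] + rng.standard_normal()
x += 0.45                                      # arbitrary mean level

def integrated_autocorr_time(x, c=6.0):
    """Integrated autocorrelation time with a simple self-consistent window."""
    xc = x - x.mean()
    acf = np.correlate(xc, xc, "full")[x.size - 1:] / np.dot(xc, xc)
    tau = 0.5
    for M in range(1, x.size // 2):
        tau = 0.5 + acf[1:M + 1].sum()
        if M >= c * tau:                       # window grows until it covers ~6 tau
            break
    return tau

tau = integrated_autocorr_time(x)
n_eff = N / (2.0 * tau)
sem = x.std(ddof=1) / np.sqrt(n_eff)
print(f"mean = {x.mean():.3f} +/- {sem:.3f}   "
      f"(tau_int = {tau:.1f}, naive SEM = {x.std(ddof=1) / np.sqrt(N):.4f})")
```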
Design of two-dimensional zero reference codes with cross-entropy method.
Chen, Jung-Chieh; Wen, Chao-Kai
2010-06-20
We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs) in order to generate a zero reference signal for a grating measurement system and achieve absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combinatorial optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desired zero reference signal. Computer simulation results indicate that there are 15.38% and 14.29% reductions in the second-maximum value for the 16×16 grating system with n1 = 64 and the 100×100 grating system with n1 = 300, respectively, where n1 is the number of transparent pixels, compared with those of the conventional genetic algorithm.
The Effects of Autocorrelation on the Curve-of-Factors Growth Model
ERIC Educational Resources Information Center
Murphy, Daniel L.; Beretvas, S. Natasha; Pituch, Keenan A.
2011-01-01
This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of-factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a…
NASA Astrophysics Data System (ADS)
Klaus, Julian; Pan Chun, Kwok; Stumpp, Christine
2015-04-01
Spatio-temporal dynamics of stable oxygen (18O) and hydrogen (2H) isotopes in precipitation can be used as proxies for changing hydro-meteorological conditions and regional and global climate patterns. While spatial patterns and distributions have gained much attention in recent years, the temporal trends in stable isotope time series are rarely investigated and our understanding of them is still limited. This may be a result of a lack of proper trend detection tools and of limited effort devoted to exploring trend processes. Here we make use of an extensive data set of stable isotopes in German precipitation. In this study we investigate temporal trends of δ18O in precipitation at 17 observation stations in Germany between 1978 and 2009. For that we test different approaches for proper trend detection, accounting for first- and higher-order serial correlation. We test whether significant trends in the isotope time series can be observed based on different models. We apply the Mann-Kendall trend test to the isotope series, using general multiplicative seasonal autoregressive integrated moving average (ARIMA) models that account for first- and higher-order serial correlations. With this approach we can also account for the effects of temperature and precipitation amount on the trend. Further, we investigate the role of geographic parameters on isotope trends. To benchmark our proposed approach, the ARIMA results are compared to a trend-free pre-whitening (TFPW) procedure, the state-of-the-art method for removing first-order autocorrelation in environmental trend studies. Moreover, we explore whether higher-order serial correlations in the isotope series affect our trend results. The results show that three out of the 17 stations have significant changes when higher-order autocorrelation is adjusted for, and four stations show a significant trend when temperature and precipitation effects are considered. Significant trends in the isotope time series are generally observed at low-elevation stations (≤315 m a.s.l.). Higher-order autoregressive processes are important in the isotope time series analysis. Our results show that the widely used trend analysis with only a first-order autocorrelation adjustment may not adequately account for higher-order autocorrelated processes in the stable isotope series. The investigated time series analysis method, including higher-order autocorrelation and external climate variable adjustments, is shown to be a better alternative.
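The benchmark TFPW procedure mentioned above can be sketched as follows (Sen's slope removal, lag-1 pre-whitening, Mann-Kendall test); the synthetic series and parameters are illustrative, and the seasonal ARIMA adjustment of the study is not reproduced:

```python
import numpy as np
from scipy.stats import norm

# Sketch of trend-free pre-whitening (TFPW) + Mann-Kendall: remove a
# preliminary Sen's-slope trend, estimate and remove lag-1 autocorrelation from
# the residuals, add the trend back, and apply the Mann-Kendall test. The
# monthly "delta 18O"-like series below is synthetic.
rng = np.random.default_rng(8)
n = 32 * 12
t = np.arange(n)
noise = np.zeros(n)
e = 0.4 * rng.standard_normal(n)
for i in range(1, n):
    noise[i] = 0.3 * noise[i - 1] + e[i]            # lag-1 serial correlation
x = -10.0 + 0.002 * t + noise                       # weak linear trend

def sens_slope(x):
    i, j = np.triu_indices(x.size, 1)
    return np.median((x[j] - x[i]) / (j - i))

def mann_kendall_p(x):
    s = sum(np.sign(x[k + 1:] - x[k]).sum() for k in range(x.size - 1))
    var_s = x.size * (x.size - 1) * (2 * x.size + 5) / 18.0   # ignoring ties
    z = (s - np.sign(s)) / np.sqrt(var_s)
    return 2.0 * (1.0 - norm.cdf(abs(z)))

beta = sens_slope(x)
detrended = x - beta * t
r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
prewhitened = detrended[1:] - r1 * detrended[:-1] + beta * t[1:]
print(f"Sen's slope = {beta:.4f} per month, lag-1 r = {r1:.2f}, "
      f"Mann-Kendall p (TFPW) = {mann_kendall_p(prewhitened):.4g}")
```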
Fetal source extraction from magnetocardiographic recordings by dependent component analysis
NASA Astrophysics Data System (ADS)
de Araujo, Draulio B.; Kardec Barros, Allan; Estombelo-Montesco, Carlos; Zhao, Hui; Roque da Silva Filho, A. C.; Baffa, Oswaldo; Wakai, Ronald; Ohnishi, Noboru
2005-10-01
Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
Long-range correlation and market segmentation in bond market
NASA Astrophysics Data System (ADS)
Wang, Zhongxing; Yan, Yan; Chen, Xiaosong
2017-09-01
This paper investigates the long-range auto-correlations and cross-correlations in the bond market. Based on the Detrended Moving Average (DMA) method, the empirical results present clear evidence of long-range persistence existing on the one-year scale. The degree of long-range correlation related to maturity shows an upward tendency, with a peak at short maturities. These findings confirm the expectations of the fractal market hypothesis (FMH). Furthermore, we have developed a method based on a complex network to study the long-range cross-correlation structure and applied it to our data, finding a clear pattern of market segmentation in the long run. We also examined the nature of long-range correlation in the sub-periods 2007-2012 and 2011-2016. The results show that long-range auto-correlations have been decreasing in recent years while long-range cross-correlations are strengthening.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
Remote sensing of plant-water relations: An overview and future perspectives.
Damm, A; Paul-Limoges, E; Haghighi, E; Simmer, C; Morsdorf, F; Schneider, F D; van der Tol, C; Migliavacca, M; Rascher, U
2018-04-25
Vegetation is a highly dynamic component of the Earth's surface and substantially alters the water cycle. In particular, the process of oxygenic photosynthesis makes vegetation the link between the water and carbon cycles, causing various interactions and feedbacks across Earth spheres. While vegetation impacts the water cycle, it also reacts to changing water availability via functional, biochemical and structural responses. Unravelling the resulting complex feedbacks and interactions between the plant-water system and environmental change is essential for any modelling approach and prediction, but they are still insufficiently understood owing to currently missing observations. We hypothesize that an appropriate cross-scale monitoring of plant-water relations can be achieved by combined observational and modelling approaches. This paper reviews suitable remote sensing approaches to assess plant-water relations, ranging from purely observational to combined observational-modelling approaches. We use a combined energy balance and radiative transfer model to assess the explanatory power of purely observational approaches focussing on plant parameters to estimate plant-water relations, followed by an outline of a more effective use of remote sensing through integration into soil-plant-atmosphere continuum (SPAC) models. We apply a mechanistic model simulating water movement in the SPAC to gain insight into the complexity of the relations between soil, plant and atmospheric parameters, and thus plant-water relations. We conclude that future research should focus on strategies combining observations and mechanistic modelling to advance our knowledge of the interplay between the plant-water system and environmental change, e.g. through plant transpiration. Copyright © 2018 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Jenkins, David M.; Holmes, Zachary F.; Ishida, Kiyotaka; Manuel, Phillip D.
2018-01-01
Autocorrelation analysis of infrared spectra can provide insights into the strain energy associated with cation substitutions along a solid-solution compositional join; to date, the method has been applied primarily to silicate minerals. In this study, the method is applied to carbonates synthesized at 10 mol% increments along the calcite-dolomite (CaCO3-CaMg(CO3)2) join in the range of 1000-1150 °C and 0.6-2.5 GPa, for the purpose of determining how the band broadening in both the far- and mid-infrared ranges, as represented by the autocorrelation parameter δΔCorr, compares with the existing enthalpy of mixing data for this join. It was found that the carbonate internal vibration ν2 (out-of-plane bending) in the mid-infrared range, and the sum of the three internal vibration modes ν4 + ν2 + ν3, most closely matched the enthalpy of mixing data for the synthetic carbonates. Autocorrelation analysis of a series of biogenic carbonates in the mid-infrared range showed a systematic variation only for the ν2 band. Using the biogenic carbonate with the lowest Mg content as reference, the trend in δΔCorr for biogenic carbonates shows a steady increase with increasing Mg content, suggesting a steady increase in solubility with Mg content. The results from this study indicate that autocorrelation analysis of carbonates in the mid-infrared range provides an independent and reliable assessment of the crystallographic strain energy of carbonates. In particular, inorganic carbonates in the range of 0-17 mol% MgCO3 experience a minimum in strain energy and a corresponding minimum in the enthalpy of mixing, whereas biogenic carbonates show a steady increase in strain energy with increasing MgCO3 content. In the event of increasing ocean acidification, biogenic carbonates in the range of 0-17 mol% MgCO3 will dissolve more readily than the compositionally equivalent inorganic carbonates.
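One simple operationalisation of the spectral autocorrelation measure is sketched below: the width of the central autocorrelation peak of a band is taken as a Δcorr-like parameter, and the difference relative to an end-member-like spectrum plays the role of δΔCorr; the Lorentzian bands, threshold level and wavenumber window are hypothetical and differ from the published fitting procedure:

```python
import numpy as np

# Sketch of the spectral autocorrelation measure: the central peak of the
# autocorrelation of an absorption band broadens as disorder increases; the
# change relative to an end-member-like spectrum is the quantity of interest.
# Synthetic Lorentzian bands stand in for measured carbonate spectra.
wn = np.arange(820, 920, 0.5)                   # wavenumber grid around the nu2 band

def band(center, width):
    return 1.0 / (1.0 + ((wn - center) / width) ** 2)

def delta_corr(spectrum, level=0.8):
    """Full lag-width of the autocorrelation central peak at `level` of its maximum."""
    s = spectrum - spectrum.mean()
    acf = np.correlate(s, s, "full")
    acf = acf / acf.max()
    half = acf[acf.size // 2:]                  # non-negative lags only
    k = np.argmax(half < level)                 # first lag dropping below `level`
    return 2 * k * (wn[1] - wn[0])              # width in cm^-1

narrow = delta_corr(band(877.0, 3.0))                              # ordered, end-member-like
broad = delta_corr(band(877.0, 3.0) + 0.6 * band(872.0, 5.0))      # disordered solid solution
print(f"width (ordered) = {narrow:.1f} cm^-1, width (disordered) = {broad:.1f} cm^-1, "
      f"difference = {broad - narrow:.1f} cm^-1")
```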
NASA Astrophysics Data System (ADS)
Marlowe, Robert Lloyd
The dynamic light scattering technique of photon correlation spectroscopy has been used to investigate the dependence of the mutual diffusion coefficient of a macromolecular system upon concentration. The first part of the research was devoted to the design and construction of a single-clipping autocorrelator based on newly developed integrated circuits. The resulting 128-channel instrument can perform real-time autocorrelation for sample time intervals ≥ 10 μs, and batch-processed autocorrelation for intervals down to 3 μs. An improved design for a newer, all-digital autocorrelator is given. Homodyne light scattering experiments were then undertaken on monodisperse solutions of polystyrene spheres. The single-mode TEM00 beam of an argon-ion laser (λ = 5145 Å) was used as the light source; all solutions were studied at room temperature. The scattering angle was varied from 30° to 110°. Excellent agreement with the manufacturer's specification for the particle size was obtained from the photon correlation studies. Finally, aqueous solutions of the globular protein ovalbumin, ranging in concentration from 18.9 to 244.3 mg/ml, were illuminated under the same conditions of temperature and wavelength as before; the homodyne scattered light was detected at a fixed scattering angle of 30°. The single-clipped photocount autocorrelation function was analyzed using the homodyne exponential integral method of Meneely et al. The resulting diffusion coefficients showed a general linear dependence upon concentration, as predicted by the generalized Stokes-Einstein equation. However, a clear peak in the data was evident at c ≈ 100 mg/ml, which could not be explained on the basis of a non-interacting particle theory. A semi-quantitative approach based on the Debye-Hückel theory of electrostatic interactions is suggested as the probable cause for the peak's rise, and an excluded-volume effect for its decline.
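The analysis step of such homodyne photon correlation experiments can be sketched as follows: fit g2(τ) = 1 + β exp(-2Γτ) (the Siegert relation) to the measured intensity autocorrelation, convert Γ = Dq² to a diffusion coefficient and then, via Stokes-Einstein, to a hydrodynamic radius; all values below are synthetic and illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the homodyne PCS analysis: fit g2(tau) = 1 + beta*exp(-2*Gamma*tau)
# (Siegert relation), with Gamma = D*q^2, then convert D to a hydrodynamic
# radius via Stokes-Einstein. Synthetic data; all parameters are illustrative.
kB, T, eta = 1.380649e-23, 293.15, 1.0e-3        # J/K, K, Pa s (water-like solvent)
lam, n_med, theta = 514.5e-9, 1.33, np.deg2rad(30.0)
q = 4 * np.pi * n_med / lam * np.sin(theta / 2)  # scattering vector [1/m]

R_true = 50e-9                                   # "true" particle radius
D_true = kB * T / (6 * np.pi * eta * R_true)
tau = np.linspace(1e-6, 5e-3, 200)
rng = np.random.default_rng(9)
g2 = 1 + 0.8 * np.exp(-2 * D_true * q**2 * tau) + 0.01 * rng.standard_normal(tau.size)

def model(tau, beta, gamma):
    return 1 + beta * np.exp(-2 * gamma * tau)

(beta, gamma), _ = curve_fit(model, tau, g2, p0=[0.5, 1e3])
D = gamma / q**2
R = kB * T / (6 * np.pi * eta * D)
print(f"D = {D * 1e12:.2f} um^2/s, hydrodynamic radius = {R * 1e9:.1f} nm")
```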
NASA Astrophysics Data System (ADS)
Taylor, D. G.; Rost, S.; Houseman, G.
2015-12-01
In recent years the technique of cross-correlating the ambient seismic noise wavefield at two seismometers to reconstruct empirical Green's functions for the determination of Earth structure has been a powerful tool for studying the Earth's interior without earthquake or man-made sources. However, far less attention has been paid to using auto-correlations of seismic noise to reveal body-wave reflections from interfaces in the subsurface. In principle, the Green's functions thus derived should be comparable to the Earth's impulse response for a co-located source and receiver. We use data from a dense seismic array (Dense Array for Northern Anatolia - DANA) deployed across the northern branch of the North Anatolian Fault Zone (NAFZ) in the region of the 1999 magnitude 7.6 Izmit earthquake in western Turkey. The NAFZ is a major strike-slip system that extends ~1200 km across northern Turkey and continues to pose a high level of seismic hazard, in particular to the mega-city of Istanbul. We construct reflection images for the entire crust and upper mantle over the ~35 km by 70 km footprint of the 70-station DANA array. Using auto-correlations of vertical and horizontal components of ground motion, both P- and S-wave velocity information can be retrieved from the wavefield to constrain crustal structure in addition to established methods. We show that clear P-wave reflections from the crust-mantle boundary (Moho) can be retrieved using the autocorrelation technique, indicating topography on the Moho on horizontal scales of less than 10 km. Offsets in crustal structure can be identified that appear to be correlated with the surface expression of the fault zone in the region. The combined analysis of auto-correlations of vertical and horizontal components will lead to further insight into the fault zone structure throughout the crust and upper mantle.
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. 2007 John Wiley & Sons, Ltd.
Sources of variation in Landsat autocorrelation
NASA Technical Reports Server (NTRS)
Craig, R. G.; Labovitz, M. L.
1980-01-01
Analysis of sixty-four scan lines representing diverse conditions across satellites, channels, scanners, locations and cloud cover confirms that Landsat data are autocorrelated and consistently follow an ARIMA(1,0,1) pattern. The AR parameter varies significantly with location and the MA coefficient with cloud cover. Maximum likelihood classification functions are considerably in error unless this autocorrelation is compensated for in sampling.
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.
False alarms: How early warning signals falsely predict abrupt sea ice loss
NASA Astrophysics Data System (ADS)
Wagner, Till J. W.; Eisenman, Ian
2016-04-01
Uncovering universal early warning signals for critical transitions has become a coveted goal in diverse scientific disciplines, ranging from climate science to financial mathematics. There has been a flurry of recent research proposing such signals, with increasing autocorrelation and increasing variance being among the most widely discussed candidates. A number of studies have suggested that increasing autocorrelation alone may suffice to signal an impending transition, although some others have questioned this. Here we consider variance and autocorrelation in the context of sea ice loss in an idealized model of the global climate system. The model features no bifurcation, nor increased rate of retreat, as the ice disappears. Nonetheless, the autocorrelation of summer sea ice area is found to increase in a global warming scenario. The variance, by contrast, decreases. A simple physical mechanism is proposed to explain the occurrence of increasing autocorrelation but not variance when there is no approaching bifurcation. Additionally, a similar mechanism is shown to allow an increase in both indicators with no physically attainable bifurcation. This implies that relying on autocorrelation and variance as early warning signals can raise false alarms in the climate system, warning of "tipping points" that are not actually there.
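The two indicators can be computed with a simple sliding window, as sketched below on a synthetic AR(1) series whose memory increases while its forcing weakens, which is one way (an assumption of this sketch, not the model of the paper) to obtain rising autocorrelation without rising variance:

```python
import numpy as np

# Sketch of the two early-warning indicators discussed: lag-1 autocorrelation
# and variance in a sliding window over an already detrended series. The AR(1)
# process below has slowly increasing memory and weakening forcing, so the
# autocorrelation indicator rises while the variance falls. Illustrative only.
rng = np.random.default_rng(10)
n, win = 600, 100
phi = np.linspace(0.3, 0.9, n)              # slowly increasing memory
sigma = np.linspace(1.0, 0.3, n)            # slowly weakening stochastic forcing
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + sigma[t] * rng.standard_normal()

lag1, var = [], []
for start in range(n - win):
    seg = x[start:start + win]
    seg = seg - seg.mean()
    lag1.append(np.dot(seg[:-1], seg[1:]) / np.dot(seg, seg))
    var.append(seg.var())

print(f"lag-1 autocorrelation: {lag1[0]:.2f} (early) -> {lag1[-1]:.2f} (late)")
print(f"variance:              {var[0]:.2f} (early) -> {var[-1]:.2f} (late)")
```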
Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Chaudhri, Anuj; Lukes, Jennifer R.
2010-02-01
The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different between the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogenously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent, and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
Optimal periodic binary codes of lengths 28 to 64
NASA Technical Reports Server (NTRS)
Tyler, S.; Keston, R.
1980-01-01
Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are that (1) the peak sidelobe in the autocorrelation function is small and (2) the sum of the squares of the sidelobes in the autocorrelation function is small.
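Both figures of merit, the peak sidelobe and the sidelobe energy, can be read off the circular (periodic) autocorrelation of a ±1 sequence. A minimal sketch follows; the example code is a random sequence used for illustration, not one of the optimal codes reported.

```python
import numpy as np

def periodic_autocorrelation(code):
    """Circular (periodic) autocorrelation of a +/-1 sequence via FFT."""
    spec = np.fft.fft(code)
    acf = np.fft.ifft(spec * np.conj(spec)).real
    return np.round(acf).astype(int)

def sidelobe_metrics(code):
    acf = periodic_autocorrelation(code)
    sidelobes = acf[1:]                       # lag 0 is the main peak (= code length)
    peak_sidelobe = int(np.max(np.abs(sidelobes)))
    sidelobe_energy = int(np.sum(sidelobes ** 2))
    return peak_sidelobe, sidelobe_energy

# Example: a random length-31 binary code (not an optimal code from the paper)
rng = np.random.default_rng(7)
code = rng.choice([-1, 1], size=31)
print(sidelobe_metrics(code))
```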
Time-scale effects on the gain-loss asymmetry in stock indices
NASA Astrophysics Data System (ADS)
Sándor, Bulcsú; Simonsen, Ingve; Nagy, Bálint Zsolt; Néda, Zoltán
2016-08-01
The gain-loss asymmetry, observed in the inverse statistics of stock indices, is present for logarithmic return levels above 2%, and it is the result of the non-Pearson-type autocorrelations in the index. These non-Pearson-type correlations can also be viewed as functionally dependent daily volatilities, extending over a finite time interval. A generalized time-window shuffling method is used to show the existence of such autocorrelations. Their characteristic time scale proves to be smaller (less than 25 trading days) than was previously believed. It is also found that this characteristic time scale has decreased with the appearance of program trading in stock market transactions. Connections with the leverage effect are also established.
NASA Astrophysics Data System (ADS)
Sasaki, Kenya; Mitani, Yoshihiro; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao
2017-02-01
In this paper, in order to classify liver cirrhosis in region-of-interest (ROI) images taken from B-mode ultrasound images, we propose the use of higher-order local autocorrelation (HLAC) features. In a previous study, we tried to classify liver cirrhosis by using a Gabor filter based approach; however, our preliminary experimental results showed that the classification performance of the Gabor feature was poor. To classify liver cirrhosis accurately, we therefore examined the use of HLAC features. The experimental results show the effectiveness of the HLAC features compared with the Gabor feature. Furthermore, by using a binary image produced by an adaptive thresholding method, the classification performance of the HLAC features improved further.
Hsu, Chen-Shao; Chiang, Hsin-Chien; Chuang, Hsiu-Po; Huang, Chen-Bin; Yang, Shang-Da
2011-07-15
We retrieve the spectral phase of 400 fs pulses at 1560 nm with 5.2 aJ coupled pulse energy (40 photons) by the modified interferometric field autocorrelation method, using a pulse shaper and a 5 cm long periodically poled lithium niobate waveguide. The carrier-envelope phase control of the shaper can reduce the fringe density of the interferometric trace and permits longer lock-in time constants, achieving a sensitivity of 2.7×10⁻⁹ mW² (40 times better than the previous record for self-referenced nonlinear pulse measurement). The high stability of the pulse shaper allows for accurate and reproducible measurements of complicated spectral phases. © 2011 Optical Society of America
The integration of laser communication and ranging
NASA Astrophysics Data System (ADS)
Xu, Mengmeng; Sun, Jianfeng; Zhou, Yu; Zhang, Bo; Zhang, Guo; Li, Guangyuan; He, Hongyu; Lao, Chenzhe
2017-08-01
A method to realize the integration of laser communication and ranging is proposed in this paper. At the transmitter of each of the two sites, ranging codes with uniqueness and good autocorrelation and cross-correlation properties are embedded in the communication data and encoded together with it to realize serial communication. The encoded data are then modulated and sent to the other site, which realizes high-speed laser communication in both directions. At the receiver, the received ranging code is obtained after demodulation, decoding and clock recovery. The received ranging code is correlated with the local ranging code to obtain a coarse range, while the phase difference between the local clock and the recovered clock provides the fine part of the distance measurement.
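A toy sketch of the coarse-ranging step described above: correlate the received (delayed, noisy) ranging code against the local copy and convert the peak lag into a distance. The code length, chip rate and delay below are invented for the example; a real system would use its own code family and hardware correlators.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s

def coarse_range(local_code, received_code, chip_rate_hz):
    """Estimate range from the circular correlation peak between the two codes."""
    spec = np.fft.fft(received_code) * np.conj(np.fft.fft(local_code))
    corr = np.fft.ifft(spec).real
    lag_chips = int(np.argmax(corr))          # delay in whole chips
    delay_s = lag_chips / chip_rate_hz
    return delay_s * C                        # one-way range in metres

rng = np.random.default_rng(3)
local = rng.choice([-1.0, 1.0], size=1023)    # hypothetical ranging code
true_delay_chips = 137
received = np.roll(local, true_delay_chips) + 0.2 * rng.normal(size=local.size)
print(f"Estimated range: {coarse_range(local, received, chip_rate_hz=10e6) / 1e3:.1f} km")
```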
Fifth anniversary of the first element of the International Spac
2003-12-03
In the Space Station Processing Facility (SSPF), Charles J. Precourt, deputy manager of NASA's International Space Station Program, is interviewed by a reporter from a local television station. Representatives from the media were invited to commemorate the fifth anniversary of the launch of the first element of the Station with a tour of the facility and had the opportunity to see Space Station hardware that is being processed for deployment once the Space Shuttles return to flight. NASA and Boeing mission managers were on hand to talk about the various hardware elements currently being processed for flight.
We investigated the metapopulation structure of the California tiger salamander (Ambystoma californiense) using a combination of indirect and direct methods to evaluate two key requirements of modern metapopulation models: 1) that patches support somewhat independent populations ...
Horak, Jakub
2013-01-01
The interaction of arthropods with the environment and the management of their populations are a focus of the ecological agenda. Spatial autocorrelation and under-sampling may generate bias and, when they are ignored, it is hard to determine if results can in any way be trusted. Arthropod communities were studied during two seasons and using two methods, window and panel traps, in an area of ancient temperate lowland woodland at Zebracka (Czech Republic). The composition of arthropod communities was studied focusing on four site-level variables (canopy openness, diameter at breast height, tree height, and water distance) and finally analysed using two approaches: with and without effects of spatial autocorrelation. I found that the proportion of variance explained by space cannot be ignored (≈20% in both years). Potential bias in analyses of the response of arthropods to site-level variables without including spatial co-variables is well illustrated by redundancy analyses. Inclusion of space led to more accurate results, as water distance and tree diameter were significant, showing approximately the same ratio of explained variance and direction in both seasons. Results without spatial co-variables were much more disordered and were difficult to explain. This study showed that neglecting the effects of spatial autocorrelation could lead to wrong conclusions in site-level studies and, furthermore, that inclusion of space may lead to more accurate and unambiguous outcomes. Rarefactions showed that lower sampling intensity, when appropriately designed, can produce sufficient results without exploitation of the environment.
Wittemyer, George; Polansky, Leo; Douglas-Hamilton, Iain; Getz, Wayne M.
2008-01-01
The internal state of an individual—as it relates to thirst, hunger, fear, or reproductive drive—can be inferred by referencing points on its movement path to external environmental and sociological variables. Using time-series approaches to characterize autocorrelative properties of step-length movements collated every 3 h for seven free-ranging African elephants, we examined the influence of social rank, predation risk, and seasonal variation in resource abundance on periodic properties of movement. The frequency domain methods of Fourier and wavelet analyses provide compact summaries of temporal autocorrelation and show both strong diurnal and seasonal periodicities in the step-length time series. This autocorrelation is weaker during the wet season, indicating random movements are more common when ecological conditions are good. Periodograms of socially dominant individuals are consistent across seasons, whereas subordinate individuals show distinct differences diverging from that of dominants during the dry season. We link temporally localized statistical properties of movement to landscape features and find that diurnal movement correlation is more common within protected wildlife areas, and multiday movement correlations found among lower-ranked individuals are typically outside of protected areas where predation risks are greatest. A frequency-related spatial analysis of movement-step lengths reveals that rest cycles related to the spatial distribution of critical resources (i.e., forage and water) are responsible for creating the observed patterns. Our approach generates unique information regarding the spatial-temporal interplay between environmental and individual characteristics, providing an original approach for understanding the movement ecology of individual animals and the spatial organization of animal populations. PMID:19060207
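The diurnal periodicity described above would appear as a peak at a 24-hour period in a periodogram of a 3-hourly step-length series. The sketch below uses a synthetic series, since the elephant data are not reproduced here; the sampling interval and signal amplitudes are assumptions.

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic step-length series sampled every 3 hours: diurnal cycle + noise
rng = np.random.default_rng(5)
hours = np.arange(0, 24 * 200, 3)                       # ~200 days of fixes
steps = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24) + 0.3 * rng.normal(size=hours.size)

freqs, power = periodogram(steps, fs=1 / 3.0)           # frequencies in cycles per hour
dominant_period_h = 1.0 / freqs[np.argmax(power[1:]) + 1]   # skip the zero frequency
print(f"Dominant period: {dominant_period_h:.1f} hours")     # expect ~24 h
```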
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.
2014-01-01
NASA Langley Research Center in collaboration with ITT Exelis has been experimenting with a Continuous Wave (CW) laser absorption spectrometer (LAS) as a means of performing atmospheric CO2 column measurements from space to support the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. Because the range-resolving Intensity Modulated (IM) CW lidar techniques presented here rely on matched-filter correlations, autocorrelation properties without side lobes or other artifacts are highly desirable, since the autocorrelation function is critical for the measurements of lidar return powers, laser path lengths, and CO2 column amounts. In this paper modulation techniques are investigated that improve autocorrelation properties. The modulation techniques investigated in this paper include sine waves modulated by maximum length (ML) sequences in various hardware configurations. A CW lidar system using sine waves modulated by ML pseudo random noise codes is described, which uses a time shifting approach to separate channels and make multiple, simultaneous online/offline differential absorption measurements. Unlike the pure ML sequence, this technique is useful in hardware that is band pass filtered, as the IM sine wave carrier shifts the main power band. Both amplitude and Phase Shift Keying (PSK) modulated IM carriers are investigated that exhibit perfect autocorrelation properties down to one cycle per code bit. In addition, a method is presented to bandwidth limit the ML sequence based on a Gaussian filter implemented in terms of Jacobi theta functions that does not seriously degrade the resolution or introduce side lobes, as a means of reducing aliasing and IM carrier bandwidth.
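The near-ideal periodic autocorrelation of a maximum-length (ML) sequence, which the matched-filter approach above relies on, is easy to verify numerically. The sketch below generates one standard m-sequence from a 7-bit LFSR (tap positions taken from common LFSR tables) and checks its circular autocorrelation; it is an illustration only, not the instrument's modulation code.

```python
import numpy as np

def m_sequence(taps, n_bits):
    """Generate a maximum-length sequence from a Fibonacci LFSR with the given taps.

    taps: feedback tap positions (1-indexed), e.g. (7, 6) for x^7 + x^6 + 1.
    Returns a +/-1 sequence of length 2**n_bits - 1.
    """
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array([1 if b else -1 for b in seq])

code = m_sequence(taps=(7, 6), n_bits=7)                 # length-127 ML sequence
spec = np.fft.fft(code)
acf = np.round(np.fft.ifft(spec * np.conj(spec)).real).astype(int)
print(acf[0], set(acf[1:]))                              # expect 127 at lag 0, -1 at every other lag
```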
NASA Technical Reports Server (NTRS)
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods (isarithm, variogram, and triangular prism), along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
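For reference, global Moran's I and Geary's C on a gridded image can be computed from a rook-adjacency weight matrix; the compact numpy sketch below is a generic implementation under that assumption, not the ICAMS code itself.

```python
import numpy as np

def morans_i_gearys_c(img):
    """Global Moran's I and Geary's C on a 2D grid with rook adjacency."""
    x = img.astype(float)
    z = x - x.mean()
    n = x.size
    denom = np.sum(z ** 2)

    cross = 0.0      # sum_ij w_ij * z_i * z_j
    sq_diff = 0.0    # sum_ij w_ij * (x_i - x_j)^2
    n_links = 0      # sum_ij w_ij
    # Horizontally and vertically adjacent cell pairs; symmetric weights counted both ways
    for a, b in ((x[:, :-1], x[:, 1:]), (x[:-1, :], x[1:, :])):
        za, zb = a - x.mean(), b - x.mean()
        cross += 2 * np.sum(za * zb)
        sq_diff += 2 * np.sum((a - b) ** 2)
        n_links += 2 * a.size

    I = (n / n_links) * cross / denom
    C = ((n - 1) / (2 * n_links)) * sq_diff / denom
    return I, C

# A smooth surface gives strong positive autocorrelation (I near 1, C below 1)
yy, xx = np.mgrid[0:64, 0:64]
smooth = np.sin(xx / 10.0) + np.cos(yy / 12.0)
print(morans_i_gearys_c(smooth))
```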
The use of spatio-temporal correlation to forecast critical transitions
NASA Astrophysics Data System (ADS)
Karssenberg, Derek; Bierkens, Marc F. P.
2010-05-01
Complex dynamical systems may have critical thresholds at which the system shifts abruptly from one state to another. Such critical transitions have been observed in systems ranging from the human body system to financial markets and the Earth system. Forecasting the timing of critical transitions before they are reached is of paramount importance because critical transitions are associated with a large shift in dynamical regime of the system under consideration. However, it is hard to forecast critical transitions, because the state of the system shows relatively little change before the threshold is reached. Recently, it was shown that increased spatio-temporal autocorrelation and variance can serve as alternative early warning signals for critical transitions. However, thus far these second-order statistics have not been used for forecasting in a data assimilation framework. Here we show that the use of spatio-temporal autocorrelation and variance in the state of the system reduces the uncertainty in the predicted timing of critical transitions compared to classical approaches that use the value of the system state only. This is shown by assimilating observed spatio-temporal autocorrelation and variance into a dynamical system model using a Particle Filter. We adapt a well-studied distributed model of a logistically growing resource with a fixed grazing rate. The model describes the transition from an underexploited system with high resource biomass to overexploitation as grazing pressure crosses the critical threshold, which is a fold bifurcation. To represent limited prior information, we use a large variance in the prior probability distributions of model parameters and the system driver (grazing rate). First, we show that the rate of increase in spatio-temporal autocorrelation and variance prior to reaching the critical threshold is relatively consistent across the uncertainty range of the driver and parameter values used. This indicates that increases in spatio-temporal autocorrelation and variance are consistent predictors of a critical transition, even under the condition of a poorly defined system. Second, we perform data assimilation experiments using an artificial exhaustive data set generated by one realization of the model. To mimic real-world sampling, an observational data set is created from this exhaustive data set. This is done by sampling on a regular spatio-temporal grid, supplemented by sampling locations at a short distance. Spatial and temporal autocorrelation in this observational data set is calculated for different spatial and temporal separation (lag) distances. To assign appropriate weights to observations (here, autocorrelation values and variance) in the Particle Filter, the covariance matrix of the error in these observations is required. This covariance matrix is estimated using Monte Carlo sampling, selecting a different random position of the sampling network relative to the exhaustive data set for each realization. At each update moment in the Particle Filter, observed autocorrelation values are assimilated into the model and the state of the model is updated. Using this approach, it is shown that the use of autocorrelation reduces the uncertainty in the forecasted timing of a critical transition compared to runs without data assimilation. The performance of the use of spatial autocorrelation versus temporal autocorrelation depends on the timing and number of observational data. This study is restricted to a single model only.
However, it is becoming increasingly clear that spatio-temporal autocorrelation and variance can be used as early warning signals for a large number of systems. Thus, it is expected that spatio-temporal autocorrelation and variance are valuable in data assimilation frameworks in a large number of dynamical systems.
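The two indicators discussed here, lag-1 autocorrelation and variance in a moving window, are straightforward to compute from a state time series. The sketch below uses a synthetic AR(1) series whose persistence grows over time to mimic critical slowing down; it is not the grazing-resource model or the Particle Filter setup used by the authors, and the window length is an arbitrary choice.

```python
import numpy as np

def rolling_ews(series, window):
    """Lag-1 autocorrelation and variance in a sliding window.

    Returns arrays aligned with the right edge of each window.
    """
    ac1, var = [], []
    for end in range(window, len(series) + 1):
        w = series[end - window:end]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(ac1), np.array(var)

# AR(1) series whose autocorrelation slowly increases over time
rng = np.random.default_rng(11)
n = 2000
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac1, var = rolling_ews(x, window=250)
print(f"lag-1 autocorrelation: {ac1[0]:.2f} -> {ac1[-1]:.2f}")
print(f"variance:              {var[0]:.2f} -> {var[-1]:.2f}")
```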
Image correlation based method for the analysis of collagen fibers patterns
NASA Astrophysics Data System (ADS)
Rosa, Ramon G. T.; Pratavieira, Sebastião; Kurachi, Cristina
2015-06-01
The collagen fibers are one of the most important structural proteins in skin, being responsible for its strength and flexibility. It is known that their properties, like fiber density, ordination and mean diameter, can be affected by several skin conditions, which makes these properties good parameters to be used in the diagnosis and evaluation of skin aging, cancer, healing, among other conditions. There is, however, a need for methods capable of quantitatively analyzing the organization patterns of these fibers. To address this need, we developed a method based on the autocorrelation function of the images that allows the construction of vector field plots of the fiber directions and does not require any kind of curve fitting or optimization. The analyzed images were obtained through Second Harmonic Generation Imaging Microscopy. This paper presents a concise review on the autocorrelation function and some of its applications to image processing, details the developed method and the results obtained through the analysis of histopathological slides of landrace porcine skin. The method has high accuracy in the determination of the fiber directions and presents high performance. We look forward to performing further studies keeping track of different skin conditions over time.
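The image autocorrelation such a method builds on can be computed efficiently via the Wiener-Khinchin theorem (inverse FFT of the power spectrum). The numpy sketch below does only that first step on a synthetic striped image; the vector-field and orientation extraction stages of the published method are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def image_autocorrelation(img):
    """Normalized 2D autocorrelation of an image via FFT (Wiener-Khinchin)."""
    z = img.astype(float) - img.mean()
    spec = np.fft.fft2(z)
    acf = np.fft.ifft2(np.abs(spec) ** 2).real
    acf = np.fft.fftshift(acf)                 # put the zero-lag peak in the centre
    return acf / acf.max()

# Synthetic "fibers": stripes oriented at ~30 degrees, plus noise
yy, xx = np.mgrid[0:256, 0:256]
stripes = np.sin(0.2 * (xx * np.cos(np.pi / 6) + yy * np.sin(np.pi / 6)))
noisy = stripes + 0.1 * np.random.default_rng(2).normal(size=stripes.shape)
acf = image_autocorrelation(noisy)
print(acf.shape, acf[128, 128])                # peak value 1.0 at zero lag
```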
A New Methodology of Spatial Cross-Correlation Analysis
Chen, Yanguang
2015-01-01
Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran’s index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson’s correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China’s urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes. PMID:25993120
Smith, Justin D.; Borckardt, Jeffrey J.; Nash, Michael R.
2013-01-01
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. PMID:22697454
van Mantgem, P.J.; Schwilk, D.W.
2009-01-01
Fire is an important feature of many forest ecosystems, although the quantification of its effects is compromised by the large scale at which fire occurs and its inherent unpredictability. A recurring problem is the use of subsamples collected within individual burns, potentially resulting in spatially autocorrelated data. Using subsamples from six different fires (and three unburned control areas) we show little evidence for strong spatial autocorrelation either before or after burning for eight measures of forest conditions (both fuels and vegetation). Additionally, including a term for spatially autocorrelated errors provided little improvement for simple linear models contrasting the effects of early versus late season burning. While the effects of spatial autocorrelation should always be examined, it may not always greatly influence assessments of fire effects. If high patch scale variability is common in Sierra Nevada mixed conifer forests, even following more than a century of fire exclusion, treatments designed to encourage further heterogeneity in forest conditions prior to the reintroduction of fire will likely be unnecessary.
Sun, Guanghao; Matsui, Takemi
2015-01-01
Noncontact measurement of respiratory rate using Doppler radar will play a vital role in future clinical practice. Doppler radar remotely monitors the tiny chest wall movements induced by respiration activity. The key advantage of this technique is that it leaves users fully unconstrained, with no biological electrodes attached. However, the Doppler radar, unlike other contact-type sensors, is easily affected by random body movements. In this paper, we propose a time-domain autocorrelation model to process the radar signals for rapid and stable estimation of the respiratory rate. We tested the autocorrelation model on 8 subjects in the laboratory, and compared the respiratory rates detected by the noncontact radar with a reference contact-type respiratory effort belt. The autocorrelation model reduced the random body-movement noise superimposed on the Doppler radar respiration signals. Moreover, the respiratory rate can be rapidly calculated from the first main peak in the autocorrelation waveform within 10 s.
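A rate estimate from "the first main peak in the autocorrelation waveform" can be sketched as follows; the sampling rate and the synthetic respiration signal are assumptions made for the example, not the radar data from the study.

```python
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate_from_acf(signal, fs):
    """Breaths per minute from the first peak of the signal autocorrelation."""
    z = signal - signal.mean()
    acf = np.correlate(z, z, mode="full")[z.size - 1:]
    acf /= acf[0]
    peaks, _ = find_peaks(acf, prominence=0.2)   # first prominent peak after lag 0
    period_s = peaks[0] / fs
    return 60.0 / period_s

# 10 s of a synthetic 0.3 Hz (18 breaths/min) respiration signal with noise
fs = 50.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.random.default_rng(4).normal(size=t.size)
print(f"{respiratory_rate_from_acf(sig, fs):.1f} breaths/min")   # ~18
```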
Early Warning Signals for Abrupt Change Raise False Alarm During Sea Ice Loss
NASA Astrophysics Data System (ADS)
Wagner, T. J. W.; Eisenman, I.
2015-12-01
Uncovering universal early warning signals for critical transitions has become a coveted goal in diverse scientific disciplines, ranging from climate science to financial mathematics. There has been a flurry of recent research proposing such signals, with increasing autocorrelation and increasing variance being among the most widely discussed candidates. A number of studies have suggested that increasing autocorrelation alone may suffice to signal an impending transition, although some others have questioned this. Here, we consider variance and autocorrelation in the context of sea ice loss in an idealized model of the global climate system. The model features no bifurcation, nor increased rate of retreat, as the ice disappears. Nonetheless, the autocorrelation of summer sea ice area is found to increase with diminishing sea ice cover in a global warming scenario. The variance, by contrast, decreases. A simple physical mechanism is proposed to explain the occurrence of increasing autocorrelation but not variance in the model when there is no approaching bifurcation. Additionally, a similar mechanism is shown to allow an increase in both indicators with no physically attainable bifurcation. This implies that relying on autocorrelation and variance as early warning signals can raise false alarms in the climate system, warning of "tipping points" that are not actually there.
NASA Astrophysics Data System (ADS)
Wang, X.; Tu, C. Y.; He, J.; Wang, L.
2017-12-01
There has been a longstanding debate about the nature of the Elsässer variables z- observed in the Alfvénic solar wind. It is widely believed that z- represents inward propagating Alfvén waves and undergoes non-linear interaction with z+ to produce the energy cascade. However, z- variations sometimes show the nature of convective structures. Here we present a new data analysis of z- autocorrelation functions to get some definite information on its nature. We find that there is usually a break point in the z- autocorrelation function when the fluctuations show nearly pure Alfvénicity. The break point observed by the Helios-2 spacecraft near 0.3 AU is at the first time lag (about 81 s), where the autocorrelation coefficient is less than the zero-lag value by a factor of more than 0.4. The autocorrelation function breaks also appear in the WIND observations near 1 AU. The z- autocorrelation function is separated by the break into two parts, a fast decreasing part and a slowly decreasing part, which cannot be described as a whole by an exponential formula. The breaks in the z- autocorrelation function may indicate that the z- time series are composed of high-frequency white noise and low-frequency apparent structures, which correspond to the flat and steep parts of the function, respectively. This explanation is supported by a simple test with a superposition of an artificial random data series and a smoothed random data series. Since in many cases z- autocorrelation functions do not decrease very quickly at large time lag and cannot be considered to be of the Lanczos type, no reliable value for the correlation time can be derived. Our results show that in these cases with high Alfvénicity, z- should not be considered as an inward-propagating wave. The power-law spectrum of z+ should be produced by the fluid turbulence cascade process described by Kolmogorov.
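The "simple test" mentioned at the end, a superposition of white noise and a smoothed random series, reproduces the described break: a sharp drop at the first lag followed by a slow decay. The short numpy sketch below shows this behaviour with arbitrary amplitudes and smoothing length; it is only a schematic of the test, not the Helios or WIND analysis.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag - 1."""
    z = x - x.mean()
    return np.array([1.0 if k == 0 else np.corrcoef(z[:-k], z[k:])[0, 1]
                     for k in range(max_lag)])

rng = np.random.default_rng(8)
n = 20000
white = rng.normal(size=n)                                            # high-frequency noise
structures = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")  # slow structures
z_minus = white + 5.0 * structures                                    # superposition, amplitudes arbitrary

a = acf(z_minus, max_lag=10)
print(np.round(a, 2))    # ~1.0 at lag 0, a sharp drop at lag 1, then a slow decay
```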
Global Autocorrelation Scales of the Partial Pressure of Oceanic CO2
NASA Technical Reports Server (NTRS)
Li, Zhen; Adamec, David; Takahashi, Taro; Sutherland, Stewart C.
2004-01-01
A global database of approximately 1.7 million observations of the partial pressure of carbon dioxide in surface ocean waters (pCO2) collected between 1970 and 2003 is used to estimate its spatial autocorrelation structure. The patterns of the lag distance where the autocorrelation exceeds 0.8 are similar to patterns in the spatial distribution of the first baroclinic Rossby radius of deformation, indicating that ocean circulation processes play a significant role in determining the spatial variability of pCO2. For example, the global maximum of the distance at which autocorrelations exceed 0.8 averages about 140 km in the equatorial Pacific. Also, the lag distance at which the autocorrelation exceeds 0.8 is greater in the vicinity of the Gulf Stream than it is near the Kuroshio, approximately 50 km near the Gulf Stream as opposed to 20 km near the Kuroshio. Separate calculations for times when the sun is north and south of the equator revealed no obvious seasonal dependence of the spatial autocorrelation scales. Ocean Weather Station (OWS) 'P' in the eastern subarctic Pacific (50 N, 145 W) is the only fixed location where an uninterrupted pCO2 time series of sufficient length exists to calculate a meaningful temporal autocorrelation function for lags greater than a few days. The estimated temporal autocorrelation function at OWS 'P' is highly variable. A spectral analysis of the longest four pCO2 time series indicates a high level of variability occurring over periods from the atmospheric synoptic to the maximum length of the time series, in this case 42 days. It is likely that a relative peak in variability with a period of 3-6 days is related to atmospheric synoptic period variability and ocean mixing events due to wind stirring. However, the short length of available time series makes identifying temporal relationships between pCO2 and atmospheric or ocean processes problematic.
Kim, Jaehee; Ogden, Robert Todd; Kim, Haseong
2013-10-18
Time course gene expression experiments are an increasingly popular method for exploring biological processes. Temporal gene expression profiles provide an important characterization of gene function, as biological systems are both developmental and dynamic. With such data it is possible to study gene expression changes over time and thereby to detect differential genes. Much of the early work on analyzing time series expression data relied on methods developed originally for static data, and thus there is a need for improved methodology. Since time series expression is a temporal process, its unique features such as autocorrelation between successive points should be incorporated into the analysis. This work aims to identify genes that show different gene expression profiles across time. We propose a statistical procedure to discover gene groups with similar profiles using a nonparametric representation that accounts for the autocorrelation in the data. In particular, we first represent each profile in terms of a Fourier basis, and then we screen out genes that are not differentially expressed based on the Fourier coefficients. Finally, we cluster the remaining gene profiles using a model-based approach in the Fourier domain. We evaluate the screening results in terms of sensitivity, specificity, FDR and FNR, compare with the Gaussian process regression screening in a simulation study and illustrate the results by application to yeast cell-cycle microarray expression data with alpha-factor synchronization. The key elements of the proposed methodology: (i) representation of gene profiles in the Fourier domain; (ii) automatic screening of genes based on the Fourier coefficients and taking into account autocorrelation in the data, while controlling the false discovery rate (FDR); (iii) model-based clustering of the remaining gene profiles. Using this method, we identified a set of cell-cycle-regulated time-course yeast genes. The proposed method is general and can potentially be used to identify genes which have the same patterns or biological processes, and help face the present and forthcoming challenges of data analysis in functional genomics.
Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks
NASA Astrophysics Data System (ADS)
Ren, Shengwei; Zhang, Li; Zhang, Shibing
2016-10-01
Cognitive radio networks have wide applications in the smart home, personal communications and other wireless communications. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm which is based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on the detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, function of covariance matrix based detection (FMD) algorithm and autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
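A hedged sketch of an autocorrelation-energy test statistic of the general kind described (not the authors' exact detector): sum the squared autocorrelation coefficients of the received samples over a few lags and compare against a threshold calibrated on noise-only data. The signal model, lag count and SNR below are assumptions for the illustration.

```python
import numpy as np

def autocorr_energy_statistic(x, n_lags=8):
    """Energy of the first few normalized autocorrelation lags of the samples."""
    z = x - x.mean()
    acf = np.correlate(z, z, mode="full")[z.size - 1:] / (np.sum(z ** 2) + 1e-12)
    return float(np.sum(acf[1:n_lags + 1] ** 2))

rng = np.random.default_rng(6)
n = 4000
noise_only = rng.normal(size=n)
# Primary-user signal: narrow-band (hence correlated) tone buried in noise, SNR ~ -10 dB
t = np.arange(n)
signal_present = np.sqrt(0.2) * np.cos(2 * np.pi * 0.01 * t) + rng.normal(size=n)

print("H0 statistic:", autocorr_energy_statistic(noise_only))        # small
print("H1 statistic:", autocorr_energy_statistic(signal_present))    # noticeably larger
```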
Monitoring autocorrelated process: A geometric Brownian motion process approach
NASA Astrophysics Data System (ADS)
Li, Lee Siaw; Djauhari, Maman A.
2013-09-01
Autocorrelated process control is common in today's modern industrial process control practice. The current practice of autocorrelated process control is to eliminate the autocorrelation by using an appropriate model, such as Box-Jenkins models or other models, and then to conduct the process control operation based on the residuals. In this paper we show that many time series are governed by a geometric Brownian motion (GBM) process. Therefore, in this case, by using the properties of a GBM process, we only need an appropriate transformation and can then model the transformed data to meet the conditions needed in traditional process control. An industrial example of a cocoa powder production process in a Malaysian company is presented and discussed to illustrate the advantages of the GBM approach.
Estimation of neural energy in microelectrode signals
NASA Astrophysics Data System (ADS)
Gaumond, R. P.; Clement, R.; Silva, R.; Sander, D.
2004-09-01
We considered the problem of determining the neural contribution to the signal recorded by an intracortical electrode. We developed a linear least-squares approach to determine the energy fraction of a signal attributable to an arbitrary number of autocorrelation-defined signals buried in noise. Application of the method requires estimation of autocorrelation functions Rap(τ) characterizing the action potential (AP) waveforms and Rn(τ) characterizing background noise. This method was applied to the analysis of chronically implanted microelectrode signals from motor cortex of rat. We found that neural (AP) energy consisted of a large-signal component which grows linearly with the number of threshold-detected neural events and a small-signal component unrelated to the count of threshold-detected AP signals. The addition of pseudorandom noise to electrode signals demonstrated the algorithm's effectiveness for a wide range of noise-to-signal energy ratios (0.08 to 39). We suggest, therefore, that the method could be of use in providing a measure of neural response in situations where clearly identified spike waveforms cannot be isolated, or in providing an additional 'background' measure of microelectrode neural activity to supplement the traditional AP spike count.
Uhrich, Mark A.; Kolasinac, Jasna; Booth, Pamela L.; Fountain, Robert L.; Spicer, Kurt R.; Mosbrucker, Adam R.
2014-01-01
Researchers at the U.S. Geological Survey, Cascades Volcano Observatory, investigated alternatives to the traditional sample-based sediment record procedure for determining suspended-sediment concentration (SSC) and discharge. One such sediment-surrogate technique was developed using turbidity and discharge to estimate SSC for two gaging stations in the Toutle River Basin near Mount St. Helens, Washington. To provide context for the study, methods for collecting sediment data and monitoring turbidity are discussed. Statistical methods used include the development of ordinary least squares regression models for each gaging station. Issues of time-related autocorrelation also are evaluated. Addition of lagged explanatory variables was used to account for autocorrelation in the turbidity, discharge, and SSC data. Final regression model equations and plots are presented for the two gaging stations. The regression models support near-real-time estimates of SSC and improved suspended-sediment discharge records by incorporating continuous instream turbidity. Future use of such models may potentially lower the costs of sediment monitoring by reducing the time it takes to collect and process samples and to derive a sediment-discharge record.
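The "lagged explanatory variables" device is simply extra regression columns holding earlier values of the predictors. The generic numpy sketch below uses synthetic turbidity and discharge series; all variable names, coefficients and the single-lag choice are assumptions, not the USGS models.

```python
import numpy as np

def design_with_lags(turbidity, discharge, n_lags=1):
    """Build a regression design matrix with current and lagged predictors."""
    rows = []
    for t in range(n_lags, len(turbidity)):
        row = [1.0]
        for k in range(n_lags + 1):
            row += [turbidity[t - k], discharge[t - k]]
        rows.append(row)
    return np.array(rows)

# Synthetic example: SSC driven by turbidity and discharge with a one-step memory
rng = np.random.default_rng(10)
n = 500
turb = np.abs(rng.normal(10, 3, n))
q = np.abs(rng.normal(50, 10, n))
ssc = 2.0 * turb + 0.5 * q + 1.0 * np.roll(turb, 1) + rng.normal(0, 2, n)

X = design_with_lags(turb, q, n_lags=1)
y = ssc[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))   # intercept, turb_t, q_t, turb_{t-1}, q_{t-1}
```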
Prediction of hourly PM2.5 using a space-time support vector regression model
NASA Astrophysics Data System (ADS)
Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang
2018-05-01
Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
Development of speech perception and production in children with cochlear implants.
Kishon-Rabin, Liat; Taitelbaum, Riki; Muchnik, Chava; Gehtler, Inbal; Kronenberg, Jona; Hildesheimer, Minka
2002-05-01
The purpose of the present study was twofold: 1) to compare the hierarchy of perceived and produced significant speech pattern contrasts in children with cochlear implants, and 2) to compare this hierarchy to developmental data of children with normal hearing. The subjects included 35 prelingual hearing-impaired children with multichannel cochlear implants. The test materials were the Hebrew Speech Pattern Contrast (HeSPAC) test and the Hebrew Picture Speech Pattern Contrast (HePiSPAC) test for older and younger children, respectively. The results show that 1) auditory speech perception performance of children with cochlear implants reaches an asymptote at 76% (after correction for guessing) between 4 and 6 years of implant use; 2) all implant users perceived vowel place extremely well immediately after implantation; 3) most implanted children perceived initial voicing at chance level until 2 to 3 years after implantation, after which scores improved by 60% to 70% with implant use; 4) the hierarchy of phonetic-feature production paralleled that of perception: vowels first, voicing last, and manner and place of articulation in between; and 5) the hierarchy in speech pattern contrast perception and production was similar between the implanted and the normal-hearing children, with the exception of the vowels (possibly because of the interaction between the specific information provided by the implant device and the acoustics of the Hebrew language). The data reported here contribute to our current knowledge about the development of phonological contrasts in children who were deprived of sound in the first few years of their lives and then developed phonetic representations via cochlear implants. The data also provide additional insight into the interrelated skills of speech perception and production.
2013-01-01
Background: Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false alarm frequency.
Methods: Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) “in-control” status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance.
Results: The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model.
Conclusions: The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
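As a generic illustration of the EWMA control-chart machinery referred to above (not the risk-adjusted variant used in the study), the sketch below computes the EWMA statistic and its asymptotic 3-sigma limits for a synthetic monthly mortality series; the smoothing constant and the synthetic data are assumptions.

```python
import numpy as np

def ewma_chart(series, lam=0.2, n_sigma=3):
    """EWMA statistic and asymptotic control limits for an (assumed in-control) series."""
    mu, sigma = series.mean(), series.std(ddof=1)
    z = np.zeros_like(series, dtype=float)
    z[0] = mu
    for t in range(1, len(series)):
        z[t] = lam * series[t] + (1 - lam) * z[t - 1]
    half_width = n_sigma * sigma * np.sqrt(lam / (2 - lam))   # asymptotic limits
    return z, mu - half_width, mu + half_width

rng = np.random.default_rng(12)
monthly_mortality = rng.normal(0.14, 0.02, 180)     # 15 years of synthetic monthly rates
z, lcl, ucl = ewma_chart(monthly_mortality)
print(f"signals: {np.sum((z < lcl) | (z > ucl))}")  # expect few for an in-control series
```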
Houel, Julien; Doan, Quang T; Cajgfinger, Thomas; Ledoux, Gilles; Amans, David; Aubret, Antoine; Dominjon, Agnès; Ferriol, Sylvain; Barbier, Rémi; Nasilowski, Michel; Lhuillier, Emmanuel; Dubertret, Benoît; Dujardin, Christophe; Kulzer, Florian
2015-01-27
We present an unbiased and robust analysis method for power-law blinking statistics in the photoluminescence of single nanoemitters, allowing us to extract both the bright- and dark-state power-law exponents from the emitters' intensity autocorrelation functions. As opposed to the widely used threshold method, our technique therefore does not require discriminating the emission levels of bright and dark states in the experimental intensity timetraces. We rely on the simultaneous recording of 450 emission timetraces of single CdSe/CdS core/shell quantum dots at a frame rate of 250 Hz with single photon sensitivity. Under these conditions, our approach can determine ON and OFF power-law exponents with a precision of 3% from a comparison to numerical simulations, even for shot-noise-dominated emission signals with an average intensity below 1 photon per frame and per quantum dot. These capabilities pave the way for the unbiased, threshold-free determination of blinking power-law exponents at the microsecond time scale.
Kyle J. Haynes; Andrew M. Liebhold; Ottar N. Bjørnstad; Andrew J. Allstadt; Randall S. Morin
2018-01-01
Evaluating the causes of spatial synchrony in population dynamics in nature is notoriously difficult due to a lack of data and appropriate statistical methods. Here, we use a recently developed method, a multivariate extension of the local indicators of spatial autocorrelation statistic, to map geographic variation in the synchrony of gypsy moth outbreaks. Regression...
ERIC Educational Resources Information Center
Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.
2011-01-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E
2017-01-01
Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which is often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially-explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate method, being among the top two most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4%-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account for spatial autocorrelation in an SDM context and, by taking account of random effects, produce outputs that can better elucidate the role of covariates in predicting species occurrence. Given that it is often unclear what the drivers are behind data clumping in an empirical occurrence dataset, or indeed how geographically restricted these data are, spatially-explicit Bayesian SDMs may be the better choice when modelling the spatial distribution of target species.
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2018-02-01
Small scale rainfall variability is a key factor driving runoff response in fast responding systems, such as mountainous, urban and arid catchments. In this paper, the spatial-temporal autocorrelation structure of convective rainfall is derived with extremely high resolutions (60 m, 1 min) using estimates from an X-Band weather radar recently installed in a semiarid-arid area. The 2-dimensional spatial autocorrelation of convective rainfall fields and the temporal autocorrelation of point-wise and distributed rainfall fields are examined. The autocorrelation structures are characterized by spatial anisotropy, correlation distances 1.5-2.8 km and rarely exceeding 5 km, and time-correlation distances 1.8-6.4 min and rarely exceeding 10 min. The observed spatial variability is expected to negatively affect estimates from rain gauges and microwave links rather than satellite and C-/S-Band radars; conversely, the temporal variability is expected to negatively affect remote sensing estimates rather than rain gauges. The presented results provide quantitative information for stochastic weather generators, cloud-resolving models, dryland hydrologic and agricultural models, and multi-sensor merging techniques.
Duncan, Dustin T; Kawachi, Ichiro; Kum, Susan; Aldstadt, Jared; Piras, Gianfranco; Matthews, Stephen A; Arbia, Giuseppe; Castro, Marcia C; White, Kellee; Williams, David R
2014-04-01
The racial/ethnic and income composition of neighborhoods often influences local amenities, including the potential spatial distribution of trees, which are important for population health and community wellbeing, particularly in urban areas. This ecological study used spatial analytical methods to assess the relationship between neighborhood socio-demographic characteristics (i.e. minority racial/ethnic composition and poverty) and tree density at the census tract level in Boston, Massachusetts (US). We examined spatial autocorrelation with the Global Moran's I for all study variables and in the ordinary least squares (OLS) regression residuals, as well as computed Spearman correlations, non-adjusted and adjusted for spatial autocorrelation, between socio-demographic characteristics and tree density. Next, we fit traditional regressions (i.e. OLS regression models) and spatial regressions (i.e. spatial simultaneous autoregressive models), as appropriate. We found significant positive spatial autocorrelation for all neighborhood socio-demographic characteristics (Global Moran's I range from 0.24 to 0.86, all P = 0.001), for tree density (Global Moran's I = 0.452, P = 0.001), and in the OLS regression residuals (Global Moran's I range from 0.32 to 0.38, all P < 0.001). Therefore, we fit the spatial simultaneous autoregressive models. There was a negative correlation between neighborhood percent non-Hispanic Black and tree density (rS = -0.19; conventional P-value = 0.016; spatially adjusted P-value = 0.299) as well as a negative correlation between predominantly non-Hispanic Black (over 60% Black) neighborhoods and tree density (rS = -0.18; conventional P-value = 0.019; spatially adjusted P-value = 0.180). While the conventional OLS regression model found a marginally significant inverse relationship between Black neighborhoods and tree density, we found no statistically significant relationship between neighborhood socio-demographic composition and tree density in the spatial regression models. Methodologically, our study suggests the need to take into account spatial autocorrelation, as findings/conclusions can change when the spatial autocorrelation is ignored. Substantively, our findings suggest no need for policy intervention vis-à-vis trees in Boston, though we hasten to add that replication studies, and more nuanced data on tree quality, age and diversity, are needed.
NASA Astrophysics Data System (ADS)
Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio Alex; Galluzzo, Danilo; Contrafatto, Danilo; Rapisarda, Salvatore
2015-04-01
Continuous monitoring of seismic activity is a fundamental task for detecting the most common signals possibly related to volcanic activity, such as volcano-tectonic earthquakes, long-period events, and volcanic tremor. A reliable prediction of the ray path propagated back from the recording site to the source is strongly limited by the poor knowledge of the local shallow velocity structure. Usually in volcanic environments the shallowest few hundred meters of rock are characterized by strongly variable mechanical properties. Therefore the propagation of seismic signals through these shallow layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Driven by these motivations, between May and October 2014 we deployed a seismic array in the area called "Pozzo Pitarrone", where two seismic stations of the local monitoring network are installed, one at the surface and one in a borehole at a depth of about 130 meters. The Pitarrone borehole is located on the middle northeastern flank, along one of the main intrusion zones of Etna volcano, the so-called NE rift. With the 3D array we recorded seismic signals coming from the summit craters, and also from the nearby seismogenetic fault called the Pernicana Fault. We used array data to analyse the dispersion characteristics of ambient noise vibrations, and we derived one-dimensional (1D) shallow shear-velocity profiles through the inversion of dispersion curves measured by spatial autocorrelation (SPAC) methods. We observed a one-dimensional variation of shear velocity between 430 m/s and 700 m/s down to an investigation depth of about 130 m. An abrupt velocity variation was recorded at a depth of about 60 m, probably corresponding to the transition between two different layers. Our preliminary results suggest a good correlation between the deduced velocity model and the stratigraphic section on Etna. The analysis of the entire data set will improve our knowledge of (i) the structure of the top layer and its relationship with geology, (ii) the signal-to-noise ratio (SNR) of volcanic signals as a function of frequency, (iii) seismic ray-path deformation caused by the interaction of the seismic waves with the free surface, and (iv) the attenuation of the seismic signals correlated with the volcanic activity. Moreover, knowledge of a shallow velocity model could improve the study of the source mechanism of low-frequency events (VLP, LP and volcanic tremor), and give a new contribution to the seismic monitoring of Etna volcano through the detection and location of seismic sources using 3D array techniques.
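In the SPAC approach used here, the azimuthally averaged correlation coefficient between two vertical-component stations a distance r apart is, for a single surface-wave mode, ρ(f) = J0(2πfr/c(f)), so a measured coefficient at a given frequency can be inverted for the phase velocity c(f). The sketch below does that inversion for one frequency with scipy; the station spacing and the measured coefficient are made-up numbers, and this is a schematic of the general SPAC relation rather than the authors' inversion code.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

def phase_velocity_from_spac(rho, freq_hz, r_m):
    """Invert rho = J0(2*pi*f*r/c) for c, using the first descending branch of J0.

    Valid only while the argument stays below the first zero of J0 (~2.405),
    i.e. for sufficiently low frequency / small interstation distance.
    """
    x = brentq(lambda u: j0(u) - rho, 1e-6, 2.404)   # argument of J0
    return 2 * np.pi * freq_hz * r_m / x

# Hypothetical measurement: rho = 0.55 at 2 Hz for a 100 m station spacing
c = phase_velocity_from_spac(rho=0.55, freq_hz=2.0, r_m=100.0)
print(f"Phase velocity: {c:.0f} m/s")
```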
Boucher-Lalonde, Véronique; Currie, David J
2016-01-01
Species' geographic ranges could primarily be physiological tolerances drawn in space. Alternatively, geographic ranges could be only broadly constrained by physiological climatic tolerances: there could generally be much more proximate constraints on species' ranges (dispersal limitation, biotic interactions, etc.) such that species often occupy a small and unpredictable subset of tolerable climates. In the literature, species' climatic tolerances are typically estimated from the set of conditions observed within their geographic range. Using this method, studies have concluded that broader climatic niches permit larger ranges. Similarly, other studies have investigated the biological causes of incomplete range filling. But, when climatic constraints are measured directly from species' ranges, are correlations between species' range size and climate necessarily consistent with a causal link? We evaluated the extent to which variation in range size among 3277 bird and 1659 mammal species occurring in the Americas is statistically related to characteristics of species' realized climatic niches. We then compared how these relationships differed from the ones expected in the absence of a causal link. We used a null model that randomizes the predictor variables (climate), while retaining their broad spatial autocorrelation structure, thereby removing any causal relationship between range size and climate. We found that, although range size is strongly positively related to climatic niche breadth, range filling and, to a lesser extent, niche position in nature, the observed relationships are not always stronger than expected from spatial autocorrelation alone. Thus, we conclude that equally strong relationships between range size and climate would result from any processes causing ranges to be highly spatially autocorrelated.
Time-series analysis of delta13C from tree rings. I. Time trends and autocorrelation.
Monserud, R A; Marshall, J D
2001-09-01
Univariate time-series analyses were conducted on stable carbon isotope ratios obtained from tree-ring cellulose. We looked for the presence and structure of autocorrelation. Significant autocorrelation violates the statistical independence assumption and biases hypothesis tests. Its presence would indicate the existence of lagged physiological effects that persist for longer than the current year. We analyzed data from 28 trees (60-85 years old; mean = 73 years) of western white pine (Pinus monticola Dougl.), ponderosa pine (Pinus ponderosa Laws.), and Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. glauca) growing in northern Idaho. Material was obtained by the stem analysis method from rings laid down in the upper portion of the crown throughout each tree's life. The sampling protocol minimized variation caused by changing light regimes within each tree. Autoregressive moving average (ARMA) models were used to describe the autocorrelation structure over time. Three time series were analyzed for each tree: the stable carbon isotope ratio (delta(13)C); discrimination (delta); and the difference between ambient and internal CO(2) concentrations (c(a) - c(i)). Converting from ring cellulose to whole-leaf tissue did not affect the analysis because it was almost completely removed by the detrending that precedes time-series analysis. A simple linear or quadratic model adequately described the time trend. The residuals from the trend had a constant mean and variance, thus ensuring stationarity, a requirement for autocorrelation analysis. The trend over time for c(a) - c(i) was particularly strong (R(2) = 0.29-0.84). Autoregressive moving average analyses of the residuals from these trends indicated that two-thirds of the individual tree series contained significant autocorrelation, whereas the remaining third were random (white noise) over time. We were unable to distinguish beforehand between individuals with and without significant autocorrelation. Significant ARMA models were all of low order, with either first- or second-order (i.e., lagged 1 or 2 years, respectively) models performing well. A simple autoregressive (AR(1)) model was the most common. The most useful generalization was that the same ARMA model holds for each of the three series (delta(13)C, delta, c(a) - c(i)) for an individual tree, if the time trend has been properly removed for each series. The mean series for the two pine species were described by first-order ARMA models (1-year lags), whereas the Douglas-fir mean series were described by second-order models (2-year lags) with negligible first-order effects. Apparently, the process of constructing a mean time series for a species preserves an underlying signal related to delta(13)C while canceling some of the random individual tree variation. Furthermore, the best model for the overall mean series (e.g., for a species) cannot be inferred from a consensus of the individual tree model forms, nor can its parameters be estimated reliably from the mean of the individual tree parameters. Because two-thirds of the individual tree time series contained significant autocorrelation, the normal assumption of a random structure over time is unwarranted, even after accounting for the time trend. The residuals of an appropriate ARMA model satisfy the independence assumption, and can be used to make hypothesis tests.
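A minimal sketch of the detrend-then-ARMA workflow described above, using the statsmodels library on a synthetic series; the trend shape, AR coefficient and record length are invented for illustration and are not the study's data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(1)
years = np.arange(1920, 1993)               # a ~73-year record, as in the study
n = years.size

# Synthetic delta13C-like series: quadratic time trend plus AR(1) residuals
trend = -26.0 + 0.01 * (years - years[0]) - 1e-4 * (years - years[0]) ** 2
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = 0.5 * resid[t - 1] + rng.normal(0.0, 0.15)
series = trend + resid

# Step 1: remove a low-order polynomial trend (here quadratic)
coeffs = np.polyfit(years, series, deg=2)
detrended = series - np.polyval(coeffs, years)

# Step 2: inspect the autocorrelation structure of the residual series
print("ACF (lags 1-3): ", np.round(acf(detrended, nlags=3)[1:], 2))
print("PACF (lags 1-3):", np.round(pacf(detrended, nlags=3)[1:], 2))

# Step 3: fit a low-order ARMA model; AR(1) was the most common form in the study
model = ARIMA(detrended, order=(1, 0, 0), trend="n").fit()
print(model.summary().tables[1])
```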
Leptonic decay constants for D-mesons from 3-flavour CLS ensembles
NASA Astrophysics Data System (ADS)
Collins, Sara; Eckert, Kevin; Heitger, Jochen; Hofmann, Stefan; Söldner, Wolfgang
2018-03-01
We report on the status of an ongoing effort by the RQCD and ALPHA Collaborations, aimed at determining leptonic decay constants of charmed mesons. Our analysis is based on large-volume ensembles generated within the CLS effort, employing Nf = 2 + 1 non-perturbatively O(a) improved Wilson quarks, a tree-level Symanzik-improved gauge action and open boundary conditions. The ensembles cover lattice spacings from a ≈ 0.09 fm to a ≈ 0.05 fm, with pion masses ranging from 420 down to 200 MeV. To extrapolate to the physical masses, we follow both the (2ml + ms) = const. and the ms = const. lines in parameter space.
Correction of Spatial Bias in Oligonucleotide Array Data
Lemieux, Sébastien
2013-01-01
Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is enhanced for scanning electron microscope (SEM) images. A diversity of sample images is captured and the performance is found to be better when compared with the moving average and the standard median filters, with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. A new noise reduction technique, based on cubic spline interpolation with the Savitzky-Golay and weighted least squares error method, is developed. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter is shown to be significantly better than that obtained from the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
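A rough sketch of the two ingredients named in this abstract, an autocorrelation-based noise estimate and Savitzky-Golay plus cubic-spline smoothing, applied to a synthetic 1-D scan line rather than a real SEM image; the window length, polynomial order and noise level are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)

# Synthetic SEM scan line: smooth structure plus uncorrelated pixel noise
x = np.arange(1024)
clean = 100.0 + 30.0 * np.sin(2 * np.pi * x / 200.0) + 10.0 * np.exp(-((x - 600) / 50.0) ** 2)
noisy = clean + rng.normal(0.0, 5.0, x.size)

# Autocorrelation-based noise estimate: image detail is correlated over a few
# pixels while the noise is not, so the noise variance is the drop of the ACF
# at zero offset relative to its value extrapolated from the first nonzero lags.
def noise_variance(signal, fit_lags=(1, 2, 3, 4)):
    d = signal - signal.mean()
    acov = np.correlate(d, d, mode="full")[d.size - 1:] / d.size
    extrap = np.polyval(np.polyfit(fit_lags, acov[list(fit_lags)], 1), 0)
    return acov[0] - extrap

print("estimated noise sigma:", np.sqrt(noise_variance(noisy)))

# Denoising: Savitzky-Golay smoothing followed by cubic-spline resampling
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
denoised = CubicSpline(x, smoothed)(x)
print("residual rms after smoothing:", np.std(denoised - clean))
```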
NASA Astrophysics Data System (ADS)
Borycki, Dawid; Kholiqov, Oybek; Zhou, Wenjun; Srinivasan, Vivek J.
2017-03-01
Sensing and imaging methods based on the dynamic scattering of coherent light, including laser speckle, laser Doppler, and diffuse correlation spectroscopy quantify scatterer motion using light intensity (speckle) fluctuations. The underlying optical field autocorrelation (OFA), rather than being measured directly, is typically inferred from the intensity autocorrelation (IA) through the Siegert relationship, by assuming that the scattered field obeys Gaussian statistics. In this work, we demonstrate interferometric near-infrared spectroscopy (iNIRS) for measurement of time-of-flight (TOF) resolved field and intensity autocorrelations in fluid tissue phantoms and in vivo. In phantoms, we find a breakdown of the Siegert relationship for short times-of-flight due to a contribution from static paths whose optical field does not decorrelate over experimental time scales, and demonstrate that eliminating such paths by polarization gating restores the validity of the Siegert relationship. Inspired by these results, we developed a method, called correlation gating, for separating the OFA into static and dynamic components. Correlation gating enables more precise quantification of tissue dynamics. To prove this, we show that iNIRS and correlation gating can be applied to measure cerebral hemodynamics of the nude mouse in vivo using dynamically scattered (ergodic) paths and not static (non-ergodic) paths, which may not be impacted by blood. More generally, correlation gating, in conjunction with TOF resolution, enables more precise separation of diffuse and non-diffusive contributions to OFA than is possible with TOF resolution alone. Finally, we show that direct measurements of OFA are statistically more efficient than indirect measurements based on IA.
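The Siegert relationship invoked above can be checked numerically. The sketch below (not the authors' code) simulates a single-mode complex Gaussian field, computes field and intensity autocorrelations, and compares g2(tau) with 1 + beta*|g1(tau)|^2 for beta = 1; the decorrelation time, time step and record length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, tau_c = 200_000, 1e-6, 5e-5              # samples, time step [s], field decorrelation time [s]

# Single-mode complex Gaussian field with an exponential autocorrelation:
# real and imaginary parts follow independent Ornstein-Uhlenbeck recursions.
a = np.exp(-dt / tau_c)
E = np.zeros(n, dtype=complex)
for t in range(1, n):
    E[t] = a * E[t - 1] + np.sqrt((1 - a * a) / 2.0) * (rng.normal() + 1j * rng.normal())
E = E[2000:]                                    # discard the start-up transient
I = np.abs(E) ** 2

def autocorr(x, max_lag):
    return np.array([np.mean(x[: x.size - k] * np.conj(x[k:])) for k in range(max_lag)])

lags = 200
g1 = autocorr(E, lags) / np.mean(np.abs(E) ** 2)
g2 = np.real(autocorr(I, lags)) / np.mean(I) ** 2

# Siegert relationship: g2(tau) = 1 + beta * |g1(tau)|^2, with beta = 1 here
for k in (0, 25, 50, 100):
    print(f"lag {k:3d}:  g2 = {g2[k]:.3f}   1 + |g1|^2 = {1.0 + abs(g1[k]) ** 2:.3f}")
```

Adding a static, non-decorrelating field component (for example by mixing in a constant offset) makes the total field non-Gaussian, which is exactly the regime where the abstract reports the relationship breaking down.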
Studies of the micromorphology of sputtered TiN thin films by autocorrelation techniques
NASA Astrophysics Data System (ADS)
Smagoń, Kamil; Stach, Sebastian; Ţălu, Ştefan; Arman, Ali; Achour, Amine; Luna, Carlos; Ghobadi, Nader; Mardani, Mohsen; Hafezi, Fatemeh; Ahmadpourian, Azin; Ganji, Mohsen; Grayeli Korpi, Alireza
2017-12-01
Autocorrelation techniques are crucial tools for the study of the micromorphology of surfaces: They provide the description of anisotropic properties and the identification of repeated patterns on the surface, facilitating the comparison of samples. In the present investigation, some fundamental concepts of these techniques including the autocorrelation function and autocorrelation length have been reviewed and applied in the study of titanium nitride thin films by atomic force microscopy (AFM). The studied samples were grown on glass substrates by reactive magnetron sputtering at different substrate temperatures (from 25 °C to 400 °C), and their micromorphology was studied by AFM. The obtained AFM data were analyzed using MountainsMap Premium software, obtaining the correlation function, the structure of isotropy and the spatial parameters according to ISO 25178 and EUR 15178N. These studies indicated that the substrate temperature during the deposition process is an important parameter to modify the micromorphology of sputtered TiN thin films and to find optimized surface properties. For instance, the autocorrelation length exhibited a maximum value for the sample prepared at a substrate temperature of 300 °C, and the sample obtained at 400 °C presented a maximum angle of the direction of the surface structure.
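As an illustration of the autocorrelation-length concept used here (not the MountainsMap workflow), the following sketch builds a synthetic correlated height map, computes its areal autocorrelation via the Wiener-Khinchin theorem, and reads off the 1/e correlation length; the grid size and roughness scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic AFM height map: correlated roughness produced by smoothing white
# noise with a Gaussian kernel (correlation scale set by `sigma_px`)
n, sigma_px = 256, 6.0
kx = np.fft.fftfreq(n)
KX, KY = np.meshgrid(kx, kx)
kernel = np.exp(-2.0 * (np.pi * sigma_px) ** 2 * (KX ** 2 + KY ** 2))
height = np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * kernel).real
height -= height.mean()

# Areal autocorrelation function via the Wiener-Khinchin theorem
power = np.abs(np.fft.fft2(height)) ** 2
acf = np.fft.ifft2(power).real
acf /= acf[0, 0]                                # normalize so ACF(0, 0) = 1

# Autocorrelation length: shortest lag at which the ACF decays to 1/e
# (measured here along the fast-scan axis for simplicity)
profile = acf[0, : n // 2]
corr_len = int(np.argmax(profile < 1.0 / np.e))
print("autocorrelation length (pixels):", corr_len)
```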
Decorrelation scales for Arctic Ocean hydrography - Part I: Amerasian Basin
NASA Astrophysics Data System (ADS)
Sumata, Hiroshi; Kauker, Frank; Karcher, Michael; Rabe, Benjamin; Timmermans, Mary-Louise; Behrendt, Axel; Gerdes, Rüdiger; Schauer, Ursula; Shimada, Koji; Cho, Kyoung-Ho; Kikuchi, Takashi
2018-03-01
Any use of observational data for data assimilation requires adequate information of their representativeness in space and time. This is particularly important for sparse, non-synoptic data, which comprise the bulk of oceanic in situ observations in the Arctic. To quantify spatial and temporal scales of temperature and salinity variations, we estimate the autocorrelation function and associated decorrelation scales for the Amerasian Basin of the Arctic Ocean. For this purpose, we compile historical measurements from 1980 to 2015. Assuming spatial and temporal homogeneity of the decorrelation scale in the basin interior (abyssal plain area), we calculate autocorrelations as a function of spatial distance and temporal lag. The examination of the functional form of autocorrelation in each depth range reveals that the autocorrelation is well described by a Gaussian function in space and time. We derive decorrelation scales of 150-200 km in space and 100-300 days in time. These scales are directly applicable to quantify the representation error, which is essential for use of ocean in situ measurements in data assimilation. We also describe how the estimated autocorrelation function and decorrelation scale should be applied for cost function calculation in a data assimilation system.
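A hedged sketch of the scale-estimation step: fitting a Gaussian space-time autocorrelation model to binned correlation estimates with scipy; the synthetic "observations", noise level and true scales are invented, chosen only to resemble the values reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Hypothetical binned autocorrelations of temperature anomalies as a function
# of spatial separation [km] and time lag [days]
dist = np.linspace(0.0, 600.0, 25)
tlag = np.linspace(0.0, 600.0, 25)
D, T = np.meshgrid(dist, tlag)

L_true, T_true = 180.0, 200.0                   # assumed "true" decorrelation scales
corr_obs = np.exp(-(D / L_true) ** 2 - (T / T_true) ** 2)
corr_obs += rng.normal(0.0, 0.03, corr_obs.shape)

# Gaussian space-time autocorrelation model, the functional form found for the basin
def gaussian_acf(X, L, Tscale):
    d, t = X
    return np.exp(-(d / L) ** 2 - (t / Tscale) ** 2)

popt, _ = curve_fit(gaussian_acf, (D.ravel(), T.ravel()), corr_obs.ravel(), p0=(100.0, 100.0))
print(f"decorrelation scales: L = {popt[0]:.0f} km, T = {popt[1]:.0f} days")
```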
Tivnan, Matthew; Gurjar, Rajan; Wolf, David E; Vishwanath, Karthik
2015-08-12
Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may offer the potential to make DCS become more widely used in biomedical applications.
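A minimal software correlator in the spirit of this paper (not the authors' Raspberry Pi implementation): it simulates a photon-count trace with a known intensity decorrelation time and computes the normalized intensity autocorrelation g2(tau); the count rate, decorrelation time and sampling interval are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, dt, tau_c, mean_rate = 200_000, 1e-6, 2e-4, 50.0   # samples, bin width [s], decorrelation time [s], counts/bin

# Hypothetical intensity trace with a known decorrelation time, plus Poisson
# photon-counting noise (what a photon-counting detector would deliver)
a = np.exp(-dt / tau_c)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + np.sqrt(1 - a * a) * rng.normal()
counts = rng.poisson(mean_rate * np.clip(1.0 + 0.3 * x, 0.0, None))

def g2(c, lags):
    """Normalized intensity autocorrelation, as a hardware correlator would report it."""
    c = c.astype(float)
    return np.array([np.mean(c[: c.size - k] * c[k:]) for k in lags]) / c.mean() ** 2

lags = np.unique(np.round(np.logspace(0, 3, 25)).astype(int))   # quasi-logarithmic lag spacing
curve = g2(counts, lags)
for k, g in zip(lags[:4], curve[:4]):
    print(f"tau = {k * dt:.1e} s   g2 = {g:.4f}")
```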
Analysis of data from NASA B-57B gust gradient program
NASA Technical Reports Server (NTRS)
Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.
1985-01-01
Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982 included the calculations of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross spectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two point spatial turbulence correlations.
Suzuki, Satoshi N; Kachi, Naoki; Suzuki, Jun-Ichirou
2008-09-01
During the development of an even-aged plant population, the spatial distribution of individuals often changes from a clumped pattern to a random or regular one. The development of local size hierarchies in an Abies forest was analysed for a period of 47 years following a large disturbance in 1959. In 1980 all trees in an 8 x 8 m plot were mapped and their height growth after the disturbance was estimated. Their mortality and growth were then recorded at 1- to 4-year intervals between 1980 and 2006. Spatial distribution patterns of trees were analysed by the pair correlation function. Spatial correlations between tree heights were analysed with a spatial autocorrelation function and the mark correlation function. The mark correlation function was able to detect a local size hierarchy that could not be detected by the spatial autocorrelation function alone. The small-scale spatial distribution pattern of trees changed from clumped to slightly regular during the 47 years. Mortality occurred in a density-dependent manner, which resulted in regular spacing between trees after 1980. The spatial autocorrelation and mark correlation functions revealed the existence of tree patches consisting of large trees at the initial stage. Development of a local size hierarchy was detected within the first decade after the disturbance, although the spatial autocorrelation was not negative. Local size hierarchies that developed persisted until 2006, and the spatial autocorrelation became negative at later stages (after about 40 years). This is the first study to detect local size hierarchies as a prelude to regular spacing using the mark correlation function. The results confirm that use of the mark correlation function together with the spatial autocorrelation function is an effective tool to analyse the development of a local size hierarchy of trees in a forest.
Crustal thickness across the Trans-European Suture Zone from ambient noise autocorrelations
NASA Astrophysics Data System (ADS)
Becker, G.; Knapmeyer-Endrun, B.
2018-02-01
We derive autocorrelations from ambient seismic noise to image the reflectivity of the subsurface and to extract the Moho depth beneath the stations for two different data sets in Central Europe. The autocorrelations are calculated by smoothing the spectrum of the data in order to suppress high amplitude, narrow-band signals of industrial origin, applying a phase autocorrelation algorithm and time-frequency domain phase-weighted stacking. The stacked autocorrelation results are filtered and analysed predominantly in the frequency range of 1-2 Hz. Moho depth is automatically picked inside uncertainty windows obtained from prior information. The processing scheme we developed is applied to data from permanent seismic stations located in different geological provinces across Europe, with varying Moho depths between 25 and 50 km, and to the mainly short period temporary PASSEQ stations along seismic profile POLONAISE P4. The autocorrelation results are spatially and temporarily stable, but show a clear correlation with the existence of cultural noise. On average, a minimum of six months of data is needed to obtain stable results. The obtained Moho depth results are in good agreement with the subsurface model provided by seismic profiling, receiver function estimates and the European Moho depth map. In addition to extracting the Moho depth, it is possible to identify an intracrustal layer along the profile, again closely matching the seismic model. For more than half of the broad-band stations, another change in reflectivity within the mantle is observed and can be correlated with the lithosphere-asthenosphere boundary to the west and a mid-lithospheric discontinuity beneath the East European Craton. With the application of the developed autocorrelation processing scheme to different stations with varying crustal thicknesses, it is shown that Moho depth can be extracted independent of subsurface structure, when station coverage is low, when no strong seismic sources are present, and when only limited amounts of data are available.
Autocorrelation of location estimates and the analysis of radiotracking data
Otis, D.L.; White, Gary C.
1999-01-01
The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.
Lévy flights, autocorrelation, and slow convergence
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2004-06-01
Previously we have put forward that the sluggish convergence of truncated Lévy flights to a Gaussian (Phys. Rev. Lett. 73 (1994) 2946) together with the scaling power laws in their probability of return to the origin (Nature 376 (1995) 46) can be explained by autocorrelation in data (Physica A 323 (2003) 601; Phys. Lett. A 315 (2003) 51). A purpose of this paper is to improve and enlarge the scope of such a result. The role of the autocorrelations in the convergence process as well as the problem of establishing the distance of a given distribution to the Gaussian are analyzed in greater detail. We show that whereas power laws in the second moment can still be explained by linear correlation of pairs, sluggish convergence can now emerge from nonlinear autocorrelations. Our approach is exemplified with data from the British pound-US dollar exchange rate.
NASA Astrophysics Data System (ADS)
Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.
2013-12-01
Bearing failure is one of the most common reasons of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings in this paper. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and named as spectral auto-correlation analysis (SACA). Meanwhile, the SACA is a simplified form of the cyclostationary analysis as well and can be iteratively carried out in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, the traditional envelope analysis and the squared envelope analysis, it is found that the result of SACA is more legible due to the more prominent harmonic amplitudes of the fault characteristic frequency and that the SACA with the proper iteration will further enhance the fault features.
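SACA itself is the authors' method; as a closely related, simpler illustration of envelope-style analysis, the sketch below computes a squared envelope spectrum for a synthetic bearing signal and recovers the impact repetition rate. The fault frequency, resonance frequency and noise level are invented.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(7)

fs = 20_000.0                                    # sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
f_fault, f_res = 107.0, 3_000.0                  # hypothetical fault and resonance frequencies [Hz]

# Synthetic bearing signal: a structural resonance excited by periodic impacts
# at the fault characteristic frequency, buried in broadband noise
signal = np.zeros(t.size)
for ti in np.arange(0.0, t[-1], 1.0 / f_fault):
    idx = t >= ti
    signal[idx] += np.exp(-800.0 * (t[idx] - ti)) * np.sin(2 * np.pi * f_res * (t[idx] - ti))
signal += 0.5 * rng.normal(size=t.size)

# Squared envelope analysis: the squared magnitude of the analytic signal
# exposes the impact repetition rate
envelope_sq = np.abs(hilbert(signal)) ** 2
envelope_sq -= envelope_sq.mean()
spectrum = np.abs(np.fft.rfft(envelope_sq))
freqs = np.fft.rfftfreq(envelope_sq.size, d=1.0 / fs)

band = (freqs > 20.0) & (freqs < 500.0)
print("dominant envelope frequency [Hz]:", freqs[band][np.argmax(spectrum[band])])
```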
NASA Astrophysics Data System (ADS)
Cui, Sheng; Jin, Shang; Xia, Wenjuan; Ke, Changjian; Liu, Deming
2015-11-01
Symbol rate identification (SRI) based on asynchronous delayed sampling is accurate, cost-effective and robust to impairments. For on-off keying (OOK) signals the symbol rate can be derived from the periodicity of the second-order autocorrelation function (ACF2) of the delay tap samples. But it is found that when this method is applied to advanced modulation format signals with auxiliary amplitude modulation (AAM), incorrect results may be produced because AAM has a significant impact on the ACF2 periodicity, which makes the symbol period harder or even impossible to identify correctly. In this paper it is demonstrated that for these signals the first-order autocorrelation function (ACF1) has stronger periodicity and can be used to replace ACF2 to produce more accurate and robust results. Utilizing the characteristics of the ACFs, an improved SRI method is proposed to accommodate both OOK and advanced modulation format signals in a transparent manner. Furthermore, it is proposed that by minimizing the peak-to-average power ratio (PAPR) of the delay tap samples with an additional tunable dispersion compensator (TDC), the limited dispersion tolerance can be expanded to desired values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumway, R.H.; McQuarrie, A.D.
Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern for typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified whereas for two earthquakes, no pattern is evident.
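The cepstral half of the echo-detection idea can be sketched as follows; the code builds a synthetic coda containing a single delayed echo standing in for a ripple-fired shot and recovers the delay from the real cepstrum. The sampling rate, delay and amplitudes are invented, and the authors additionally use autocorrelation and partial autocorrelation modelling, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)

fs = 100.0                                       # sampling rate [Hz]
n = 4096
t = np.arange(n) / fs

# Hypothetical regional P-wave coda as decaying band-limited noise, plus a
# delayed, scaled echo produced by ripple-fired charges (delay `d` seconds)
coda = rng.normal(size=n) * np.exp(-t / 5.0)
d, alpha = 0.25, 0.6                             # echo delay [s] and relative amplitude
shift = int(round(d * fs))
signal = coda.copy()
signal[shift:] += alpha * coda[:-shift]

# Real cepstrum: an echo shows up as a peak at quefrency equal to the delay
spectrum = np.abs(np.fft.rfft(signal)) ** 2
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
quefrency = np.arange(cepstrum.size) / fs

search = (quefrency > 0.05) & (quefrency < 1.0)
peak_q = quefrency[search][np.argmax(cepstrum[search])]
print(f"estimated ripple-fire delay: {peak_q:.3f} s (true value {d} s)")
```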
NASA Astrophysics Data System (ADS)
Karabulut, Savas; Cinku, Mualla; Tezel, Okan; Hisarli, Mumtaz; Ozcep, Ferhat; Tun, Muammer; Avdan, Ugur; Ozel, Oguz; Acikca, Ahmet; Aygordu, Ozan; Benli, Aral; Kesisyan, Arda; Yilmaz, Hakan; Varici, Cagri; Ozturkan, Hasan; Ozcan, Cuneyt; Kivrak, Ali
2015-04-01
Social Responsibility Projects (SRP) are important tools for contributing to the development of communities and to applied educational science. Researchers dealing with engineering studies generally focus on technical specifications. However, when the subject is earthquakes, engineers should also consider the social and educational components, besides the technical aspects. If scientific projects are carried out in collaboration with municipal governments, they can reach a wide range of people. Turkey lies in one of the most seismically active regions and has experienced destructive earthquakes. The 1999 Marmara earthquake was responsible for the loss of more than 18,000 people. The most destructive damage occurred to buildings built on problematic soils. This, however, is still one of the most important issues in Turkey that needs to be solved. In spite of the large earthquakes that occurred along the major segments of the North and East Anatolian Fault Zones due to the northwards excursion of Anatolia, the extensional regime in the Aegean region is also characterized by earthquakes associated with the movement of a number of strike-slip and normal faults. The Dikili village, within the Eastern Aegean extensional region, experienced a large earthquake in 1939 (M: 6.8). Seismic activity there is still at a high level and continues to be detected. Several areas, such as the Kabakum village, were moved to their present locations as a result of this earthquake. The probability of an earthquake hazard in Dikili is considerably high today. Therefore, it is very important to predict the soil behaviour and engineering problems by using Geographic Information System (GIS) tools in this area. For this purpose we conducted a project in collaboration with the Dikili Municipality in İzmir (Turkey) to determine the following issues: a) possible disaster mitigation as a result of earthquake-soil-structure interaction, b) geo-engineering problems (i.e. soil liquefaction, soil settlement, soil bearing capacity, soil amplification), c) the basin structure and possible faults of the Dikili district, d) risk analysis on cultivated areas due to salty water intrusion, e) the tectonic activity of the study area from the Miocene to the present. During this study a number of measurements were carried out to address the problems defined above. These measurements include the single-station microtremor (H/V) method according to Nakamura's technique, which was applied at 222 points. The results provide maps of the soil fundamental frequency, soil amplification and sediment thickness, obtained using previously developed empirical relationships. The Spatial Autocorrelation Technique (SPAC) was carried out at 11 sites with a Guralp CG-5 seismometer to estimate the shear-wave velocity-depth model down to the seismological bedrock. Multichannel Analysis of Surface Waves (MASW), the Microtremor Array Method (MAM) and the Seismic Refraction Method were applied at 121 sites with a SARA-DoReMi seismograph. Soil liquefaction-induced settlements were determined in the framework of shallow soil engineering problems. Vertical Electrical Sounding (VES) was carried out to define the presence of salty, drinkable and hot/cold groundwater, the location of possible faults and the bedrock depth, which was estimated with Scintrex SARIS resistivity equipment. To delineate the areas influenced by salty water, the induced polarization (IP) method was applied at 34 sites.
The basin structure and the probable faults of the study area were determined by applying gravity measurements at 248 points with a CG-5 Autograv gravity meter. Evaluation of the combined data is very important for producing microzonation maps. We therefore integrated all of the data into the GIS database and prepared a large variety of maps.
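A toy sketch of the Nakamura H/V computation mentioned above, run on synthetic three-component noise with an artificial 2 Hz site resonance; it only illustrates the spectral-ratio step, not the project's processing chain, and all parameters are invented.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(9)
fs, minutes = 100.0, 30
n = int(fs * 60 * minutes)

# Hypothetical three-component microtremor record: coloured noise, with an
# artificial amplification of the horizontals near 2 Hz mimicking a site
# resonance (a real survey would use the recorded stations instead)
def coloured(n):
    return np.cumsum(rng.normal(size=n)) * 0.01 + rng.normal(size=n)

t = np.arange(n) / fs
resonance = 0.8 * np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
east, north, vert = coloured(n) + resonance, coloured(n) + resonance, coloured(n)

# Nakamura H/V: ratio of the averaged horizontal to vertical amplitude spectra
f, pe = welch(east, fs=fs, nperseg=4096)
_, pn = welch(north, fs=fs, nperseg=4096)
_, pv = welch(vert, fs=fs, nperseg=4096)
hv = np.sqrt((pe + pn) / (2.0 * pv))

band = (f > 0.5) & (f < 20.0)
print("fundamental site frequency [Hz]:", f[band][np.argmax(hv[band])])
```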
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zobov, V. E., E-mail: rsa@iph.krasn.ru; Kucherov, M. M.
2017-01-15
The singularities of the time autocorrelation functions (ACFs) of magnetically diluted spin systems with dipole–dipole interaction (DDI), which determine the high-frequency asymptotics of autocorrelation functions and the wings of a magnetic resonance line, are studied. Using the self-consistent fluctuating local field approximation, nonlinear equations are derived for autocorrelation functions averaged over the independent random arrangement of spins (magnetic atoms) in a diamagnetic lattice with different spin concentrations. The equations take into account the specificity of the dipole–dipole interaction. First, due to its axial symmetry in a strong static magnetic field, the autocorrelation functions of longitudinal and transverse spin components are described by different equations. Second, the long-range type of the dipole–dipole interaction is taken into account by separating contributions into the local field from distant and near spins. The recurrent equations are obtained for the expansion coefficients of autocorrelation functions in power series in time. From them, the numerical value of the coordinate of the nearest singularity of the autocorrelation function is found on the imaginary time axis, which is equal to the radius of convergence of these expansions. It is shown that in the strong dilution case, the logarithmic concentration dependence of the coordinate of the singularity is observed, which is caused by the presence of a cluster of near spins whose fraction is small but contribution to the modulation frequency is large. As an example a silicon crystal with different ²⁹Si concentrations in magnetic fields directed along three crystallographic axes is considered.
Counting the peaks in the excitation function for precompound processes
NASA Astrophysics Data System (ADS)
Bonetti, R.; Hussein, M. S.; Mello, P. A.
1983-08-01
The "counting of maxima" method of Brink and Stephen, conventionally used for the extraction of the correlation width of statistical (compound nucleus) reactions, is generalized to include precompound processes as well. It is found that this method supplies an important independent check of the results obtained from autocorrelation studies. An application is made to the reaction 25Mg(3He,p). NUCLEAR REACTIONS Statistical multistep compound processes discussed.
NASA Astrophysics Data System (ADS)
Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming
2018-01-01
The plasma sheath surrounding a hypersonic vehicle is a dynamic and time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a constant envelope zero autocorrelation (CAZAC) sequence based time-varying frequency detection and channel sounding method is proposed to detect the time-varying property of the plasma sheath electron density and the wireless channel characteristics. The proposed method utilizes the CAZAC sequence, which has excellent autocorrelation and spreading gain characteristics, to realize dynamic time-varying detection/channel sounding under low signal-to-noise ratio in the plasma sheath environment. Theoretical simulation under a typical time-varying radio channel shows that the proposed method is capable of detecting time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase in the time domain well at -10 dB. Experimental results obtained in an RF modulation discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. Meanwhile, nonlinear effects of the dynamic plasma sheath on the communication signal were observed through the channel sounding results.
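For readers unfamiliar with CAZAC sequences, the sketch below generates a Zadoff-Chu sequence, one common CAZAC family (the abstract does not say which sequence the authors used), and checks its constant envelope and ideal periodic autocorrelation; the length and root index are arbitrary.

```python
import numpy as np

def zadoff_chu(N, u):
    """Zadoff-Chu sequence of odd length N with root u (gcd(u, N) = 1):
    a standard constant-envelope zero-autocorrelation (CAZAC) sequence."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def circular_autocorr(x):
    """Periodic (circular) autocorrelation, normalized to 1 at zero lag."""
    X = np.fft.fft(x)
    r = np.fft.ifft(X * np.conj(X))
    return r / r[0]

zc = zadoff_chu(N=353, u=7)
r = circular_autocorr(zc)

print("constant envelope:", np.allclose(np.abs(zc), 1.0))
print("max |autocorrelation| at nonzero lag:", float(np.max(np.abs(r[1:]))))
```

The zero off-peak autocorrelation is what makes such sequences attractive as sounding probes: correlating the received waveform against the known sequence isolates the channel response even at low signal-to-noise ratio.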
Correlation time and diffusion coefficient imaging: application to a granular flow system.
Caprihan, A; Seymour, J D
2000-05-01
A parametric method for spatially resolved measurements of velocity autocorrelation functions, R_u(τ) = ⟨u(t)u(t + τ)⟩, expressed as a sum of exponentials, is presented. The method is applied to a granular flow system of 2-mm oil-filled spheres rotated in a half-filled horizontal cylinder, which is an Ornstein-Uhlenbeck process with velocity autocorrelation function R_u(τ) = ⟨u²⟩exp(-|τ|/τ_c), where τ_c is the correlation time and D = ⟨u²⟩τ_c is the diffusion coefficient. The pulsed-field-gradient NMR method consists of applying three different gradient pulse sequences of varying motion sensitivity to distinguish the range of correlation times present for particle motion. Time-dependent apparent diffusion coefficients are measured for these three sequences and τ_c and D are then calculated from the apparent diffusion coefficient images. For the cylinder rotation rate of 2.3 rad/s, the axial diffusion coefficient at the top center of the free surface was 5.5 × 10⁻⁶ m²/s, the correlation time was 3 ms, and the velocity fluctuation or granular temperature was 1.8 × 10⁻³ m²/s². This method is also applicable to study transport in systems involving turbulence and porous media flows. Copyright 2000 Academic Press.
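The Ornstein-Uhlenbeck relations quoted above, an exponential velocity autocorrelation and D = ⟨u²⟩τ_c, can be verified with a quick simulation; the sketch uses the correlation time and granular temperature reported in the abstract but an invented time step and record length.

```python
import numpy as np

rng = np.random.default_rng(10)
tau_c = 3e-3                                    # correlation time [s], as reported above
u2 = 1.8e-3                                     # granular temperature <u^2> [m^2/s^2], as reported above
dt, n = 1e-4, 200_000                           # invented time step and record length

# Exact discrete-time update of an Ornstein-Uhlenbeck velocity process
a = np.exp(-dt / tau_c)
u = np.zeros(n)
for t in range(1, n):
    u[t] = a * u[t - 1] + np.sqrt(u2 * (1 - a * a)) * rng.normal()

# Velocity autocorrelation R_u(tau) = <u(t) u(t+tau)> = <u^2> exp(-|tau|/tau_c)
lags = np.arange(100)
R = np.array([np.mean(u[: n - k] * u[k:]) for k in lags])
print("R_u(0)                 :", R[0], "   expected", u2)
print("R_u(tau_c) / R_u(0)    :", R[int(tau_c / dt)] / R[0], "   expected", np.exp(-1.0))

# Diffusion coefficient D = <u^2> * tau_c, i.e. the integral of R_u over lag
# (simple Riemann sum over ~3 correlation times, so it slightly underestimates)
D = float(np.sum(R) * dt)
print("D from the ACF integral:", D, "   expected", u2 * tau_c)
```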
Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong
2016-01-01
Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of varying concentration dynamics with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R2 < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.
2013-01-01
Background Time course gene expression experiments are an increasingly popular method for exploring biological processes. Temporal gene expression profiles provide an important characterization of gene function, as biological systems are both developmental and dynamic. With such data it is possible to study gene expression changes over time and thereby to detect differential genes. Much of the early work on analyzing time series expression data relied on methods developed originally for static data and thus there is a need for improved methodology. Since time series expression is a temporal process, its unique features such as autocorrelation between successive points should be incorporated into the analysis. Results This work aims to identify genes that show different gene expression profiles across time. We propose a statistical procedure to discover gene groups with similar profiles using a nonparametric representation that accounts for the autocorrelation in the data. In particular, we first represent each profile in terms of a Fourier basis, and then we screen out genes that are not differentially expressed based on the Fourier coefficients. Finally, we cluster the remaining gene profiles using a model-based approach in the Fourier domain. We evaluate the screening results in terms of sensitivity, specificity, FDR and FNR, compare with the Gaussian process regression screening in a simulation study and illustrate the results by application to yeast cell-cycle microarray expression data with alpha-factor synchronization. The key elements of the proposed methodology: (i) representation of gene profiles in the Fourier domain; (ii) automatic screening of genes based on the Fourier coefficients and taking into account autocorrelation in the data, while controlling the false discovery rate (FDR); (iii) model-based clustering of the remaining gene profiles. Conclusions Using this method, we identified a set of cell-cycle-regulated time-course yeast genes. The proposed method is general and can be potentially used to identify genes which have the same patterns or biological processes, and help facing the present and forthcoming challenges of data analysis in functional genomics. PMID:24134721
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
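A small simulation in the spirit of the model described above: a filtered Poisson process with one-sided exponential pulses, a purely additive white-noise term, and the normalized autocorrelation of both versions of the signal; all rates, amplitudes and noise levels are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

dt, T = 1e-2, 2_000.0                            # time step and record length
n = int(T / dt)
tau_d, rate, amp = 1.0, 0.5, 1.0                 # pulse duration, pulse rate, mean amplitude

# Filtered Poisson process: Poisson arrival times, exponentially distributed
# amplitudes, one-sided exponential pulse shape of duration tau_d
n_events = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, n_events)
amplitudes = rng.exponential(amp, n_events)

signal = np.zeros(n)
for tk, ak in zip(arrivals, amplitudes):
    idx = int(tk / dt)
    signal[idx:] += ak * np.exp(-np.arange(n - idx) * dt / tau_d)

# Purely additive observational noise term
noisy = signal + 0.2 * rng.normal(size=n)

# Normalized autocorrelation of the centred signal: the pure filtered Poisson
# process decays as exp(-|tau|/tau_d); the added white noise only contributes
# a delta-like spike at zero lag, lowering the normalized values at tau > 0
def norm_acf(x, max_lag):
    d = x - x.mean()
    return np.array([np.mean(d[: d.size - k] * d[k:]) for k in range(max_lag)]) / np.var(d)

acf_clean, acf_noisy = norm_acf(signal, 300), norm_acf(noisy, 300)
print("ACF at one pulse time, clean vs noisy:",
      round(acf_clean[int(tau_d / dt)], 3), round(acf_noisy[int(tau_d / dt)], 3))
```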
An alternative respiratory sounds classification system utilizing artificial neural networks.
Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen
2015-01-01
Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system and returned superior performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The promising proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, it may be added that utilizing the autocorrelation function in the feature extraction in such applications results in enhanced performance and avoids undesired computational complexities compared to other techniques.
NASA Astrophysics Data System (ADS)
Lan, Ma; Xiao, Wen; Chen, Zonghui; Hao, Hongliang; Pan, Feng
2018-01-01
Real-time micro-vibration measurement is widely used in engineering applications. It is very difficult for traditional optical detection methods to meet real-time requirements for relatively high-frequency, multi-spot synchronous measurement of a region, especially at the nanoscale. Based on the method of heterodyne interference, an experimental system for real-time measurement of micro-vibration is constructed to satisfy the demands of engineering applications. The vibration response signal is measured by combining optical heterodyne interferometry with a high-speed CMOS-DVR image acquisition system. Then, by extracting and processing multiple pixels at the same time, four digital demodulation techniques are implemented to simultaneously acquire the vibration velocity of the target from the recorded sequences of images. The different demodulation algorithms are analyzed and the results show that these four algorithms are suitable for different interference signals. Both the autocorrelation algorithm and the cross-correlation algorithm meet the needs of real-time measurement. The autocorrelation algorithm demodulates the frequency more accurately, while the cross-correlation algorithm is more accurate in recovering the amplitude.
Accounting for spatial effects in land use regression for urban air pollution modeling.
Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G
2015-01-01
In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects, e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models, may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R2 values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Short-term forecasts gain in accuracy. [Regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compare with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to have as short as hourly forecasts of load demand. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
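A compact illustration of the three Box-Jenkins steps (identification from autocorrelations, estimation, diagnostic checking) on a hypothetical hourly load series, using statsmodels; the seasonal shape and AR coefficient are invented.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(12)

# Hypothetical hourly load series: daily seasonality plus AR(1) disturbances
hours = np.arange(24 * 365)
daily = 100.0 + 20.0 * np.sin(2 * np.pi * hours / 24.0)
e = np.zeros(hours.size)
for t in range(1, hours.size):
    e[t] = 0.7 * e[t - 1] + rng.normal(0.0, 3.0)
load = daily + e

# Step 1 (identification): remove the average daily profile, then inspect the
# autocorrelations of the anomalies to pick a tentative model
profile = load.reshape(-1, 24).mean(axis=0)
anomaly = load - np.tile(profile, load.size // 24)
print("ACF  lags 1-3:", np.round(acf(anomaly, nlags=3)[1:], 2))
print("PACF lags 1-3:", np.round(pacf(anomaly, nlags=3)[1:], 2))   # cuts off after lag 1 -> AR(1)

# Step 2 (estimation): fit the tentative autoregressive model
model = ARIMA(anomaly, order=(1, 0, 0)).fit()

# Step 3 (diagnostic check): residuals should resemble white noise
print("residual ACF lags 1-3:", np.round(acf(model.resid, nlags=3)[1:], 2))

# Short-term forecast: anomaly forecast plus the seasonal profile
forecast = model.forecast(steps=24) + profile
print("next-day peak forecast:", round(float(forecast.max()), 1))
```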
Maury, Augusto; Revilla, Reynier I
2015-08-01
Cosmic rays (CRs) occasionally affect charge-coupled device (CCD) detectors, introducing large spikes with very narrow bandwidth in the spectrum. These CR features can distort the chemical information expressed by the spectra. Consequently, we propose here an algorithm to identify and remove significant spikes in a single Raman spectrum. An autocorrelation analysis is first carried out to accentuate the CRs feature as outliers. Subsequently, with an adequate selection of the threshold, a discrete wavelet transform filter is used to identify CR spikes. Identified data points are then replaced by interpolated values using the weighted-average interpolation technique. This approach only modifies the data in a close vicinity of the CRs. Additionally, robust wavelet transform parameters are proposed (a desirable property for automation) after optimizing them with the application of the method in a great number of spectra. However, this algorithm, as well as all the single-spectrum analysis procedures, is limited to the cases in which CRs have much narrower bandwidth than the Raman bands. This might not be the case when low-resolution Raman instruments are used.
NASA Technical Reports Server (NTRS)
Hinedi, S.; Polydoros, A.
1988-01-01
The authors present and analyze a frequency-noncoherent two-lag autocorrelation statistic for the wideband detection of random BPSK signals in noise-plus-random-multitone interference. It is shown that this detector is quite robust to the presence or absence of interference and its specific parameter values, contrary to the case of an energy detector. The rule assumes knowledge of the data rate and the active scenario under H0. It is concluded that the real-time autocorrelation domain and its samples (lags) are a viable approach for detecting random signals in dense environments.
NASA Astrophysics Data System (ADS)
Borra, Ermanno F.; Romney, Jonathan D.; Trottier, Eric
2018-06-01
We demonstrate that extremely rapid and weak periodic and non-periodic signals can easily be detected by using the autocorrelation of intensity as a function of time. We use standard radio-astronomical observations that contain artificial periodic and non-periodic signals generated by electronics of terrestrial origin. The autocorrelation detects weak signals that have small amplitudes because it averages over long integration times. Another advantage is that it allows a direct visualization of the shape of the signals, while it is difficult to see the shape with a Fourier transform. Although Fourier transforms can also detect periodic signals, a novelty of this work is that we demonstrate another major advantage of the autocorrelation: it can detect non-periodic signals while the Fourier transform cannot. Another major novelty of our work is that we use electric fields taken in a standard format with standard instrumentation at a radio observatory, and therefore no specialized instrumentation is needed. Because the electric fields are sampled every 15.625 ns, they allow detection of very rapid time variations. Notwithstanding the long integration times, the autocorrelation detects very rapid intensity variations as a function of time. The autocorrelation could also detect messages from Extraterrestrial Intelligence as non-periodic signals.
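The detection principle is easy to reproduce on synthetic data. The sketch below (not the authors' pipeline) buries a weak repeating pulse, at roughly the level of the noise, in Gaussian field samples and recovers it in the intensity autocorrelation computed via the Wiener-Khinchin theorem; the sample rate, pulse period and amplitude are assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)

fs = 64e6                                        # hypothetical sample rate [Hz] (~15.6 ns sampling)
n = 2 ** 22                                      # ~65 ms of data
t = np.arange(n) / fs

# Weak artificial signal: a short pulse repeating every 50 microseconds, with
# an amplitude comparable to the noise standard deviation
period, width = 50e-6, 0.5e-6
pulse_train = 1.0 * ((t % period) < width)
voltage = rng.normal(size=n) + pulse_train       # "electric field" samples
intensity = voltage ** 2

# Autocorrelation of intensity via the Wiener-Khinchin theorem; averaging over
# the full record pulls the periodic structure out of the noise
d = intensity - intensity.mean()
acf = np.fft.irfft(np.abs(np.fft.rfft(d)) ** 2)[: n // 2]
acf /= acf[0]

lag_samples = int(period * fs)
print("ACF at one signal period  :", acf[lag_samples])
print("median ACF at nearby lags :", np.median(acf[lag_samples // 2: lag_samples - 100]))
```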
Duncan, Dustin T.; Kawachi, Ichiro; Kum, Susan; Aldstadt, Jared; Piras, Gianfranco; Matthews, Stephen A.; Arbia, Giuseppe; Castro, Marcia C.; White, Kellee; Williams, David R.
2017-01-01
The racial/ethnic and income composition of neighborhoods often influences local amenities, including the potential spatial distribution of trees, which are important for population health and community wellbeing, particularly in urban areas. This ecological study used spatial analytical methods to assess the relationship between neighborhood socio-demographic characteristics (i.e. minority racial/ethnic composition and poverty) and tree density at the census tract level in Boston, Massachusetts (US). We examined spatial autocorrelation with the Global Moran’s I for all study variables and in the ordinary least squares (OLS) regression residuals as well as computed Spearman correlations non-adjusted and adjusted for spatial autocorrelation between socio-demographic characteristics and tree density. Next, we fit traditional regressions (i.e. OLS regression models) and spatial regressions (i.e. spatial simultaneous autoregressive models), as appropriate. We found significant positive spatial autocorrelation for all neighborhood socio-demographic characteristics (Global Moran’s I range from 0.24 to 0.86, all P=0.001), for tree density (Global Moran’s I=0.452, P=0.001), and in the OLS regression residuals (Global Moran’s I range from 0.32 to 0.38, all P<0.001). Therefore, we fit the spatial simultaneous autoregressive models. There was a negative correlation between neighborhood percent non-Hispanic Black and tree density (rS=−0.19; conventional P-value=0.016; spatially adjusted P-value=0.299) as well as a negative correlation between predominantly non-Hispanic Black (over 60% Black) neighborhoods and tree density (rS=−0.18; conventional P-value=0.019; spatially adjusted P-value=0.180). While the conventional OLS regression model found a marginally significant inverse relationship between Black neighborhoods and tree density, we found no statistically significant relationship between neighborhood socio-demographic composition and tree density in the spatial regression models. Methodologically, our study suggests the need to take into account spatial autocorrelation as findings/conclusions can change when the spatial autocorrelation is ignored. Substantively, our findings suggest no need for policy intervention vis-à-vis trees in Boston, though we hasten to add that replication studies, and more nuanced data on tree quality, age and diversity are needed. PMID:29354668
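For reference, the Global Moran's I statistic reported throughout this abstract can be computed directly; the sketch below does so for a hypothetical gridded tree-density variable with rook-contiguity weights (the grid, gradient and noise are invented, not the Boston data).

```python
import numpy as np

rng = np.random.default_rng(14)

# Hypothetical tract-level data on a small grid: tree density with a smooth
# north-south gradient, i.e. positive spatial autocorrelation by construction
side = 12
grid_y, _ = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
tree_density = grid_y * 2.0 + rng.normal(0.0, 3.0, (side, side))
x = tree_density.ravel()

# Rook-contiguity spatial weights: cells sharing an edge are neighbours
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[k, ii * side + jj] = 1.0

# Global Moran's I = (n / S0) * (z' W z) / (z' z), with z the centred variable
z = x - x.mean()
S0 = W.sum()
I = (n / S0) * (z @ W @ z) / (z @ z)
expected = -1.0 / (n - 1)
print(f"Global Moran's I = {I:.3f} (expectation under no autocorrelation: {expected:.3f})")
```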
Moran, John L; Solomon, Patricia J
2013-05-24
Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations and process autocorrelation has been associated with increase in false alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual Intensive Care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
NASA Astrophysics Data System (ADS)
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2017-02-01
Apoptosis is a fundamental process controlling normal tissue homeostasis by regulating a balance between cell proliferation and death. Predicting the subcellular location of apoptosis proteins is very helpful for understanding the mechanism of programmed cell death. Prediction of apoptosis protein subcellular location is still a challenging and complicated task, and existing methods are mainly based on protein primary sequences. In this paper, we propose a new position-specific scoring matrix (PSSM)-based model using the Geary autocorrelation function and the detrended cross-correlation coefficient (DCCA coefficient). A 270-dimensional (270D) feature vector is then constructed on three widely used datasets: ZD98, ZW225 and CL317, and a support vector machine is adopted as the classifier. The overall prediction accuracies, assessed by rigorous jackknife tests, are significantly improved. The results show that our model offers a reliable and effective PSSM-based tool for prediction of apoptosis protein subcellular localization.
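A sketch of how a Geary-type autocorrelation feature can be computed from a PSSM follows. The PSSM here is a random placeholder array of shape L x 20, the lag range is arbitrary, and the normalization follows the usual Geary autocorrelation definition; the exact feature construction in the paper may differ.

import numpy as np

def geary_autocorrelation(pssm, max_lag=10):
    """Geary autocorrelation features from an L x 20 PSSM.

    For each PSSM column and each lag d:
    C(d) = [(L-1) / (2*(L-d))] * sum_i (x_i - x_{i+d})^2 / sum_i (x_i - mean)^2.
    Returns a flat feature vector of length 20 * max_lag.
    """
    pssm = np.asarray(pssm, dtype=float)
    L = pssm.shape[0]
    feats = []
    for col in pssm.T:                      # one amino-acid column at a time
        denom = np.sum((col - col.mean()) ** 2)
        for d in range(1, max_lag + 1):
            num = np.sum((col[:-d] - col[d:]) ** 2)
            c = ((L - 1) / (2.0 * (L - d))) * num / denom if denom > 0 else 0.0
            feats.append(c)
    return np.array(feats)

rng = np.random.default_rng(0)
pssm = rng.normal(size=(120, 20))           # placeholder for a real PSSM profile
print(geary_autocorrelation(pssm, max_lag=5).shape)   # (100,)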
A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.
Ahn, C B; Cho, Z H
1987-01-01
A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each applied by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued, phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order corrected image. Since all the correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
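A minimal 1D sketch of the two-step idea described above: a linear (first-order) phase estimate from the lag-1 spatial autocorrelation of the complex signal, followed by a histogram-based constant (zero-order) phase estimate. The synthetic data and all names are illustrative; the real method's 2D handling and histogram details may differ.

import numpy as np

def correct_linear_and_constant_phase(img):
    """img: 1D complex signal with an unknown phase exp(i*(a*x + b)) on a real profile."""
    x = np.arange(img.size)
    # First order: the lag-1 autocorrelation sum(img[n+1]*conj(img[n])) has phase ~ a.
    a_est = np.angle(np.sum(img[1:] * np.conj(img[:-1])))
    first_corrected = img * np.exp(-1j * a_est * x)
    # Zero order: take the most populated bin of the residual phase histogram as b.
    phases = np.angle(first_corrected)
    hist, edges = np.histogram(phases, bins=64, range=(-np.pi, np.pi))
    k = np.argmax(hist)
    b_est = 0.5 * (edges[k] + edges[k + 1])
    return first_corrected * np.exp(-1j * b_est), a_est, b_est

# Synthetic phase-distorted "image": a real magnitude profile times exp(i(ax + b)).
x = np.arange(256)
magnitude = np.exp(-((x - 128) / 40.0) ** 2)
distorted = magnitude * np.exp(1j * (0.05 * x + 0.7))
corrected, a_est, b_est = correct_linear_and_constant_phase(distorted)
print(a_est, b_est)   # close to 0.05 and 0.7 (within the histogram bin width for b)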
Borycki, Dawid; Kholiqov, Oybek; Srinivasan, Vivek J
2017-02-01
Interferometric near-infrared spectroscopy (iNIRS) is a new technique that measures time-of-flight- (TOF-) resolved autocorrelations in turbid media, enabling simultaneous estimation of optical and dynamical properties. Here, we demonstrate reflectance-mode iNIRS for noninvasive monitoring of a mouse brain in vivo. A method for more precise quantification with less static interference from superficial layers, based on separating static and dynamic components of the optical field autocorrelation, is presented. Absolute values of absorption, reduced scattering, and blood flow index (BFI) are measured, and changes in BFI and absorption are monitored during a hypercapnic challenge. Absorption changes from TOF-resolved iNIRS agree with absorption changes from continuous wave NIRS analysis, based on TOF-integrated light intensity changes, an effective path length, and the modified Beer-Lambert Law. Thus, iNIRS is a promising approach for quantitative and noninvasive monitoring of perfusion and optical properties in vivo.
Ring polymer dynamics in curved spaces
NASA Astrophysics Data System (ADS)
Wolf, S.; Curotto, E.
2012-07-01
We formulate an extension of the ring polymer dynamics approach to curved spaces using stereographic projection coordinates. We test the theory by simulating the particle in a ring, {T}^1, mapped by a stereographic projection using three potentials. Two of these are quadratic, and one is a nonconfining sinusoidal model. We propose a new class of algorithms for the integration of the ring polymer Hamilton equations in curved spaces. These are designed to improve the energy conservation of symplectic integrators based on the split operator approach. For manifolds, the position-position autocorrelation function can be formulated in numerous ways. We find that the position-position autocorrelation function computed from configurations in the Euclidean space {R}^2 that contains {T}^1 as a submanifold has the best statistical properties. The agreement with exact results obtained with vector space methods is excellent for all three potentials, for all values of time in the interval simulated, and for a relatively broad range of temperatures.
NASA Astrophysics Data System (ADS)
Wang, He; Otsu, Hideaki; Sakurai, Hiroyoshi; Ahn, DeukSoon; Aikawa, Masayuki; Ando, Takashi; Araki, Shouhei; Chen, Sidong; Chiga, Nobuyuki; Doornenbal, Pieter; Fukuda, Naoki; Isobe, Tadaaki; Kawakami, Shunsuke; Kawase, Shoichiro; Kin, Tadahiro; Kondo, Yosuke; Koyama, Shupei; Kubono, Shigeru; Maeda, Yukie; Makinaga, Ayano; Matsushita, Masafumi; Matsuzaki, Teiichiro; Michimasa, Shinichiro; Momiyama, Satoru; Nagamine, Shunsuke; Nakamura, Takashi; Nakano, Keita; Niikura, Megumi; Ozaki, Tomoyuki; Saito, Atsumi; Saito, Takeshi; Shiga, Yoshiaki; Shikata, Mizuki; Shimizu, Yohei; Shimoura, Susumu; Sumikama, Toshiyuki; Söderström, Pär-Anders; Suzuki, Hiroshi; Takeda, Hiroyuki; Takeuchi, Satoshi; Taniuchi, Ryo; Togano, Yasuhiro; Tsubota, Junichi; Uesaka, Meiko; Watanabe, Yasushi; Watanabe, Yukinobu; Wimmer, Kathrin; Yamamoto, Tatsuya; Yoshida, Koichi
2017-09-01
Spallation reactions for the long-lived fission products 137Cs, 90Sr and 107Pd have been studied for the purpose of nuclear waste transmutation. The cross sections on the proton- and deuteron-induced spallation were obtained in inverse kinematics at the RIKEN Radioactive Isotope Beam Factory. Both the target and energy dependences of cross sections have been investigated systematically, and the cross-section differences between the proton and deuteron are found to be larger for lighter fragments. The experimental data are compared with the SPACS semi-empirical parameterization and the PHITS calculations including both the intra-nuclear cascade and evaporation processes.
2017 First Nations Launch Competition Winners visit Kennedy Spac
2017-08-02
A group of 19 college students recently visited NASA's Kennedy Space Center as winners of the First Nations Launch competition in Wisconsin. They were part of teams that successfully flew high-powered rockets, earning them an opportunity to visit the Florida spaceport. During their visit, they toured the Vehicle Assembly Building, Launch Control Center and the Kennedy visitor complex. The competition is supported by NASA and the Wisconsin Space Grant Consortium. It provides an opportunity for students attending tribal colleges or universities, or who are members of a campus American Indian Science and Engineering Society, or AISES, chapter to design, build and launch a rocket at a competition in Kansasville, Wisconsin.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through Box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, which is not true of the minimum temperature series, so both series are modelled separately. The candidate SARIMA model is chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA(1, 0, 0) × (0, 1, 1)_12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method, together with the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostic checking through ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals and using normal diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years with the selected model.
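A sketch of fitting the kind of SARIMA(1,0,0)×(0,1,1)_12 model selected above with statsmodels. The monthly series below is synthetic; a real application would work with the log-transformed observed series and would compare candidate orders by BIC, as the abstract describes.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly series with a seasonal cycle, standing in for log temperature.
rng = np.random.default_rng(0)
n = 35 * 12
t = np.arange(n)
y = 3.4 + 0.1 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.02, n)
y = pd.Series(y, index=pd.date_range("1981-01-01", periods=n, freq="MS"))

# SARIMA(1,0,0)x(0,1,1)_12; other candidate orders could be compared on res.bic
# to mimic the minimum-BIC selection step.
model = SARIMAX(y, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)
print(res.summary().tables[1])
print("BIC:", res.bic)

# 3-year (36-month) forecast, as in the abstract.
forecast = res.get_forecast(steps=36)
print(forecast.predicted_mean.head())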
Spatial autocorrelation analysis of health care hotspots in Taiwan in 2006
2009-01-01
Background Spatial analytical techniques and models are often used in epidemiology to identify spatial anomalies (hotspots) in disease regions. These analytical approaches can be used to identify not only the location of such hotspots, but also their spatial patterns. Methods In this study, we utilize spatial autocorrelation methodologies, including Global Moran's I and Local Getis-Ord statistics, to describe and map spatial clusters, and areas in which these are situated, for the 20 leading causes of death in Taiwan. In addition, we fit a logistic regression model to test the characteristics of similarity and dissimilarity by gender. Results The genders are compared in an effort to formulate the common spatial risk. The mean found by local spatial autocorrelation analysis is utilized to identify spatial cluster patterns. There is naturally great interest in discovering the relationship between the leading causes of death and well-documented spatial risk factors. For example, in Taiwan, we found that the geographical distribution of clusters with a high prevalence of tuberculosis closely corresponds to the location of aboriginal townships. Conclusions Cluster mapping helps to clarify issues such as the spatial aspects of both internal and external correlations for leading health care events. This is of great aid in assessing spatial risk factors, which in turn facilitates the planning of the most advantageous types of health care policies and implementation of effective health care services. PMID:20003460
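A sketch of the local Getis-Ord Gi* statistic used for hot-spot mapping in studies like this one, in its standard z-score form with binary weights that include the focal unit itself. The small grid and the counts are placeholders, not the Taiwan data.

import numpy as np

def getis_ord_gi_star(x, W):
    """Local Gi* z-scores. W is an n x n binary weights matrix with w_ii = 1
    (including the focal unit is what makes it the 'star' version)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    Wi = W.sum(axis=1)                  # sum of weights for each i
    S1i = (W ** 2).sum(axis=1)          # sum of squared weights for each i
    num = W @ x - xbar * Wi
    den = s * np.sqrt((n * S1i - Wi ** 2) / (n - 1))
    return num / den

# Toy example: 5 areas in a row, rook contiguity plus self-weight.
W = np.eye(5)
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1
counts = np.array([50.0, 48.0, 10.0, 9.0, 8.0])   # e.g. cause-specific death counts
print(getis_ord_gi_star(counts, W))   # positive z -> hot spot, negative -> cold spot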
2014-01-01
Background Measures of similarity for chemical molecules have been developed since the dawn of chemoinformatics. Molecular similarity has been measured by a variety of methods including molecular descriptor based similarity, common molecular fragments, graph matching and 3D methods such as shape matching. Similarity measures are widespread in practice and have proven to be useful in drug discovery. Because of our interest in electrostatics and high throughput ligand-based virtual screening, we sought to exploit the information contained in atomic coordinates and partial charges of a molecule. Results A new molecular descriptor based on partial charges is proposed. It uses the autocorrelation function and linear binning to encode all atoms of a molecule into two rotation-translation invariant vectors. Combined with a scoring function, the descriptor allows rank-ordering of a database of compounds versus a query molecule. The proposed implementation is called ACPC (AutoCorrelation of Partial Charges) and released in open source. Extensive retrospective ligand-based virtual screening experiments were performed, and the results were compared with other methods in order to validate the method and associated protocol. Conclusions While it is a simple method, it performed remarkably well in experiments. At an average speed of 1649 molecules per second, it reached an average median area under the curve of 0.81 on 40 different targets; hence validating the proposed protocol and implementation. PMID:24887178
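The core of such a descriptor, an autocorrelation of partial charges over linearly binned interatomic distances, can be sketched in a few lines. The coordinates, charges, bin width and cutoff below are random placeholders; the released ACPC implementation handles scoring and I/O that a sketch omits.

import numpy as np

def charge_autocorrelation(coords, charges, dx=0.5, max_dist=12.0):
    """Rotation/translation-invariant vector: for each distance bin, the sum of
    q_i * q_j over all atom pairs (i < j) whose distance falls in that bin."""
    coords = np.asarray(coords, dtype=float)
    charges = np.asarray(charges, dtype=float)
    i, j = np.triu_indices(len(charges), k=1)
    dists = np.linalg.norm(coords[i] - coords[j], axis=1)
    products = charges[i] * charges[j]
    bins = np.arange(0.0, max_dist + dx, dx)
    vec, _ = np.histogram(dists, bins=bins, weights=products)
    return vec

rng = np.random.default_rng(0)
coords = rng.uniform(-5, 5, size=(30, 3))     # fake molecule, 30 atoms
charges = rng.normal(0, 0.3, size=30)         # fake partial charges
print(charge_autocorrelation(coords, charges).shape)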
Effect of the spatial autocorrelation of empty sites on the evolution of cooperation
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Li; Hou, Dongshuang
2016-02-01
An evolutionary game model is constructed to investigate the effect of the spatial autocorrelation of empty sites on the evolution of cooperation. Each individual is assumed to imitate the strategy of the one who scores the highest in its neighborhood including itself. Simulation results illustrate that the evolutionary dynamics based on the Prisoner's Dilemma game (PD) depends strongly on the initial conditions, while the Snowdrift game (SD) is hardly affected by them. A high degree of autocorrelation of empty sites is beneficial for the evolution of cooperation in the PD, whereas in the SD its effect varies with the temptation-to-defect parameter. Moreover, for the repeated game with three strategies, 'always defect' (ALLD), 'tit-for-tat' (TFT), and 'always cooperate' (ALLC), simulations reveal that a remarkable evolutionary diversity appears as the temptation to defect and the probability of playing the next round of the game are varied. The spatial autocorrelation of empty sites can have profound effects on evolutionary dynamics (equilibrium and oscillation) and spatial distribution.
NASA Astrophysics Data System (ADS)
Prudnikov, V. V.; Prudnikov, P. V.; Popov, I. S.
2018-03-01
A Monte Carlo numerical simulation of the specific features of nonequilibrium critical behavior is carried out for the two-dimensional structurally disordered XY model during its evolution from a low-temperature initial state. On the basis of the analysis of the two-time dependence of autocorrelation functions and dynamic susceptibility for systems with spin concentrations of p = 1.0, 0.9, and 0.6, aging phenomena, characterized by a slowing down of the system's relaxation with increasing waiting time, and violation of the fluctuation-dissipation theorem (FDT) are revealed. The values of the universal limiting fluctuation-dissipation ratio (FDR) are obtained for the systems considered. As a result of the analysis of the two-time scaling dependence for spin-spin and connected spin autocorrelation functions, it is found that structural defects lead to subaging phenomena in the behavior of the spin-spin autocorrelation function and superaging phenomena in the behavior of the connected spin autocorrelation function.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum is then formed from the image autocorrelation function. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model achieves higher accuracy in SNR estimation.
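A minimal sketch of the Levinson-Durbin recursion applied to an empirical autocorrelation sequence, which is the building block behind the ACLDR idea of modelling the noise-free signal spectrum from image autocorrelations. The signal below is synthetic, and mapping the resulting AR coefficients to an SNR estimate follows the paper and is not reproduced here.

import numpy as np

def autocorr(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(max_lag + 1)])

def levinson_durbin(r):
    """Solve the Yule-Walker equations for AR coefficients from autocorrelations r[0..p]."""
    p = len(r) - 1
    a = np.zeros(p)
    err = r[0]
    for k in range(p):
        acc = r[k + 1] - np.dot(a[:k], r[k:0:-1][:k])   # reflection-coefficient numerator
        kappa = acc / err
        a_new = a.copy()
        a_new[k] = kappa
        a_new[:k] = a[:k] - kappa * a[:k][::-1]
        a = a_new
        err *= (1.0 - kappa ** 2)
    return a, err          # AR coefficients and final prediction-error power

rng = np.random.default_rng(0)
row = np.sin(0.2 * np.arange(512)) + rng.normal(0, 0.3, 512)   # one noisy image line
r = autocorr(row, max_lag=8)
coeffs, noise_power = levinson_durbin(r)
print(coeffs, noise_power)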
Hoover, Brian G; Gamiz, Victor L
2006-02-01
The scalar bidirectional reflectance distribution function (BRDF) due to a perfectly conducting surface with roughness and autocorrelation width comparable with the illumination wavelength is derived from coherence theory on the assumption of a random reflective phase screen and an expansion valid for large effective roughness. A general quadratic expansion of the two-dimensional isotropic surface autocorrelation function near the origin yields representative Cauchy and Gaussian BRDF solutions and an intermediate general solution as the sum of an incoherent component and a nonspecular coherent component proportional to an integral of the plasma dispersion function in the complex plane. Plots illustrate agreement of the derived general solution with original bistatic BRDF data due to a machined aluminum surface, and comparisons are drawn with previously published data in the examination of variations with incident angle, roughness, illumination wavelength, and autocorrelation coefficients in the bistatic and monostatic geometries. The general quadratic autocorrelation expansion provides a BRDF solution that smoothly interpolates between the well-known results of the linear and parabolic approximations.
Chemometric modeling of 5-Phenylthiophenecarboxylic acid derivatives as anti-rheumatic agents.
Adhikari, Nilanjan; Jana, Dhritiman; Halder, Amit K; Mondal, Chanchal; Maiti, Milan K; Jha, Tarun
2012-09-01
Arthritis involves joint inflammation, synovial proliferation and cartilage damage. Interleukin-1 is involved in the acute and chronic inflammatory mechanisms of arthritis. Non-steroidal anti-inflammatory drugs can produce symptomatic relief but do not act on the underlying mechanisms of arthritis. Disease-modifying anti-rheumatic drugs reduce the symptoms of arthritis, such as a decrease in pain and disability scores and reductions in swollen joints, articular index and serum concentrations of acute phase proteins. Recently, some literature reports have appeared on molecular modeling of anti-rheumatic agents. We have tried chemometric modeling through 2D-QSAR studies on a dataset of fifty-one compounds, out of which forty-four 5-Phenylthiophenecarboxylic acid derivatives have IL-1 inhibitory activity and forty-six 5-Phenylthiophenecarboxylic acid derivatives have %AIA suppressive activity. The work was done to find out the structural requirements of these anti-rheumatic agents. 2D-QSAR models were generated from 2D and 3D descriptors using multiple linear regression and the partial least squares method, with IL-1 antagonism considered as the biological activity parameter. Statistically significant models were developed on the training set derived by k-means cluster analysis. Sterimol parameters, electronic interaction at atom number 9, 2D autocorrelation descriptors, an information content descriptor, the average connectivity index chi-3, radial distribution function, Balaban 3D index and 3D-MoRSE descriptors were found to play crucial roles in modulating IL-1 inhibitory activity. 2D autocorrelation descriptors like the Broto-Moreau autocorrelation of topological structure of lag 3 weighted by atomic van der Waals volumes and the Geary autocorrelation of lag 7 weighted by atomic Sanderson electronegativities, and 3D-MoRSE descriptors like 3D-MoRSE-signal 22 related to atomic van der Waals volumes, 3D-MoRSE-signal 28 related to atomic van der Waals volumes and 3D-MoRSE-signal 9, which was unweighted, were found to play important roles in modelling %AIA suppressive activity.
Aragón, Pedro; Fitze, Patrick S.
2014-01-01
Geographical body size variation has long interested evolutionary biologists, and a range of mechanisms have been proposed to explain the observed patterns. It is considered to be more puzzling in ectotherms than in endotherms, and integrative approaches are necessary for testing non-exclusive alternative mechanisms. Using lacertid lizards as a model, we adopted an integrative approach, testing different hypotheses for both sexes while incorporating temporal, spatial, and phylogenetic autocorrelation at the individual level. We used data on the Spanish Sand Racer species group from a field survey to disentangle different sources of body size variation through environmental and individual genetic data, while accounting for temporal and spatial autocorrelation. A variation partitioning method was applied to separate independent and shared components of ecology and phylogeny and to estimate their significance. Then, we refined our models by controlling for relevant independent components. The pattern was consistent with the geographical Bergmann's cline and the experimental temperature-size rule: adults were larger at lower temperatures (and/or higher elevations). This result was confirmed with an additional multi-year independent data set derived from the literature. Variation partitioning showed no sex differences in phylogenetic inertia but showed sex differences in the independent component of ecology, primarily due to growth differences. Interestingly, only after controlling for independent components did primary productivity also emerge as an important predictor explaining size variation in both sexes. This study highlights the importance of integrating individual-based genetic information, relevant ecological parameters, and temporal and spatial autocorrelation in sex-specific models to detect potentially important hidden effects. Our individual-based approach, devoted to extracting and controlling for independent components, was useful for revealing hidden effects linked with alternative non-exclusive hypotheses, such as those of primary productivity. Also, including the measurement date allowed us to disentangle and control for short-term temporal autocorrelation reflecting sex-specific growth plasticity. PMID:25090025
2012-01-01
Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides and the whole signals). At each stage, the differences between the Ad1 and Ad2 values and the corresponding reference values were compared with the minimum detectable difference, MDD, of the index. If that difference was less than MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
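A sketch of the autocorrelation-based step (Ad1) and stride (Ad2) regularity indices from a vertical acceleration signal, in the spirit of the procedure this kind of study builds on. The signal below is synthetic with an assumed step time; real data would first be segmented to exclude gait initiation and termination, as the abstract discusses.

import numpy as np

def regularity_indices(acc, fs, step_time_guess=0.55):
    """Ad1/Ad2: normalized unbiased autocorrelation at the dominant step and stride lags."""
    acc = np.asarray(acc, dtype=float) - np.mean(acc)
    n = acc.size
    lags = np.arange(1, n // 2)
    ac = np.array([np.dot(acc[:n - k], acc[k:]) / (n - k) for k in lags])
    ac /= np.dot(acc, acc) / n                      # normalize so that lag 0 equals 1

    def peak_near(t_expected, tol=0.3):
        idx = np.where(np.abs(lags / fs - t_expected) < tol * t_expected)[0]
        return ac[idx].max()

    ad1 = peak_near(step_time_guess)                # step regularity
    ad2 = peak_near(2 * step_time_guess)            # stride regularity
    return ad1, ad2

rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0, 30, 1 / fs)
acc = (np.sin(2 * np.pi * t / 0.55)                 # step-frequency component
       + 0.3 * np.sin(2 * np.pi * t / 1.10)         # stride-frequency component
       + 0.1 * rng.normal(size=t.size))
print(regularity_indices(acc, fs))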
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
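A sketch of the Holt-Winters forecasting-and-residuals workflow with statsmodels, mirroring the idea of subtracting forecasts from observations to obtain residuals for algorithmic input. The daily syndromic series below is synthetic with a day-of-week effect; model options such as damping or multiplicative seasonality are left at simple defaults and are not the paper's tuned settings.

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic daily counts with a slow trend and day-of-week seasonality.
rng = np.random.default_rng(0)
n = 2 * 365
t = np.arange(n)
y = 50 + 0.01 * t + 8 * np.sin(2 * np.pi * t / 7) + rng.poisson(5, n)
y = pd.Series(y.astype(float), index=pd.date_range("2006-01-01", periods=n, freq="D"))

train, test = y[:-56], y[-56:]
model = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=7)
fit = model.fit()
forecast = fit.forecast(len(test))

residuals = test - forecast
medape = np.median(np.abs(residuals / test)) * 100   # median absolute percent error
lag1 = residuals.autocorr(lag=1)                     # residual serial correlation
print(f"MedAPE = {medape:.1f}%, lag-1 residual autocorrelation = {lag1:.2f}")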
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madanipour, Khosro; Tavassoly, Mohammad T
2009-02-01
We show theoretically and verify experimentally that the modulation transfer function (MTF) of a printing system can be determined by measuring the autocorrelation of a printed Ronchi grating. In practice, two similar Ronchi gratings are printed on two transparencies and the transparencies are superimposed with parallel grating lines. Then, the gratings are uniformly illuminated and the transmitted light from a large section is measured versus the displacement of one grating with respect to the other in a grating pitch interval. This measurement provides the required autocorrelation function for determination of the MTF.
Hydrodynamics of confined colloidal fluids in two dimensions
NASA Astrophysics Data System (ADS)
Sané, Jimaan; Padding, Johan T.; Louis, Ard A.
2009-05-01
We apply a hybrid molecular dynamics and mesoscopic simulation technique to study the dynamics of two-dimensional colloidal disks in confined geometries. We calculate the velocity autocorrelation functions and observe the predicted t^-1 long-time hydrodynamic tail that characterizes unconfined fluids, as well as more complex oscillating behavior and negative tails for strongly confined geometries. Because the t^-1 tail of the velocity autocorrelation function is cut off for longer times in finite systems, the related diffusion coefficient does not diverge but instead depends logarithmically on the overall size of the system. The Langevin equation gives a poor approximation to the velocity autocorrelation function at both short and long times.
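A sketch of computing a velocity autocorrelation function from particle velocities and integrating it (Green-Kubo) to obtain a diffusion coefficient, the quantities discussed above. The velocities here come from a simple placeholder update, not a hydrodynamic simulation.

import numpy as np

def velocity_autocorrelation(v, max_lag):
    """v: array of shape (n_steps, n_particles, dim). Returns C(t) = <v(t0+t).v(t0)>,
    averaged over particles and time origins."""
    n_steps = v.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.sum(v[: n_steps - lag] * v[lag:], axis=-1)   # dot product per particle
        c[lag] = dots.mean()
    return c

rng = np.random.default_rng(0)
dt = 0.01
n_steps, n_part = 5000, 50
v = np.zeros((n_steps, n_part, 2))
for t in range(1, n_steps):
    # placeholder Ornstein-Uhlenbeck-like velocity update in 2D
    v[t] = 0.98 * v[t - 1] + rng.normal(0, 0.1, size=(n_part, 2))

vacf = velocity_autocorrelation(v, max_lag=500)
# Green-Kubo in d dimensions: D = (1/d) * integral of <v(t).v(0)> dt (rectangle rule here)
D = vacf.sum() * dt / 2.0
print(f"VACF at lag 0: {vacf[0]:.4f}, at lag 100: {vacf[100]:.4f}, D ~ {D:.4f}")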
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
Analysis of spatial autocorrelation patterns of heavy and super-heavy rainfall in Iran
NASA Astrophysics Data System (ADS)
Rousta, Iman; Doostkamian, Mehdi; Haghighi, Esmaeil; Ghafarian Malamiri, Hamid Reza; Yarahmadi, Parvane
2017-09-01
Rainfall is a highly variable climatic element, and rainfall-related changes occur in spatial and temporal dimensions within a regional climate. The purpose of this study is to investigate the spatial autocorrelation changes of Iran's heavy and super-heavy rainfall over the past 40 years. For this purpose, the daily rainfall data of 664 meteorological stations between 1971 and 2011 are used. To analyze the changes in rainfall within a decade, geostatistical techniques like spatial autocorrelation analysis of hot spots, based on the Getis-Ord Gi statistic, are employed. Furthermore, programming features in MATLAB, Surfer, and GIS are used. The results indicate that the Caspian coast, the northwest and west of the western foothills of the Zagros Mountains of Iran, the inner regions of Iran, and southern parts of Southeast and Northeast Iran, have the highest likelihood of heavy and super-heavy rainfall. The spatial pattern of heavy rainfall shows that, despite its oscillation in different periods, the maximum positive spatial autocorrelation pattern of heavy rainfall includes areas of the west, northwest and west coast of the Caspian Sea. On the other hand, a negative spatial autocorrelation pattern of heavy rainfall is observed in central Iran and parts of the east, particularly in Zabul. Finally, it is found that patterns of super-heavy rainfall are similar to those of heavy rainfall.
Autocorrelations of stellar light and mass at z ~ 0 and ~1: from SDSS to DEEP2
NASA Astrophysics Data System (ADS)
Li, Cheng; White, Simon D. M.; Chen, Yanmei; Coil, Alison L.; Davis, Marc; De Lucia, Gabriella; Guo, Qi; Jing, Y. P.; Kauffmann, Guinevere; Willmer, Christopher N. A.; Zhang, Wei
2012-01-01
We present measurements of projected autocorrelation functions wp(rp) for the stellar mass of galaxies and for their light in the U, B and V bands, using data from the third data release of the DEEP2 Galaxy Redshift Survey and the final data release of the Sloan Digital Sky Survey (SDSS). We investigate the clustering bias of stellar mass and light by comparing these to projected autocorrelations of dark matter estimated from the Millennium Simulations (MS) at z = 1 and 0.07, the median redshifts of our galaxy samples. All of the autocorrelation and bias functions show systematic trends with spatial scale and waveband which are impressively similar at the two redshifts. This shows that the well-established environmental dependence of stellar populations in the local Universe is already in place at z = 1. The recent MS-based galaxy formation simulation of Guo et al. reproduces the scale-dependent clustering of luminosity to an accuracy better than 30 per cent in all bands and at both redshifts, but substantially overpredicts mass autocorrelations at separations below about 2 Mpc. Further comparison of the shapes of our stellar mass bias functions with those predicted by the model suggests that both the SDSS and DEEP2 data prefer a fluctuation amplitude of σ8 ≈ 0.8 rather than the σ8 = 0.9 assumed by the MS.
Risk-based transfer responses to climate change, simulated through autocorrelated stochastic methods
NASA Astrophysics Data System (ADS)
Kirsch, B.; Characklis, G. W.
2009-12-01
Maintaining municipal water supply reliability despite growing demands can be achieved through a variety of mechanisms, including supply strategies such as temporary transfers. However, much of the attention on transfers has been focused on market-based transfers in the western United States largely ignoring the potential for transfers in the eastern U.S. The different legal framework of the eastern and western U.S. leads to characteristic differences between their respective transfers. Western transfers tend to be agricultural-to-urban and involve raw, untreated water, with the transfer often involving a simple change in the location and/or timing of withdrawals. Eastern transfers tend to be contractually established urban-to-urban transfers of treated water, thereby requiring the infrastructure to transfer water between utilities. Utilities require the tools to be able to evaluate transfer decision rules and the resulting expected future transfer behavior. Given the long-term planning horizons of utilities, potential changes in hydrologic patterns due to climate change must be considered. In response, this research develops a method for generating a stochastic time series that reproduces the historic autocorrelation and can be adapted to accommodate future climate scenarios. While analogous in operation to an autoregressive model, this method reproduces the seasonal autocorrelation structure, as opposed to assuming the strict stationarity produced by an autoregressive model. Such urban-to-urban transfers are designed to be rare, transient events used primarily during times of severe drought, and incorporating Monte Carlo techniques allows for the development of probability distributions of likely outcomes. This research evaluates a system risk-based, urban-to-urban transfer agreement between three utilities in the Triangle region of North Carolina. Two utilities maintain their own surface water supplies in adjoining watersheds and look to obtain transfers via interconnections to a third utility with access to excess supply. The stochastic generation method is adapted to maintain the cross-correlation of inflows between watersheds. Risk-based decision rules are developed to govern transfers based upon the current level of risk to the water supply. This work determines how expected transfer behavior changes under four future climate scenarios assuming several different risk-thresholds.
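One standard way to generate synthetic monthly flows that preserve month-to-month (seasonal) lag-1 autocorrelation is a Thomas-Fiering-type periodic AR(1) scheme, sketched below. This is offered as a generic illustration of the kind of generator described above, not the authors' exact method, and the monthly statistics are made up.

import numpy as np

def thomas_fiering(mu, sigma, rho, n_years, seed=0):
    """Periodic AR(1) generator.

    mu, sigma: length-12 arrays of monthly means and standard deviations.
    rho: length-12 array, rho[m] = correlation between month m and month m-1.
    Returns a synthetic monthly series of length 12 * n_years.
    """
    rng = np.random.default_rng(seed)
    out = np.empty(12 * n_years)
    x_prev, m_prev = mu[11], 11          # start from the December mean
    for t in range(out.size):
        m = t % 12
        eps = rng.standard_normal()
        out[t] = (mu[m]
                  + rho[m] * (sigma[m] / sigma[m_prev]) * (x_prev - mu[m_prev])
                  + sigma[m] * np.sqrt(max(1.0 - rho[m] ** 2, 0.0)) * eps)
        x_prev, m_prev = out[t], m
    return out

# Made-up monthly inflow statistics (units arbitrary).
mu = 100 + 40 * np.sin(2 * np.pi * np.arange(12) / 12)
sigma = 10 + 5 * np.cos(2 * np.pi * np.arange(12) / 12)
rho = np.full(12, 0.6)
flows = thomas_fiering(mu, sigma, rho, n_years=1000)
print(flows[:12])

Cross-correlation of inflows between adjoining watersheds, which the abstract also requires, would need correlated innovations (e.g., multivariate normal eps) rather than the independent draws used here.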
Comparison of two fractal interpolation methods
NASA Astrophysics Data System (ADS)
Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo
2017-03-01
As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface with peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, or 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness takes both positive and negative values, fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the midpoint displacement simulation is not periodic and shows prominent randomness, making it suitable for simulating aperiodic surfaces, while the Weierstrass-Mandelbrot fractal function simulation has strong periodicity, making it suitable for simulating periodic surfaces.
The Generalized Multilevel Facets Model for Longitudinal Data
ERIC Educational Resources Information Center
Hung, Lai-Fa; Wang, Wen-Chung
2012-01-01
In the human sciences, ability tests or psychological inventories are often repeatedly conducted to measure growth. Standard item response models do not take into account possible autocorrelation in longitudinal data. In this study, the authors propose an item response model to account for autocorrelation. The proposed three-level model consists…
Exploring Online Learning Data Using Fractal Dimensions. Research Report. ETS RR-17-15
ERIC Educational Resources Information Center
Guo, Hongwen
2017-01-01
Data collected from online learning and tutoring systems for individual students showed strong autocorrelation or dependence because of content connection, knowledge-based dependency, or persistence of learning behavior. When the response data show little dependence or negative autocorrelations for individual students, it is suspected that…
VizieR Online Data Catalog: Molecular clumps in W51 giant molecular cloud (Parsons+, 2012)
NASA Astrophysics Data System (ADS)
Parsons, H.; Thompson, M. A.; Clark, J. S.; Chrysostomou, A.
2013-04-01
The W51 GMC was mapped using the Heterodyne Array Receiver Programme (HARP) receiver with the back-end digital autocorrelator spectrometer Auto-Correlation Spectral Imaging System (ACSIS) on the James Clerk Maxwell Telescope (JCMT). Data were taken in 2008 May. (2 data files).
Spatial Autocorrelation And Autoregressive Models In Ecology
Jeremy W. Lichstein; Theodore R. Simons; Susan A. Shriner; Kathleen E. Franzreb
2003-01-01
Abstract. Recognition and analysis of spatial autocorrelation has defined a new paradigm in ecology. Attention to spatial pattern can lead to insights that would have been otherwise overlooked, while ignoring space may lead to false conclusions about ecological relationships. We used Gaussian spatial autoregressive models, fit with widely available...
A comparative simulation study of AR(1) estimators in short time series.
Krone, Tanja; Albers, Casper J; Timmerman, Marieke E
2017-01-01
Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, C-statistic, ordinary least squares estimator (OLS) and maximum likelihood estimator (MLE), and a Bayesian method, considering flat (Bf) and symmetrized reference (Bsr) priors. In a completely crossed experimental design we vary lengths of time series (i.e., T = 10, 25, 40, 50 and 100) and autocorrelation (from -0.90 to 0.90 with steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. The power in different conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all measurements is poor, as expected. In Study 2, we study the robustness of the methods under misspecification by generating the data according to an ARMA(1,1) model, but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., Bsr and MLE. The bias gets larger when the non-modelled moving average parameter becomes larger. Both the variability and power show dependency on the non-modelled parameter. The differences between the two estimation methods are negligible for all measurements.
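A sketch comparing two of the simpler AR(1) estimators discussed above, the frequentist r1 (lag-1 sample autocorrelation) and the OLS regression of x_t on x_{t-1}, on short simulated series; the Bayesian and C-statistic estimators are not reproduced here, and the simulation settings are illustrative.

import numpy as np

def r1_estimator(x):
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

def ols_ar1(x):
    x = np.asarray(x, dtype=float)
    y, lagged = x[1:], x[:-1]
    y = y - y.mean()
    lagged = lagged - lagged.mean()
    return np.dot(lagged, y) / np.dot(lagged, lagged)

def simulate_ar1(phi, T, rng, burn=100):
    e = rng.standard_normal(T + burn)
    x = np.zeros(T + burn)
    for t in range(1, T + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]

rng = np.random.default_rng(0)
phi_true, T, n_rep = 0.5, 25, 5000
est_r1 = np.array([r1_estimator(simulate_ar1(phi_true, T, rng)) for _ in range(n_rep)])
est_ols = np.array([ols_ar1(simulate_ar1(phi_true, T, rng)) for _ in range(n_rep)])
print(f"r1 : mean={est_r1.mean():.3f}, sd={est_r1.std():.3f}")
print(f"OLS: mean={est_ols.mean():.3f}, sd={est_ols.std():.3f}")
# Both estimators are biased toward zero in short series; the bias shrinks as T grows.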
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim
2017-06-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
Statistical procedures for evaluating daily and monthly hydrologic model predictions
Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.
2004-01-01
The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically accounted for the non-normal distribution of, and dependence between, data points in the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
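Two of the evaluation statistics mentioned above, the Nash-Sutcliffe efficiency and the lag-1 autocorrelation of the residual series, can be sketched as follows; the "observed" and "predicted" arrays are placeholders standing in for daily streamflow and SWAT output.

import numpy as np

def nash_sutcliffe(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

def lag1_autocorrelation(x):
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

rng = np.random.default_rng(0)
t = np.arange(365)
observed = 10 + 5 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 1.0, t.size)
predicted = observed + rng.normal(0, 1.5, t.size)       # fake model output

print(f"NSE = {nash_sutcliffe(observed, predicted):.2f}")
residuals = observed - predicted
print(f"lag-1 autocorrelation of residuals = {lag1_autocorrelation(residuals):.2f}")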
Time series analysis of particle tracking data for molecular motion on the cell membrane.
Ying, Wenxia; Huerta, Gabriel; Steinberg, Stanly; Zúñiga, Martha
2009-11-01
Biophysicists use single particle tracking (SPT) methods to probe the dynamic behavior of individual proteins and lipids in cell membranes. The mean squared displacement (MSD) has proven to be a powerful tool for analyzing the data and drawing conclusions about membrane organization, including features like lipid rafts, protein islands, and confinement zones defined by cytoskeletal barriers. Here, we implement time series analysis as a new analytic tool to analyze further the motion of membrane proteins. The experimental data track the motion of 40 nm gold particles bound to Class I major histocompatibility complex (MHCI) molecules on the membranes of mouse hepatoma cells. Our first novel result is that the tracks are significantly autocorrelated. Because of this, we developed linear autoregressive models to elucidate the autocorrelations. Estimates of the signal to noise ratio for the models show that the autocorrelated part of the motion is significant. Next, we fit the probability distributions of jump sizes with four different models. The first model is a general Weibull distribution that shows that the motion is characterized by an excess of short jumps as compared to a normal random walk. We also fit the data with a chi distribution which provides a natural estimate of the dimension d of the space in which a random walk is occurring. For the biological data, the estimates satisfy 1 < d < 2, implying that particle motion is not confined to a line, but also does not occur freely in the plane. The dimension gives a quantitative estimate of the amount of nanometer scale obstruction met by a diffusing molecule. We introduce a new distribution and use the generalized extreme value distribution to show that the biological data also have an excess of long jumps as compared to normal diffusion. These fits provide novel estimates of the microscopic diffusion constant. Previous MSD analyses of SPT data have provided evidence for nanometer-scale confinement zones that restrict lateral diffusion, supporting the notion that plasma membrane organization is highly structured. Our demonstration that membrane protein motion is autocorrelated and is characterized by an excess of both short and long jumps reinforces the concept that the membrane environment is heterogeneous and dynamic. Autocorrelation analysis and modeling of the jump distributions are powerful new techniques for the analysis of SPT data and the development of more refined models of membrane organization. The time series analysis also provides several methods of estimating the diffusion constant in addition to the constant provided by the mean squared displacement. The mean squared displacement for most of the biological data shows a power law behavior rather than the linear behavior of Brownian motion. In this case, we introduce the notion of an instantaneous diffusion constant. All of the diffusion constants show a strong consistency for most of the biological data.
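A sketch of two basic quantities used in this kind of SPT analysis, the time-averaged mean squared displacement and the frame-to-frame jump sizes. The 2D track below is a plain random walk rather than real membrane-protein data, so the fitted exponent is close to 1 by construction.

import numpy as np

def time_averaged_msd(track, max_lag):
    """track: (n_frames, 2) positions. Returns MSD(tau) for tau = 1..max_lag frames."""
    track = np.asarray(track, dtype=float)
    n = track.shape[0]
    return np.array([np.mean(np.sum((track[lag:] - track[:n - lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
steps = rng.normal(0, 0.05, size=(2000, 2))     # synthetic frame-to-frame steps
track = np.cumsum(steps, axis=0)

msd = time_averaged_msd(track, max_lag=50)
jumps = np.linalg.norm(np.diff(track, axis=0), axis=1)

# For Brownian motion in 2D, MSD(tau) ~ 4*D*tau; the slope of a log-log fit is ~1,
# while subdiffusive membrane data would give a slope below 1.
slope = np.polyfit(np.log(np.arange(1, 51)), np.log(msd), 1)[0]
print(f"apparent exponent ~ {slope:.2f}, mean jump ~ {jumps.mean():.3f}")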
Investment Dynamics with Natural Expectations.
Fuster, Andreas; Hebert, Benjamin; Laibson, David
2010-01-01
We study an investment model in which agents have the wrong beliefs about the dynamic properties of fundamentals. Specifically, we assume that agents underestimate the rate of mean reversion. The model exhibits the following six properties: (i) Beliefs are excessively optimistic in good times and excessively pessimistic in bad times. (ii) Asset prices are too volatile. (iii) Excess returns are negatively autocorrelated. (iv) High levels of corporate profits predict negative future excess returns. (v) Real economic activity is excessively volatile; the economy experiences amplified investment cycles. (vi) Corporate profits are positively autocorrelated in the short run and negatively autocorrelated in the medium run. The paper provides an illustrative model of animal spirits, amplified business cycles, and excess volatility.
15 pixels digital autocorrelation spectrometer system
NASA Astrophysics Data System (ADS)
Lee, Changhoon; Kim, Hyo-Ryung; Kim, Kwang-Dong; Chung, Mun-Hee; Timoc, C.
2006-06-01
This paper describes the system configuration and some performance test results of the 15-pixel digital autocorrelation spectrometer to be used at the Taeduk Radio Astronomy Observatory (TRAO) of Korea. The autocorrelation spectrometer is enclosed in a 3-slot VXI module and controlled via a USB port by a back-end PC. The spectrometer system consists of the 4 band-pass filters unit, the digitizer, the 512 lags correlator, the clock distribution unit, and the USB controller. Here we describe the frequency accuracy and the root-mean-square noise characteristics of the spectrometer. After a calibration procedure, the spectrometer can be used as the back-end system at TRAO for the 3x5 focal-plane array receivers.
Single-channel autocorrelation functions: the effects of time interval omission.
Ball, F G; Sansom, M S
1988-01-01
We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
Quantitative fluorescence correlation spectroscopy on DNA in living cells
NASA Astrophysics Data System (ADS)
Hodges, Cameron; Kafle, Rudra P.; Meiners, Jens-Christian
2017-02-01
FCS is a fluorescence technique conventionally used to study the kinetics of fluorescent molecules in a dilute solution. Being a non-invasive technique, it is now drawing increasing interest for the study of more complex systems like the dynamics of DNA or proteins in living cells. Unlike an ordinary dye solution, the dynamics of macromolecules like proteins or entangled DNA in crowded environments is often slow and subdiffusive in nature. This in turn leads to longer residence times of the attached fluorophores in the excitation volume of the microscope, and artifacts from photobleaching abound that can easily obscure the signature of the molecular dynamics of interest and make quantitative analysis challenging. We discuss methods and procedures to make FCS applicable to quantitative studies of the dynamics of DNA in live prokaryotic and eukaryotic cells. The intensity autocorrelation function is computed from weighted arrival times of the photons on the detector in a way that maximizes the information content while simultaneously correcting for the effect of photobleaching, yielding an autocorrelation function that reflects only the underlying dynamics of the sample. This autocorrelation function in turn is used to calculate the mean square displacement of the fluorophores attached to DNA. The displacement data are more amenable to further quantitative analysis than the raw correlation functions. By using a suitable integral transform of the mean square displacement, we can then determine the viscoelastic moduli of the DNA in its cellular environment. The entire analysis procedure is extensively calibrated and validated using model systems and computational simulations.
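A sketch of the basic linear-lag intensity autocorrelation g2(tau) computed from binned photon counts, the quantity that the weighted-arrival-time estimator described above refines. The photon stream here is synthetic, and the bleaching correction and photon-weighting scheme are not reproduced.

import numpy as np

def g2_from_counts(counts, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2
    from equally spaced photon-count bins."""
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    mean_sq = counts.mean() ** 2
    return np.array([np.mean(counts[:n - lag] * counts[lag:]) / mean_sq
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
n_bins, dt = 200_000, 1e-5                    # 10 microsecond bins
# Slowly fluctuating emission rate (simple AR(1) fluctuation) plus Poisson photon noise.
fluct = np.zeros(n_bins)
for t in range(1, n_bins):
    fluct[t] = 0.999 * fluct[t - 1] + rng.normal(0, 0.005)
rate = np.maximum(50_000 * (1 + fluct), 0.0)
counts = rng.poisson(rate * dt)

g2 = g2_from_counts(counts, max_lag=199)
# Values above 1 at short lags decay toward 1 on the fluctuation correlation time scale.
print(g2[:5])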
Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping
2013-01-01
Because classical music has greatly affected our life and culture in its long history, it has attracted extensive attention from researchers to understand laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely, by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions of five representative classical music composers across 164 years from Bach, to Mozart, to Beethoven, to Mendelsohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach time to Mendelsohn/Chopin time. In particular, for the compositions of a composer, the positive and negative tails of a CDF of pitch fluctuations are distributed not only in power laws (with the scale-free property), but also in symmetry (namely, the probability of a treble following a bass and that of a bass following a treble are basically the same for each composer). The power-law exponent decreases as time elapses. Further, we also calculate the autocorrelation function of the pitch fluctuation. The autocorrelation function shows a power-law distribution for each composer. Especially, the power-law exponents vary with the composers, indicating their different levels of long-range correlation of notes. This work not only suggests a way to understand and develop music from a viewpoint of statistical physics, but also enriches the realm of traditional statistical physics by analyzing music.
Samuel A. Cushman; Michael Chase; Curtice Griffin
2005-01-01
Autocorrelation in animal movements can be both a serious nuisance to analysis and a source of valuable information about the scale and patterns of animal behavior, depending on the question and the techniques employed. In this paper we present an approach to analyzing the patterns of autocorrelation in animal movements that provides a detailed picture of seasonal...
Estimating the Autocorrelated Error Model with Trended Data: Further Results,
1979-11-01
Perhaps the most serious deficiency of OLS in the presence of autocorrelation is not inefficiency but bias in its estimated standard errors--a bias... x_t = k for all t has variance var(b) = σ²/(Tk²). This refutes Maeshiro's (1976) conjecture that "an estimator utilizing relevant extraneous information
Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.
ERIC Educational Resources Information Center
Mandell, Marvin B.; Bretschneider, Stuart I.
1984-01-01
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
USDA-ARS?s Scientific Manuscript database
If not properly accounted for, autocorrelated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
Exploratory spatial data analysis of global MODIS active fire data
NASA Astrophysics Data System (ADS)
Oom, D.; Pereira, J. M. C.
2013-04-01
We performed an exploratory spatial data analysis (ESDA) of autocorrelation patterns in the NASA MODIS MCD14ML Collection 5 active fire dataset, for the period 2001-2009, at the global scale. The dataset was screened, resulting in an annual rate of false alarms and non-vegetation fires ranging from a minimum of 3.1% in 2003 to a maximum of 4.4% in 2001. Hot bare soils and gas flares were the major sources of false alarms and non-vegetation fires. The data were aggregated at 0.5° resolution for the global and local spatial autocorrelation analyses. Fire counts were found to be positively correlated up to distances of around 200 km, and negatively for larger distances. A value of 0.80 (p = 0.001, α = 0.05) for Moran's I indicates strong spatial autocorrelation between fires at global scale, with 60% of all cells displaying significant positive or negative spatial correlation. Different types of spatial autocorrelation were mapped and regression diagnostics allowed for the identification of spatial outlier cells, with fire counts much higher or lower than expected, considering their spatial context.
Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics.
Chen, Xun; Liu, Aiping; Chen, Qiang; Liu, Yu; Zou, Liang; McKeown, Martin J
2017-09-01
Electroencephalography (EEG) recordings are frequently contaminated by both ocular and muscle artifacts. These are normally dealt with separately, by employing blind source separation (BSS) techniques relying on either second-order or higher-order statistics (SOS and HOS, respectively). When HOS-based methods are used, it is usually under the assumption that artifacts are statistically independent of the EEG. When SOS-based methods are used, it is assumed that artifacts have autocorrelation characteristics distinct from the EEG. In reality, ocular and muscle artifacts neither are strictly temporally independent of the EEG nor have completely unique autocorrelation characteristics, suggesting that exploiting HOS or SOS alone may be insufficient to remove these artifacts. Here we employ a novel BSS technique, independent vector analysis (IVA), to exploit HOS and SOS simultaneously to remove ocular and muscle artifacts. Numerical simulations and application to real EEG recordings were used to explore the utility of the IVA approach. IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with low signal-to-noise ratio, and also integrated the usually separate SOS and HOS steps into a single unified step. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fifth anniversary of the first element of the International Spac
2003-12-03
Members of the media (at left) were invited to commemorate the fifth anniversary of the launch of the first element of the International Space Station by touring the Space Station Processing Facility (SSPF) at KSC. Giving an overview of Space Station processing are, at right, David Bethay (white shirt), Boeing/ISS Florida Operations; Charlie Precourt, deputy manager of the International Space Station Program; and Tip Talone, director of Space Station and Payload Processing at KSC. Reporters also had the opportunity to see Space Station hardware that is being processed for deployment once the Space Shuttles return to flight. The facility tour also included an opportunity for reporters to talk with NASA and Boeing mission managers about the various hardware elements currently being processed for flight.
Fifth anniversary of the first element of the International Spac
2003-12-03
Members of the media (at right) were invited to commemorate the fifth anniversary of the launch of the International Space Station by touring the Space Station Processing Facility (SSPF) at KSC. Giving an overview of Space Station processing are, at left, David Bethay (white shirt), Boeing/ISS Florida Operations; Charlie Precourt, deputy manager of the International Space Station Program; and Tip Talone, director of Space Station and Payload Processing at KSC. Reporters also had the opportunity to see Space Station hardware that is being processed for deployment once the Space Shuttles return to flight. The facility tour also included an opportunity for reporters to talk with NASA and Boeing mission managers about the various hardware elements currently being processed for flight.
Du, Han; Wang, Lijuan
2018-04-23
Intraindividual variability can be measured by the intraindividual standard deviation ([Formula: see text]), intraindividual variance ([Formula: see text]), estimated hth-order autocorrelation coefficient ([Formula: see text]), and mean square successive difference ([Formula: see text]). Unresolved issues exist in the research on reliabilities of intraindividual variability indicators: (1) previous research only studied conditions with 0 autocorrelations in the longitudinal responses; (2) the reliabilities of [Formula: see text] and [Formula: see text] have not been studied. The current study investigates reliabilities of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and the intraindividual mean, with autocorrelated longitudinal data. Reliability estimates of the indicators were obtained through Monte Carlo simulations. The impact of influential factors on reliabilities of the intraindividual variability indicators is summarized, and the reliabilities are compared across the indicators. Generally, all the studied indicators of intraindividual variability were more reliable with a more reliable measurement scale and more assessments. The reliabilities of [Formula: see text] were generally lower than those of [Formula: see text] and [Formula: see text], the reliabilities of [Formula: see text] were usually between those of [Formula: see text] and [Formula: see text] unless the scale reliability was large and/or the interindividual standard deviation in autocorrelation coefficients was large, and the reliabilities of the intraindividual mean were generally the highest. An R function is provided for planning longitudinal studies to ensure sufficient reliabilities of the intraindividual indicators are achieved.
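The four indicators discussed above have standard definitions, which the following sketch computes for a single person's series; because the abstract's own symbols did not survive extraction, the names used here are merely descriptive stand-ins.

```python
# Sketch of the intraindividual variability indicators discussed above, using
# their standard definitions (descriptive names, not the paper's notation).
import numpy as np

def indicators(y, h=1):
    y = np.asarray(y, dtype=float)
    n = y.size
    isd = y.std(ddof=1)                              # intraindividual standard deviation
    ivar = y.var(ddof=1)                             # intraindividual variance
    z = y - y.mean()
    ac_h = np.dot(z[:n - h], z[h:]) / np.dot(z, z)   # lag-h autocorrelation estimate
    mssd = np.mean(np.diff(y) ** 2)                  # mean square successive difference
    return {"iSD": isd, "iVar": ivar, f"AC({h})": ac_h, "MSSD": mssd, "iMean": y.mean()}

print(indicators([3, 5, 4, 6, 5, 7, 6, 8], h=1))
```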
Broms, Kristin M; Johnson, Devin S; Altwegg, Res; Conquest, Loveday L
2014-03-01
Determining the range of a species and exploring species--habitat associations are central questions in ecology and can be answered by analyzing presence--absence data. Often, both the sampling of sites and the desired area of inference involve neighboring sites; thus, positive spatial autocorrelation between these sites is expected. Using survey data for the Southern Ground Hornbill (Bucorvus leadbeateri) from the Southern African Bird Atlas Project, we compared advantages and disadvantages of three increasingly complex models for species occupancy: an occupancy model that accounted for nondetection but assumed all sites were independent, and two spatial occupancy models that accounted for both nondetection and spatial autocorrelation. We modeled the spatial autocorrelation with an intrinsic conditional autoregressive (ICAR) model and with a restricted spatial regression (RSR) model. Both spatial models can readily be applied to any other gridded, presence--absence data set using a newly introduced R package. The RSR model provided the best inference and was able to capture small-scale variation that the other models did not. It showed that ground hornbills are strongly dependent on protected areas in the north of their South African range, but less so further south. The ICAR models did not capture any spatial autocorrelation in the data, and they took an order of magnitude longer than the RSR models to run. Thus, the RSR occupancy model appears to be an attractive choice for modeling occurrences at large spatial domains, while accounting for imperfect detection and spatial autocorrelation.
Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A
2014-08-01
Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA: autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched the actual current distribution of the mangrove communities, under the climate change scenarios the environment-only models forecast substantially greater range shifts than the models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts, as inaccurate forecasts lead to ineffective prioritization of conservation activities and potentially to avoidable species extinctions. © 2014 John Wiley & Sons Ltd.
Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.
Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M
2015-03-06
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
Microrheology with Optical Tweezers: Measuring the relative viscosity of solutions ‘at a glance'
Tassieri, Manlio; Giudice, Francesco Del; Robertson, Emma J.; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M.
2015-01-01
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples. PMID:25743468
Blind equalization with criterion with memory nonlinearity
NASA Astrophysics Data System (ADS)
Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.
1992-06-01
Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithm are derived, with the variations depending on (1) whether empirical averages or single-point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive-weight version, exhibits faster convergence than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
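A hedged sketch of a CRIMNO-style cost follows: the constant-modulus (Godard) term plus weighted penalties on the autocorrelations of the equalizer output, with expectations replaced by empirical averages. The memory length and weights are placeholders, and the formulation paraphrases the description above rather than reproducing the paper's exact algorithm.

```python
# Illustrative CRIMNO-style cost: the CMA/Godard term plus weighted penalties
# on output autocorrelations at lags 1..M (empirical averages stand in for the
# expectations).  The weights and memory length are placeholder values.
import numpy as np

def crimno_cost(y, dispersion, weights):
    """y: complex equalizer output block; dispersion: Godard constant R2."""
    y = np.asarray(y, dtype=complex)
    cma_term = np.mean((np.abs(y) ** 2 - dispersion) ** 2)
    penalty = 0.0
    for m, w in enumerate(weights, start=1):
        r_m = np.mean(y[m:] * np.conj(y[:-m]))   # lag-m output autocorrelation
        penalty += w * np.abs(r_m) ** 2
    return cma_term + penalty

rng = np.random.default_rng(0)
y = rng.normal(size=256) + 1j * rng.normal(size=256)
print(crimno_cost(y, dispersion=2.0, weights=[0.1, 0.1, 0.1]))
```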
Entropy, instrument scan and pilot workload
NASA Technical Reports Server (NTRS)
Tole, J. R.; Stephens, A. T.; Vivaudou, M.; Harris, R. L., Jr.; Ephrath, A. R.
1982-01-01
Correlation and information theory analyses of the relationships between mental loading and the visual scanpath of aircraft pilots are described. The relationships between skill, performance, mental workload, and visual scanning behavior are investigated. The experimental method required pilots to maintain a general aviation flight simulator on a straight and level, constant sensitivity, Instrument Landing System (ILS) course with a low level of turbulence. An additional periodic verbal task whose difficulty increased with frequency was used to increment the subject's mental workload. The subject's lookpoint on the instrument panel during each ten-minute run was computed via a TV oculometer and stored. Several pilots ranging in skill from novices to test pilots took part in the experiment. Analysis of the periodicity of the subject's instrument scan was accomplished by means of correlation techniques. For skilled pilots, the autocorrelation of instrument dwell-time sequences showed the same periodicity as the verbal task. The ability to multiplex simultaneous tasks increases with skill; thus, autocorrelation provides a way of evaluating the operator's skill level.
NASA Astrophysics Data System (ADS)
Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z.
2018-04-01
Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. In this paper, multi-resolution wavelet transform and local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by wavelet transform at three levels. Then the high-frequency detail information in three directions (horizontal, vertical and diagonal) are extracted followed by a maximization operation to integrate the information in all directions. Afterward, a cross-scale operation is implemented to fuse different levels of information. Finally, local spatial autocorrelation statistic is introduced to enhance the saliency of built-up features and an adaptive threshold algorithm is used to achieve the detection of built-up areas. Experiments are conducted on ZY-3 and Quickbird panchromatic satellite images, and the results show that the proposed method is very effective for built-up area detection.
Borycki, Dawid; Kholiqov, Oybek; Srinivasan, Vivek J.
2017-01-01
Interferometric near-infrared spectroscopy (iNIRS) is a new technique that measures time-of-flight- (TOF-) resolved autocorrelations in turbid media, enabling simultaneous estimation of optical and dynamical properties. Here, we demonstrate reflectance-mode iNIRS for noninvasive monitoring of a mouse brain in vivo. A method for more precise quantification with less static interference from superficial layers, based on separating static and dynamic components of the optical field autocorrelation, is presented. Absolute values of absorption, reduced scattering, and blood flow index (BFI) are measured, and changes in BFI and absorption are monitored during a hypercapnic challenge. Absorption changes from TOF-resolved iNIRS agree with absorption changes from continuous wave NIRS analysis, based on TOF-integrated light intensity changes, an effective path length, and the modified Beer–Lambert Law. Thus, iNIRS is a promising approach for quantitative and non-invasive monitoring of perfusion and optical properties in vivo. PMID:28146535
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao
2008-01-01
Autocorrelated residuals are widely reported as common in longitudinal data. Yet few, if any, researchers modeling growth processes evaluate a priori whether their data have this feature. Sivo, Fan, and Witta (2005) found that not modeling autocorrelated residuals present in longitudinal data severely biases latent curve…
Lag-One Autocorrelation in Short Series: Estimation and Hypotheses Testing
ERIC Educational Resources Information Center
Solanas, Antonio; Manolov, Rumen; Sierra, Vicenta
2010-01-01
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is…
Autocorrelation as a source of truncated Lévy flights in foreign exchange rates
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2003-05-01
We suggest that the ultraslow speed of convergence associated with truncated Lévy flights (Phys. Rev. Lett. 73 (1994) 2946) may well be explained by autocorrelations in data. We show how a particular type of autocorrelation generates power laws consistent with a truncated Lévy flight. Stock exchanges have been suggested to be modeled by a truncated Lévy flight (Nature 376 (1995) 46; Physica A 297 (2001) 509; Econom. Bull. 7 (2002) 1). Here foreign exchange rate data are taken instead. Scaling power laws in the “probability of return to the origin” are shown to emerge for most currencies. A novel approach to measure how distant a process is from a Gaussian regime is presented.
A novel technique for fetal heart rate estimation from Doppler ultrasound signal
2011-01-01
Background The currently used fetal monitoring instrumentation that is based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. It is particularly noticeable as a significant decrease of a clinically important feature - the variability of the FHR signal. The aim of our work was to develop a novel, efficient technique for processing the ultrasound signal which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurements of a given cardiac cycle in the ultrasound signal. The method consists of three steps: dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection, and determination of beat-to-beat intervals. The estimated fetal heart rate values and calculated indices describing the variability of FHR were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables calculation of reliable parameters describing the variability of FHR. Relating these results to the other method for FHR estimation, we showed that our approach rejected a much lower number of measured cardiac cycles as invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers a high accuracy of the heart interval measurement, enabling reliable quantitative assessment of FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
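The core of such an autocorrelation-based interval estimate can be sketched as follows: compute the autocorrelation of the Doppler envelope in a window and take the dominant peak within a physiological lag range. The adaptive window adjustment and peak-validation logic of the actual method are omitted, and the sampling rate and heart-rate limits below are assumed values.

```python
# Simplified sketch of autocorrelation-based cardiac-interval estimation from a
# Doppler envelope: take the ACF over a window and pick its highest peak within
# a physiological lag range (adaptive window/threshold logic omitted).
import numpy as np

def beat_interval(envelope, fs, min_bpm=60, max_bpm=240):
    x = np.asarray(envelope, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    lo = int(fs * 60.0 / max_bpm)      # shortest plausible interval in samples
    hi = int(fs * 60.0 / min_bpm)      # longest plausible interval in samples
    lag = lo + int(np.argmax(acf[lo:hi]))
    return lag / fs                    # estimated beat-to-beat interval in seconds

fs = 200.0                                          # Hz, assumed envelope sampling rate
t = np.arange(0, 4, 1 / fs)
envelope = 1 + 0.5 * np.cos(2 * np.pi * 2.3 * t)    # ~138 bpm synthetic signal
print(beat_interval(envelope, fs))                  # approximately 0.43 s
```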
NASA Astrophysics Data System (ADS)
Blackburn, Brecken J.; Gu, Shi; Jenkins, Michael W.; Rollins, Andrew M.
2017-02-01
A robust method to measure viscosity of microquantities of biological samples, such as blood and mucus, could lead to a better understanding and diagnosis of diseases. Microsamples have presented persistent challenges to conventional rheology, which requires bulk quantities of a sample. Alternatively, fluid viscosity can be probed by monitoring microscale motion of particles. Here, we present a decorrelation-based method using M-mode phase-sensitive optical coherence tomography (OCT) to measure particle Brownian motion. This is similar to previous methods using laser speckle decorrelation but with sensitivity to nanometer-scale displacement. This allows for the measurement of decorrelation in less than 1 millisecond and significantly decreases sensitivity to bulk motion, thereby potentially enabling in vivo and in situ applications. From first principles, an analytical method is established using M-mode images obtained from a 47 kHz spectral-domain OCT system. A g(1) first-order autocorrelation is calculated from windows containing several pixels over a time frame of 200-1000 microseconds. Total imaging time is 500 milliseconds for averaging purposes. The autocorrelation coefficient over this short time frame decreases linearly and at a rate proportional to the diffusion constant of the particles, allowing viscosity to be calculated. In verification experiments using phantoms of microbeads in 200 µL glycerol-water mixtures, this method showed insensitivity to 2 mm/s lateral bulk motion and accurate viscosity measurements over a depth of 400 µm. In addition, the method measured a significant decrease of the apparent diffusion constant of soft tissue after formalin fixation, suggesting potential applications in mapping tissue stiffness.
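A hedged sketch of the final step described above: fit the initial linear decay of g1 and convert it to viscosity via the Stokes-Einstein relation, assuming g1(τ) ≈ 1 − q²Dτ at short lags (a common single-scattering approximation). The wavevector, bead radius, and temperature below are assumed values, not taken from the paper.

```python
# Hedged sketch: estimate viscosity from the initial linear decay of a g1
# autocorrelation, assuming g1(tau) ~ 1 - q^2 * D * tau at short lags and the
# Stokes-Einstein relation.  q, bead radius, and temperature are assumptions.
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def viscosity_from_g1(taus, g1, q, radius, temperature):
    slope = np.polyfit(taus, g1, 1)[0]        # initial decay rate (negative)
    diffusion = -slope / q**2                 # D from g1 ~ 1 - q^2 D tau
    return KB * temperature / (6 * np.pi * diffusion * radius)

# Synthetic short-lag data for 0.5-um-radius beads in water-like fluid at 295 K
q = 4 * np.pi * 1.33 / 1.3e-6                 # backscattering wavevector, 1/m
D_true = KB * 295.0 / (6 * np.pi * 1.0e-3 * 0.5e-6)
taus = np.linspace(2e-4, 1e-3, 20)            # 200-1000 microsecond window
g1 = 1 - q**2 * D_true * taus
print(viscosity_from_g1(taus, g1, q, radius=0.5e-6, temperature=295.0))  # ~1e-3 Pa*s
```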
Genome engineering using a synthetic gene circuit in Bacillus subtilis.
Jeong, Da-Eun; Park, Seung-Hwan; Pan, Jae-Gu; Kim, Eui-Joong; Choi, Soo-Keun
2015-03-31
Genome engineering without leaving foreign DNA behind requires an efficient counter-selectable marker system. Here, we developed a genome engineering method in Bacillus subtilis using a synthetic gene circuit as a counter-selectable marker system. The system contained two repressible promoters (B. subtilis xylA (Pxyl) and spac (Pspac)) and two repressor genes (lacI and xylR). Pxyl-lacI was integrated into the B. subtilis genome with a target gene containing a desired mutation. The xylR and Pspac-chloramphenicol resistant genes (cat) were located on a helper plasmid. In the presence of xylose, repression of XylR by xylose induced LacI expression, the LacIs repressed the Pspac promoter and the cells become chloramphenicol sensitive. Thus, to survive in the presence of chloramphenicol, the cell must delete Pxyl-lacI by recombination between the wild-type and mutated target genes. The recombination leads to mutation of the target gene. The remaining helper plasmid was removed easily under the chloramphenicol absent condition. In this study, we showed base insertion, deletion and point mutation of the B. subtilis genome without leaving any foreign DNA behind. Additionally, we successfully deleted a 2-kb gene (amyE) and a 38-kb operon (ppsABCDE). This method will be useful to construct designer Bacillus strains for various industrial applications. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Islanding detection technique using wavelet energy in grid-connected PV system
NASA Astrophysics Data System (ADS)
Kim, Il Song
2016-08-01
This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify the islanding condition, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method to obtain wavelet energies is developed and verified by experimental results.
A Pitch Extraction Method with High Frequency Resolution for Singing Evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo
This paper proposes a pitch estimation method suitable for singing evaluation that can be incorporated in KARAOKE machines. Professional singers and musicians have sharp hearing for music and singing voice: they recognize whether a singer's voice pitch is "a little off key" or "in tune". Accordingly, a pitch estimation method with high frequency resolution is necessary in order to evaluate singing. This paper proposes a pitch estimation method with high frequency resolution that utilizes the harmonic characteristic of the autocorrelation function. The proposed method can estimate a fundamental frequency in the range 50-1700 Hz with a resolution finer than 3.6 cents with light processing.
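As an illustration of autocorrelation-based pitch estimation with finer-than-sample resolution, the sketch below refines the ACF peak by parabolic interpolation; the paper's use of the harmonic structure of the ACF is not reproduced, and the sampling rate and search range are assumptions.

```python
# Minimal sketch of autocorrelation pitch estimation with sub-sample resolution
# via parabolic interpolation of the ACF peak (harmonic refinement omitted).
import numpy as np

def estimate_pitch(x, fs, fmin=50.0, fmax=1700.0):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    k = lo + int(np.argmax(acf[lo:hi]))
    # Parabolic interpolation around the peak for sub-sample lag accuracy
    y0, y1, y2 = acf[k - 1], acf[k], acf[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return fs / (k + delta)

fs = 16000.0
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 440.7 * t)          # slightly sharp A4, synthetic
print(estimate_pitch(tone, fs))               # close to 440.7 Hz
```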
Krishtop, Victor; Doronin, Ivan; Okishev, Konstantin
2012-11-05
Photon correlation spectroscopy is an effective method for measuring nanoparticle sizes and has several advantages over alternative methods. However, this method suffers from a disadvantage in that its measuring accuracy reduces in the presence of convective flows of fluid containing nanoparticles. In this paper, we propose a scheme based on attenuated total reflectance in order to reduce the influence of convection currents. The autocorrelation function for the light-scattering intensity was found for this case, and it was shown that this method afforded a significant decrease in the time required to measure the particle sizes and an increase in the measuring accuracy.
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high-frequency noise.
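A minimal LSMW-style first derivative might look like the following: within each window along the (possibly nonuniform) perturbation axis, fit intensity against perturbation by least squares and keep the slope. The window size and the toy data are assumptions.

```python
# Sketch of a least-squares moving-window (LSMW) first derivative: in each
# window, fit intensity vs. perturbation by least squares and keep the slope;
# this handles nonuniform perturbation spacing.  Window size is assumed.
import numpy as np

def lsmw_slope(perturbation, intensity, window=5):
    p = np.asarray(perturbation, dtype=float)
    y = np.asarray(intensity, dtype=float)
    half = window // 2
    slopes = np.full(p.size, np.nan)
    for i in range(half, p.size - half):
        ps = p[i - half:i + half + 1]
        ys = y[i - half:i + half + 1]
        slopes[i] = np.polyfit(ps, ys, 1)[0]   # least-squares slope in the window
    return slopes

pert = np.array([20.0, 22.0, 25.0, 27.0, 30.0, 34.0, 37.0, 41.0, 45.0])  # e.g. temperature
signal = 0.02 * pert ** 2 + np.random.default_rng(1).normal(0, 0.01, pert.size)
print(lsmw_slope(pert, signal))
```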
Laurence R. Schimleck; Justin A. Tyson; David Jones; Gary F. Peter; Richard F. Daniels; Alexander III Clark
2007-01-01
Near infrared (NIR) spectroscopy provides a rapid, non-destructive method for the estimation of several wood properties of increment cores. NIR spectra are collected from adjacent sections of the same core; however, not all spectra are required for calibration purposes because spectra from the same core are autocorrelated. Previously, we showed that wood property...
Broadband distortion modeling in Lyman-α forest BAO fitting
Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; ...
2015-11-23
Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z = 2.3 are β_F = 1.39 (+0.11/-0.10, +0.24/-0.19, +0.38/-0.28) and b_F(1+β_F) = -0.374 (+0.007/-0.007, +0.013/-0.014, +0.020/-0.022), where the paired values are the 1σ, 2σ and 3σ statistical errors. Our fitting software and the input files needed to reproduce our main results are publicly available.
Song, Weize; Jia, Haifeng; Li, Zhilin; Tang, Deliang
2018-08-01
Urban air pollutant distribution is a concern in environmental and health studies. Particularly, the spatial distribution of NO2 and PM2.5, which represent photochemical smog and haze pollution in urban areas, is of concern. This paper presents a study quantifying the seasonal differences between urban NO2 and PM2.5 distributions in Foshan, China. A geographical semi-variogram analysis was conducted to delineate the spatial variation in daily NO2 and PM2.5 concentrations. The data were collected from 38 sites in the government-operated monitoring network. The results showed that the total spatial variance of NO2 is 38.5% higher than that of PM2.5. The random spatial variance of NO2 was 1.6 times that of PM2.5. The nugget effect (i.e., the random-to-total spatial variance ratio) values of NO2 and PM2.5 were 29.7 and 20.9%, respectively. This indicates that urban NO2 distribution was affected by both local and regional influencing factors, while urban PM2.5 distribution was dominated by regional influencing factors. NO2 had a larger seasonally averaged spatial autocorrelation distance (48 km) than PM2.5 (33 km). The spatial range of NO2 autocorrelation was larger in winter than in the other seasons, while PM2.5 had a smaller range of spatial autocorrelation in winter than in the other seasons. Overall, the geographical semi-variogram analysis is a very effective method to enrich the understanding of NO2 and PM2.5 distributions. It can provide scientific evidence for selecting the buffering radii of spatial predictors for land use regression models. It will also be beneficial for developing targeted policies and measures to reduce NO2 and PM2.5 pollution levels. Copyright © 2018 Elsevier B.V. All rights reserved.
Broadband distortion modeling in Lyman-α forest BAO fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blomqvist, Michael; Kirkby, David; Margala, Daniel, E-mail: cblomqvi@uci.edu, E-mail: dkirkby@uci.edu, E-mail: dmargala@uci.edu
2015-11-01
In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≅ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z = 2.3 are β_F = 1.39 (+0.11/-0.10, +0.24/-0.19, +0.38/-0.28) and b_F(1+β_F) = -0.374 (+0.007/-0.007, +0.013/-0.014, +0.020/-0.022), where the paired values are the 1σ, 2σ and 3σ statistical errors. Our fitting software and the input files needed to reproduce our main results are publicly available.
Image correlation microscopy for uniform illumination.
Gaborski, T R; Sealander, M N; Ehrenberg, M; Waugh, R E; McGrath, J L
2010-01-01
Image cross-correlation microscopy is a technique that quantifies the motion of fluorescent features in an image by measuring the temporal autocorrelation function decay in a time-lapse image sequence. Image cross-correlation microscopy has traditionally employed laser-scanning microscopes because the technique emerged as an extension of laser-based fluorescence correlation spectroscopy. In this work, we show that image correlation can also be used to measure fluorescence dynamics in uniform illumination or wide-field imaging systems and we call our new approach uniform illumination image correlation microscopy. Wide-field microscopy is not only a simpler, less expensive imaging modality, but it offers the capability of greater temporal resolution over laser-scanning systems. In traditional laser-scanning image cross-correlation microscopy, lateral mobility is calculated from the temporal de-correlation of an image, where the characteristic length is the illuminating laser beam width. In wide-field microscopy, the diffusion length is defined by the feature size using the spatial autocorrelation function. Correlation function decay in time occurs as an object diffuses from its original position. We show that theoretical and simulated comparisons between Gaussian and uniform features indicate the temporal autocorrelation function depends strongly on particle size and not particle shape. In this report, we establish the relationships between the spatial autocorrelation function feature size, temporal autocorrelation function characteristic time and the diffusion coefficient for uniform illumination image correlation microscopy using analytical, Monte Carlo and experimental validation with particle tracking algorithms. Additionally, we demonstrate uniform illumination image correlation microscopy analysis of adhesion molecule domain aggregation and diffusion on the surface of human neutrophils.
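The two ingredients named above can be sketched as follows: the spatial autocorrelation of a single frame (whose width sets the feature size) and the temporal autocorrelation across a frame stack (whose decay time is related to diffusion). The constant linking them to a diffusion coefficient depends on the chosen correlation model and is not reproduced here; the frames are random toy data.

```python
# Toy sketch of the two quantities used in uniform illumination image
# correlation: the spatial ACF of one frame and the temporal ACF of the stack.
import numpy as np

def spatial_acf(frame):
    f = frame - frame.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    acf = np.real(np.fft.ifft2(power))
    return np.fft.fftshift(acf) / acf.flat[0]    # zero-lag peak normalized to 1

def temporal_acf(stack, max_lag):
    z = stack - stack.mean(axis=0, keepdims=True)
    var = np.mean(z ** 2)
    return np.array([np.mean(z[:z.shape[0] - k] * z[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(2)
stack = rng.normal(size=(20, 32, 32))            # frames x rows x cols (toy data)
print(spatial_acf(stack[0]).shape, np.round(temporal_acf(stack, 5)[:3], 3))
```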
The Effect of Alcohol on Emotional Inertia: A Test of Alcohol Myopia
Fairbairn, Catharine E.; Sayette, Michael A.
2017-01-01
Alcohol Myopia (AM) has emerged as one of the most widely-researched theories of alcohol’s effects on emotional experience. Given this theory’s popularity it is notable that a central tenet of AM has not been tested—namely, that alcohol creates a myopic focus on the present moment, limiting the extent to which the present is permeated by emotions derived from prior experience. We aimed to test the impact of alcohol on moment-to-moment fluctuations in affect, applying advances in emotion assessment and statistical analysis to test this aspect of AM without drawing the attention of participants to their own emotional experiences. We measured emotional fluctuations using autocorrelation, a statistic borrowed from time-series analysis measuring the correlation between successive observations in time. High emotion autocorrelation is termed “emotional inertia” and linked to negative mood outcomes. Seven-hundred-twenty social drinkers consumed alcohol, placebo, or control beverages in groups of three over a 36-min group formation task. We indexed affect using the Duchenne smile, recorded continuously during the interaction (34.9 million video frames) according to Paul Ekman’s Facial Action Coding System. Autocorrelation of Duchenne smiling emerged as the most consistent predictor of self-reported mood and social bonding when compared with Duchenne smiling mean, standard deviation, and linear trend. Alcohol reduced affective autocorrelation, and autocorrelation mediated the link between alcohol and self-reported mood and social outcomes. Findings suggest that alcohol enhances our ability to freely enjoy the present moment untethered by past experience and highlight the importance of emotion dynamics in research examining affective correlates of psychopathology. PMID:24016015
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1988-01-01
This thesis reviews the technique established to clear channels in the Power Spectral Estimate by applying linear combinations of well-known window functions to the autocorrelation function. Windowing the autocorrelation function is needed because the true autocorrelation is not generally available when forming the Power Spectral Estimate; the windows serve to reduce the effect that truncation of the data, and possibly of the autocorrelation, has on the Power Spectral Estimate. Previous work showed that a single channel can be cleared, allowing detection of a small peak in the presence of a large peak in the Power Spectral Estimate. The utility of the method depends on its robustness across different input situations. In this paper we extend the analysis to include clearing up to three channels. We examine the relative positions of the spikes with respect to each other and also the effect of taking different percentages of lags of the autocorrelation in the Power Spectral Estimate. This method could have application wherever the power spectrum is used. An example is beamforming for source location, where a small target can be located next to a large target. Other possibilities extend into seismic data processing. As the method becomes more automated, other applications may present themselves.
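For orientation, a Blackman-Tukey-style estimate of the kind being windowed here can be sketched as below: form the sample autocorrelation, apply a lag window, and transform. The specific linear combinations of windows used to clear spectral channels are not reproduced; a single Hann lag window stands in for them.

```python
# Sketch of a Blackman-Tukey-style power spectral estimate: window the sample
# autocorrelation with a lag window before transforming (single Hann window
# shown for illustration, not the linear combinations of the original work).
import numpy as np

def windowed_psd(x, max_lag, nfft=1024):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    # Biased sample autocorrelation at lags 0..max_lag
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    lag_window = np.hanning(2 * max_lag + 1)[max_lag:]   # one-sided Hann taper
    rw = r * lag_window
    # Symmetric extension of the windowed ACF, then FFT -> spectral estimate
    full = np.concatenate([rw, rw[-2:0:-1]])
    return np.abs(np.fft.rfft(full, n=nfft))

fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 12 * t)
print(windowed_psd(x, max_lag=128).argmax())   # bin near the 5 Hz component
```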
Earthquake detection through computationally efficient similarity search
Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.
2015-01-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
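A highly simplified sketch of the fingerprint-and-group idea is given below: binary fingerprints of waveform windows are hashed band by band (a basic locality-sensitive-hashing scheme), and windows sharing a band hash become candidate similar pairs. The feature extraction is a toy stand-in for the spectrogram/wavelet features of the actual FAST pipeline.

```python
# Highly simplified sketch of the fingerprint-and-group idea behind FAST-like
# similarity search; the feature extraction here is a toy stand-in.
import numpy as np
from collections import defaultdict
from itertools import combinations

def fingerprint(window, n_bits=32):
    spec = np.abs(np.fft.rfft(window))[:n_bits]
    return (spec > np.median(spec)).astype(np.uint8)       # above-median magnitudes -> 1

def candidate_pairs(windows, band_size=8):
    buckets = defaultdict(list)
    for idx, w in enumerate(windows):
        fp = fingerprint(w)
        for b in range(0, fp.size, band_size):              # hash each band separately
            buckets[(b, fp[b:b + band_size].tobytes())].append(idx)
    pairs = set()
    for members in buckets.values():
        pairs.update(combinations(sorted(members), 2))
    return pairs

rng = np.random.default_rng(3)
base = np.sin(2 * np.pi * 7 * np.linspace(0, 1, 200))
windows = [base + 0.05 * rng.normal(size=200),              # two near-repeats
           base + 0.05 * rng.normal(size=200),
           rng.normal(size=200)]                             # unrelated noise
print(candidate_pairs(windows))                              # expect (0, 1) among the pairs
```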
NASA Astrophysics Data System (ADS)
Xu, Ke-Jun; Luo, Qing-Lin; Wang, Gang; Liu, San-Shan; Kang, Yi-Bo
2010-07-01
Digital signal processing methods have been applied to the vortex flowmeter to extract useful information from the noisy output of the vortex flow sensor. However, these approaches fail when the power of the mechanical vibration noise is larger than that of the vortex flow signal. To solve this problem, an antistrong-disturbance signal processing method is proposed based on the frequency features of the vortex flow signal and the mechanical vibration noise for a vortex flowmeter with a single sensor. The frequency bandwidth of the vortex flow signal is different from that of the mechanical vibration noise, and the autocorrelation function can represent the bandwidth features of the signal and noise. The output of the vortex flow sensor is processed by spectrum analysis, filtered by bandpass filters, and the autocorrelation function is calculated at a fixed delay time and at τ = 0 to obtain ratios. The frequency corresponding to the minimal ratio is regarded as the vortex flow frequency. With an ultralow-power microcontroller, a digital signal processing system is developed to implement the antistrong-disturbance algorithm while ensuring low-power, two-wire operation to meet the requirements of process instrumentation. Water flow-rate calibration and vibration test experiments were conducted, and the experimental results show that both the algorithm and the system are effective.
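The per-band statistic described above can be sketched as follows: band-pass the sensor output around each candidate frequency and form the ratio of the autocorrelation at a fixed delay to its zero-lag value. The crude FFT band-pass, the candidate frequencies, and the delay are placeholders; per the abstract, the frequency giving the minimal ratio (with a suitably chosen fixed delay) is taken as the vortex frequency.

```python
# Sketch of the per-band autocorrelation ratio R(tau_d)/R(0) described above.
# The brick-wall FFT band-pass, candidate bands, and delay are placeholders.
import numpy as np

def bandpass_fft(x, fs, f_lo, f_hi):
    """Crude FFT brick-wall band-pass (adequate for this illustration)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=x.size)

def acf_ratio(x, fs, f_center, delay_s, rel_bw=0.2):
    y = bandpass_fft(x, fs, f_center * (1 - rel_bw), f_center * (1 + rel_bw))
    y = y - y.mean()
    lag = int(round(delay_s * fs))
    return np.dot(y[:-lag], y[lag:]) / np.dot(y, y)

fs = 2000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 80 * t) + 2.0 * np.sin(2 * np.pi * 25 * t)   # signal + vibration
ratios = {f: round(acf_ratio(x, fs, f, delay_s=0.02), 3) for f in (25, 80)}
# Per the abstract, the band yielding the minimal ratio (with a suitably chosen
# fixed delay) would be selected as the vortex flow frequency.
print(ratios)
```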
Qing, Feng Ting; Peng, Yu
2016-05-01
Based on remote sensing data from 1997, 2001, 2005, 2009 and 2013, this article classified the landscape types of Shunyi, and an ecological risk index was built based on a landscape disturbance index and landscape fragility. Spatial autocorrelation and geostatistical analyses with GS+ and ArcGIS were used to study temporal and spatial changes of ecological risk. The results showed that the ecological risk degree in the study region had positive spatial correlation, which decreased with increasing grain size. Within a certain grain range (<12 km), the spatial autocorrelation had an obvious dependence on scale. The random variation of spatial heterogeneity was less than the spatial autocorrelation variation from 1997 to 2013, which meant the autocorrelation had a dominant role in spatial heterogeneity. The ecological risk of Shunyi was mainly at a moderate level during the study period. The area of the district with higher and lower ecological risk increased, while that with moderate ecological risk decreased. The area with low ecological risk was mainly located in the airport region and forest of southeast Shunyi, while that with high ecological risk was mainly concentrated in the water landscape, such as the banks of the Chaobai River.
A Possible Application of Coherent Light Scattering on Biological Fluids
NASA Astrophysics Data System (ADS)
Chicea, Dan; Chicea, Liana Maria
2007-04-01
Human urine from both healthy patients and patients with different diseases was used as the scattering medium in a coherent light scattering experiment. The time variation of the light intensity in the far-field speckle image was acquired using a data acquisition system on a PC, and a time series resulted for each sample. The autocorrelation function for each sample was calculated and the autocorrelation time was determined. The same samples were analyzed in a medical laboratory using the standard procedure. We found that the autocorrelation time is modified differently by the presence of pus, albumin, urobilin and sediments. The results suggest a fast procedure that can be used as a laboratory test to detect the presence not of each individual component in suspension but of large aggregates such as albumin, cylinders, and oxalate crystals.
NASA Technical Reports Server (NTRS)
Mcgwire, K.; Friedl, M.; Estes, J. E.
1993-01-01
This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J
2009-01-01
Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction values from the spatial configuration matrices were then used to define expectations for prior distributions using a Markov chain Monte Carlo (MCMC) algorithm. A set of posterior means were defined in WinBUGS 1.4.3®. After the model had converged, samples from the conditional distributions were used to summarize the posterior distribution of the parameters. Thereafter, a spatial residual trend analysis was used to evaluate variance uncertainty propagation in the model using an autocovariance error matrix. Results By specifying coefficient estimates in a Bayesian framework, the covariate number of tillers was found to be a significant predictor, positively associated with An. arabiensis aquatic habitats. The spatial filter models accounted for approximately 19% redundant locational information in the ecologically sampled An. arabiensis aquatic habitat data. In the residual error estimation model there was significant positive autocorrelation (i.e., clustering of habitats in geographic space) based on log-transformed larval/pupal data and the sampled covariate depth of habitat. Conclusion An autocorrelation error covariance matrix and a spatial filter analysis can prioritize mosquito control strategies by providing a computationally attractive and feasible description of variance uncertainty estimates for correctly identifying clusters of prolific An. arabiensis aquatic habitats based on larval/pupal productivity. PMID:19772590
Carbonaceous aerosol at two rural locations in New York State: Characterization and behavior
NASA Astrophysics Data System (ADS)
Sunder Raman, Ramya; Hopke, Philip K.; Holsen, Thomas M.
2008-06-01
Fine particle samples were collected to determine the chemical constituents in PM2.5 at two rural background sites (Potsdam and Stockton, N. Y.) in the northeastern United States from November 2002 to August 2005. Samples were collected every third day for 24 h with a speciation network sampler. The measured carbonaceous species included thermal-optical organic carbon (OC), elemental carbon (EC), pyrolytic carbon (OP), black carbon (BC), and water-soluble, short-chain (WSSC) organic acids. Concentration time series, autocorrelations, and seasonal variations of the carbonaceous species were examined. During this multiyear period, the contributions of the total carbon (OC + EC) to the measured fine particle mass were 31.2% and 31.1% at Potsdam and Stockton, respectively. The average sum of the WSSC acids carbon accounted for approximately 2.5% of the organic carbon at Potsdam and 3.0% at Stockton. At Potsdam, the seasonal differences in the autocorrelation function (ACF) and partial autocorrelation function (PACF) values for carbonaceous species suggest that secondary formation may be an important contributor to the observed concentrations of species likely to be secondary in origin, particularly during the photochemically active time of the year (May to October). This study also investigated the relationships between carbonaceous species to better understand the behavior of carbonaceous aerosol and to assess the contribution of secondary organic carbon (SOC) to the total organic carbon mass (the EC tracer method was used to estimate SOC). At Potsdam the average SOC contribution to total OC varied between 66% and 72%, while at Stockton it varied between 58% and 64%.
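The EC tracer estimate of secondary organic carbon mentioned above amounts to SOC = OC − EC × (OC/EC)_primary; the sketch below uses a low percentile of the observed OC/EC ratios as a stand-in for the primary ratio, which is one common but assumption-laden choice, and the concentrations are illustrative.

```python
# Sketch of the EC tracer estimate of secondary organic carbon:
# SOC = OC - EC * (OC/EC)_primary, with the primary ratio approximated by a
# low percentile of the observed OC/EC ratios (an assumption-laden choice).
import numpy as np

def soc_ec_tracer(oc, ec, primary_ratio=None):
    oc, ec = np.asarray(oc, float), np.asarray(ec, float)
    if primary_ratio is None:
        primary_ratio = np.percentile(oc / ec, 5)     # proxy for (OC/EC)_primary
    return np.clip(oc - ec * primary_ratio, 0, None)  # secondary OC, floored at zero

oc = [2.1, 3.4, 2.8, 5.0, 4.2]    # ug/m^3, illustrative values
ec = [0.5, 0.6, 0.7, 0.6, 0.5]
print(soc_ec_tracer(oc, ec))
```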
Islam, Abu Reza Md Towfiqul; Ahmed, Nasir; Bodrud-Doza, Md; Chu, Ronghao
2017-12-01
Contaminated drinking water poses risks to human health. It is therefore essential to investigate the factors affecting groundwater quality and its suitability for drinking. In this paper, the entropy theory, multivariate statistics, spatial autocorrelation indices, and geostatistics are applied to characterize groundwater quality and its spatial variability in the Sylhet district of Bangladesh. A total of 91 samples were collected from wells (shallow, intermediate, and deep tube wells at 15-300-m depth) in the study area. The results show that NO3-, followed by SO42- and As, are the parameters contributing most to groundwater quality according to the entropy theory. The principal component analysis (PCA) and correlation coefficients also confirm the results of the entropy theory. However, Na+ has the highest spatial autocorrelation and the largest entropy, thus affecting the groundwater quality. Based on the entropy-weighted water quality index (EWQI) and groundwater quality index (GWQI) classifications, it is observed that 60.45 and 53.86% of water samples are classified as having excellent to good quality, while the remaining samples vary from medium to extremely poor quality domains for drinking purposes. Furthermore, the EWQI classification provides more reasonable results than the GWQI due to its simplicity, accuracy, and avoidance of artificial weights. A Gaussian semivariogram model was chosen as the best-fit model, and the groundwater quality indices have a weak spatial dependence, suggesting that both geogenic and anthropogenic factors play a pivotal role in the spatial heterogeneity of groundwater quality variations.
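The entropy weighting underlying an EWQI can be sketched as follows: each parameter's weight derives from the information entropy of its normalized values across samples, with lower entropy (more discriminating parameters) receiving higher weight. The normalization and the omitted final rating scale simplify a full EWQI, and the sample matrix is illustrative.

```python
# Hedged sketch of entropy weighting for a water quality index: weights come
# from the information entropy of normalized parameter values across samples.
import numpy as np

def entropy_weights(data):
    """data: samples x parameters matrix of measured concentrations."""
    x = np.asarray(data, dtype=float)
    x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)
    p = (x + 1e-12) / (x + 1e-12).sum(axis=0)
    n = x.shape[0]
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)
    d = 1.0 - entropy                       # degree of diversification
    return d / d.sum()

samples = np.array([[0.2, 10.0, 0.001],     # e.g. NO3, SO4, As for three wells
                    [0.8, 35.0, 0.004],
                    [0.5, 22.0, 0.050]])
print(entropy_weights(samples))             # parameter weights summing to 1
```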
Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John
2002-05-20
We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
Space, race, and poverty: Spatial inequalities in walkable neighborhood amenities?
Aldstadt, Jared; Whalen, John; White, Kellee; Castro, Marcia C.; Williams, David R.
2017-01-01
BACKGROUND Multiple and varied benefits have been suggested for increased neighborhood walkability. However, spatial inequalities in neighborhood walkability likely exist and may be attributable, in part, to residential segregation. OBJECTIVE Utilizing a spatial demographic perspective, we evaluated potential spatial inequalities in walkable neighborhood amenities across census tracts in Boston, MA (US). METHODS The independent variables included minority racial/ethnic population percentages and percent of families in poverty. Walkable neighborhood amenities were assessed with a composite measure. Spatial autocorrelation in key study variables were first calculated with the Global Moran’s I statistic. Then, Spearman correlations between neighborhood socio-demographic characteristics and walkable neighborhood amenities were calculated as well as Spearman correlations accounting for spatial autocorrelation. We fit ordinary least squares (OLS) regression and spatial autoregressive models, when appropriate, as a final step. RESULTS Significant positive spatial autocorrelation was found in neighborhood socio-demographic characteristics (e.g. census tract percent Black), but not walkable neighborhood amenities or in the OLS regression residuals. Spearman correlations between neighborhood socio-demographic characteristics and walkable neighborhood amenities were not statistically significant, nor were neighborhood socio-demographic characteristics significantly associated with walkable neighborhood amenities in OLS regression models. CONCLUSIONS Our results suggest that there is residential segregation in Boston and that spatial inequalities do not necessarily show up using a composite measure. COMMENTS Future research in other geographic areas (including international contexts) and using different definitions of neighborhoods (including small-area definitions) should evaluate if spatial inequalities are found using composite measures but also should use measures of specific neighborhood amenities. PMID:29046612
Fifth anniversary of the first element of the International Spac
2003-12-03
In the Space Station Processing Facility, (from left) David Bethay, Boeing/ISS Florida Operations; Charlie Precourt, deputy manager of the International Space Station Program; and Tip Talone, director of Space Station and Payload Processing, give an overview of Space Station processing for the media. Members of the media were invited to commemorate the fifth anniversary of the launch of the first element of the International Space Station by touring the Space Station Processing Facility (SSPF) at KSC. Reporters also had the opportunity to see Space Station hardware that is being processed for deployment once the Space Shuttles return to flight. The facility tour also included an opportunity for reporters to talk with NASA and Boeing mission managers about the various hardware elements currently being processed for flight.
High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II
NASA Astrophysics Data System (ADS)
Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark
2018-06-01
To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude where more advanced models find agreement within the experimental uncertainties.
An Expert System for the Evaluation of Cost Models
1990-09-01
… contrast to the condition of equal error variance, called homoscedasticity (Reference: Applied Linear Regression Models by John Neter, page 423). … normal (Reference: Applied Linear Regression Models by John Neter, page 125). … Error terms correlated over time are said to be autocorrelated or serially correlated (Reference: Applied Linear Regression Models by John Neter).
Tabano, David C; Bol, Kirk; Newcomer, Sophia R; Barrow, Jennifer C; Daley, Matthew F
2017-12-06
Measuring obesity prevalence across geographic areas should account for environmental and socioeconomic factors that contribute to spatial autocorrelation, the dependency of values in estimates across neighboring areas, to mitigate the bias in measures and the risk of type I errors in hypothesis testing. Dependency among observations across geographic areas violates statistical independence assumptions and may result in biased estimates. Empirical Bayes (EB) estimators reduce the variability of estimates with spatial autocorrelation, which limits the overall mean squared error and controls for sample bias. Using the Colorado Body Mass Index (BMI) Monitoring System, we modeled the spatial autocorrelation of adult (≥18 years old) obesity (BMI ≥ 30 kg/m²) measurements using patient-level electronic health record data from encounters between January 1, 2009, and December 31, 2011. Obesity prevalence was estimated among Denver County census tracts with ≥10 observations during the study period. We calculated the Moran's I statistic to test for spatial autocorrelation across census tracts, and mapped crude and EB obesity prevalence across geographic areas. In Denver County, there were 143 census tracts with 10 or more observations, representing a total of 97,710 adults with a valid BMI. The crude obesity prevalence for adults in Denver County was 29.8 percent (95% CI 28.4-31.1%) and ranged from 12.8 to 45.2 percent across individual census tracts. EB obesity prevalence was 30.2 percent (95% CI 28.9-31.5%) and ranged from 15.3 to 44.3 percent across census tracts. Statistical tests using the Moran's I statistic suggest adult obesity prevalence in Denver County was distributed in a non-random pattern. Clusters of EB obesity estimates were highly significant (α = 0.05) in neighboring census tracts. Concentrations of obesity estimates were primarily in the west and north of Denver County. Statistical tests reveal that adult obesity prevalence exhibits spatial autocorrelation in Denver County at the census tract level. EB estimates of obesity prevalence can be used to control for spatial autocorrelation between neighboring census tracts and may produce less biased estimates of obesity prevalence.
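To make the shrinkage idea concrete, here is a small sketch of global (Marshall-type) Empirical Bayes smoothing of area-level prevalence; the counts and denominators are invented, and applying the Poisson-rate form to a prevalence proportion is a simplifying assumption rather than the study's exact estimator.

```python
import numpy as np

def eb_smooth(events, population):
    events = np.asarray(events, dtype=float)
    pop = np.asarray(population, dtype=float)
    rate = events / pop
    m = events.sum() / pop.sum()                                # overall (prior) prevalence
    s2 = np.sum(pop * (rate - m) ** 2) / pop.sum() - m / pop.mean()
    s2 = max(s2, 0.0)                                           # guard against a negative estimate
    w = s2 / (s2 + m / pop)                                     # shrinkage weight per tract
    return w * rate + (1.0 - w) * m                             # small tracts shrink toward m

events = np.array([40, 5, 120, 12])        # hypothetical obese counts per tract
population = np.array([150, 20, 400, 60])  # adults with a valid BMI per tract
print(eb_smooth(events, population))       # crude rates pulled toward the overall mean
```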
[Design of a pulse oximeter for low perfusion and low oxygen saturation].
Tan, Shuangping; Ai, Zhiguang; Yang, Yuxing; Xie, Qingguo
2013-05-01
This paper presents a new pulse oximeter designed for low perfusion down to 0.125% and a wide oxygen saturation range from 35% to 100%. In order to acquire the best PPG signals, a variable gain amplifier (VGA) is adopted in the hardware. A self-developed auto-correlation modeling method is adopted in the software; it can extract the pulse wave from low-perfusion signals and partly remove motion artifacts.
Robust estimation of fetal heart rate from US Doppler signals
NASA Astrophysics Data System (ADS)
Voicu, Iulian; Girault, Jean-Marc; Roussel, Catherine; Decock, Aliette; Kouame, Denis
2010-01-01
Introduction: In utero monitoring of fetal wellbeing or distress remains an open challenge, owing to the large number of clinical parameters to be considered. Automatic monitoring of fetal activity, dedicated to quantifying fetal wellbeing, therefore becomes necessary. For this purpose, and with a view to providing an alternative to the Manning test, we used an ultrasound multi-transducer, multigate Doppler system. One important issue (and the first step in our investigation) is the accurate estimation of the fetal heart rate (FHR). An estimate of the FHR is obtained by evaluating the autocorrelation function of the Doppler signals for healthy and distressed fetuses. However, this estimator is not robust enough, since about 20% of FHR values are not detected in comparison with a reference system. These non-detections are mainly due to the fact that the Doppler signal generated by fetal movement is strongly disturbed by the presence of several other Doppler sources (maternal movement, pseudo-breathing, etc.). By modifying the existing autocorrelation method and by proposing new time- and frequency-domain estimators borrowed from the audio domain, we reduce the probability of non-detection of the fetal heart rate to 5%. These results are encouraging and allow us to plan the use of automatic classification techniques to discriminate between healthy and distressed fetuses.
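The core of the baseline estimator is locating the dominant periodicity of the Doppler autocorrelation; the sketch below shows that step on a synthetic envelope. The sampling rate, the physiological search band, and the signal itself are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def heart_rate_from_autocorr(x, fs, f_min=0.8, f_max=3.0):
    # Return the dominant periodicity (beats per minute) found via the ACF peak.
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided autocorrelation
    lag_min = int(fs / f_max)                            # physiological search window
    lag_max = int(fs / f_min)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return 60.0 * fs / lag

fs = 100.0                                               # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0.0, 10.0, 1.0 / fs)
doppler_env = np.sin(2 * np.pi * 2.3 * t) + 0.5 * rng.standard_normal(t.size)
print(heart_rate_from_autocorr(doppler_env, fs))         # close to 2.3 Hz, i.e. roughly 138 bpm
```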
The analysis of image feature robustness using cometcloud
Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin
2012-01-01
The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than the local binary pattern and texton features. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759
NASA Astrophysics Data System (ADS)
Hao, Hongliang; Xiao, Wen; Chen, Zonghui; Ma, Lan; Pan, Feng
2018-01-01
Heterodyne interferometric vibration metrology is a useful technique for dynamic displacement and velocity measurement, as it can provide a synchronous full-field output signal. With the advent of cost-effective, high-speed real-time signal processing systems and software, processing of the complex signals encountered in interferometry has become more feasible. However, due to the coherent nature of the laser sources, the sequence of heterodyne interferograms is corrupted by a mixture of coherent speckle and incoherent additive noise, which can severely degrade the accuracy of the demodulated signal and the optical display. In this paper, a new heterodyne interferometric demodulation method combining auto-correlation analysis and spectral filtering is described, leading to an expression for the dynamic displacement and velocity of the object under test that is significantly more accurate in both the amplitude and frequency of the vibrating waveform. We present a mathematical model of the signals obtained from interferograms that contain both vibration information of the measured objects and the noise. A simulation of the signal demodulation process is presented and used to investigate the noise from the system and external factors. The experimental results show excellent agreement with measurements from a commercial laser Doppler velocimetry (LDV) system.
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
Chapman, Christopher A. R.; Ly, Sonny; Wang, Ling; ...
2016-03-02
Here we show the use of dynamic laser speckle autocorrelation spectroscopy in conjunction with the photothermal treatment of nanoporous gold (np-Au) thin films to probe nanoscale morphology changes during the photothermal treatment. Utilizing this spectroscopy method, backscattered speckle from the incident laser is tracked during photothermal treatment and both the characteristic feature size and annealing time of the film are determined. These results demonstrate that this method can successfully be used to monitor laser-based surface modification processes without the use of ex-situ characterization.
NASA Astrophysics Data System (ADS)
Jones, A. L.; Smart, P. L.
2005-08-01
Autoregressive modelling is used to investigate the internal structure of long-term (1935-1999) records of nitrate concentration for five karst springs in the Mendip Hills. There is a significant short-term (1-2 months) positive autocorrelation at three of the five springs, due to the availability of sufficient nitrate within the soil store to maintain concentrations in winter recharge for several months. The absence of short-term (1-2 months) positive autocorrelation in the other two springs is due to the marked contrast in land use between the limestone and swallet parts of the catchment, rapid concentrated recharge from the latter causing short-term switching in the dominant water source at the spring and thus fluctuating nitrate concentrations. Significant negative autocorrelation is evident at lags varying from 4 to 7 months through to 14-22 months for individual springs, with positive autocorrelation at 19-20 months at one site. This variable timing is explained by moderation of the exhaustion effect in the soil by groundwater storage, which gives longer residence times in large catchments and those with a dominance of diffuse flow. The lags derived from autoregressive modelling may therefore provide an indication of average groundwater residence times. Significant differences in the structure of the autocorrelation function for successive 10-year periods are evident at Cheddar Spring, and are explained by the effects of the ploughing up of grasslands during the Second World War and of increased fertiliser usage on available nitrogen in the soil store. This effect is moderated by the influence of summer temperatures on rates of mineralization, and of both summer and winter rainfall on the timing and magnitude of nitrate leaching. The pattern of nitrate leaching also appears to have been perturbed by the 1976 drought.
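For readers who want to reproduce this kind of analysis, a minimal sketch with statsmodels is given below; the synthetic monthly series, lag choices, and variable names are illustrative assumptions, not the Mendip data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
months = pd.date_range("1935-01", periods=240, freq="MS")
# Stand-in for a monthly nitrate record: seasonal cycle plus noise.
nitrate = pd.Series(5.0 + np.sin(2 * np.pi * np.arange(240) / 12)
                    + rng.normal(0.0, 0.5, 240), index=months)

print(acf(nitrate, nlags=24))               # sample autocorrelation out to 24 months
model = AutoReg(nitrate, lags=12).fit()     # autoregressive fit; significant lags imply memory
print(model.params)
```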
Liang, Jia Xin; Li, Xin Ju
2018-02-01
With remote sensing images from Landsat 5 TM in 1985 and 2000 and Landsat 8 OLI in 2015 as data sources, we selected a suitable research scale and examined the temporal-spatial differentiation at that scale in the Nansihu Lake wetland, using a landscape pattern vulnerability index constructed from sensitivity and adaptability indices, combined with spatial statistics such as the semivariogram and spatial autocorrelation. The results showed that a 1 km × 1 km equidistant grid was the suitable research scale, which could eliminate the influence of spatial heterogeneity induced by random factors. From 1985 to 2015, the landscape pattern vulnerability in the Nansihu Lake wetland deteriorated gradually. The high-risk area of landscape pattern vulnerability dramatically expanded with time. The spatial heterogeneity of landscape pattern vulnerability increased, and the influence of non-structural factors on landscape pattern vulnerability strengthened. Spatial variability attributable to spatial autocorrelation slightly weakened. Landscape pattern vulnerability showed strong overall positive spatial correlation, with significant spatial agglomeration. The positive spatial autocorrelation continued to increase, and the phenomenon of spatial concentration became more and more obvious over time. Local autocorrelation, dominated by high-high and low-low accumulation zones, indicated stronger spatial autocorrelation among neighboring spatial units. The high-high accumulation areas showed the strongest level of significance, and the significance level of the low-low accumulation zones increased with time. Natural factors, such as temperature and precipitation, affected water level and landscape distribution, and thus changed the landscape pattern vulnerability of the Nansihu Lake wetland. The dominant driver of the deterioration of landscape pattern vulnerability was human activity, including socio-economic activity and policy.
The relationship between affective state and the rhythmicity of activity in bipolar disorder.
Gonzalez, Robert; Tamminga, Carol A; Tohen, Mauricio; Suppes, Trisha
2014-04-01
The aim of this study was to test the relationships between mood state and rhythm disturbances as measured via actigraphy in bipolar disorder by assessing the correlations between manic and depressive symptoms as measured via Young Mania Rating Scale (YMRS) and 30-item Inventory for Depressive Symptomatology, Clinician-Rated (IDS-C-30) scores and the actigraphic measurements of rhythm, the 24-hour autocorrelation coefficient and circadian quotient. The research was conducted at the University of Texas Southwestern Medical Center at Dallas from February 2, 2009, to March 30, 2010. 42 patients with a DSM-IV-TR diagnosis of bipolar I disorder were included in the study. YMRS and the IDS-C-30 were used to determine symptom severity. Subjects wore the actigraph continuously for 7 days. The 24-hour autocorrelation coefficient was used as an indicator of overall rhythmicity. The circadian quotient was used to characterize the strength of a circadian rhythm. A greater severity of manic symptoms correlated with a lower degree of rhythmicity and less robust rhythms of locomotor activity as indicated by lower 24-hour autocorrelation (r = -0.3406, P = .03) and circadian quotient (r = -0.5485, P = .0002) variables, respectively. No relationship was noted between the degree of depression and 24-hour autocorrelation scores (r = -0.1190, P = .45) or circadian quotient (r = 0.0083, P = .96). Correlation was noted between the 24-hour autocorrelation and circadian quotient scores (r = 0.6347, P < .0001). These results support the notion that circadian rhythm disturbances are associated with bipolar disorder and that these disturbances may be associated with clinical signatures of the disorder. Further assessment of rhythm disturbances in bipolar disorder is warranted. © Copyright 2014 Physicians Postgraduate Press, Inc.
NASA Astrophysics Data System (ADS)
Loranty, Michael M.; Mackay, D. Scott; Ewers, Brent E.; Adelman, Jonathan D.; Kruger, Eric L.
2008-02-01
Assumed representative center-of-stand measurements are typical inputs to models that scale forest transpiration to stand and regional extents. These inputs do not consider gradients in transpiration at stand boundaries or along moisture gradients and therefore potentially bias the large-scale estimates. We measured half-hourly sap flux (JS) for 173 trees in a spatially explicit cyclic sampling design across a topographically controlled gradient between a forested wetland and upland forest in northern Wisconsin. Our analyses focused on three dominant species in the site: quaking aspen (Populus tremuloides Michx), speckled alder (Alnus incana (DuRoi) Spreng), and white cedar (Thuja occidentalis L.). Sapwood area (AS) was used to scale JS to whole tree transpiration (EC). Because spatial patterns imply underlying processes, geostatistical analyses were employed to quantify patterns of spatial autocorrelation across the site. A simple Jarvis-type model parameterized using a Monte Carlo sampling approach was used to simulate EC (EC-SIM). EC-SIM was compared with observed EC (EC-OBS) and found to reproduce both the temporal trends and spatial variance of canopy transpiration. EC-SIM was then used to examine spatial autocorrelation as a function of environmental drivers. We found no spatial autocorrelation in JS across the gradient from forested wetland to forested upland. EC was spatially autocorrelated, and this was attributed to spatial variation in AS, which suggests species spatial patterns are important for understanding spatial estimates of transpiration. However, the range of autocorrelation in EC-SIM decreased linearly with increasing vapor pressure deficit (D), implying that consideration of spatial variation in the sensitivity of canopy stomatal conductance to D is also key to accurately scaling up transpiration in space.
Autocorrelation Function for Monitoring the Gap between The Steel Plates During Laser Welding
NASA Astrophysics Data System (ADS)
Mrna, Libor; Hornik, Petr
Proper alignment of the plates prior to laser welding is an important factor that determines the quality of the resulting weld. A gap between the plates in a butt or overlap joint affects the oscillations of the keyhole and the surrounding weld pool. We present an experimental study of butt and overlap welds with an artificial gap for plates of different thicknesses. The welds were made in steel plates on a 2 kW fiber laser machine with various welding parameter settings. The eigenfrequency of the keyhole oscillations and its changes were determined from the light emissions of the plasma plume using an autocorrelation function. As a result, we describe the relations between the autocorrelation characteristics, the thickness of the gap between the plates, and the weld geometry.
Sire, Clément
2004-09-24
We study the autocorrelation function of a conserved spin system following a quench at the critical temperature. Defining the correlation length L(t) ≈ t^(1/z), we find that for times t' and t satisfying L(t') ≪ L(t) …
A recursive linear predictive vocoder
NASA Astrophysics Data System (ADS)
Janssen, W. A.
1983-12-01
A non-real-time, 10-pole recursive-autocorrelation linear predictive coding vocoder was created for use in studying the effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between updating filter coefficients is allowed to vary from 0.125 msec to 20 msec. The best quality was found using 0.125 msec between each update. The greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch period plots for the center-clipping autocorrelation pitch detector and the simplified inverse filtering technique are provided. Plots of speech into and out of the vocoder are given. Formant versus time three-dimensional plots are shown. Effects of noise on pitch detection and formants are shown. Noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
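The autocorrelation-method LPC analysis at the heart of such a vocoder reduces to the Levinson-Durbin recursion; the sketch below shows that step for a 10-pole model. The random "frame" is a stand-in for a windowed speech segment, and the function name is illustrative.

```python
import numpy as np

def levinson_durbin(r, order):
    # Solve the autocorrelation normal equations for predictor coefficients a[1..order].
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)                 # remaining prediction error
    return a, err

frame = np.hanning(240) * np.random.default_rng(3).standard_normal(240)  # stand-in frame
r = np.correlate(frame, frame, mode="full")[frame.size - 1:frame.size + 10]
a, err = levinson_durbin(r, order=10)
print(a, err)                                # a[0] = 1 followed by the 10 predictor coefficients
```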
2011-01-01
Background Hemorrhagic fever with renal syndrome (HFRS) is an important infectious disease caused by different species of hantaviruses. As a rodent-borne disease with a seasonal distribution, external environmental factors including climate factors may play a significant role in its transmission. The city of Shenyang is one of the most seriously endemic areas for HFRS. Here, we characterized the dynamic temporal trend of HFRS, and identified climate-related risk factors and their roles in HFRS transmission in Shenyang, China. Methods The annual and monthly cumulative numbers of HFRS cases from 2004 to 2009 were calculated and plotted to show the annual and seasonal fluctuation in Shenyang. Cross-correlation and autocorrelation analyses were performed to detect the lagged effect of climate factors on HFRS transmission and the autocorrelation of monthly HFRS cases. Principal component analysis was performed on climate data from 2004 to 2009 to extract principal components of the climate factors and reduce collinearity. The extracted principal components and the autocorrelation terms of monthly HFRS cases were entered into a multiple regression model, the principal components regression (PCR) model, to quantify the relationship between climate factors, autocorrelation terms, and transmission of HFRS. The PCR model was compared to a general multiple regression model conducted only with climate factors as independent variables. Results A distinctly declining temporal trend of annual HFRS incidence was identified. HFRS cases were reported every month, and the two peak periods occurred in spring (March to May) and winter (November to January), during which nearly 75% of the HFRS cases were reported. Three principal components were extracted with a cumulative contribution rate of 86.06%. Component 1 represented MinRH0, MT1, RH1, and MWV1; component 2 represented RH2, MaxT3, and MAP3; and component 3 represented MaxT2, MAP2, and MWV2. The PCR model was composed of the three principal components and two autocorrelation terms. The association between HFRS epidemics and climate factors was better explained by the PCR model (F = 446.452, P < 0.001, adjusted R2 = 0.75) than by the general multiple regression model (F = 223.670, P < 0.001, adjusted R2 = 0.51). Conclusion The temporal distribution of HFRS in Shenyang varied in different years with a distinctly declining trend. The monthly trends of HFRS were significantly associated with the local temperature, relative humidity, precipitation, air pressure, and wind velocity of different previous months. The model developed in this study will make HFRS surveillance simpler and the control of HFRS more targeted in Shenyang. PMID:22133347
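A compact sketch of a principal components regression of this general shape is shown below; the synthetic data, the number of retained components, and the single lag-1 autocorrelation term are illustrative assumptions rather than the study's model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 72                                        # six years of monthly observations
climate = rng.normal(size=(n, 10))            # stand-ins for lagged climate variables
cases = rng.poisson(20, size=n).astype(float) # stand-in for monthly case counts

pca = PCA(n_components=3).fit(climate)        # collinear climate factors -> 3 components
pcs = pca.transform(climate)
ar1 = np.roll(cases, 1)                       # lag-1 autocorrelation term
X = np.column_stack([pcs, ar1])[1:]           # drop the first (wrapped) row

model = LinearRegression().fit(X, cases[1:])
print(pca.explained_variance_ratio_.sum(), model.score(X, cases[1:]))
```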
Long-range correlations in time series generated by time-fractional diffusion: A numerical study
NASA Astrophysics Data System (ADS)
Barbieri, Davide; Vivoli, Alessandro
2005-09-01
Time series models showing power-law tails in autocorrelation functions are common in econometrics. A special non-Markovian model for this kind of time series is provided by the random walk introduced by Gorenflo et al. as a discretization of time-fractional diffusion. The time series so obtained are analyzed here from a numerical point of view in terms of autocorrelations and covariance matrices.
Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R
2018-06-25
Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals
NASA Astrophysics Data System (ADS)
Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat
2018-01-01
Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
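The error variances behind such comparisons typically come from the classical triple collocation equations; a small sketch under the usual assumptions (independent, zero-mean errors and a common scale for all three estimates) is given below with synthetic data.

```python
import numpy as np

def triple_collocation(x, y, z):
    # Error variances of three collocated estimates from their cross-covariances.
    C = np.cov(np.vstack([x, y, z]))
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex2, ey2, ez2

rng = np.random.default_rng(5)
truth = rng.normal(0.25, 0.05, 5000)            # "true" surface soil moisture
x = truth + rng.normal(0.0, 0.02, 5000)         # e.g. an SCA-V-like retrieval
y = truth + rng.normal(0.0, 0.04, 5000)         # e.g. a DCA-like retrieval
z = truth + rng.normal(0.0, 0.03, 5000)         # e.g. a model estimate
print(triple_collocation(x, y, z))              # roughly (0.0004, 0.0016, 0.0009)
```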
Reynolds, Andy M
2010-12-06
For many years, the dominant conceptual framework for describing non-oriented animal movement patterns has been the correlated random walk (CRW) model in which an individual's trajectory through space is represented by a sequence of distinct, independent randomly oriented 'moves'. It has long been recognized that the transformation of an animal's continuous movement path into a broken line is necessarily arbitrary and that probability distributions of move lengths and turning angles are model artefacts. Continuous-time analogues of CRWs that overcome this inherent shortcoming have appeared in the literature and are gaining prominence. In these models, velocities evolve as a Markovian process and have exponential autocorrelation. Integration of the velocity process gives the position process. Here, through a simple scaling argument and through an exact analytical analysis, it is shown that autocorrelation inevitably leads to Lévy walk (LW) movement patterns on timescales less than the autocorrelation timescale. This is significant because over recent years there has been an accumulation of evidence from a variety of experimental and theoretical studies that many organisms have movement patterns that can be approximated by LWs, and there is now intense debate about the relative merits of CRWs and LWs as representations of non-orientated animal movement patterns.
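The continuous-time picture described here is essentially an Ornstein-Uhlenbeck velocity process whose integral gives position; the short sketch below simulates it, with the timescale and noise amplitude chosen arbitrarily for illustration.

```python
import numpy as np

dt, tau, sigma, n = 0.01, 1.0, 1.0, 10000    # step, autocorrelation timescale, velocity scale, length
rng = np.random.default_rng(6)
v = np.zeros(n)
for i in range(1, n):
    # Exponentially autocorrelated (Ornstein-Uhlenbeck) velocity increments.
    v[i] = v[i - 1] - (v[i - 1] / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
x = np.cumsum(v) * dt                        # position process obtained by integrating velocity

# On timescales much shorter than tau the path is nearly ballistic (straight "moves"),
# which is what produces the Levy-walk-like patterns discussed above; beyond tau it diffuses.
print(x[-1])
```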
Plant osmoregulation as an emergent water-saving adaptation under salt-stress conditions
NASA Astrophysics Data System (ADS)
Perri, S.; Entekhabi, D.; Molini, A.
2017-12-01
Ecohydrological models have been widely used in studying plant-environment relations and hydraulic traits in response to water, light and nutrient limitations. In this context, models become a tool to investigate how plants exploit available resources to maximize transpiration and growth, eventually pointing out possible pathways to adaptation. In contrast, ecohydrologists have rarely focused on the effects of salinity on plant transpiration, which are commonly considered marginal in terrestrial biomes. The effect of salinity, however, cannot be neglected in the case of salt-affected soils - estimated to cover over 9 billion ha worldwide - and in intertidal and coastal ecosystems. The objective of this study is to model the effects of salinity on plant-water relations in order to better understand the interplay of soil hyperosmotic conditions and osmoregulation strategies in determining different transpiration patterns. Salinity reduces the water potential and is therefore expected to affect plant hydraulics and reduce plant conductance (eventually leading to cavitation for very high salt concentrations). Also, plant adaptation to short- and long-term exposure to salinity comes into play to maintain efficient water and nutrient uptake. We introduce a parsimonious soil-plant-atmosphere continuum (SPAC) model that incorporates parameterizations for morphological, physiological and biochemical mechanisms involving varying salt concentrations in the soil water solution. Transpiration is expressed as a function of soil water salinity and salt-mediated water flows within the SPAC (the conceptual representation of the model is shown in Figure c). The model is used to explain a paradox observed in salt-tolerant plants where maximum transpiration occurs at an intermediate value of salinity (C = CTr,max) and is lower in fresher (C < CTr,max) and more saline (C > CTr,max) conditions (Figures a and b). In particular, we show that - in salt-tolerant species - osmoregulation emerges as a water-saving behavior similar to the strategies that xerophytes use to cope with aridity. Possible anatomical and morphological adaptations to long-term salinity exposure are addressed through an analysis of transpiration patterns for different values of root and leaf density and for diverse levels of salt tolerance.
The hysteretic evapotranspiration - vapor pressure deficit relation
NASA Astrophysics Data System (ADS)
Zhang, Q.; Manzoni, S.; Katul, G. G.; Porporato, A. M.; Yang, D.
2013-12-01
Diurnal hysteresis between evapotranspiration (ET) and vapor pressure deficit (VPD) has been reported in many ecosystems, but justification for its onset and magnitude remains incomplete, with biotic and abiotic factors invoked as possible explanations. To place these explanations within a mathematical framework, 'rate-dependent' hysteresis originating from a phase angle difference between periodic input and output time series is first considered. Lysimeter evaporation (E) measurements from wet bare soils and model calculations using the Penman equation demonstrate that the E-VPD hysteresis emerges without any biotic effects, due to a phase angle difference (or time lag) between net radiation, the main driver of E, and VPD. Modulations originating from biotic effects on the ET-VPD hysteresis were then considered. The phase angle difference representation employed earlier was mathematically transformed into a storage problem and applied to the soil-plant system. The transformed system shows that soil moisture storage within the root zone can produce an ET-VPD hysteresis prototypical of those generated by phase-angle differences. To explore the interplay between all the lags in the soil-plant-atmosphere system and phase angle differences among forcing and response variables, a detailed soil-plant-atmosphere continuum (SPAC) model was developed and applied to a grassland ecosystem. The results of the SPAC model suggest that the hysteresis magnitude depends on the radiation-VPD lag. The soil moisture dry-down simulations also suggest that modeled root water potential and leaf water potential are both better indicators of the hysteresis magnitude than soil moisture, suggesting that plant water status is the main factor regulating the hysteretic relation between ET and VPD. Hence, the genesis and magnitude of the ET-VPD hysteresis are controlled directly both by biotic factors and by abiotic factors such as the time lag between radiation and VPD originating from boundary layer processes. Figure: measured eddy covariance evapotranspiration (ET) and vapor pressure deficit (VPD) time series normalized by their maximum values, collected in a grassland ecosystem; the magnitude of the hysteresis is quantified as the area enveloped by the ET-VPD relation (Ahys); the arrows and time ticks indicate the progression of the diurnal cycle from sunrise to sunset.
A Search for Quasi-periodic Oscillations in the Blazar 1ES 1959+650
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xiao-Pan; Luo, Yu-Hui; Yang, Hai-Yan
We have searched for quasi-periodic oscillations (QPOs) in the 15 GHz light curve of the BL Lac object 1ES 1959+650, monitored by the Owens Valley Radio Observatory 40 m telescope during the period from 2008 January to 2016 February, using the Lomb–Scargle Periodogram, power spectral density (PSD), discrete autocorrelation function, and phase dispersion minimization (PDM) techniques. The red noise background has been established via the PSD method, and no QPO can be derived at the 3 σ confidence level accounting for the impact of the red noise variability. We conclude that the light curve of 1ES 1959+650 can be explained by a stochastic red noise process that contributes greatly to the total observed variability amplitude, dominates the power spectrum, causes spurious bumps and wiggles in the autocorrelation function, and can result in the variance of the folded light curve decreasing toward lower temporal frequencies when few-cycle, sinusoid-like patterns are present. Moreover, many early supposed periodicity claims for blazar light curves need to be reevaluated assuming red noise.
Analysis of Spatiotemporal Characteristics of Pandemic SARS Spread in Mainland China.
Cao, Chunxiang; Chen, Wei; Zheng, Sheng; Zhao, Jian; Wang, Jinfeng; Cao, Wuchun
2016-01-01
Severe acute respiratory syndrome (SARS) is one of the most severe emerging infectious diseases of the 21st century so far. SARS caused a pandemic that spread throughout mainland China for 7 months, infecting 5318 persons in 194 administrative regions. Using detailed mainland China epidemiological data, we study spatiotemporal aspects of this person-to-person contagious disease and simulate its spatiotemporal transmission dynamics via the Bayesian Maximum Entropy (BME) method. The BME reveals that SARS outbreaks show autocorrelation within certain spatial and temporal distances. We use BME to fit a theoretical covariance model that has a sine hole spatial component and exponential temporal component and obtain the weights of geographical and temporal autocorrelation factors. Using the covariance model, SARS dynamics were estimated and simulated under the most probable conditions. Our study suggests that SARS transmission varies in its epidemiological characteristics and SARS outbreak distributions exhibit palpable clusters on both spatial and temporal scales. In addition, the BME modelling demonstrates that SARS transmission features are affected by spatial heterogeneity, so we analyze potential causes. This may benefit epidemiological control of pandemic infectious diseases.
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2015-01-01
River water is a major source of drinking water on Earth, and its management is essential. The Yamuna is one of the main rivers of India, and the monthly variation of its water quality was compared across sites for each water parameter using statistical methods. Regression, correlation coefficients, autoregressive integrated moving average (ARIMA)/Box-Jenkins models, residual autocorrelation function (ACF), residual partial autocorrelation function (PACF), lag, fractal dimension, Hurst exponent, and predictability index were estimated to analyze trends and predict water quality. The predictive model is useful at the 95% confidence limits, and all water parameters show platykurtic distributions. Brownian motion (true random walk) behavior exists at different sites for BOD, AMM, and total Kjeldahl nitrogen (TKN). The quality of Yamuna River water at Hathnikund is good, declines at Nizamuddin, Mazawali, and Agra D/S, and regains good quality at Juhikha. At all sites, almost all parameters except pH and water temperature (WT) cross the prescribed limits of the World Health Organization (WHO) and the United States Environmental Protection Agency (EPA).
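As an indication of how an ARIMA/Box-Jenkins fit of this kind is set up in practice, a minimal statsmodels sketch follows; the synthetic series and the (1, 1, 1) order are assumptions for illustration, not the Yamuna data or the paper's fitted orders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
idx = pd.date_range("2000-01", periods=120, freq="MS")
bod = pd.Series(10.0 + np.cumsum(rng.normal(0.0, 0.3, 120)), index=idx)  # stand-in BOD series

fit = ARIMA(bod, order=(1, 1, 1)).fit()
print(fit.params)                 # AR/MA coefficients of the fitted model
print(fit.forecast(steps=6))      # six-month-ahead prediction
```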
Tan, Q; Tu, H W; Gu, C H; Li, X D; Li, R Z; Wang, M; Chen, S G; Cheng, Y J; Liu, Y M
2017-11-20
Objective: To explore the spatial distribution characteristics of occupational disease in Guangzhou and Foshan in 2006-2013 using a geographic information system, and to provide evidence for control strategies. Methods: Data on occupational disease diagnoses in Guangzhou and Foshan from 2006 through 2013 were collected and linked to the digital map at the administrative county level with ArcGIS 12.0 software for spatial analysis. Results: The maps of occupational disease and Moran's spatial autocorrelation analysis showed that spatial aggregation existed in the Shunde and Nanhai regions, with Moran's indices of 1.727 and -0.003. Local Moran's I spatial autocorrelation analysis identified the "positive high incidence region" and the "negative high incidence region" during 2006-2013. Trend analysis showed that diagnosed cases increased slightly and then declined from west to east, increased markedly from north to south, declined from southwest to northeast, and were high in the middle and low on both sides in the northwest-southeast direction. Conclusions: Occupational disease shows a clear geographical distribution in Guangzhou and Foshan. Corresponding prevention measures should be formulated according to this geographical distribution.
ERIC Educational Resources Information Center
Huitema, Bradley E.; McKean, Joseph W.
2007-01-01
Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
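Of the four tests named in this abstract, the Durbin-Watson statistic is the most widely available; a minimal sketch of applying it to the residuals of an interrupted time-series regression is shown below, with an invented two-phase series.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(8)
t = np.arange(60, dtype=float)
phase = (t >= 30).astype(float)                        # pre/post intervention indicator
y = 2.0 + 0.05 * t + 1.5 * phase + rng.normal(0.0, 1.0, 60)

X = sm.add_constant(np.column_stack([t, phase]))
res = sm.OLS(y, X).fit()
print(durbin_watson(res.resid))                        # values near 2 suggest independent errors
```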
Statistical Approach To Extraction Of Texture In SAR
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Kwok, Ronald
1992-01-01
Improved statistical method of extraction of textural features in synthetic-aperture-radar (SAR) images takes account of the effects of the scheme used to sample raw SAR data, system noise, resolution of the radar equipment, and speckle. The treatment of speckle is incorporated into an overall statistical treatment of speckle, system noise, and natural variations in texture. One computes the speckle autocorrelation function from the system transfer function, which expresses the effect of the radar aperture and incorporates the range and azimuth resolutions.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared with the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1976-01-01
This concise paper considers the effect on the autocorrelation function of a pseudonoise (PN) code when the acquisition scheme only integrates coherently over part of the code and then noncoherently combines these results. The peak-to-null ratio of the effective PN autocorrelation function is shown to degrade to the square root of n, where n is the number of PN symbols over which coherent integration takes place.
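For reference, the fully coherent baseline that this degradation is measured against is the two-valued autocorrelation of a maximal-length PN sequence; the short sketch below builds a length-7 m-sequence from a linear recurrence and verifies that its circular autocorrelation is N at zero lag and -1 at every other lag.

```python
import numpy as np

# Length-7 m-sequence from the recurrence a[n] = a[n-1] XOR a[n-3], seeded with 1, 1, 1.
bits = [1, 1, 1]
for n in range(3, 7):
    bits.append(bits[n - 1] ^ bits[n - 3])
chips = 1.0 - 2.0 * np.array(bits)                     # map {0, 1} -> {+1, -1}

# Circular autocorrelation: N at zero lag, -1 at all other lags.
acf = np.array([np.dot(chips, np.roll(chips, k)) for k in range(7)])
print(acf)                                             # [ 7. -1. -1. -1. -1. -1. -1.]
```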
Detection limit used for early warning in public health surveillance.
Kobari, Tsuyoshi; Iwaki, Kazuo; Nagashima, Tomomi; Ishii, Fumiyoshi; Hayashi, Yuzuru; Yajima, Takehiko
2009-06-01
A theory of detection limit, developed in analytical chemistry, is applied to public health surveillance to detect an outbreak of national emergencies such as natural disaster and bioterrorism. In this investigation, the influenza epidemic around the Tokyo area from 2003 to 2006 is taken as a model of normal and large-scale epidemics. The detection limit of the normal epidemic is used as a threshold with a specified level of significance to identify a sign of the abnormal epidemic among the daily variation in anti-influenza drug sales at community pharmacies. While auto-correlation of data is often an obstacle to an unbiased estimator of standard deviation involved in the detection limit, the analytical theory (FUMI) can successfully treat the auto-correlation of the drug sales in the same way as the auto-correlation appearing as 1/f noise in many analytical instruments.
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Masuoka, E. J. (Principal Investigator)
1981-01-01
The presence of positive serial correlation (autocorrelation) in remotely sensed data results in an underestimate of the variance-covariance matrix when calculated using contiguous pixels. This underestimate produces an inflation in F statistics. For a set of Thematic Mapper Simulator data (TMS), used to test the ability to discriminate a known geobotanical anomaly from its background, the inflation in F statistics related to serial correlation is between 7 and 70 times. This means that significance tests of means of the spectral bands initially appear to suggest that the anomalous site is very different in spectral reflectance and emittance from its background sites. However, this difference often disappears and is always dramatically reduced when compared to frequency distributions of test statistics produced by the comparison of simulated training sets possessing equal means, but which are composed of autocorrelated observations.
Autocorrelation and cross-correlation in time series of homicide and attempted homicide
NASA Astrophysics Data System (ADS)
Machado Filho, A.; da Silva, M. F.; Zebende, G. F.
2014-04-01
In this paper we propose to establish the relationship between homicides and attempted homicides by non-stationary time-series analysis. This analysis is carried out by Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we can identify a positive cross-correlation between homicides and attempted homicides. At the same time, viewed from the point of view of autocorrelation (DFA), the analysis can be more informative depending on the time scale: at short scales (days) we cannot identify autocorrelations, on the scale of weeks DFA presents anti-persistent behavior, and for long time scales (n > 90 days) DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to more accurate descriptive statistics of crime.
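A bare-bones DFA(1) implementation helps make the persistent/anti-persistent language concrete: the slope of log F(n) against log n is the scaling exponent, with values below 0.5 anti-persistent and above 0.5 persistent. The white-noise input and the scale choices are illustrative assumptions.

```python
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                         # integrated (profile) series
    F = []
    for n in scales:
        n_seg = len(y) // n
        rms = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

x = np.random.default_rng(9).standard_normal(4096)        # white noise -> exponent near 0.5
scales = np.array([8, 16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(alpha)
```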
Evaluation of terrain complexity by autocorrelation. [geomorphology and geobotany
NASA Technical Reports Server (NTRS)
Craig, R. G.
1982-01-01
The topographic complexity of various sections of the Ozark, Appalachian, and Interior Low Plateaus, as well as of the New England, Piedmont, Blue Ridge, Ouachita, and Valley and Ridge Provinces of the Eastern United States, was characterized. The variability of autocorrelation within a small area (a 7 1/2-minute quadrangle) was compared to the variability at widely separated and diverse areas within the same physiographic region, to measure the degree of uniformity of the processes that can be expected to be encountered within a given physiographic province. The variability of autocorrelation across the eight geomorphic regions was compared and contrasted. The total study area was partitioned into subareas homogeneous in terrain complexity. The relation between the complexity measured, the geomorphic process mix implied, and the way in which geobotanical information is modified into a more or less recognizable entity is demonstrated. Sampling strategy is described.
Spatio-Temporal Patterns of Barmah Forest Virus Disease in Queensland, Australia
Naish, Suchithra; Hu, Wenbiao; Mengersen, Kerrie; Tong, Shilu
2011-01-01
Background Barmah Forest virus (BFV) disease is a common and wide-spread mosquito-borne disease in Australia. This study investigated the spatio-temporal patterns of BFV disease in Queensland, Australia using geographical information system (GIS) tools and geostatistical analysis. Methods/Principal Findings We calculated the incidence rates and standardised incidence rates of BFV disease. Moran's I statistic was used to assess the spatial autocorrelation of BFV incidences. Spatial dynamics of BFV disease was examined using semi-variogram analysis. Interpolation techniques were applied to visualise and display the spatial distribution of BFV disease in statistical local areas (SLAs) throughout Queensland. Mapping of BFV disease by SLAs reveals the presence of substantial spatio-temporal variation over time. Statistically significant differences in BFV incidence rates were identified among age groups (χ² = 7587, df = 7327, p < 0.01). There was a significant positive spatial autocorrelation of BFV incidence for all four periods, with the Moran's I statistic ranging from 0.1506 to 0.2901 (p < 0.01). Semi-variogram analysis and smoothed maps created from interpolation techniques indicate that the pattern of spatial autocorrelation was not homogeneous across the state. Conclusions/Significance This is the first study to examine spatial and temporal variation in the incidence rates of BFV disease across Queensland using GIS and geostatistics. The BFV transmission varied with age and gender, which may be due to exposure rates or behavioural risk factors. There are differences in the spatio-temporal patterns of BFV disease which may be related to local socio-ecological and environmental factors. These research findings may have implications in the BFV disease control and prevention programs in Queensland. PMID:22022430
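The semivariogram step in analyses like this one is easy to prototype; below is a sketch of an isotropic empirical semivariogram binned by separation distance, using invented centroid coordinates and rates rather than the Queensland data.

```python
import numpy as np

def semivariogram(coords, values, bin_edges):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)               # each pair of areas once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()     # mean semivariance per distance bin
                     for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

rng = np.random.default_rng(10)
coords = rng.uniform(0.0, 100.0, size=(200, 2))          # hypothetical SLA centroids (km)
rates = np.sin(coords[:, 0] / 20.0) + rng.normal(0.0, 0.3, 200)
print(semivariogram(coords, rates, np.linspace(0.0, 50.0, 11)))
```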
Gijzel, Sanne M W; van de Leemput, Ingrid A; Scheffer, Marten; Roppolo, Mattia; Olde Rikkert, Marcel G M; Melis, René J F
2017-07-01
We currently still lack valid methods to dynamically measure resilience for stressors before the appearance of adverse health outcomes that hamper well-being. Quantifying an older adult's resilience in an early stage would aid complex decision-making in health care. Translating complex dynamical systems theory to humans, we hypothesized that three dynamical indicators of resilience (variance, temporal autocorrelation, and cross-correlation) in time series of self-rated physical, mental, and social health were associated with frailty levels in older adults. We monitored self-rated physical, mental, and social health during 100 days using daily visual analogue scale questions in 22 institutionalized older adults (mean age 84.0, SD: 5.9 years). Frailty was determined by the Survey of Health, Ageing and Retirement in Europe (SHARE) frailty index. The resilience indicators (variance, temporal autocorrelation, and cross-correlation) were calculated using multilevel models. The self-rated health time series of frail elderly exhibited significantly elevated variance in the physical, mental, and social domain, as well as significantly stronger cross-correlations between all three domains, as compared to the nonfrail group (all P < 0.001). Temporal autocorrelation was not significantly associated with frailty. We found supporting evidence for two out of three hypothesized resilience indicators to be related to frailty levels in older adults. By mirroring the dynamical resilience indicators to a frailty index, we delivered a first empirical base to validate and quantify the construct of systemic resilience in older adults in a dynamic way. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Vector Observation-Aided/Attitude-Rate Estimation Using Global Positioning System Signals
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, F. Landis
1997-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
Jacob, Benjamin J; Krapp, Fiorella; Ponce, Mario; Gottuzzo, Eduardo; Griffith, Daniel A; Novak, Robert J
2010-05-01
Spatial autocorrelation is problematic for classical hierarchical cluster detection tests commonly used in multi-drug resistant tuberculosis (MDR-TB) analyses, as considerable random error can occur. Therefore, when MDR-TB clusters are spatially autocorrelated, the assumption that the clusters are independently random is invalid. In this research, a product moment correlation coefficient (i.e., the Moran's coefficient) was used to quantify local spatial variation in multiple clinical and environmental predictor variables sampled in San Juan de Lurigancho, Lima, Peru. Initially, QuickBird 0.61 m data, encompassing the visible bands and the near infra-red bands, were selected to synthesize images of land cover attributes of the study site. Data of residential addresses of individual patients with smear-positive MDR-TB were geocoded, prevalence rates calculated and then digitally overlaid onto the satellite data within a 2 km buffer of 31 georeferenced health centers, using a 10 m² grid-based algorithm. Geographical information system (GIS)-gridded measurements of each health center were generated based on preliminary base maps of the georeferenced data aggregated to block groups and census tracts within each buffered area. A three-dimensional model of the study site was constructed based on a digital elevation model (DEM) to determine terrain covariates associated with the sampled MDR-TB covariates. Pearson's correlation was used to evaluate the linear relationship between the DEM and the sampled MDR-TB data. A SAS/GIS® module was then used to calculate univariate statistics and to perform linear and non-linear regression analyses using the sampled predictor variables. The estimates generated from a global autocorrelation analysis were then spatially decomposed into empirical orthogonal bases using a negative binomial regression with a non-homogeneous mean. Results of the DEM analyses indicated a statistically non-significant, linear relationship between georeferenced health centers and the sampled covariate elevation. The data exhibited positive spatial autocorrelation, and the decomposition of Moran's coefficient into uncorrelated, orthogonal map pattern components revealed global spatial heterogeneities necessary to capture latent autocorrelation in the MDR-TB model. It was thus shown that Poisson regression analyses and spatial eigenvector mapping can elucidate the mechanics of MDR-TB transmission by prioritizing clinical and environmental sampled predictor variables for identifying high risk populations.
Integrated autocorrelator based on superconducting nanowires.
Sahin, Döndü; Gaggero, Alessandro; Hoang, Thang Ba; Frucci, Giulia; Mattioli, Francesco; Leoni, Roberto; Beetz, Johannes; Lermer, Matthias; Kamp, Martin; Höfling, Sven; Fiore, Andrea
2013-05-06
We demonstrate an integrated autocorrelator based on two superconducting single-photon detectors patterned on top of a GaAs ridge waveguide. This device enables the on-chip measurement of the second-order intensity correlation function g(2)(τ). A polarization-independent device quantum efficiency in the 1% range is reported, with a timing jitter of 88 ps at 1300 nm. g(2)(τ) measurements of continuous-wave and pulsed laser excitations are demonstrated with no measurable crosstalk within our measurement accuracy.
Testing algorithms for critical slowing down
NASA Astrophysics Data System (ADS)
Cossu, Guido; Boyle, Peter; Christ, Norman; Jung, Chulwoo; Jüttner, Andreas; Sanfilippo, Francesco
2018-03-01
We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space per trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We compare the costs of the new algorithms with standard HMC evolution for pure gauge fields, studying the autocorrelation times of various quantities including the topological charge.
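The cost comparison rests on integrated autocorrelation times of the measured observables. Below is a minimal sketch of a standard windowed estimator; the Sokal-style truncation rule is one common choice, not necessarily the one used by the authors.

import numpy as np

def integrated_autocorr_time(x, c=5.0):
    """Windowed estimate of tau_int = 1/2 + sum_{t>=1} rho(t) (Sokal convention).

    rho(t) are the normalized autocorrelations; the sum is truncated at the
    smallest window W with W >= c * tau_int(W).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    acov = np.correlate(z, z, mode="full")[n - 1:] / n   # autocovariances at lags 0..n-1
    rho = acov / acov[0]
    tau = 0.5
    for t in range(1, n):
        tau += rho[t]
        if t >= c * tau:                 # self-consistent truncation window
            break
    return tau

# The variance of the sample mean is roughly 2 * tau_int * var(x) / N,
# so the effective number of independent samples is about N / (2 * tau_int).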
NASA Astrophysics Data System (ADS)
Smith, Tony E.; Lee, Ka Lok
2012-01-01
There is a common belief that the presence of residual spatial autocorrelation in ordinary least squares (OLS) regression leads to inflated significance levels in beta coefficients and, in particular, inflated levels relative to the more efficient spatial error model (SEM). However, our simulations show that this is not always the case. Hence, the purpose of this paper is to examine this question from a geometric viewpoint. The key idea is to characterize the OLS test statistic in terms of angle cosines and examine the geometric implications of this characterization. Our first result is to show that if the explanatory variables in the regression exhibit no spatial autocorrelation, then the distribution of test statistics for individual beta coefficients in OLS is independent of any spatial autocorrelation in the error term. Hence, inferences about betas exhibit all the optimality properties of the classic uncorrelated error case. However, a second more important series of results show that if spatial autocorrelation is present in both the dependent and explanatory variables, then the conventional wisdom is correct. In particular, even when an explanatory variable is statistically independent of the dependent variable, such joint spatial dependencies tend to produce "spurious correlation" that results in over-rejection of the null hypothesis. The underlying geometric nature of this problem is clarified by illustrative examples. The paper concludes with a brief discussion of some possible remedies for this problem.
NASA Technical Reports Server (NTRS)
Davies, Roger
1994-01-01
The spatial autocorrelation functions of broad-band longwave and shortwave radiances measured by the Earth Radiation Budget Experiment (ERBE) are analyzed as a function of view angle in an investigation of the general effects of scene inhomogeneity on radiation. For nadir views, the correlation distance of the autocorrelation function is about 900 km for longwave radiance and about 500 km for shortwave radiance, consistent with higher degrees of freedom in shortwave reflection. Both functions rise monotonically with view angle, but there is a substantial difference in the relative angular dependence of the shortwave and longwave functions, especially for view angles less than 50 deg. In this range, the increase with angle of the longwave functions is found to depend only on the expansion of pixel area with angle, whereas the shortwave functions show an additional dependence on angle that is attributed to the occlusion of inhomogeneities by cloud height variations. Beyond a view angle of about 50 deg, both longwave and shortwave functions appear to be affected by cloud sides. The shortwave autocorrelation functions do not satisfy the principle of directional reciprocity, thereby proving that the average scene is horizontally inhomogeneous over the scale of an ERBE pixel (1500 sq km). Coarse stratification of the measurements by cloud amount, however, indicates that the average cloud-free scene does satisfy directional reciprocity on this scale.
NASA Astrophysics Data System (ADS)
Petrov, Yevgeniy
2009-10-01
Searches for sources of the highest-energy cosmic rays traditionally have included looking for clusters of event arrival directions on the sky. The smallest cluster is a pair of events falling within some angular window. In contrast to the standard two-point (2-pt) autocorrelation analysis, this work takes into account the influence of the galactic magnetic field (GMF). The highest energy events, those above 50 EeV, collected by the surface detector of the Pierre Auger Observatory between January 1, 2004 and May 31, 2009 are used in the analysis. Having assumed protons as primaries, events are backtracked through the BSS/S, BSS/A, ASS/S and ASS/A versions of the Harari-Mollerach-Roulet (HMR) model of the GMF. For each version of the model, a 2-pt autocorrelation analysis is applied to the backtracked events and to 10^5 isotropic Monte Carlo realizations weighted by the Auger exposure. Scans in energy, separation angular window and different model parameters reveal clustering at different angular scales. Small-angle clustering at 2-3 deg is particularly interesting, and it is compared between different field scenarios. The strength of the autocorrelation signal at those angular scales differs between the BSS and ASS versions of the HMR model. The BSS versions of the model tend to defocus protons as they arrive at Earth, whereas the ASS versions, on the contrary, tend to focus them.
A method to detect layover and shadow based on distributed spaceborne single-baseline InSAR
NASA Astrophysics Data System (ADS)
Yun, Ren; Huanxin, Zou; Shilin, Zhou; Hao, Sun; Kefeng, Ji
2014-03-01
Layover and shadow are inevitable phenomena in InSAR; they seriously disrupt the continuity of interferometric phase images and complicate the subsequent phase unwrapping. Detecting layover and shadow is therefore important. This paper presents an approach to detect layover and shadow using the auto-correlation matrix and the amplitude of the two images. The method makes full use of the spatial information of neighboring pixels and can effectively detect layover and shadow regions even with low registration accuracy. Experimental results on simulated data verify the effectiveness of the algorithm.
A Tester for Carbon Nanotube Mode Lockers
NASA Astrophysics Data System (ADS)
Song, Yong-Won; Yamashita, Shinji
2007-05-01
We propose and demonstrate a tester for the laser pulsating operation of carbon nanotubes, employing a circulator whose second port provides an extra degree of freedom for accessing diverse nanotube samples. The nanotubes are deposited onto the end facet of a dummy optical fiber by a spray method, which guarantees simple sample loading with minimal perturbation of the optimized laser cavity condition. The resulting optical spectra, autocorrelation traces, and pulse trains of the laser outputs with qualified samples are presented.
Auto-correlation in the motor/imaginary human EEG signals: A vision about the FDFA fluctuations.
Zebende, Gilney Figueira; Oliveira Filho, Florêncio Mendes; Leyva Cruz, Juan Alberto
2017-01-01
In this paper we analyzed, by means of the FDFA root mean square fluctuation (rms) function, the motor/imaginary human activity recorded by 64-channel electroencephalography (EEG). We utilized the PhysioNet on-line databank, a publicly available database of human EEG signals, as a standardized reference database for this study. Herein, we report the use of the detrended fluctuation analysis (DFA) method for EEG analysis. We show that the complex time series of the EEG exhibits characteristic fluctuations depending on the analyzed channel of the scalp-recorded EEG. In order to demonstrate the effectiveness of the proposed technique, we analyzed four distinct channels, represented here by F332 and F637 (frontal region of the head) and P349 and P654 (parietal region of the head). We verified that the amplitude of the FDFA rms function is greater for the frontal channels than for the parietal ones. To tabulate this information more clearly, we define and calculate the difference between the FDFA values (in log scale) for the channels, thus defining a new path for the analysis of EEG signals. Finally, for the studied EEG signals we obtain the auto-correlation exponent αDFA by the DFA method, which reveals self-affinity at specific time scales. Our results show that this strategy can be applied to study human brain activity in EEG processing.
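For reference, a minimal single-channel DFA-1 sketch of the rms fluctuation function F(n) and the scaling exponent alpha; this is a generic illustration, not the authors' FDFA implementation, and the scales shown are arbitrary.

import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: returns F(n) for each window size n."""
    y = np.cumsum(signal - np.mean(signal))      # integrated (profile) series
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)         # local linear detrending (DFA-1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

# The scaling exponent alpha is the slope of log F(n) versus log n.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                    # white noise -> alpha close to 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]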
Dong, Wen; Yang, Kun; Xu, Quan-Li; Yang, Yu-Lian
2015-01-01
This study investigated the spatial distribution, spatial autocorrelation, temporal cluster, spatial-temporal autocorrelation and probable risk factors of H7N9 outbreaks in humans from March 2013 to December 2014 in China. The results showed that the epidemic spread with significant spatial-temporal autocorrelation. In order to describe the spatial-temporal autocorrelation of H7N9, an improved model was developed by introducing a spatial-temporal factor in this paper. Logistic regression analyses were utilized to investigate the risk factors associated with their distribution, and nine risk factors were significantly associated with the occurrence of A(H7N9) human infections: the spatial-temporal factor φ (OR = 2546669.382, p < 0.001), migration route (OR = 0.993, p < 0.01), river (OR = 0.861, p < 0.001), lake (OR = 0.992, p < 0.001), road (OR = 0.906, p < 0.001), railway (OR = 0.980, p < 0.001), temperature (OR = 1.170, p < 0.01), precipitation (OR = 0.615, p < 0.001) and relative humidity (OR = 1.337, p < 0.001). The improved model obtained a better prediction performance and a higher fitting accuracy than the traditional model: in the improved model, 90.1% (91/101) of the cases during February 2014 occurred in the high risk areas (predictive risk > 0.70) of the predictive risk map, whereas only 44.6% (45/101) of the cases fell within the high risk areas (predictive risk > 0.70) of the traditional model; the fitting accuracy of the improved model was 91.6%, which was superior to that of the traditional model (86.1%). The predictive risk map generated from the improved model revealed that the east and southeast of China were the high risk areas for A(H7N9) human infections in February 2014. These results provided baseline data for the control and prevention of future human infections. PMID:26633446
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Pretto, Lucas R.; Nogueira, Gesse E. C.; Freitas, Anderson Z.
2016-04-28
Functional modalities of Optical Coherence Tomography (OCT) based on speckle analysis are emerging in the literature. We propose a simple approach to the autocorrelation of OCT signal to enable volumetric flow rate differentiation, based on decorrelation time. Our results show that this technique could distinguish flows separated by 3 μl/min, limited by the acquisition speed of the system. We further perform a B-scan of gradient flow inside a microchannel, enabling the visualization of the drag effect on the walls.
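The flow contrast described above rests on the decorrelation time of the speckle intensity trace. A minimal sketch follows, assuming a 1/e threshold on the normalized autocovariance; the paper may use a different criterion, and the function name is hypothetical.

import numpy as np

def decorrelation_time(intensity, dt):
    """Lag at which the normalized autocovariance of an intensity trace drops to 1/e."""
    x = np.asarray(intensity, dtype=float)
    z = x - x.mean()
    acov = np.correlate(z, z, mode="full")[x.size - 1:]   # lags 0..n-1
    rho = acov / acov[0]
    below = np.where(rho < 1.0 / np.e)[0]
    return below[0] * dt if below.size else np.inf        # faster flow -> shorter decorrelation time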
Matthews, Luke J; DeWan, Peter; Rula, Elizabeth Y
2013-01-01
Studies of social networks, mapped using self-reported contacts, have demonstrated the strong influence of social connections on the propensity for individuals to adopt or maintain healthy behaviors and on their likelihood to adopt health risks such as obesity. Social network analysis may prove useful for businesses and organizations that wish to improve the health of their populations by identifying key network positions. Health traits have been shown to correlate across friendship ties, but evaluating network effects in large coworker populations presents the challenge of obtaining sufficiently comprehensive network data. The purpose of this study was to evaluate methods for using online communication data to generate comprehensive network maps that reproduce the health-associated properties of an offline social network. In this study, we examined three techniques for inferring social relationships from email traffic data in an employee population using thresholds based on: (1) the absolute number of emails exchanged, (2) logistic regression probability of an offline relationship, and (3) the highest ranked email exchange partners. As a model of the offline social network in the same population, a network map was created using social ties reported in a survey instrument. The email networks were evaluated based on the proportion of survey ties captured, comparisons of common network metrics, and autocorrelation of body mass index (BMI) across social ties. Results demonstrated that logistic regression predicted the greatest proportion of offline social ties, thresholding on number of emails exchanged produced the best match to offline network metrics, and ranked email partners demonstrated the strongest autocorrelation of BMI. Since each method had unique strengths, researchers should choose a method based on the aspects of offline behavior of interest. Ranked email partners may be particularly useful for purposes related to health traits in a social network.
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
Simple Ultraviolet Short-Pulse Intensity Diagnostic Method Using Atmosphere
NASA Astrophysics Data System (ADS)
Aota, Tatsuya; Takahashi, Eiichi; Losev, Leonid L.; Tabuchi, Takeyuki; Kato, Susumu; Matsumoto, Yuji; Okuda, Isao; Owadano, Yoshiro
2005-05-01
An ultraviolet (UV) short-pulse intensity diagnostic method using atmosphere as a nonlinear medium was developed. This diagnostic method is based on evaluating the ion charge of the two-photon ionization of atmospheric oxygen upon irradiation with a UV (238-299 nm) short-pulse laser. The observed ion signal increased proportionally to the input intensity to the power of ~2.2 during the two-photon ionization of atmospheric oxygen. An autocorrelator was constructed and used to successfully measure a UV laser pulse of ~400 fs duration. Since this diagnostic system is used in the open air under windowless conditions, it can be set along the beam path and used as a UV intensity monitor.
NASA Astrophysics Data System (ADS)
Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.
2013-11-01
Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
Aging Wiener-Khinchin theorem and critical exponents of 1/f^{β} noise.
Leibovich, N; Dechant, A; Lutz, E; Barkai, E
2016-11-01
The power spectrum of a stationary process may be calculated in terms of the autocorrelation function using the Wiener-Khinchin theorem. We here generalize the Wiener-Khinchin theorem for nonstationary processes and introduce a time-dependent power spectrum 〈S_{t_{m}}(ω)〉 where t_{m} is the measurement time. For processes with an aging autocorrelation function of the form 〈I(t)I(t+τ)〉=t^{Υ}ϕ_{EA}(τ/t), where ϕ_{EA}(x) is a nonanalytic function when x is small, we find aging 1/f^{β} noise. Aging 1/f^{β} noise is characterized by five critical exponents. We derive the relations between the scaled autocorrelation function and these exponents. We show that our definition of the time-dependent spectrum retains its interpretation as a density of Fourier modes and discuss the relation to the apparent infrared divergence of 1/f^{β} noise. We illustrate our results for blinking-quantum-dot models, single-file diffusion, and Brownian motion in a logarithmic potential.
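For reference, the classical theorem and the aging form quoted in the abstract, restated in the abstract's own notation.

% Classical Wiener--Khinchin theorem (stationary process):
S(\omega) = \int_{-\infty}^{\infty} \langle I(t)\, I(t+\tau) \rangle \, e^{-i\omega\tau}\, d\tau .

% Aging form considered in the abstract: a scale-invariant autocorrelation
\langle I(t)\, I(t+\tau) \rangle = t^{\Upsilon}\, \phi_{\mathrm{EA}}(\tau/t)
% leads to a time-dependent spectrum \langle S_{t_m}(\omega) \rangle that depends on the
% measurement time t_m and exhibits aging 1/f^{\beta} noise.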
Use of autocorrelation scanning in DNA copy number analysis.
Zhang, Liangcai; Zhang, Li
2013-11-01
Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. Autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets.
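A minimal sketch of the kind of statistic described: a sliding-window lag-1 autocorrelation of probe log-ratios along a chromosome. The window length and implementation details here are assumptions for illustration, not the published tool.

import numpy as np

def autocorrelation_scanning_profile(log_ratios, window=100):
    """Lag-1 autocorrelation of probe values in sliding windows along a chromosome."""
    x = np.asarray(log_ratios, dtype=float)
    prof = np.full(x.size, np.nan)
    half = window // 2
    for i in range(half, x.size - half):
        seg = x[i - half:i + half]
        seg = seg - seg.mean()
        denom = np.dot(seg, seg)
        if denom > 0:
            prof[i] = np.dot(seg[:-1], seg[1:]) / denom
    return prof   # values well above zero flag correlated measurement errors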
Nowosad, J; Stach, A; Kasprzyk, I; Grewling, Ł; Latałowa, M; Puc, M; Myszkowska, D; Weryszko-Chmielewska, E; Piotrowska-Weryszko, K; Chłopek, K; Majkowska-Wojciechowska, B; Uruska, A
The aim of the study was to determine the characteristics of temporal and space-time autocorrelation of pollen counts of Alnus, Betula, and Corylus in the air of eight cities in Poland. Daily average pollen concentrations were monitored over 8 years (2001-2005 and 2009-2011) using Hirst-designed volumetric spore traps. The spatial and temporal coherence of data was investigated using the autocorrelation and cross-correlation functions. The calculation and mathematical modelling of 61 correlograms were performed for up to 25 days back. The study revealed an association between temporal variations in Alnus, Betula, and Corylus pollen counts in Poland and three main groups of factors: (1) air mass exchange after the passage of a single weather front (30-40 % of pollen count variation); (2) long-lasting factors (50-60 %); and (3) random factors, including diurnal variations and measurement errors (10 %). These results can help to improve the quality of forecasting models.
Gudmundson, Sara; Eklöf, Anna; Wennergren, Uno
2015-08-07
How species respond to changes in environmental variability has been shown for single species, but the question remains whether these results are transferable to species when incorporated in ecological communities. Here, we address this issue by analysing the same species exposed to a range of environmental variabilities when (i) isolated or (ii) embedded in a food web. We find that all species in food webs exposed to temporally uncorrelated environments (white noise) show the same type of dynamics as isolated species, whereas species in food webs exposed to positively autocorrelated environments (red noise) can respond completely differently compared with isolated species. This is owing to species following their equilibrium densities in a positively autocorrelated environment that in turn enables species-species interactions to come into play. Our results give new insights into species' response to environmental variation. They especially highlight the importance of considering both species' interactions and environmental autocorrelation when studying population dynamics in a fluctuating environment.
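The white versus positively autocorrelated (red) environmental forcing contrasted above can be generated with a first-order autoregressive series. A minimal sketch with hypothetical parameter values:

import numpy as np

def environmental_noise(n, autocorr=0.0, sd=1.0, rng=None):
    """AR(1) environmental series: autocorr = 0 gives white noise, > 0 gives red noise."""
    rng = np.random.default_rng(rng)
    eps = np.zeros(n)
    innov_sd = sd * np.sqrt(1.0 - autocorr**2)   # keeps the stationary variance at sd**2
    for t in range(1, n):
        eps[t] = autocorr * eps[t - 1] + innov_sd * rng.standard_normal()
    return eps

white = environmental_noise(1000, autocorr=0.0)
red = environmental_noise(1000, autocorr=0.7)    # positively autocorrelated environment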
Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto
2012-05-25
In modern day drug discovery campaigns, computational chemists have to be concerned not only about improving the potency of molecules but also reducing any off-target ADMET activity. There are a plethora of antitargets that computational chemists may have to consider. Fortunately many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimental determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function, deprotonated charge state, and predicted the binding orientation for our collection of molecules.
NASA Astrophysics Data System (ADS)
Mouas, Mohamed; Gasser, Jean-Georges; Hellal, Slimane; Grosdidier, Benoît; Makradi, Ahmed; Belouettar, Salim
2012-03-01
Molecular dynamics (MD) simulations of liquid tin between its melting point and 1600 °C have been performed in order to interpret and discuss the ionic structure. The interactions between ions are described by a new accurate pair potential built within the pseudopotential formalism and the linear response theory. The calculated structure factor, which reflects the main information on the local atomic order in liquids, is compared to diffraction measurements. Having some confidence in the ability of this pair potential to give a good representation of the atomic structure, we then focused our attention on the investigation of the atomic transport properties through MD computations of the velocity autocorrelation function and stress autocorrelation function. Using the Green-Kubo formula (for the first time to our knowledge for liquid tin) we determine the macroscopic transport properties from the corresponding microscopic time autocorrelation functions. The self-diffusion coefficient and the shear viscosity as functions of temperature are found to be in good agreement with the experimental data.
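The Green-Kubo route mentioned above expresses the self-diffusion coefficient as the time integral of the velocity autocorrelation function, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt. A minimal post-processing sketch for an MD velocity trajectory follows; the array layout is an assumption for illustration, not the authors' code.

import numpy as np

def self_diffusion_green_kubo(velocities, dt, max_lag):
    """Green-Kubo self-diffusion: D = (1/3) * integral of the velocity autocorrelation.

    velocities: array of shape (n_frames, n_atoms, 3) from an MD trajectory.
    """
    n_frames = velocities.shape[0]
    n_lags = min(max_lag, n_frames - 1)
    vacf = np.zeros(n_lags)
    for lag in range(n_lags):
        # <v(0) . v(lag)> averaged over time origins and atoms
        dots = np.sum(velocities[:n_frames - lag] * velocities[lag:], axis=2)
        vacf[lag] = dots.mean()
    return np.trapz(vacf, dx=dt) / 3.0

# The shear viscosity follows analogously from the time integral of the
# stress autocorrelation function (with the appropriate V/(k_B T) prefactor).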
NASA Technical Reports Server (NTRS)
Yagodinskiy, V. N.; Konovalenko, Z. P.; Druzhinin, I. P.
1974-01-01
An analysis of data from epidemics makes it possible to determine their principal causes, governed by environmental factors (solar activity, etc.). The results of an analysis of the periodicity of the epidemic process in the case of diphtheria, conducted with the aid of autocorrelation and spectral methods, are presented. Numerical data (annual figures) are used on the dynamics of diphtheria in 50 regions (points) with a total duration of 2,777 years.
Mattsson, Brady J.; Zipkin, Elise F.; Gardner, Beth; Blank, Peter J.; Sauer, John R.; Royle, J. Andrew
2013-01-01
Understanding interactions between mobile species distributions and landcover characteristics remains an outstanding challenge in ecology. Multiple factors could explain species distributions including endogenous evolutionary traits leading to conspecific clustering and endogenous habitat features that support life history requirements. Birds are a useful taxon for examining hypotheses about the relative importance of these factors among species in a community. We developed a hierarchical Bayes approach to model the relationships between bird species occupancy and local landcover variables accounting for spatial autocorrelation, species similarities, and partial observability. We fit alternative occupancy models to detections of 90 bird species observed during repeat visits to 316 point-counts forming a 400-m grid throughout the Patuxent Wildlife Research Refuge in Maryland, USA. Models with landcover variables performed significantly better than our autologistic and null models, supporting the hypothesis that local landcover heterogeneity is important as an exogenous driver for species distributions. Conspecific clustering alone was a comparatively poor descriptor of local community composition, but there was evidence for spatial autocorrelation in all species. Considerable uncertainty remains whether landcover combined with spatial autocorrelation is most parsimonious for describing bird species distributions at a local scale. Spatial structuring may be weaker at intermediate scales within which dispersal is less frequent, information flows are localized, and landcover types become spatially diversified and therefore exhibit little aggregation. Examining such hypotheses across species assemblages contributes to our understanding of community-level associations with conspecifics and landscape composition.
Modulation transfer function of a fish-eye lens based on the sixth-order wave aberration theory.
Jia, Han; Lu, Lijun; Cao, Yiqing
2018-01-10
A calculation program of the modulation transfer function (MTF) of a fish-eye lens is developed with the autocorrelation method, in which the sixth-order wave aberration theory of ultra-wide-angle optical systems is used to simulate the wave aberration distribution at the exit pupil of the optical systems. The autocorrelation integral is processed with the Gauss-Legendre integral, and the magnification chromatic aberration is discussed to calculate polychromatic MTF. The MTF calculation results of a given example are then compared with those previously obtained based on the fourth-order wave aberration theory of plane-symmetrical optical systems and with those from the Zemax program. The study shows that MTF based on the sixth-order wave aberration theory has satisfactory calculation accuracy even for a fish-eye lens with a large acceptance aperture. And the impacts of different types of aberrations on the MTF of a fish-eye lens are analyzed. Finally, we apply the self-adaptive and normalized real-coded genetic algorithm and the MTF developed in the paper to optimize the Nikon F/2.8 fish-eye lens; consequently, the optimized system shows better MTF performances than those of the original design.
Souza, Michele; Eisenmann, Joey; Chaves, Raquel; Santos, Daniel; Pereira, Sara; Forjaz, Cláudia; Maia, José
2016-10-01
In this paper, three different statistical approaches were used to investigate short-term tracking of cardiorespiratory and performance-related physical fitness among adolescents. Data were obtained from the Oporto Growth, Health and Performance Study and comprised 1203 adolescents (549 girls) divided into two age cohorts (10-12 and 12-14 years) followed for three consecutive years, with annual assessment. Cardiorespiratory fitness was assessed with 1-mile run/walk test; 50-yard dash, standing long jump, handgrip, and shuttle run test were used to rate performance-related physical fitness. Tracking was expressed in three different ways: auto-correlations, multilevel modelling with crude and adjusted model (for biological maturation, body mass index, and physical activity), and Cohen's Kappa (κ) computed in IBM SPSS 20.0, HLM 7.01 and Longitudinal Data Analysis software, respectively. Tracking of physical fitness components was (1) moderate-to-high when described by auto-correlations; (2) low-to-moderate when crude and adjusted models were used; and (3) low according to Cohen's Kappa (κ). These results demonstrate that when describing tracking, different methods should be considered since they provide distinct and more comprehensive views about physical fitness stability patterns.
Land cover mapping at sub-pixel scales
NASA Astrophysics Data System (ADS)
Makido, Yasuyo Kato
One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial details depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area and large-area coverage is therefore costly. Much research has focused on attempting to extract land cover types at sub-pixel scale, and little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution using remote sensing imagery at sub-pixel level. The "pixel-swapping" optimization algorithm, which was proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models, Gaussian, Exponential, and IDW, the pixel swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weight could be used to increase accuracy and sub-pixel spatial autocorrelation instead of using these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. These three methods are applied to classified Landsat ETM+ data that have been resampled to 210 meters. The results suggested that the simultaneous method can be considered the optimum method in terms of accuracy performance and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate multiple land cover mosaic in East China. Sub-areas for both sites are used to examine how the characteristics of the landscape affect the ability of the optimum technique. Three measures, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique could increase the classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, the satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from the coarse resolution satellite sensor imagery.
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than estimates based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and also the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
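A minimal sketch of a Gaussian likelihood with an AR(1) error model of the kind discussed above; the study found an AR(2) model necessary, which adds one further lag term and a bivariate treatment of the first two residuals. The function and parameter names are illustrative.

import numpy as np

def log_likelihood_ar1(residuals, sigma, phi):
    """Gaussian log-likelihood of residuals under a first-order autoregressive error model.

    e_t = phi * e_{t-1} + eta_t,  eta_t ~ N(0, sigma^2),  with |phi| < 1.
    The first residual is treated with its stationary variance sigma^2 / (1 - phi^2).
    """
    e = np.asarray(residuals, dtype=float)
    innov = e[1:] - phi * e[:-1]                       # whitened innovations
    var0 = sigma**2 / (1.0 - phi**2)
    ll = -0.5 * (np.log(2 * np.pi * var0) + e[0]**2 / var0)
    ll += -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2)
    return ll

# In a Bayesian setup, sigma and phi are sampled jointly with the soil hydraulic
# parameters; setting phi = 0 recovers the usual independent-error likelihood.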
NASA Astrophysics Data System (ADS)
Cakir, R.; Walsh, T. J.; Norman, D. K.
2017-12-01
The Washington Geological Survey (WGS) has been performing multi-method near-surface geophysical surveys to help assess potential earthquake damage at public schools in Washington. We have been conducting active and passive seismic surveys, estimating shear-wave velocity (Vs) profiles, and then determining NEHRP soil classifications based on Vs30m values at school sites. The survey methods we have used include 1D and 2D MASW and MAM, P- and S-wave refraction, horizontal-to-vertical spectral ratio (H/V), and two-station SPAC (2ST-SPAC) to measure Vs and Vp at shallow (0-70 m) and greater depths at the sites. We have also run ground-penetrating radar (GPR) surveys at the sites to check for horizontal subsurface variations along and between the seismic survey lines and at the actual locations of the school buildings. The seismic survey results were then used to calculate Vs30m for determining the NEHRP soil classifications at school sites, and thus soil amplification effects on the ground motions. The resulting shear-wave velocity profiles can also be used for site response and liquefaction potential studies, as well as for improving the national Vs30m database, essential information for ShakeMap and ground motion modeling efforts in Washington and the Pacific Northwest. To estimate casualties and nonstructural and structural losses caused by potential earthquakes in the region, we used these seismic site characterization results, together with structural engineering evaluations based on ASCE 41 or FEMA 154 (Rapid Visual Screening), as inputs to FEMA Hazus Advanced Engineering Building Module (AEBM) analysis. Example surveys are presented for school sites in western and eastern Washington.
Multichannel heterodyning for wideband interferometry, correlation and signal processing
Erskine, David J.
1999-01-01
A method of signal processing a high bandwidth signal by coherently subdividing it into many narrow bandwidth channels which are individually processed at lower frequencies in a parallel manner. Autocorrelation and correlations can be performed using reference frequencies which may drift slowly with time, reducing cost of device. Coordinated adjustment of channel phases alters temporal and spectral behavior of net signal process more precisely than a channel used individually. This is a method of implementing precision long coherent delays, interferometers, and filters for high bandwidth optical or microwave signals using low bandwidth electronics. High bandwidth signals can be recorded, mathematically manipulated, and synthesized.
Biased Metropolis Sampling for Rugged Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Berg, Bernd A.
2003-11-01
Metropolis simulations of all-atom models of peptides (i.e. small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as for generalized ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two, and a similar improvement due to generalized ensemble methods enters multiplicatively.
Multichannel heterodyning for wideband interferometry, correlation and signal processing
Erskine, D.J.
1999-08-24
A method is disclosed of signal processing a high bandwidth signal by coherently subdividing it into many narrow bandwidth channels which are individually processed at lower frequencies in a parallel manner. Autocorrelation and correlations can be performed using reference frequencies which may drift slowly with time, reducing cost of device. Coordinated adjustment of channel phases alters temporal and spectral behavior of net signal process more precisely than a channel used individually. This is a method of implementing precision long coherent delays, interferometers, and filters for high bandwidth optical or microwave signals using low bandwidth electronics. High bandwidth signals can be recorded, mathematically manipulated, and synthesized. 50 figs.
Power quality analysis based on spatial correlation
NASA Astrophysics Data System (ADS)
Li, Jiangtao; Zhao, Gang; Liu, Haibo; Li, Fenghou; Liu, Xiaoli
2018-03-01
With ongoing industrialization and urbanization, electricity plays an ever larger role in production and daily life, so the prediction of power quality is of growing significance. Traditional power quality analysis methods include power quality data compression, disturbance event pattern classification, and disturbance parameter calculation. Under certain conditions, these methods can predict power quality. This paper analyses the temporal variation of power quality in one provincial power grid in China and then examines the spatial distribution of power quality using spatial autocorrelation. The aim is to show that this geographical approach is effective for mining the latent information in power quality data.
Under-reported data analysis with INAR-hidden Markov chains.
Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David
2016-11-20
In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models.
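A minimal simulation sketch of an INAR(1) count series with binomial thinning and a simple under-reporting layer, to illustrate the kind of process such models describe; the hidden-Markov structure of the published model may differ in detail, and all parameter values here are hypothetical.

import numpy as np

def simulate_underreported_inar1(n, alpha, lam, omega, q, rng=None):
    """INAR(1) counts X_t = alpha o X_{t-1} + Poisson(lam), observed with under-reporting.

    'o' denotes binomial thinning.  With probability omega a period is under-reported
    and each true case is observed only with probability q; otherwise the count is
    reported in full.
    """
    rng = np.random.default_rng(rng)
    x = np.zeros(n, dtype=int)   # latent (true) counts
    y = np.zeros(n, dtype=int)   # observed counts
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
        if rng.random() < omega:             # under-reported period
            y[t] = rng.binomial(x[t], q)
        else:
            y[t] = x[t]
    return x, y

latent, observed = simulate_underreported_inar1(500, alpha=0.5, lam=2.0, omega=0.3, q=0.4)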
Self tuning system for industrial surveillance
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2000-01-01
A method and system for automatically establishing operational parameters of a statistical surveillance system. The method and system perform a frequency-domain transformation on time-dependent data; a first Fourier composite is formed, serial correlation is removed, a series of Gaussian whiteness tests is performed along with an autocorrelation test, Fourier coefficients are stored, and a second Fourier composite is formed. Pseudorandom noise is added, and a Monte Carlo simulation is performed to establish SPRT missed alarm probabilities, which are tested with a synthesized signal. A false alarm test is then empirically evaluated and, if the result is less than a desired target value, the SPRT probabilities are used for performing surveillance.
NASA Astrophysics Data System (ADS)
Zelener, B. B.; Zelener, B. V.; Manykin, E. A.; Bronin, S. Ya; Bobrov, A. A.; Khikhlukha, D. R.
2018-01-01
We present molecular dynamics calculations of the self-diffusion and conductivity of the electron and ion components of an ultracold plasma, in comparison with available theoretical and experimental data. For the ion self-diffusion coefficient, good agreement was obtained with experiments on ultracold plasma. The self-diffusion results also agree well with other calculations performed for the same values of the coupling parameter but at high temperatures. The difference between conductivity values obtained from the current autocorrelation function and from the diffusion coefficient is discussed.
Long-term correlations and cross-correlations in IBovespa and constituent companies
NASA Astrophysics Data System (ADS)
de Lima, Neílson F.; Fernandes, Leonardo H. S.; Jale, Jader S.; de Mattos Neto, Paulo S. G.; Stošić, Tatijana; Stošić, Borko; Ferreira, Tiago A. E.
2018-02-01
We study auto-correlations and cross-correlations of the IBovespa index and its constituent companies. We use Detrended Fluctuation Analysis (DFA) to quantify auto-correlations and Detrended Cross-Correlation Analysis (DCCA) to quantify cross-correlations in the absolute returns of daily closing prices of IBovespa and the individual companies. We find persistent long-term correlations and cross-correlations, which are weaker than those found for the US market. Our results indicate that the indices of developing markets exhibit weaker coupling with their constituents than those of mature developed markets.
Fernandez, Michael; Abreu, Jose I; Shi, Hongqing; Barnard, Amanda S
2016-11-14
The possibility of band gap engineering in graphene opens countless new opportunities for application in nanoelectronics. In this work, the energy gaps of 622 computationally optimized graphene nanoflakes were mapped to topological autocorrelation vectors using machine learning techniques. Machine learning modeling revealed that the most relevant correlations appear at topological distances in the range of 1 to 42 with prediction accuracy higher than 80%. The data-driven model can statistically discriminate between graphene nanoflakes with different energy gaps on the basis of their molecular topology.
Extracting Damping Ratio from Dynamic Data and Numerical Solutions
NASA Technical Reports Server (NTRS)
Casiano, M. J.
2016-01-01
There are many ways to extract damping parameters from data or models. This Technical Memorandum provides a quick reference for some of the more common approaches used in dynamics analysis. Described are six methods of extracting damping from data: the half-power method, logarithmic decrement (decay rate) method, an autocorrelation/power spectral density fitting method, a frequency response fitting method, a random decrement fitting method, and a newly developed half-quadratic gain method. Additionally, state-space models and finite element method modeling tools, such as COMSOL Multiphysics (COMSOL), provide a theoretical damping via complex frequency. Each method has its advantages which are briefly noted. There are also likely many other advanced techniques in extracting damping within the operational modal analysis discipline, where an input excitation is unknown; however, these approaches discussed here are objective, direct, and can be implemented in a consistent manner.
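As a worked example of one of the listed methods, the logarithmic decrement: from two response peaks separated by n cycles, delta = (1/n) ln(x_i / x_{i+n}), and the damping ratio is zeta = delta / sqrt(4*pi^2 + delta^2). A minimal sketch:

import numpy as np

def damping_ratio_log_decrement(peak_i, peak_i_plus_n, n_cycles=1):
    """Damping ratio from the logarithmic decrement of a free-decay response."""
    delta = np.log(peak_i / peak_i_plus_n) / n_cycles
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Example: successive peaks of 1.00 and 0.80 give a damping ratio of roughly 0.035.
print(damping_ratio_log_decrement(1.00, 0.80))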
Perrakis, Konstantinos; Gryparis, Alexandros; Schwartz, Joel; Le Tertre, Alain; Katsouyanni, Klea; Forastiere, Francesco; Stafoggia, Massimo; Samoli, Evangelia
2014-12-10
An important topic when estimating the effect of air pollutants on human health is choosing the best method to control for seasonal patterns and time varying confounders, such as temperature and humidity. Semi-parametric Poisson time-series models include smooth functions of calendar time and weather effects to control for potential confounders. Case-crossover (CC) approaches are considered efficient alternatives that control seasonal confounding by design and allow inclusion of smooth functions of weather confounders through their equivalent Poisson representations. We evaluate both methodological designs with respect to seasonal control and compare spline-based approaches, using natural splines and penalized splines, and two time-stratified CC approaches. For the spline-based methods, we consider fixed degrees of freedom, minimization of the partial autocorrelation function, and general cross-validation as smoothing criteria. Issues of model misspecification with respect to weather confounding are investigated under simulation scenarios, which allow quantifying omitted, misspecified, and irrelevant-variable bias. The simulations are based on fully parametric mechanisms designed to replicate two datasets with different mortality and atmospheric patterns. Overall, minimum partial autocorrelation function approaches provide more stable results for high mortality counts and strong seasonal trends, whereas natural splines with fixed degrees of freedom perform better for low mortality counts and weak seasonal trends followed by the time-season-stratified CC model, which performs equally well in terms of bias but yields higher standard errors. Copyright © 2014 John Wiley & Sons, Ltd.
Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumonteil, E.; Malvagi, F.
2012-07-01
The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWR or large nuclear cores, the equilibrium is never found, or at least may take a long time to reach, and the variance estimate allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. The two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical model of the autocorrelation function, based on Gauss-Markov stochastic processes, is also presented. Tests are performed with Tripoli4 on a PWR pin cell. (authors)
Evaluation of site effects in Loja basin (southern Ecuador)
NASA Astrophysics Data System (ADS)
Guartán, J.; Navarro, M.; Soto, J.
2013-05-01
Site effect assessment based on subsurface ground conditions is often crucial for estimating urban seismic hazard. In order to evaluate the site effects in the intra-mountain basin of Loja (southern Ecuador), geological and geomorphological surveys and ambient noise measurements were carried out. A classification of shallow geologic materials was performed through geological cartography and the use of geotechnical data and geophysical surveys. Seven lithological formations were analyzed, both in composition and in thickness of the existing materials. The shear-wave velocity structure in the center of the basin, composed of alluvial materials, was evaluated by means of inversion of Rayleigh wave dispersion data obtained from vertical-component array records of ambient noise. The VS30 structure was estimated and an average value of 346 m/s was obtained. This value agrees with the results obtained from SPT N-values (306-368 m/s). Short-period ambient noise observations were performed at 72 sites on a 500 m × 500 m grid. The horizontal-to-vertical spectral ratio (HVSR) method was applied in order to determine a ground predominant period distribution map. This map reveals an irregular distribution of predominant period values, ranging from 0.1 to 1.0 s, consistent with the heterogeneity of the basin. Lower period values are found in the harder formation (Quillollaco formation), while higher values are predominantly obtained in the alluvial formation. These results will be used in the evaluation of ground dynamic properties and will be included in the seismic microzoning of the Loja basin. Keywords: Landform classification, Ambient noise, SPAC method, Rayleigh waves, Shear velocity profile, Ground predominant period.
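A minimal sketch of the HVSR calculation used to map predominant periods, assuming three-component ambient-noise records sampled at fs; the windowing and smoothing here are simplifications of common practice, not the authors' exact processing, and the function name is illustrative.

import numpy as np

def hvsr(north, east, vertical, fs, nperseg=4096):
    """Horizontal-to-vertical spectral ratio from three-component ambient noise."""
    def mean_amp_spectrum(x):
        x = np.asarray(x, dtype=float)
        n_win = x.size // nperseg
        segs = x[:n_win * nperseg].reshape(n_win, nperseg)
        segs = segs * np.hanning(nperseg)                      # taper each window
        return np.mean(np.abs(np.fft.rfft(segs, axis=1)), axis=0)

    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    h = np.sqrt(mean_amp_spectrum(north) * mean_amp_spectrum(east))   # geometric mean of horizontals
    v = mean_amp_spectrum(vertical)
    ratio = np.divide(h, v, out=np.zeros_like(h), where=v > 0)
    f0 = freqs[1:][np.argmax(ratio[1:])]                       # skip the zero-frequency bin
    return freqs, ratio, 1.0 / f0                              # predominant period = 1 / f0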
Structured recording of intraoperative surgical workflows
NASA Astrophysics Data System (ADS)
Neumuth, T.; Durstewitz, N.; Fischer, M.; Strauss, G.; Dietz, A.; Meixensberger, J.; Jannin, P.; Cleary, K.; Lemke, H. U.; Burgert, O.
2006-03-01
Surgical Workflows are used for the methodical and scientific analysis of surgical interventions. The approach described here is a step towards developing surgical assist systems based on Surgical Workflows and integrated control systems for the operating room of the future. This paper describes concepts and technologies for the acquisition of Surgical Workflows by monitoring surgical interventions and their presentation. Establishing systems which support the Surgical Workflow in operating rooms requires a multi-staged development process beginning with the description of these workflows. A formalized description of surgical interventions is needed to create a Surgical Workflow. This description can be used to analyze and evaluate surgical interventions in detail. We discuss the subdivision of surgical interventions into work steps regarding different levels of granularity and propose a recording scheme for the acquisition of manual surgical work steps from running interventions. To support the recording process during the intervention, we introduce a new software architecture. Core of the architecture is our Surgical Workflow editor that is intended to deal with the manifold, complex and concurrent relations during an intervention. Furthermore, a method for an automatic generation of graphs is shown which is able to display the recorded surgical work steps of the interventions. Finally we conclude with considerations about extensions of our recording scheme to close the gap to S-PACS systems. The approach was used to record 83 surgical interventions from 6 intervention types from 3 different surgical disciplines: ENT surgery, neurosurgery and interventional radiology. The interventions were recorded at the University Hospital Leipzig, Germany and at the Georgetown University Hospital, Washington, D.C., USA.
Duncan, Dustin T; Kawachi, Ichiro; White, Kellee; Williams, David R
2013-08-01
The geography of recreational open space might be inequitable in terms of minority neighborhood racial/ethnic composition and neighborhood poverty, perhaps due in part to residential segregation. This study evaluated the association between minority neighborhood racial/ethnic composition, neighborhood poverty, and recreational open space in Boston, Massachusetts (US). Across Boston census tracts, we computed percent non-Hispanic Black, percent Hispanic, and percent families in poverty as well as recreational open space density. We evaluated spatial autocorrelation in study variables and in the ordinary least squares (OLS) regression residuals via the Global Moran's I. We then computed Spearman correlations between the census tract socio-demographic characteristics and recreational open space density, including correlations adjusted for spatial autocorrelation. After this, we computed OLS regressions or spatial regressions as appropriate. Significant positive spatial autocorrelation was found for neighborhood socio-demographic characteristics (all p value = 0.001). We found marginally significant positive spatial autocorrelation in recreational open space (Global Moran's I = 0.082; p value = 0.053). However, we found no spatial autocorrelation in the OLS regression residuals, which indicated that spatial models were not appropriate. There was a negative correlation between census tract percent non-Hispanic Black and recreational open space density (r_S = -0.22; conventional p value = 0.005; spatially adjusted p value = 0.019) as well as a negative correlation between predominantly non-Hispanic Black census tracts (>60 % non-Hispanic Black in a census tract) and recreational open space density (r_S = -0.23; conventional p value = 0.003; spatially adjusted p value = 0.007). In bivariate and multivariate OLS models, percent non-Hispanic Black in a census tract and predominantly Black census tracts were associated with decreased density of recreational open space (p value < 0.001). Consistent with several previous studies in other geographic locales, we found that Black neighborhoods in Boston were less likely to have recreational open spaces, indicating the need for policy interventions promoting equitable access. Such interventions may contribute to reductions in obesity and in obesity-related disparities.
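For readers unfamiliar with the Global Moran's I used above, a minimal sketch follows. It assumes a simple binary contiguity weights matrix; the tract values and weights below are toy data, not the Boston data set analyzed in the study.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and a spatial weights matrix w (n x n)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    w = np.asarray(w, dtype=float)
    n = len(x)
    s0 = w.sum()                       # sum of all weights
    num = z @ w @ z                    # sum_ij w_ij * z_i * z_j
    den = (z ** 2).sum()
    return (n / s0) * (num / den)

# Toy example: four tracts on a line, rook-style contiguity weights
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
open_space_density = np.array([0.9, 0.8, 0.2, 0.1])
print(morans_i(open_space_density, w))    # positive value: similar tracts cluster
```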
Dean, Roger T; Dunsmuir, William T M
2016-06-01
Many articles on perception, performance, psychophysiology, and neuroscience seek to relate pairs of time series through assessments of their cross-correlations. Most such series are individually autocorrelated: they do not comprise independent values. Given this situation, an unfounded reliance is often placed on cross-correlation as an indicator of relationships (e.g., referent vs. response, leading vs. following). Such cross-correlations can indicate spurious relationships, because of autocorrelation. Given these dangers, we here simulated how and why such spurious conclusions can arise, to provide an approach to resolving them. We show that when multiple pairs of series are aggregated in several different ways for a cross-correlation analysis, problems remain. Finally, even a genuine cross-correlation function does not answer key motivating questions, such as whether there are likely causal relationships between the series. Thus, we illustrate how to obtain a transfer function describing such relationships, informed by any genuine cross-correlations. We illustrate the confounds and the meaningful transfer functions by two concrete examples, one each in perception and performance, together with key elements of the R software code needed. The approach involves autocorrelation functions, the establishment of stationarity, prewhitening, the determination of cross-correlation functions, the assessment of Granger causality, and autoregressive model development. Autocorrelation also limits the interpretability of other measures of possible relationships between pairs of time series, such as mutual information. We emphasize that further complexity may be required as the appropriate analysis is pursued fully, and that causal intervention experiments will likely also be needed.
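A compressed sketch of the prewhitening workflow advocated above (the paper supplies R code; the version below is plain Python/NumPy and is illustrative only): fit an AR(p) model to the referent series, filter both series with the same AR coefficients, and only then inspect the cross-correlation function. The function names and the least-squares AR fit are assumptions of this sketch.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients for a (mean-removed) series x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    return phi

def prewhiten(x, phi):
    """Apply the AR filter (1 - phi_1 B - ... - phi_p B^p) to x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    p = len(phi)
    return np.array([x[t] - phi @ x[t - p:t][::-1] for t in range(p, len(x))])

def ccf(a, b, max_lag):
    """Cross-correlation of two equal-length series at lags -max_lag..max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return {k: np.mean(a[max(0, -k):n - max(0, k)] * b[max(0, k):n + min(0, k)])
            for k in range(-max_lag, max_lag + 1)}

# usage sketch (x: referent series, y: response series, both numpy arrays):
# phi = fit_ar(x, p=3)
# print(ccf(prewhiten(x, phi), prewhiten(y, phi), max_lag=10))
```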
Statistical regularities of Carbon emission trading market: Evidence from European Union allowances
NASA Astrophysics Data System (ADS)
Zheng, Zeyu; Xiao, Rui; Shi, Haibo; Li, Guihong; Zhou, Xiaofeng
2015-05-01
As an emerging financial market, the carbon emission trading market has seen its trading value increase markedly. In recent years, carbon emission allowances have become a form of investment: they are bought and sold not only by carbon emitters but also by investors. In this paper, we analyzed the price fluctuations of European Union allowances (EUA) futures in the European Climate Exchange (ECX) market from 2007 to 2011. The return time series displays a symmetric, power-law probability density function. We found that there are only short-range correlations in the price changes (returns), while there are long-range correlations in the absolute values of the price changes (volatility). Further, the detrended fluctuation analysis (DFA) approach was applied with a focus on long-range autocorrelations and the Hurst exponent. We observed long-range power-law autocorrelations in the volatility, which quantifies risk, and found that they decay much more slowly than the autocorrelation of the return time series. Our analysis also showed that significant cross correlations exist between the EUA return time series and many other returns. These cross correlations span a wide range of fields, including stock markets, energy-related commodity futures, and financial futures. The significant cross-correlations between energy-related futures and EUA indicate the physical relationship between carbon emission and the energy production process. Additionally, the cross-correlations between financial futures and EUA indicate that speculation may become an important factor affecting the price of EUA. Finally, we modeled the long-range volatility time series of EUA with a particular version of the GARCH process, and the result also suggests long-range volatility autocorrelations.
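A bare-bones version of the DFA procedure used to quantify long-range volatility autocorrelations might look as follows; the synthetic series stands in for the EUA returns, and the window sizes, detrending order (DFA-1) and log-log fit are the usual textbook choices rather than the exact settings of the study.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the fluctuation F(s) for each window size s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    fluct = []
    for s in scales:
        n_win = len(profile) // s
        rms = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)            # linear detrending (DFA-1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.array(fluct)

rng = np.random.default_rng(1)
returns = rng.standard_normal(5000)                 # placeholder for EUA returns
volatility = np.abs(returns)
scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dfa(volatility, scales)
hurst = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(hurst)   # ~0.5 for uncorrelated data; >0.5 indicates long-range correlation
```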
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
Neville, Helen; Isaak, Daniel; Dunham, J.B.; Thurow, Russel; Rieman, B.
2006-01-01
Natal homing is a hallmark of the life history of salmonid fishes, but the spatial scale of homing within local, naturally reproducing salmon populations is still poorly understood. Accurate homing (paired with restricted movement) should lead to the existence of fine-scale genetic structuring due to the spatial clustering of related individuals on spawning grounds. Thus, we explored the spatial resolution of natal homing using genetic associations among individual Chinook salmon (Oncorhynchus tshawytscha) in an interconnected stream network. We also investigated the relationship between genetic patterns and two factors hypothesized to influence natal homing and localized movements at finer scales in this species, localized patterns in the distribution of spawning gravels and sex. Spatial autocorrelation analyses showed that spawning locations in both sub-basins of our study site were spatially clumped, but the upper sub-basin generally had a larger spatial extent and continuity of redd locations than the lower sub-basin, where the distribution of redds and associated habitat conditions were more patchy. Male genotypes were not autocorrelated at any spatial scale in either sub-basin. Female genotypes showed significant spatial autocorrelation and genetic patterns for females varied in the direction predicted between the two sub-basins, with much stronger autocorrelation in the sub-basin with less continuity in spawning gravels. The patterns observed here support predictions about differential constraints and breeding tactics between the two sexes and the potential for fine-scale habitat structure to influence the precision of natal homing and localized movements of individual Chinook salmon on their breeding grounds.
Retrieval of P wave Basin Response from Autocorrelation of Seismic Noise-Jakarta, Indonesia
NASA Astrophysics Data System (ADS)
Saygin, E.; Cummins, P. R.; Lumley, D. E.
2016-12-01
Indonesia's capital city, Jakarta, is home to a very large (over 10 million), vulnerable population and is proximate to known active faults as well as to the subducting Australian plate, which has a megathrust at about 300 km distance and intraslab seismicity extending to directly beneath the city. It is also located in a basin filled with a thick layer of unconsolidated and poorly consolidated sediment, which increases the seismic hazard the city is facing. Therefore, information on the seismic velocity structure of the basin is crucial for increasing our knowledge of the seismic risk. We undertook a passive deployment of broadband seismographs throughout the city over a 3-month interval in 2013-2014, recording ambient seismic noise at over 90 sites for intervals of 1 month or more. Here we consider autocorrelations of the vertical component of the continuously recorded seismic wavefield across this dense network to image the shallow P wave velocity structure of Jakarta, Indonesia. Unlike the surface wave Green's functions used in ambient noise tomography, the vertical-component autocorrelograms are dominated by body wave energy that is potentially sensitive to sharp velocity contrasts, which makes them useful in seismic imaging. Results show autocorrelograms at different seismic stations with travel time variations that largely reflect changes in sediment thickness across the basin. We also confirm the validity of our interpretation of the observed autocorrelation waveforms by conducting 2D finite difference full waveform numerical modeling for randomly distributed seismic sources to retrieve the reflection response through autocorrelation.
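The core processing step, stacking single-station autocorrelations of continuous vertical-component noise, can be sketched as below. This omits the bandpass filtering, temporal normalization and spectral whitening a production workflow would include; the window length, lag range and synthetic basin reverberation are assumptions for illustration.

```python
import numpy as np

def stacked_autocorrelogram(trace, fs, win_s=600.0, max_lag_s=10.0):
    """Stack autocorrelations of successive windows of a continuous noise trace."""
    n_win = int(win_s * fs)
    n_lag = int(max_lag_s * fs)
    stack = np.zeros(n_lag)
    count = 0
    for start in range(0, len(trace) - n_win + 1, n_win):
        seg = trace[start:start + n_win]
        seg = seg - seg.mean()
        spec = np.fft.rfft(seg, n=2 * n_win)          # zero-pad to avoid wrap-around
        acorr = np.fft.irfft(np.abs(spec) ** 2)       # Wiener-Khinchin theorem
        stack += acorr[:n_lag] / acorr[0]             # normalize by zero-lag energy
        count += 1
    return stack / max(count, 1)

# Synthetic check: white noise plus a delayed, scaled copy mimicking a basin reverberation
fs = 100.0
rng = np.random.default_rng(2)
noise = rng.standard_normal(int(3600 * fs))
delay = int(1.2 * fs)                                  # hypothetical 1.2 s two-way time
trace = noise + 0.4 * np.roll(noise, delay)
ac = stacked_autocorrelogram(trace, fs)
lags = np.arange(len(ac)) / fs
print(lags[np.argmax(ac[int(0.5 * fs):]) + int(0.5 * fs)])   # expect ~1.2 s
```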
Determining the refractive index of particles using glare-point imaging technique
NASA Astrophysics Data System (ADS)
Meng, Rui; Ge, Baozhen; Lu, Qieni; Yu, Xiaoxue
2018-04-01
A method for measuring the refractive index of a particle from a glare-point image is presented. The spacing of the doublet image of a particle can be determined with high accuracy by using autocorrelation and Gaussian interpolation, and the refractive index is then obtained from the glare-point separation; a factor that may influence the accuracy of the glare-point separation is also explored. Experiments are carried out for three different kinds of particles, namely polystyrene latex particles, glass beads, and water droplets, and the measurement accuracy is improved by a data-fitting method. The results show that the method presented in this paper is feasible and beneficial to applications such as spray and atmospheric composition measurements.
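The autocorrelation-plus-Gaussian-interpolation step for measuring the doublet spacing can be illustrated in one dimension as follows; the minimum-lag cutoff and the synthetic two-Gaussian profile are assumptions of this sketch, and the conversion from spacing to refractive index (which depends on the optical geometry) is not reproduced.

```python
import numpy as np

def subpixel_peak(y, i):
    """Three-point Gaussian interpolation around a discrete peak at index i."""
    la, lb, lc = np.log(y[i - 1]), np.log(y[i]), np.log(y[i + 1])
    return i + 0.5 * (la - lc) / (la - 2 * lb + lc)

def glare_point_separation(profile, min_lag=10):
    """Estimate doublet spacing (pixels) from the autocorrelation of an intensity profile.

    min_lag is an assumed lower bound on the spacing, used to skip the zero-lag peak;
    the background is assumed to have been removed from the profile.
    """
    ac = np.correlate(profile, profile, mode='full')[len(profile) - 1:]   # non-negative lags
    ac = np.clip(ac, 1e-12, None)                                         # keep logarithms finite
    i = np.argmax(ac[min_lag:]) + min_lag
    return subpixel_peak(ac, i)

# Synthetic doublet: two Gaussian glare points separated by 20.4 pixels
x = np.arange(200)
profile = (np.exp(-0.5 * ((x - 80.0) / 2.0) ** 2) +
           np.exp(-0.5 * ((x - 100.4) / 2.0) ** 2))
print(glare_point_separation(profile))    # approximately 20.4
```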
Fluid-fluid interfacial mobility from random walks
NASA Astrophysics Data System (ADS)
Barclay, Paul L.; Lukes, Jennifer R.
2017-12-01
Dual control volume grand canonical molecular dynamics is used to perform the first calculation of fluid-fluid interfacial mobilities. The mobility is calculated from one-dimensional random walks of the interface by relating the diffusion coefficient to the interfacial mobility. Three different calculation methods are employed: one using the interfacial position variance as a function of time, one using the mean-squared interfacial displacement, and one using the time-autocorrelation of the interfacial velocity. The mobility is calculated for two liquid-liquid interfaces and one liquid-vapor interface to examine the robustness of the methods. Excellent agreement between the three calculation methods is shown for all three interfaces, indicating that any of them could be used to calculate the interfacial mobility.
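Two of the three routes mentioned, the mean-squared-displacement slope and the velocity-autocorrelation (Green-Kubo) integral, can be sketched for a one-dimensional random walk as below; converting the resulting diffusion coefficient into an interfacial mobility requires the system-specific driving force and is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1.0e-3
n_steps = 100_000
v = rng.standard_normal(n_steps)          # uncorrelated "interface velocities"
x = np.cumsum(v) * dt                     # 1-D random walk of the interface position

# Route 1: D from the slope of the mean-squared displacement, MSD(t) ~ 2 D t
max_lag = 500
lags = np.arange(1, max_lag)
msd = np.array([np.mean((x[k:] - x[:-k]) ** 2) for k in lags])
D_msd = np.polyfit(lags * dt, msd, 1)[0] / 2.0

# Route 2: Green-Kubo, D = integral of the velocity autocorrelation function
vacf = np.array([np.mean(v[k:] * v[:-k]) if k else np.mean(v * v)
                 for k in range(200)])
D_gk = dt * (0.5 * vacf[0] + vacf[1:].sum())   # trapezoid-style discrete integral

print(D_msd, D_gk)                        # both close to dt/2 for this white-noise walk
```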
The spatial structure of chronic morbidity: evidence from UK census returns.
Dutey-Magni, Peter F; Moon, Graham
2016-08-24
Disease prevalence models have been widely used to estimate health, lifestyle and disability characteristics for small geographical units when other data are not available. Yet, knowledge is often lacking about how to make informed decisions around the specification of such models, especially regarding spatial assumptions placed on their covariance structure. This paper is concerned with understanding processes of spatial dependency in unexplained variation in chronic morbidity. 2011 UK census data on limiting long-term illness (LLTI) is used to look at the spatial structure in chronic morbidity across England and Wales. The variance and spatial clustering of the odds of LLTI across local authority districts (LADs) and middle layer super output areas are measured across 40 demographic cross-classifications. A series of adjacency matrices based on distance, contiguity and migration flows are tested to examine the spatial structure in LLTI. Odds are then modelled using a logistic mixed model to examine the association with district-level covariates and their predictive power. The odds of chronic illness are more dispersed than local age characteristics, mortality, hospitalisation rates and chance alone would suggest. Of all adjacency matrices, the three-nearest neighbour method is identified as the best fitting. Migration flows can also be used to construct spatial weights matrices which uncover non-negligible autocorrelation. Once the most important characteristics observable at the LAD-level are taken into account, substantial spatial autocorrelation remains which can be modelled explicitly to improve disease prevalence predictions. Systematic investigation of spatial structures and dependency is important to develop model-based estimation tools in chronic disease mapping. Spatial structures reflecting migration interactions are easy to develop and capture autocorrelation in LLTI. Patterns of spatial dependency in the geographical distribution of LLTI are not comparable across ethnic groups. Ethnic stratification of local health information is needed and there is potential to further address complexity in prevalence models by improving access to disaggregated data.
Scilab software package for the study of dynamical systems
NASA Astrophysics Data System (ADS)
Bordeianu, C. C.; Beşliu, C.; Jipa, Al.; Felea, D.; Grossu, I. V.
2008-05-01
This work presents a new software package for the study of chaotic flows and maps. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of the nonlinear dynamical systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well known examples are implemented, with the capability of the users inserting their own ODE.
Program summary
Program title: Chaos
Catalogue identifier: AEAP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 885
No. of bytes in distributed program, including test data, etc.: 5925
Distribution format: tar.gz
Programming language: Scilab 3.1.1
Computer: PC-compatible running Scilab on MS Windows or Linux
Operating system: Windows XP, Linux
RAM: below 100 Megabytes
Classification: 6.2
Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODE).
Solution method: Numerical solving of ordinary differential equations. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincaré sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies.
Restrictions: The package routines are normally able to handle ODE systems of high orders (up to order twelve and possibly higher), depending on the nature of the problem.
Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculation; 60 to 1000 seconds for problems that involve high-order ODEs and Lyapunov exponent calculation.
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, for a mean of 325 Mg ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of the spatial variation. The accuracy of the best model including spatial effects was 90 Mg ha-1 at the plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with an accuracy better than 50 Mg ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. Until then, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.
Du, Hai-Wen; Wang, Yong; Zhuang, Da-Fang; Jiang, Xiao-San
2017-08-07
The nest flea index of Meriones unguiculatus is a critical indicator for the prevention and control of plague, which can be used not only to detect the spatial and temporal distributions of M. unguiculatus but also to reveal its clustering pattern. This research examined the temporal and spatial distribution characteristics of the natural plague foci of Mongolian gerbils using the body flea index from 2005 to 2014, in order to predict plague outbreaks. Global spatial autocorrelation was used to describe the overall spatial distribution pattern of the body flea index in the natural plague foci of typical Chinese Mongolian gerbils. Cluster and outlier analysis and hot spot analysis were also used to detect the intensity of clusters based on geographic information system methods. The quantity of M. unguiculatus nest fleas at the sentinel surveillance sites from 2005 to 2014 and the host density data of the study area from 2005 to 2010 used in this study were provided by the Chinese Center for Disease Control and Prevention. The epidemic focus regions of the Mongolian gerbils remain the same as the hot spot regions identified by the body flea index. Highly clustered areas show a pattern similar to the distribution of the body flea index, indicating that the transmission risk of plague is relatively high there. In terms of the time series, the area of the epidemic focus gradually increased from 2005 to 2007, declined rapidly in 2008 and 2009, and then decreased slowly and began trending towards stability from 2009 to 2014. As for the spatial change, the epidemic focus regions moved northward from the southwest epidemic focus of the Mongolian gerbils from 2005 to 2007, and then moved from north to south in 2007 and 2008. The body flea index of the Chinese gerbil foci reveals significant spatial and temporal aggregation characteristics when examined with spatial autocorrelation. The variation in the temporal and spatial distribution is mainly affected by seasonal variation, human activity and natural factors.
Spatial Distribution of Oak Mistletoe as It Relates to Habits of Oak Woodland Frugivores
Wilson, Ethan A.; Sullivan, Patrick J.; Dickinson, Janis L.
2014-01-01
This study addresses the underlying spatial distribution of oak mistletoe, Phoradendron villosum, a hemi-parasitic plant that provides a continuous supply of berries for frugivorous birds overwintering the oak savanna habitat of California's outer coast range. As the winter community of birds consuming oak mistletoe varies from group-living territorial species to birds that roam in flocks, we asked if mistletoe volume was spatially autocorrelated at the scale of persistent territories or whether the patterns predicted by long-term territory use by western bluebirds are overcome by seed dispersal by more mobile bird species. The abundance of mistletoe was mapped on trees within a 700 ha study site in Carmel Valley, California. Spatial autocorrelation of mistletoe volume was analyzed using the variogram method and spatial distribution of oak mistletoe trees was analyzed using Ripley's K and O-ring statistics. On a separate set of 45 trees, mistletoe volume was highly correlated with the volume of female, fruit-bearing plants, indicating that overall mistletoe volume is a good predictor of fruit availability. Variogram analysis showed that mistletoe volume was spatially autocorrelated up to approximately 250 m, a distance consistent with persistent territoriality of western bluebirds and philopatry of sons, which often breed next door to their parents and are more likely to remain home when their parents have abundant mistletoe. Using Ripley's K and O-ring analyses, we showed that mistletoe trees were aggregated for distances up to 558 m, but for distances between 558 to 724 m the O-ring analysis deviated from Ripley's K in showing repulsion rather than aggregation. While trees with mistletoe were aggregated at larger distances, mistletoe was spatially correlated at a smaller distance, consistent with what is expected based on persistent group territoriality of western bluebirds in winter and the extreme philopatry of their sons. PMID:25389971
NASA Technical Reports Server (NTRS)
Grumet, A.
1981-01-01
An automatic correlation plane processor that can rapidly acquire, identify, and locate the autocorrelation outputs of a bank of multiple optical matched filters is described. The read-only memory (ROM) stored digital silhouette of each image associated with each matched filter allows TV video to be used to collect image energy to provide accurate normalization of autocorrelations. The resulting normalized autocorrelations are independent of the illumination of the matched input. Deviation from unity of a normalized correlation can be used as a confidence measure of correct image identification. Analog preprocessing circuits permit digital conversion and random access memory (RAM) storage of those video signals with the correct amplitude, pulse width, rising slope, and falling slope. TV synchronized addressing of 3 RAMs permits on-line storage of: (1) the maximum unnormalized amplitude, (2) the image x location, and (3) the image y location of the output of each of up to 99 matched filters. A fourth RAM stores all normalized correlations. A normalization approach, normalization for cross correlations, a system's description with block diagrams, and system's applications are discussed.
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.
Chen, Yanguang
2016-01-01
In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of the Durbin-Watson statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 Chinese regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
Geomagnetic storm under laboratory conditions: randomized experiment
NASA Astrophysics Data System (ADS)
Gurfinkel, Yu I.; Vasin, A. L.; Pishchalnikov, R. Yu; Sarimov, R. M.; Sasonko, M. L.; Matveeva, T. A.
2017-10-01
The influence of a previously recorded geomagnetic storm (GS) on the human cardiovascular system and microcirculation has been studied under laboratory conditions. Healthy volunteers in a lying position were exposed under two artificially created conditions: quiet (Q) and storm (S). The Q regime plays back a noise-free magnetic field (MF) which is close to the natural geomagnetic conditions at Moscow's latitude. The S regime plays back the initially recorded 6-h geomagnetic storm, repeated four times sequentially. The cardiovascular response to the GS impact was assessed by measuring capillary blood velocity (CBV) and blood pressure (BP) and by analysis of the 24-h ECG recording. A storm-to-quiet ratio for the cardio intervals (CI) and the heart rate variability (HRV) was introduced in order to reveal significant average-over-group differences in HRV. An individual sensitivity to the GS was estimated using autocorrelation function analysis of the high-frequency (HF) part of the CI spectrum. The autocorrelation analysis allowed detection of a group of subjects whose autocorrelation functions (ACF) react differently in the Q and S regimes of exposure.
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
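For context, the conventional Kasai (lag-one autocorrelation) estimator against which the proposed MLE is benchmarked can be written in a few lines; the synthetic complex signal and acquisition rate below are placeholders, and the MLE itself is not reproduced here.

```python
import numpy as np

def kasai_frequency(z, fs):
    """Lag-one autocorrelation (Kasai) estimate of the mean Doppler frequency.

    z  : complex-valued samples at one depth over successive acquisitions
    fs : acquisition (A-line) rate in Hz
    """
    r1 = np.mean(z[1:] * np.conj(z[:-1]))        # lag-one autocorrelation
    return np.angle(r1) * fs / (2.0 * np.pi)

# Synthetic test: complex exponential at 1.5 kHz plus additive complex Gaussian noise
fs = 50_000.0
t = np.arange(2048) / fs
rng = np.random.default_rng(4)
z = (np.exp(2j * np.pi * 1500.0 * t)
     + 0.3 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)))
print(kasai_frequency(z, fs))    # close to 1500 Hz
```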
Spatial design and strength of spatial signal: Effects on covariance estimation
Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.
2007-01-01
In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in a visual banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in developing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
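One common way to realize this kind of correlated permeability field, sketched here as an assumption rather than the authors' generator, is FFT-based spectral sampling of a Gaussian field with exponential autocorrelation, followed by a lognormal transform whose spread is set by the target Dykstra-Parsons coefficient (V_DP = 1 - exp(-sigma) for a lognormal field).

```python
import numpy as np

def correlated_gaussian_field(nx, ny, lx, ly, dx=1.0, seed=0):
    """Zero-mean, unit-variance Gaussian field with exponential autocorrelation lengths lx, ly."""
    rng = np.random.default_rng(seed)
    x = np.fft.fftfreq(nx) * nx * dx              # signed lag coordinates
    y = np.fft.fftfreq(ny) * ny * dx
    X, Y = np.meshgrid(x, y, indexing='ij')
    cov = np.exp(-np.sqrt((X / lx) ** 2 + (Y / ly) ** 2))     # exponential covariance
    spectrum = np.sqrt(np.maximum(np.fft.fft2(cov).real, 0.0))
    white = np.fft.fft2(rng.standard_normal((nx, ny)))
    field = np.fft.ifft2(white * spectrum).real
    return (field - field.mean()) / field.std()

def permeability_field(nx, ny, k_mean, v_dp, lx, ly, seed=0):
    """Lognormal permeability field with a target Dykstra-Parsons coefficient V_DP."""
    sigma = -np.log(1.0 - v_dp)                   # V_DP = 1 - exp(-sigma) for lognormal k
    g = correlated_gaussian_field(nx, ny, lx, ly, seed=seed)
    return k_mean * np.exp(sigma * g - 0.5 * sigma ** 2)

k = permeability_field(64, 64, k_mean=100.0, v_dp=0.7, lx=10.0, ly=3.0, seed=5)
print(k.mean(), np.percentile(k, 50), np.percentile(k, 16))
```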
Data Analysis Recipes: Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Hogg, David W.; Foreman-Mackey, Daniel
2018-05-01
Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data. In this primarily pedagogical contribution, we give a brief overview of the most basic MCMC method and some practical advice for the use of MCMC in real inference problems. We give advice on method choice, tuning for performance, methods for initialization, tests of convergence, troubleshooting, and use of the chain output to produce or report parameter estimates with associated uncertainties. We argue that autocorrelation time is the most important test for convergence, as it directly connects to the uncertainty on the sampling estimate of any quantity of interest. We emphasize that sampling is a method for doing integrals; this guides our thinking about how MCMC output is best used.
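A minimal windowed estimator of the integrated autocorrelation time (and hence the effective sample size) might look as follows; the windowing constant follows Sokal's heuristic and the AR(1) test chain is an assumption of the example. Packages such as emcee ship more robust estimators.

```python
import numpy as np

def integrated_autocorr_time(chain, c=5.0):
    """Integrated autocorrelation time via an adaptive-window (Sokal-style) heuristic."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    # normalized autocorrelation function via FFT (zero-padded to avoid wrap-around)
    f = np.fft.fft(x, n=2 * n)
    acf = np.fft.ifft(f * np.conj(f)).real[:n]
    acf /= acf[0]
    tau = 2.0 * np.cumsum(acf) - 1.0               # running estimate tau(W)
    window = np.argmax(np.arange(n) >= c * tau)    # smallest W with W >= c * tau(W)
    return tau[window]

# Example: AR(1) chain with known tau = (1 + a) / (1 - a)
a = 0.9
rng = np.random.default_rng(6)
chain = np.zeros(200_000)
for i in range(1, len(chain)):
    chain[i] = a * chain[i - 1] + rng.standard_normal()
tau = integrated_autocorr_time(chain)
print(tau, len(chain) / tau)                       # tau ~ 19, plus the effective sample size
```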
Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro
2013-01-01
Capsule endoscopy is a patient-friendly form of endoscopy broadly utilized in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and Higher-order Local Autocorrelation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments are carried out on several major anomalies with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.
Spatial and spatiotemporal pattern analysis of coconut lethal yellowing in Mozambique.
Bonnot, F; de Franqueville, H; Lourenço, E
2010-04-01
Coconut lethal yellowing (LY) is caused by a phytoplasma and is a major threat for coconut production throughout its growing area. Incidence of LY was monitored visually on every coconut tree in six fields in Mozambique for 34 months. Disease progress curves were plotted and average monthly disease incidence was estimated. Spatial patterns of disease incidence were analyzed at six assessment times. Aggregation was tested by the coefficient of spatial autocorrelation of the beta-binomial distribution of diseased trees in quadrats. The binary power law was used as an assessment of overdispersion across the six fields. Spatial autocorrelation between symptomatic trees was measured by the BB join count statistic based on the number of pairs of diseased trees separated by a specific distance and orientation, and tested using permutation methods. Aggregation of symptomatic trees was detected in every field in both cumulative and new cases. Spatiotemporal patterns were analyzed with two methods. The proximity of symptomatic trees at two assessment times was investigated using the spatiotemporal BB join count statistic based on the number of pairs of trees separated by a specific distance and orientation and exhibiting the first symptoms of LY at the two times. The semivariogram of times of appearance of LY was calculated to characterize how the lag between times of appearance of LY was related to the distance between symptomatic trees. Both statistics were tested using permutation methods. A tendency for new cases to appear in the proximity of previously diseased trees and a spatially structured pattern of times of appearance of LY within clusters of diseased trees were detected, suggesting secondary spread of the disease.
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
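As a simplified, one-dimensional analogue of the Wiener filtering discussed above, the following sketch deconvolves a signal blurred by a known window envelope, with a noise-to-signal power ratio playing the role of the noise term; it is not the authors' implementation, and the kernel, noise level and regularization value are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal):
    """Frequency-domain Wiener deconvolution; kernel must be centred on sample 0."""
    H = np.fft.rfft(kernel)
    G = np.fft.rfft(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft(W * G, n=len(blurred))

# Synthetic example: a plateau-like "strain profile" blurred by a Gaussian window envelope
n = 512
x = np.arange(n)
true = np.where((x > 200) & (x < 320), 1.0, 0.0)
win = np.exp(-0.5 * (np.arange(-40, 41) / 8.0) ** 2)
win /= win.sum()
rng = np.random.default_rng(7)
blurred = np.convolve(true, win, mode='same') + 0.01 * rng.standard_normal(n)

kernel = np.zeros(n)
kernel[:win.size] = win
kernel = np.roll(kernel, -(win.size // 2))        # centre the envelope on sample 0
restored = wiener_deconvolve(blurred, kernel, noise_to_signal=1e-3)
print(np.abs(restored - true).mean(), np.abs(blurred - true).mean())
```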
Liu, Jian; Miller, William H
2007-06-21
It is shown how quantum mechanical time correlation functions [defined, e.g., in Eq. (1.1)] can be expressed, without approximation, in the same form as the linearized approximation of the semiclassical initial value representation (LSC-IVR), or classical Wigner model, for the correlation function [cf. Eq. (2.1)], i.e., as a phase space average (over initial conditions for trajectories) of the Wigner functions corresponding to the two operators. The difference is that the trajectories involved in the LSC-IVR evolve classically, i.e., according to the classical equations of motion, while in the exact theory they evolve according to generalized equations of motion that are derived here. Approximations to the exact equations of motion are then introduced to achieve practical methods that are applicable to complex (i.e., large) molecular systems. Four such methods are proposed in the paper--the full Wigner dynamics (full WD) and the second order WD based on "Wigner trajectories" [H. W. Lee and M. D. Scully, J. Chem. Phys. 77, 4604 (1982)] and the full Donoso-Martens dynamics (full DMD) and the second order DMD based on "Donoso-Martens trajectories" [A. Donoso and C. C. Martens, Phys. Rev. Lett. 87, 223202 (2001)]--all of which can be viewed as generalizations of the original LSC-IVR method. Numerical tests of the four versions of this new approach are made for two anharmonic model problems, and for each the momentum autocorrelation function (i.e., operators linear in coordinate or momentum operators) and the force autocorrelation function (nonlinear operators) have been calculated. These four new approximate treatments are indeed seen to be significant improvements to the original LSC-IVR approximation.
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2018-02-01
To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses the shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous computational burden reduction. The technique is based on sequential projection pursuit where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
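The feature-extraction side of this approach, autocorrelation coefficients used as damage-sensitive features and projected onto a single direction, can be sketched as follows; here the projection vector is random merely as a placeholder for the sequentially optimised one, and the toy signals and damage index are assumptions of the example.

```python
import numpy as np

def acf_features(signal, n_lags=30):
    """First n_lags autocorrelation coefficients of a standardized signal."""
    s = (signal - signal.mean()) / signal.std()
    n = len(s)
    return np.array([np.dot(s[:n - k], s[k:]) / n for k in range(1, n_lags + 1)])

def damage_index(features, baseline_mean, baseline_std, projection):
    """Deviation of the projected feature vector from the healthy-state distribution."""
    z = (features - baseline_mean) / baseline_std
    return np.abs(z @ projection)

# Toy data: healthy and 'damaged' responses differ slightly in resonance frequency
rng = np.random.default_rng(8)
t = np.arange(0, 20, 0.01)
healthy = [np.sin(2 * np.pi * 3.0 * t + rng.uniform(0, 6.3))
           + 0.3 * rng.standard_normal(t.size) for _ in range(50)]
damaged = np.sin(2 * np.pi * 3.2 * t) + 0.3 * rng.standard_normal(t.size)

F = np.array([acf_features(h) for h in healthy])
mu, sd = F.mean(axis=0), F.std(axis=0)
proj = rng.standard_normal(F.shape[1])
proj /= np.linalg.norm(proj)               # placeholder for an optimised projection vector
print(damage_index(acf_features(healthy[0]), mu, sd, proj),
      damage_index(acf_features(damaged), mu, sd, proj))
```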
Non-equilibrium dynamics from RPMD and CMD.
Welsch, Ralph; Song, Kai; Shi, Qiang; Althorpe, Stuart C; Miller, Thomas F
2016-11-28
We investigate the calculation of approximate non-equilibrium quantum time correlation functions (TCFs) using two popular path-integral-based molecular dynamics methods, ring-polymer molecular dynamics (RPMD) and centroid molecular dynamics (CMD). It is shown that for the cases of a sudden vertical excitation and an initial momentum impulse, both RPMD and CMD yield non-equilibrium TCFs for linear operators that are exact for high temperatures, in the t = 0 limit, and for harmonic potentials; the subset of these conditions that are preserved for non-equilibrium TCFs of non-linear operators is also discussed. Furthermore, it is shown that for these non-equilibrium initial conditions, both methods retain the connection to Matsubara dynamics that has previously been established for equilibrium initial conditions. Comparison of non-equilibrium TCFs from RPMD and CMD to Matsubara dynamics at short times reveals the orders in time to which the methods agree. Specifically, for the position-autocorrelation function associated with sudden vertical excitation, RPMD and CMD agree with Matsubara dynamics up to O(t^4) and O(t^1), respectively; for the position-autocorrelation function associated with an initial momentum impulse, RPMD and CMD agree with Matsubara dynamics up to O(t^5) and O(t^2), respectively. Numerical tests using model potentials for a wide range of non-equilibrium initial conditions show that RPMD and CMD yield non-equilibrium TCFs with an accuracy that is comparable to that for equilibrium TCFs. RPMD is also used to investigate excited-state proton transfer in a system-bath model, and it is compared to numerically exact calculations performed using a recently developed version of the Liouville space hierarchical equation of motion approach; again, similar accuracy is observed for non-equilibrium and equilibrium initial conditions.
Gait symmetry and regularity in transfemoral amputees assessed by trunk accelerations
2010-01-01
Background: The aim of this study was to evaluate a method based on a single accelerometer for the assessment of gait symmetry and regularity in subjects wearing lower limb prostheses. Methods: Ten transfemoral amputees and ten healthy control subjects were studied. For the purpose of this study, subjects wore a triaxial accelerometer on their thorax, and foot insoles. Subjects were asked to walk straight ahead for 70 m at their natural speed, and at a lower and a faster speed. Indices of step and stride regularity (Ad1 and Ad2, respectively) were obtained from the autocorrelation coefficients computed from the three acceleration components. Step and stride durations were calculated from the plantar pressure data and were used to compute two reference indices (SI1 and SI2) for step and stride regularity. Results: Regression analysis showed that Ad1 correlates well with SI1 (R2 up to 0.74), and Ad2 correlates well with SI2 (R2 up to 0.52). A ROC analysis showed that Ad1 and Ad2 generally have good sensitivity and specificity in classifying an amputee's walking trial as having normal or pathologic step or stride regularity, as defined by means of the reference indices SI1 and SI2. In particular, the antero-posterior component of Ad1 and the vertical component of Ad2 had a sensitivity of 90.6% and 87.2%, and a specificity of 92.3% and 81.8%, respectively. Conclusions: The use of a simple accelerometer, whose components can be analyzed by the autocorrelation function method, is adequate for the assessment of gait symmetry and regularity in transfemoral amputees. PMID:20085653
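A sketch of how such autocorrelation-based regularity indices can be computed from one acceleration component is given below; the lag search windows and the synthetic asymmetric gait signal are assumptions, not the authors' exact processing.

```python
import numpy as np

def unbiased_acf(a, max_lag):
    """Unbiased autocorrelation coefficients of a detrended acceleration component."""
    a = a - a.mean()
    var = np.mean(a ** 2)
    n = len(a)
    return np.array([np.sum(a[:n - k] * a[k:]) / ((n - k) * var)
                     for k in range(max_lag + 1)])

def step_stride_regularity(acc, fs, min_step_s=0.4, max_stride_s=2.0):
    """Ad1 (step) and Ad2 (stride) regularity from one acceleration component."""
    acf = unbiased_acf(acc, int(max_stride_s * fs))
    lo = int(min_step_s * fs)
    step_lag = lo + np.argmax(acf[lo:2 * lo + 1])               # first dominant peak
    stride_lag = int(1.5 * step_lag) + np.argmax(acf[int(1.5 * step_lag):])
    return acf[step_lag], acf[stride_lag]

# Synthetic vertical trunk acceleration: periodic steps with slight left/right asymmetry
fs = 100.0
t = np.arange(0, 60, 1 / fs)
step_f = 1.8                                                    # steps per second
acc = (np.sin(2 * np.pi * step_f * t) + 0.25 * np.sin(np.pi * step_f * t)
       + 0.1 * np.random.default_rng(9).standard_normal(t.size))
print(step_stride_regularity(acc, fs))
```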
An improved portmanteau test for autocorrelated errors in interrupted time-series regression models.
Huitema, Bradley E; McKean, Joseph W
2007-08-01
A new portmanteau test for autocorrelation among the errors of interrupted time-series regression models is proposed. Simulation results demonstrate that the inferential properties of the proposed Q(H-M) test statistic are considerably more satisfactory than those of the well-known Ljung-Box test and moderately better than those of the Box-Pierce test. These conclusions generally hold for a wide variety of autoregressive (AR), moving average (MA), and ARMA error processes that are associated with time-series regression models of the form described in Huitema and McKean (2000a, 2000b).
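For reference, the two classical portmanteau statistics against which the proposed Q(H-M) test is compared can be computed as follows; the AR(1) residual series is an illustrative assumption.

```python
import numpy as np
from scipy import stats

def portmanteau(residuals, n_lags=10, n_params=0):
    """Box-Pierce and Ljung-Box Q statistics with chi-square p-values."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.dot(e, e)
    r = np.array([np.dot(e[:n - k], e[k:]) / denom for k in range(1, n_lags + 1)])
    q_bp = n * np.sum(r ** 2)
    q_lb = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, n_lags + 1)))
    df = n_lags - n_params
    return {'BP': (q_bp, stats.chi2.sf(q_bp, df)),
            'LB': (q_lb, stats.chi2.sf(q_lb, df))}

# Example: residuals from a fit that failed to remove AR(1) structure
rng = np.random.default_rng(10)
resid = np.zeros(300)
for i in range(1, len(resid)):
    resid[i] = 0.4 * resid[i - 1] + rng.standard_normal()
print(portmanteau(resid, n_lags=10))
```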
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Non-iterative characterization of few-cycle laser pulses using flat-top gates.
Selm, Romedi; Krauss, Günther; Leitenstorfer, Alfred; Zumbusch, Andreas
2012-03-12
We demonstrate a method for broadband laser pulse characterization based on a spectrally resolved cross-correlation with a narrowband flat-top gate pulse. Excellent phase-matching by collinear excitation in a microscope focus is exploited by degenerate four-wave mixing in a microscope slide. Direct group delay extraction of an octave spanning spectrum which is generated in a highly nonlinear fiber allows for spectral phase retrieval. The validity of the technique is supported by the comparison with an independent second-harmonic fringe-resolved autocorrelation measurement for an 11 fs laser pulse.
Virtual sensor models for real-time applications
NASA Astrophysics Data System (ADS)
Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin
2016-09-01
Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions, the perception of the sensors is crucial; therefore, sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space based method is capable of modeling various types of behavior. Specifically, the modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To achieve real-time capability, an efficient implementation is presented.
The time series approach to short term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, M.T.; Behr, S.M.
The application of time series analysis methods to load forecasting is reviewed. It is shown that Box-Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is the inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box-Jenkins models are compared with a forecasting procedure currently used by a utility company.
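A minimal Box-Jenkins-style sketch using statsmodels, with a piecewise-linear temperature regressor as one simple stand-in for the nonlinear load-temperature handling discussed above; all series, orders and thresholds here are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf

# Hypothetical hourly load and temperature series (names and data are illustrative only)
rng = np.random.default_rng(11)
hours = pd.date_range('2020-01-01', periods=24 * 60, freq='H')
temp = 10 + 8 * np.sin(2 * np.pi * hours.hour / 24) + rng.standard_normal(hours.size)
load = (500 + 20 * np.maximum(temp - 18, 0) + 30 * np.sin(2 * np.pi * hours.hour / 24)
        + 5 * rng.standard_normal(hours.size))

# The sample autocorrelation function guides order selection in the Box-Jenkins approach
print(acf(load, nlags=5))

# A piecewise-linear "cooling degree" term as an exogenous input handles part of the nonlinearity
cooling = np.maximum(temp - 18, 0)
model = ARIMA(load, exog=cooling, order=(2, 0, 1))
res = model.fit()
print(res.forecast(steps=24, exog=cooling[-24:]))   # reuses the last day's exog as a placeholder
```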
Picosecond pulse measurements using the active laser medium
NASA Technical Reports Server (NTRS)
Bernardin, James P.; Lawandy, N. M.
1990-01-01
A simple method for measuring the pulse lengths of synchronously pumped dye lasers which does not require the use of an external nonlinear medium, such as a doubling crystal or two-photon fluorescence cell, to autocorrelate the pulses is discussed. The technique involves feeding the laser pulses back into the dye jet, thus correlating the output pulses with the intracavity pulses to obtain pulse length signatures in the resulting time-averaged laser power. Experimental measurements were performed using a rhodamine 6G dye laser pumped by a mode-locked frequency-doubled Nd:YAG laser. The results agree well with numerical computations, and the method proves effective in determining lengths of picosecond laser pulses.
Digital Processing Of Young's Fringes In Speckle Photography
NASA Astrophysics Data System (ADS)
Chen, D. J.; Chiang, F. P.
1989-01-01
A new technique for fully automatic diffraction fringe measurement in point-wise speckle photograph analysis is presented in this paper. The fringe orientation and spacing are initially estimated with the help of a 1-D FFT. A 2-D convolution filter is then applied to enhance the estimated image. A high signal-to-noise ratio (SNR) fringe pattern is achieved, which makes precise determination of the displacement components feasible. The halo effect is also optimally eliminated in a new way. The computation time compares favorably with those of the 2-D autocorrelation method and the iterative 2-D FFT method, and high reliability and accurate determination of the displacement components are achieved over a wide range of fringe densities.
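The initial 1-D FFT estimate of fringe spacing can be illustrated with a short sketch (an assumption-laden toy example, not the authors' processing chain): the dominant nonzero-frequency peak of the spectrum gives the fringe frequency, and its inverse gives the spacing.

```python
import numpy as np

def fringe_spacing(profile, pixel_size=1.0):
    """Estimate fringe spacing from a 1-D intensity profile via the FFT peak."""
    profile = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(len(profile), d=pixel_size)
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return 1.0 / freqs[k]

x = np.arange(512)
pattern = 1.0 + np.cos(2 * np.pi * x / 16.0) + 0.3 * np.random.default_rng(3).standard_normal(512)
print("estimated spacing (pixels):", fringe_spacing(pattern))
```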
UWB communication receiver feedback loop
Spiridon, Alex; Benzel, Dave; Dowla, Farid U.; Nekoogar, Faranak; Rosenbury, Erwin T.
2007-12-04
A novel technique and structure that maximizes the extraction of information from reference pulses for UWB-TR receivers is introduced. The scheme efficiently processes an incoming signal to suppress different types of UWB and non-UWB interference prior to signal detection. Such a method and system adds a feedback loop mechanism to enhance the signal-to-noise ratio of reference pulses in a conventional TR receiver. Moreover, sampling a second-order statistical function of the received signal, such as the autocorrelation function (ACF), and matching it to the ACF samples of the original pulses for each transmitted bit provides a more robust UWB communications method and system in the presence of channel distortions.
Scaling range of power laws that originate from fluctuation analysis
NASA Astrophysics Data System (ADS)
Grech, Dariusz; Mazur, Zygmunt
2013-05-01
We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013), doi:10.1016/j.physa.2013.01.049] to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit a power law dependence of the scaling range on the length of the analysed signal and on the accuracy R² of the fit to the scaling law imposed by the DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with a different level of autocorrelations measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance, or physiology, where a huge number of short time series has to be examined at once and a preliminary check of the scaling range regime for each series separately is neither effective nor possible.
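For orientation, a minimal DFA sketch follows (illustrative only; it uses simple linear detrending and equal-length non-overlapping windows, not the exact estimators compared in the paper): the fluctuation function F(s) is computed over a range of window sizes s, and the scaling exponent is read off as the log-log slope.

```python
import numpy as np

def dfa(x, windows):
    """Detrended fluctuation analysis: fluctuation F(s) versus window size s."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in windows:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        resid = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)   # linear detrending per segment
            resid.append(np.mean((seg - (a * t + b)) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    return np.array(F)

rng = np.random.default_rng(4)
x = rng.standard_normal(10_000)            # uncorrelated noise, expected exponent ~0.5
windows = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, windows)
slope = np.polyfit(np.log(windows), np.log(F), 1)[0]
print("estimated scaling exponent:", slope)
```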
Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale
Nykanen, V.; Raines, G.L.
2006-01-01
A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation, simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of the aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of the aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
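A minimal sketch of a global Moran's I computation on gridded data is given below (illustrative only, using simple rook-contiguity weights rather than whatever weighting scheme the study employed); a smooth field should give a value near 1 and an uncorrelated field a value near 0.

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I for a 2-D grid using rook (4-neighbour) contiguity weights."""
    z = grid - grid.mean()
    # sum of w_ij * z_i * z_j over neighbour pairs (each pair counted twice for symmetry)
    num = 2.0 * np.sum(z[:, :-1] * z[:, 1:]) + 2.0 * np.sum(z[:-1, :] * z[1:, :])
    w_sum = 2.0 * (z[:, :-1].size + z[:-1, :].size)   # total weight W
    return (grid.size / w_sum) * (num / np.sum(z**2))

rng = np.random.default_rng(5)
rough = rng.standard_normal((64, 64))                       # uncorrelated field, I ~ 0
smooth = np.cumsum(np.cumsum(rough, axis=0), axis=1)        # strongly correlated field, I ~ 1
print(morans_i(rough), morans_i(smooth))
```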
Hollands, Simon; Campbell, M Karen; Gilliland, Jason; Sarma, Sisira
2013-10-01
To investigate the association between fast-food restaurant density and adult body mass index (BMI) in Canada. Individual-level BMI and confounding variables were obtained from the 2007-2008 Canadian Community Health Survey master file. Locations of fast-food and full-service chain restaurants and other non-chain restaurants were obtained from the 2008 Infogroup Canada business database. Food outlet density (fast-food, full-service and other) per 10,000 population was calculated for each Forward Sortation Area (FSA). Global (Moran's I) and local indicators of spatial autocorrelation of BMI were assessed. Ordinary least squares (OLS) and spatial autoregressive error (SARE) methods were used to assess the association between the local food environment and adult BMI in Canada. Global and local spatial autocorrelation of BMI were found in our univariate analysis. We found that OLS and SARE estimates were very similar in our multivariate models. An additional fast-food restaurant per 10,000 people at the FSA level is associated with a 0.022 kg/m² increase in BMI. On the other hand, other restaurant density is negatively related to BMI. Fast-food restaurant density is positively associated with BMI in Canada. Results suggest that restricting the availability of fast food in local neighborhoods may play a role in obesity prevention. © 2013.
First-principles calculation of entropy for liquid metals.
Desjarlais, Michael P
2013-12-01
We demonstrate the accurate calculation of entropies and free energies for a variety of liquid metals using an extension of the two-phase thermodynamic (2PT) model based on a decomposition of the velocity autocorrelation function into gas-like (hard sphere) and solid-like (harmonic) subsystems. The hard sphere model for the gas-like component is shown to give systematically high entropies for liquid metals as a direct result of the unphysical Lorentzian high-frequency tail. Using a memory function framework we derive a generally applicable velocity autocorrelation and frequency spectrum for the diffusive component which recovers the low-frequency (long-time) behavior of the hard sphere model while providing for realistic short-time coherence and high-frequency tails to the spectrum. This approach provides a significant increase in the accuracy of the calculated entropies for liquid metals and is compared to ambient pressure data for liquid sodium, aluminum, gallium, tin, and iron. The use of this method for the determination of melt boundaries is demonstrated with a calculation of the high-pressure bcc melt boundary for sodium. With the significantly improved accuracy available with the memory function treatment for softer interatomic potentials, the 2PT model for entropy calculations should find broader application in high energy density science, warm dense matter, planetary science, geophysics, and material science.
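To illustrate the basic ingredients (not the 2PT decomposition or the memory-function treatment itself), the sketch below computes a normalized velocity autocorrelation function from a toy set of AR(1)-correlated "velocities" and its cosine transform, which is proportional to the vibrational density of states; the array shapes, time step, and correlation parameter are assumptions.

```python
import numpy as np

def vacf(velocities):
    """Normalized velocity autocorrelation function averaged over atoms and components.

    velocities: array of shape (n_frames, n_atoms, 3), e.g. from a molecular dynamics run.
    """
    n_frames = velocities.shape[0]
    v = velocities.reshape(n_frames, -1)
    c = np.zeros(n_frames)
    for lag in range(n_frames):
        c[lag] = np.mean(np.sum(v[:n_frames - lag] * v[lag:], axis=1))
    return c / c[0]

def frequency_spectrum(c, dt):
    """Cosine transform of the VACF, proportional to the density of states."""
    freqs = np.fft.rfftfreq(len(c), d=dt)
    return freqs, np.fft.rfft(c).real

rng = np.random.default_rng(6)
n_frames, n_atoms = 1000, 32
v = np.zeros((n_frames, n_atoms, 3))
v[0] = rng.standard_normal((n_atoms, 3))
for k in range(1, n_frames):                 # AR(1) toy velocities -> exponentially decaying VACF
    v[k] = 0.95 * v[k - 1] + np.sqrt(1 - 0.95**2) * rng.standard_normal((n_atoms, 3))

c = vacf(v)
freqs, s = frequency_spectrum(c[:500], dt=1e-3)
print("VACF at lags 0..3:", np.round(c[:4], 3), " zero-frequency spectral value:", s[0])
```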
Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis
2011-11-01
Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
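The flavor of such power simulations can be sketched as follows (a simplified stand-in, not the segmented autoregressive models of the study): an interrupted time series with AR(1) errors is generated repeatedly, a segmented OLS regression is fit, and the fraction of simulations in which a naive t-test on the level change rejects at the 5% level approximates power. The effect size, autocorrelation, and sample sizes shown are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_power(n_pre, n_post, effect, rho, sigma=1.0, n_sims=500):
    """Approximate power for detecting a level change in an interrupted time series
    with AR(1) errors, using segmented OLS and a naive t-test on the step term."""
    n = n_pre + n_post
    t = np.arange(n)
    step = (t >= n_pre).astype(float)
    X = np.column_stack([np.ones(n), t, step, (t - n_pre) * step])  # level + trend changes
    rejections = 0
    for _ in range(n_sims):
        e = np.zeros(n)
        e[0] = sigma * rng.standard_normal()
        for k in range(1, n):                                       # AR(1) errors
            e[k] = rho * e[k - 1] + np.sqrt(1 - rho**2) * sigma * rng.standard_normal()
        y = effect * sigma * step + e
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        cov = np.linalg.inv(X.T @ X) * resid.var(ddof=X.shape[1])   # naive (ignores autocorrelation)
        tstat = beta[2] / np.sqrt(cov[2, 2])
        rejections += abs(tstat) > 1.96
    return rejections / n_sims

print(simulate_power(n_pre=24, n_post=24, effect=1.0, rho=0.4))
```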
Gartner, Danielle R.; Taber, Daniel R.; Hirsch, Jana A.; Robinson, Whitney R.
2016-01-01
Purpose: While obesity disparities between racial and socioeconomic groups have been well characterized, those based on gender and geography have not been as thoroughly documented. This study describes obesity prevalence by state, gender, and race/ethnicity to (1) characterize obesity gender inequality, (2) determine whether the geographic distribution of inequality is spatially clustered and (3) contrast the spatial clustering patterns of obesity gender inequality with overall obesity prevalence. Methods: Data from the Centers for Disease Control and Prevention's 2013 Behavioral Risk Factor Surveillance System (BRFSS) were used to calculate state-specific obesity prevalence and gender inequality measures. Global and Local Moran's Indices were calculated to determine spatial autocorrelation. Results: Age-adjusted, state-specific obesity prevalence difference and ratio measures show spatial autocorrelation (z-score=4.89, p-value <0.001). Local Moran's Indices indicate that the spatial distributions of obesity prevalence and obesity gender inequalities are not the same. High and low values of obesity prevalence and gender inequalities cluster in different areas of the U.S. Conclusion: Clustering of gender inequality suggests that spatial processes operating at the state level, such as occupational or physical activity policies or social norms, are involved in the etiology of the inequality, which necessitates further attention to the determinants of obesity gender inequality. PMID:27039046
Li, Yifan; Wang, Juanle; Gao, Mengxu; Fang, Liqun; Liu, Changhua; Lyu, Xin; Bai, Yongqing; Zhao, Qiang; Li, Hairong; Yu, Hongjie; Cao, Wuchun; Feng, Liqiang; Wang, Yanjun; Zhang, Bin
2017-05-26
Tick-borne encephalitis (TBE) is a natural focal disease transmitted by ticks. Its distribution and transmission are closely related to geographic and environmental factors. Identification of the environmental determinants of TBE is of great importance for understanding the general distribution of existing and potential TBE natural foci. Hulunbuir, one of the most severe endemic areas of the disease, is selected as the study area. Statistical analysis, global and local spatial autocorrelation analysis, and regression methods were applied to detect the spatiotemporal characteristics, compare the impact of the associated factors, and model the risk distribution using this heterogeneity. The statistical analysis of gridded geographic and environmental factors and TBE incidence shows that TBE cases occurred mainly during spring and summer and that there is a significant positive spatial autocorrelation between the distribution of TBE cases and environmental characteristics. The factors affect TBE risk in the following descending order of importance: temperature, relative humidity, vegetation coverage, precipitation and topography. A high-risk area with a triangular shape was identified in the central part of Hulunbuir; the low-risk area is located in the two belts next to the outside edge of the central triangle. The TBE risk distribution revealed that the impact of the geographic factors changed depending on the heterogeneity.
Barnett, Adrian Gerard
2016-01-01
Objective: Foodborne illnesses in Australia, including salmonellosis, are estimated to cost over $A1.25 billion annually. The weather has been identified as influential on salmonellosis incidence, as cases increase during summer; however, time series modelling of salmonellosis is challenging because outbreaks cause strong autocorrelation. This study assesses whether switching models are an improved method of estimating weather–salmonellosis associations. Design: We analysed weather and salmonellosis in South-East Queensland between 2004 and 2013 using two common regression models and a switching model, each with 21-day lags for temperature and precipitation. Results: The switching model best fit the data, as judged by its substantial improvement in deviance information criterion over the regression models, less autocorrelated residuals and control of seasonality. The switching model estimated that a 5°C increase in mean temperature and 10 mm precipitation were associated with increases in salmonellosis cases of 45.4% (95% CrI 40.4%, 50.5%) and 24.1% (95% CrI 17.0%, 31.6%), respectively. Conclusions: Switching models improve on traditional time series models in quantifying weather–salmonellosis associations. A better understanding of how temperature and precipitation influence salmonellosis may identify where interventions can be made to lower the health and economic costs of salmonellosis. PMID:26916693
Gabriel, Jan; Petrov, Oleg V; Kim, Youngsik; Martin, Steve W; Vogel, Michael
2015-09-01
We use ⁷Li NMR to study the ionic jump motion in ternary 0.5Li2S+0.5[(1-x)GeS2+xGeO2] glassy lithium ion conductors. Exploring the "mixed glass former effect" in this system led to the assumption of a homogeneous and random variation of diffusion barriers. We exploit the fact that, by combining traditional line-shape analysis with novel field-cycling relaxometry, it is possible to measure the spectral density of the ionic jump motion over broad frequency and temperature ranges and, thus, to determine the distribution of activation energies. Two models are employed to parameterize the ⁷Li NMR data, namely, the multi-exponential autocorrelation function model and the power-law waiting times model. Careful evaluation of both of these models indicates a broadly inhomogeneous energy landscape for both the single (x=0.0) and the mixed (x=0.1) network former glasses. The multi-exponential autocorrelation function model can be well described by a Gaussian distribution of activation barriers. The applicability of the methods used and their sensitivity to the microscopic details of ionic motion are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
D'Hour, V.; Schimmel, M.; Do Nascimento, A. F.; Ferreira, J. M.; Lima Neto, H. C.
2016-04-01
Ambient noise correlation analyses are widely used in seismology to map heterogeneities and to monitor the temporal evolution of seismic velocity changes associated mostly with stress field variations and/or fluid movements. Here we analyse a small earthquake swarm related to a main mR 3.7 intraplate earthquake in the Northeast of Brazil to study the corresponding post-seismic effects on the medium. So far, post-seismic effects have been observed mainly for large magnitude events. In our study, we show that we were able to detect localized structural changes even for a small earthquake swarm in an intraplate setting. Different correlation strategies are presented and their performances are compared. We compare the classical auto-correlation with and without pre-processing, including 1-bit normalization and spectral whitening, and the phase auto-correlation. The worst results were obtained for the pre-processed data due to the loss of waveform details. The best results were achieved with the phase cross-correlation, which is amplitude unbiased and sensitive to small amplitude changes as long as the waveform coherence exceeds that of unrelated signals and noise. The analysis of 6 months of data using phase auto-correlation and cross-correlation resulted in the observation of a progressive medium change after the major recorded event. The progressive medium change is likely related to the swarm activity opening new pathways for pore fluid diffusion. For the auto-correlations, we further observed a frequency-dependent change in lag time, which likely indicates that the medium change is localized in depth. As expected, the main change is observed along the fault.
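The classical pre-processing steps mentioned above can be sketched in a few lines (an illustration only, not the phase correlation the authors favour): 1-bit normalization keeps only the sign of the trace, spectral whitening flattens the amplitude spectrum while keeping the phase, and the autocorrelation is then computed on the processed trace. The synthetic trace and transient below are assumptions.

```python
import numpy as np

def one_bit(x):
    """1-bit normalization: keep only the sign of the trace."""
    return np.sign(x)

def spectral_whiten(x, eps=1e-10):
    """Flatten the amplitude spectrum while keeping the phase."""
    X = np.fft.rfft(x)
    return np.fft.irfft(X / (np.abs(X) + eps), n=len(x))

def autocorr(x, max_lag):
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + max_lag]
    return c / c[0]

rng = np.random.default_rng(8)
trace = rng.standard_normal(5000)
trace[2500:2600] += 20 * np.hanning(100)        # a large transient that biases the raw correlation

raw = autocorr(trace, 200)
proc = autocorr(spectral_whiten(one_bit(trace)), 200)
print(raw[:5], proc[:5])
```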
Time-series network analysis of civil aviation in Japan (1985-2005)
NASA Astrophysics Data System (ADS)
Michishita, Ryo; Xu, Bing; Yamada, Ikuho
2008-10-01
Due to airline deregulation in 1985, a series of new airport developments in the 1990s and 2000s, and the reorganization of airline companies in the 2000s, Japan's air passenger transportation has been dramatically altered in the last two decades in many ways. In this paper, the authors examine how the network and flow structures of domestic air passenger transportation in Japan have geographically changed since 1985. For this purpose, passenger flow data in 1985, 1995, and 2005 were extracted from the Air Transportation Statistical Survey conducted by the Ministry of Land, Infrastructure and Transport, Japan. First, national and regional hub airports are identified via dominant flow and hub function analysis. Then the roles of the hub airports and individual connections over the network are examined with respect to their spatial and network autocorrelations. Spatial and network autocorrelations were evaluated both globally and locally using Moran's I and LISA statistics. The passenger flow data were first examined as a whole and then divided into three airline-based categories. Dominant flow and hub function analysis enabled us to detect the hub airports. Structural processes of the hub-and-spoke network were confirmed for each airline through spatial autocorrelation analysis. Network autocorrelation analysis showed that all airlines ingeniously optimized their networks by connecting their routes with large numbers of passengers to other routes with large numbers of passengers, and routes with small numbers of passengers to other routes with small numbers of passengers. The effects of political events and the changes in the strategies of each airline on the whole networks were strongly reflected in the results of this study.
Le, Kim N; Marsik, Matthew; Daegling, David J; Duque, Ana; McGraw, William Scott
2017-03-01
We investigated how heterogeneity in material stiffness affects structural stiffness in cercopithecid mandibular cortical bone. We assessed (1) whether this effect changes the interpretation of interspecific structural stiffness variation across four primate species, (2) whether the heterogeneity is random, and (3) whether heterogeneity mitigates bending stress in the jaw associated with food processing. The sample consisted of Taï Forest, Côte d'Ivoire, monkeys: Cercocebus atys, Piliocolobus badius, Colobus polykomos, and Cercopithecus diana. Vickers indentation hardness sampling was used to estimate elastic moduli throughout the cortical bone area of each coronal section of the postcanine corpus. For each section, we calculated the maximum area moment of inertia, I_max (a structural mechanical property), under three models of material heterogeneity, as well as spatial autocorrelation statistics (Moran's I, I_Moran). When the model considered material stiffness variation and spatial patterning, I_max decreased and individual ranks based on structural stiffness changed. Rank changes were not significant across models. All specimens showed positive (nonrandom) spatial autocorrelation. Differences in I_Moran were not significant among species, and there were no discernible patterns of autocorrelation within species. Across species, significant local I_Moran was often attributed to the proximity of low moduli in the alveolar process and high moduli in the basal process. While our sample did not demonstrate species differences in the degree of spatial autocorrelation of elastic moduli, there may be mechanical effects of heterogeneity (relative strength and rigidity) that do distinguish at the species or subfamilial level (i.e., colobines vs. cercopithecines). The potential connections of heterogeneity to diet and/or taxonomy remain to be discovered. © 2016 Wiley Periodicals, Inc.
Possible Noise Nature of Elsässer Variable z- in Highly Alfvénic Solar Wind Fluctuations
NASA Astrophysics Data System (ADS)
Wang, X.; Tu, C.-Y.; He, J.-S.; Wang, L.-H.; Yao, S.; Zhang, L.
2018-01-01
There has been a long-standing debate on the nature of the Elsässer variable z- observed in solar wind fluctuations. It is widely believed that z- represents inward propagating Alfvén waves and interacts nonlinearly with z+ (outward propagating Alfvén waves) to generate the energy cascade. However, z- variations sometimes show features of convective structures. Here we present a new analysis of the autocorrelation functions of z- in order to obtain more definite information on its nature. We find that there is usually a large drop in the z- autocorrelation function when the solar wind fluctuations are highly Alfvénic. The large drop observed by the Helios 2 spacecraft near 0.3 AU appears at the first nonzero time lag τ = 81 s, where the value of the autocorrelation coefficient drops to 25%-65% of that at τ = 0 s. Beyond the first nonzero time lag, the autocorrelation coefficient decreases gradually to zero. The drop of the z- correlation function also appears in the Wind observations near 1 AU. These features of the z- correlation function may suggest that z- fluctuations consist of two components: high-frequency white noise and low-frequency pseudo structures, which correspond to the flat and steep parts of the z- power spectrum, respectively. This explanation is confirmed by a simple test on an artificial time series obtained by superposing a random data series on its smoothed sequence. Our results suggest that in highly Alfvénic fluctuations, z- may not contribute importantly to the interactions with z+ that produce the energy cascade.
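The artificial-series test described above is easy to reproduce in outline (the smoothing window and relative amplitudes below are assumptions): white noise superposed on a comparably scaled smoothed copy of itself yields an autocorrelation function that drops sharply after the first nonzero lag and then decays slowly.

```python
import numpy as np

def autocorr(x, max_lag):
    x = x - x.mean()
    c = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return c / c[0]

rng = np.random.default_rng(9)
noise = rng.standard_normal(20_000)                              # high-frequency white noise
smoothed = np.convolve(noise, np.ones(50) / 50, mode="same")     # low-frequency pseudo structures
smoothed *= 1.0 / smoothed.std()                                 # give both components comparable amplitude
z_minus = noise + smoothed                                       # noise superposed on its smoothed sequence

c = autocorr(z_minus, 10)
print("autocorrelation at lags 0..3:", np.round(c[:4], 2))       # sharp drop after lag 0, then slow decay
```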
NASA Astrophysics Data System (ADS)
Dong, Fang
1999-09-01
The research described in this dissertation is related to the characterization of tissue microstructure using a system-independent spatial autocorrelation function (SAF). The function was determined using a reference phantom method, which employed a well-defined 'point-scatterer' reference phantom to account for instrumental factors. The SAFs were estimated for several tissue-mimicking (TM) phantoms and fresh dog livers. Both phantom tests and in vitro dog liver measurements showed that the reference phantom method is relatively simple and fairly accurate, provided the bandwidth of the measurement system is sufficient for the size of the scatterers involved in the scattering process. Implementation of this method in a clinical scanner requires that distortions from the patient's body wall be properly accounted for. The SAFs were estimated for two phantoms with body-wall-like distortions. The experimental results demonstrated that body wall distortions have little effect if echo data are acquired from a large scattering volume. One interesting application of the SAF is to form a 'scatterer size image'. The scatterer size image may help provide diagnostic tools for diseases in which the tissue microstructure differs from the normal. Another method, the BSC method, uses information contained in the frequency dependence of the backscatter coefficient to estimate the scatterer size. The SAF technique produced accurate scatterer size images of homogeneous TM phantoms, and the BSC method was capable of generating accurate size images for heterogeneous phantoms. In the scatterer size image of dog kidneys, the contrast-to-noise ratio (CNR) between renal cortex and medulla was improved dramatically compared to the gray-scale image. The effect of nonlinear propagation was investigated using a custom-designed phantom with an overlying TM fat layer. The results showed that the correlation length decreased when the transmitting power increased. The measurement results support the assumption that nonlinear propagation generates harmonic energies and causes underestimation of scatterer diameters. Nonlinear propagation can be further enhanced by materials with a high B/A value, a parameter which characterizes the degree of nonlinearity. Nine versions of TM fat and non-fat materials were measured for their B/A values using a new measurement technique, the 'simplified finite amplitude insertion substitution' (SFAIS) method.
Mutel, Christopher L; Pfister, Stephan; Hellweg, Stefanie
2012-01-17
We describe a new methodology for performing regionalized life cycle assessment and systematically choosing the spatial scale of regionalized impact assessment methods. We extend standard matrix-based calculations to include matrices that describe the mapping from inventory to impact assessment spatial supports. Uncertainty in inventory spatial data is modeled using a discrete spatial distribution function, which in a case study is derived from empirical data. The minimization of global spatial autocorrelation is used to choose the optimal spatial scale of impact assessment methods. We demonstrate these techniques on electricity production in the United States, using regionalized impact assessment methods for air emissions and freshwater consumption. Case study results show important differences between site-generic and regionalized calculations, and provide specific guidance for future improvements of inventory data sets and impact assessment methods.
NASA Astrophysics Data System (ADS)
Sumi, C.
Previously, we developed three displacement vector measurement methods, i.e., the multidimensional cross-spectrum phase gradient method (MCSPGM), the multidimensional autocorrelation method (MAM), and the multidimensional Doppler method (MDM). To increase the accuracy and stability of lateral and elevational displacement measurements, we also developed spatially variant, displacement-component-dependent regularization. In particular, regularization of only the lateral/elevational displacements is advantageous for the laterally unmodulated case. The demonstrated measurements of displacement vector distributions in experiments using an agar phantom with an inhomogeneous shear modulus confirm that displacement-component-dependent regularization enables more stable shear modulus reconstruction. In this report, we also review our lateral modulation methods, which use parabolic functions, Hanning windows, and Gaussian functions as apodization functions, as well as the optimized apodization function that realizes a designed point spread function (PSF). The modulations significantly increase the accuracy of the strain tensor measurement and shear modulus reconstruction (demonstrated using an agar phantom).
Polarization-correlation study of biotissue multifractal structure
NASA Astrophysics Data System (ADS)
Olar, O. I.; Ushenko, A. G.
2003-09-01
This paper presents the results of a polarization-correlation study of the multifractal collagen structure of physiologically normal and pathologically changed tissues of the women's reproductive sphere and skin. A technique of polarization selection of coherent images of biotissues, followed by determination of their autocorrelation functions and spectral densities, is suggested. Correlation-optical criteria for early diagnosis of the onset of pathological changes in myometrium (formation of the germ of fibromyoma) and skin (psoriasis) are determined. The study investigates the possibility of recognizing pathological changes of biotissue morphological structure by determining polarization-dependent autocorrelation functions (ACFs) and the corresponding spectral densities of coherent tissue images.
NASA Astrophysics Data System (ADS)
Angelsky, Oleg V.; Pishak, Vasyl P.; Ushenko, Alexander G.; Burkovets, Dimitry N.; Pishak, Olga V.
2001-05-01
The paper presents the results of a polarization-correlation investigation of the multifractal collagen structure of physiologically normal and pathologically changed tissues of the women's reproductive sphere and of skin. A technique of polarization selection of coherent images of biotissues, followed by determination of their autocorrelation functions and spectral densities, is suggested. Correlation-optical criteria for early diagnosis of the appearance of pathological changes in myometrium (formation of the germ of fibromyoma) and in skin (psoriasis) are determined. The paper examines the possibility of diagnosing pathological changes of biotissue morphological structure by determining polarizationally filtered autocorrelation functions (ACFs) and the corresponding spectral densities of their coherent images.
Synchronous scattering and diffraction from gold nanotextured surfaces with structure factors
NASA Astrophysics Data System (ADS)
Gu, Min-Jhong; Lee, Ming-Tsang; Huang, Chien-Hsun; Wu, Chi-Chun; Chen, Yu-Bin
2018-05-01
Synchronous scattering and diffraction were demonstrated using reflectance from gold nanotextured surfaces at oblique (θi = 15° and 60°) incidence at wavelength λ = 405 nm. Two samples with unique auto-correlation functions were cost-effectively fabricated. Multiple structure factors of their profiles were confirmed with Fourier expansions. The bidirectional reflectance distribution function (BRDF) measured from these samples provided experimental proof. On the other hand, the standard deviation of height and the unique auto-correlation function of each sample were used to generate surfaces numerically. Comparing their BRDF with those of totally random rough surfaces further suggested that structure factors in the profile can reduce specular reflection more than totally random roughness does.
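Numerically generating surfaces from a measured rms height and auto-correlation function, as mentioned above, can be sketched as follows (a 1-D toy version under the assumption of a Gaussian auto-correlation; the dimensions are illustrative, and this is not the authors' generation code): white noise is filtered with a Gaussian kernel and rescaled to the target rms height.

```python
import numpy as np

def rough_profile(n, dx, sigma_h, corr_len, rng):
    """1-D random rough profile with rms height sigma_h and an approximately Gaussian
    autocorrelation exp(-tau^2 / corr_len^2), obtained by Gaussian filtering of white noise."""
    x = np.arange(-4 * corr_len, 4 * corr_len + dx, dx)
    kernel = np.exp(-2 * x**2 / corr_len**2)   # width chosen so the resulting ACF has length corr_len
    h = np.convolve(rng.standard_normal(n), kernel, mode="same")
    return sigma_h * h / h.std()               # rescale to the target rms height

rng = np.random.default_rng(10)
h = rough_profile(n=4096, dx=5e-9, sigma_h=30e-9, corr_len=200e-9, rng=rng)
print("rms height (nm):", 1e9 * h.std())
```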