NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
In this study, we introduced an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference from conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km² to 238 km² in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives considerable weight to low flows, as these are the most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, since given flow amounts are accumulated in shorter times. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
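To make the sampling unit concrete, here is a minimal sketch of computing IATs from a regularly sampled flow record; the function name, the unit choices, and the linear interpolation between samples are illustrative assumptions, not details from the paper.

```python
import numpy as np

def inter_amount_times(flow, dt, amount):
    """Return the times needed to accumulate successive fixed flow amounts.

    flow   : array of flow rates (e.g. m^3/s), sampled every dt seconds
    dt     : sampling interval of the input series (s)
    amount : fixed flow volume defining one sampling unit (m^3)
    """
    cumvol = np.cumsum(flow) * dt               # cumulative volume over time
    times = dt * np.arange(1, len(flow) + 1)    # time at the end of each step
    n_units = int(cumvol[-1] // amount)         # number of complete amounts
    targets = amount * np.arange(1, n_units + 1)
    # time at which each cumulative target is first reached, linearly
    # interpolated between samples
    crossings = np.interp(targets, cumvol, times)
    return np.diff(np.concatenate(([0.0], crossings)))  # the IATs

# Example: a flow peak accumulates the fixed amount quickly -> short IATs
flow = np.array([0.2, 0.2, 5.0, 4.0, 0.3, 0.2, 0.2])
print(inter_amount_times(flow, dt=3600.0, amount=3600.0))
```

During the peak, many flow units are accumulated per hour, so those IATs are short; this is exactly the re-weighting toward high flows that the abstract describes.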
Time Series Analysis Based on Running Mann Whitney Z Statistics
USDA-ARS's Scientific Manuscript database
A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
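A hedged sketch of the running-window construction described; the window length, the comparison of adjacent windows, and the use of the large-sample normal approximation for U (in place of the abstract's Monte Carlo normalization, which is truncated above) are assumptions filled in here.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def running_mw_z(series, window):
    """Slide two adjacent windows along the series and return a Z statistic
    for each position, measuring the rank shift between the two windows."""
    z = []
    for i in range(len(series) - 2 * window + 1):
        a = series[i:i + window]
        b = series[i + window:i + 2 * window]
        u = mannwhitneyu(a, b, alternative='two-sided').statistic
        mu = window * window / 2.0                                    # E[U] under H0
        sigma = np.sqrt(window * window * (2 * window + 1) / 12.0)    # SD[U] under H0
        z.append((u - mu) / sigma)
    return np.array(z)

# Example: a step change in level produces a run of large |Z| values
x = np.concatenate([np.random.default_rng(0).normal(0, 1, 100),
                    np.random.default_rng(1).normal(2, 1, 100)])
print(np.abs(running_mw_z(x, window=20)).max())
```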
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that these correspond to five regions: Europe, North America, South America, the Asia-Pacific region (excluding mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
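The MDS step can be sketched with scikit-learn's MDS using a precomputed dissimilarity matrix; the toy dissimilarity below merely stands in for the cross-sample-entropy matrices (MDS-KCSE/MDS-PCSE) computed in the paper.

```python
import numpy as np
from sklearn.manifold import MDS

# D: symmetric dissimilarity matrix, e.g. pairwise cross-sample entropies
# between stock index series (placeholder toy dissimilarity here)
rng = np.random.default_rng(0)
X = rng.standard_normal((18, 500))                           # 18 synthetic "indices"
s = X.std(axis=1)
D = np.abs(s[:, None] - s[None, :])                          # toy dissimilarity

# dissimilarity='precomputed' replaces the default Euclidean distance,
# mirroring how MDS-KCSE/MDS-PCSE swap in entropy-based dissimilarities
mds = MDS(n_components=3, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(D)                                # 3D perceptual map
print(coords.shape)                                          # (18, 3)
```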
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Cryo-tomography Tilt-series Alignment with Consideration of the Beam-induced Sample Motion
Fernandez, Jose-Jesus; Li, Sam; Bharat, Tanmay A. M.; Agard, David A.
2018-01-01
Recent evidence suggests that the beam-induced motion of the sample during tilt-series acquisition is a major resolution-limiting factor in electron cryo-tomography (cryoET). It causes suboptimal tilt-series alignment and thus deterioration of the reconstruction quality. Here we present a novel approach to tilt-series alignment and tomographic reconstruction that considers the beam-induced sample motion through the tilt-series. It extends the standard fiducial-based alignment approach in cryoET by introducing quadratic polynomials to model the sample motion. The model can be used during reconstruction to yield a motion-compensated tomogram. We evaluated our method on various datasets with different sample sizes. The results demonstrate that our method could be a useful tool to improve the quality of tomograms and the resolution in cryoET. PMID:29410148
Generalized sample entropy analysis for traffic signals based on similarity measure
NASA Astrophysics Data System (ADS)
Shang, Du; Xu, Mengjia; Shang, Pengjian
2017-05-01
Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, presents a different way of matching signal patterns and reveals distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
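For orientation, a compact sketch of the standard sample entropy that the paper modifies, with the conventional embedding dimension m and tolerance r; the similarity-distance modification itself is not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Standard sample entropy; r is given as a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n_templates = len(x) - m   # same template count for lengths m and m + 1

    def count_matches(dim):
        # embed the series in `dim` dimensions
        emb = np.array([x[i:i + dim] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)        # template matches of length m
    a = count_matches(m + 1)    # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

print(sample_entropy(np.random.default_rng(1).standard_normal(1000)))
```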
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
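A minimal sketch of the Gaussian-kernel estimator at lag zero: products of standardized observations are weighted by a Gaussian of the pairwise time separation, as described; the bandwidth h and the global standardization are assumed simplifications.

```python
import numpy as np

def kernel_pearson(tx, x, ty, y, h):
    """Gaussian-kernel estimate of lag-zero Pearson correlation for two
    irregularly sampled series (tx, x) and (ty, y); h is the bandwidth."""
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    # weight every cross pair by the Gaussian of its time separation
    dt = tx[:, None] - ty[None, :]
    w = np.exp(-0.5 * (dt / h) ** 2)
    return np.sum(w * np.outer(xc, yc)) / np.sum(w)

# toy example: the same signal observed on two different irregular time axes
rng = np.random.default_rng(2)
tx = np.sort(rng.uniform(0, 100, 80))
ty = np.sort(rng.uniform(0, 100, 90))
x = np.sin(0.3 * tx) + 0.1 * rng.standard_normal(80)
y = np.sin(0.3 * ty) + 0.1 * rng.standard_normal(90)
print(kernel_pearson(tx, x, ty, y, h=2.0))   # close to 1 for this toy case
```

For a lagged correlation function, the weights would instead depend on |dt - lag|, exactly as the abstract states.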
A Story-Based Simulation for Teaching Sampling Distributions
ERIC Educational Resources Information Center
Turner, Stephen; Dabney, Alan R.
2015-01-01
Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…
Transformation-cost time-series method for analyzing irregularly sampled data
NASA Astrophysics Data System (ADS)
Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen
2015-06-01
Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, with associated costs, to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
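As a rough illustration of the idea, here is a deliberately simplified cost function between consecutive fixed-width windows. The cost coefficients and the in-order pairing of points are assumptions made for brevity; the published method minimizes the cost over all admissible transformation operations.

```python
import numpy as np

def transformation_cost(seg_a, seg_b, c_shift=1.0, c_amp=1.0, c_del=1.0):
    """Simplified cost to transform segment A into segment B. Each segment
    is a (times, values) pair with times relative to its window start.
    Points are paired in temporal order; surplus points in the longer
    segment incur a deletion cost. Coefficients are illustrative."""
    (ta, xa), (tb, xb) = seg_a, seg_b
    n = min(len(ta), len(tb))
    cost = c_shift * np.sum(np.abs(ta[:n] - tb[:n]))    # time shifts
    cost += c_amp * np.sum(np.abs(xa[:n] - xb[:n]))     # amplitude changes
    cost += c_del * abs(len(ta) - len(tb))              # added/deleted points
    return cost

def tacts_series(t, x, width):
    """Costs between consecutive windows form a regularly sampled series."""
    edges = np.arange(t[0], t[-1], width)
    segs = []
    for e in edges:
        m = (t >= e) & (t < e + width)
        segs.append((t[m] - e, x[m]))    # times relative to window start
    return np.array([transformation_cost(segs[k], segs[k + 1])
                     for k in range(len(segs) - 1)])
```

The key property survives the simplification: however irregular the input, the output cost series has one value per window and can be fed to standard regularly-sampled methods.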
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets, and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets with relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets with high asynchrony, the entropy values do not always decrease with increasing scale factor; sometimes they tend to increase. Thus both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful when used together for studying other features of financial time series.
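The multiscale part of such measures follows the usual coarse-graining construction: at scale factor s the series is replaced by non-overlapping window averages, and a single-scale entropy is computed at each scale. A sketch of that scaffolding, using the standard sample entropy sketched earlier as a stand-in for the symbolic-similarity entropy of MSEBSS:

```python
import numpy as np

def coarse_grain(x, s):
    """Average non-overlapping windows of length s (scale factor s)."""
    n = len(x) // s
    return np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)

def multiscale_entropy(x, scales, entropy_fn):
    """Entropy at each scale; entropy_fn is any single-scale estimator,
    e.g. the sample_entropy sketch above (MSEBSS would substitute its
    symbolic-representation-and-similarity entropy here)."""
    return np.array([entropy_fn(coarse_grain(x, s)) for s in scales])
```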
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view onto sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results with the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, such as k-nearest neighbor or kernel-density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
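As a baseline for the estimators discussed, here is a plain (equal-width) histogram estimate of lagged mutual information for regularly sampled series; the Gaussian-weighted and adaptive-binning extensions in the paper generalize the probability estimates used here. The bin count is an assumed parameter.

```python
import numpy as np

def lagged_mutual_information(x, y, lag, bins=16):
    """Histogram estimate of I(x_t; y_{t+lag}), in nats, for regular series."""
    if lag > 0:
        a, b = x[:-lag], y[lag:]
    elif lag < 0:
        a, b = x[-lag:], y[:lag]
    else:
        a, b = x, y
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()                              # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)           # marginal of x
    py = pxy.sum(axis=0, keepdims=True)           # marginal of y
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(2000)   # y couples to x at lag 5
print(lagged_mutual_information(x, y, lag=5))
```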
Assessment of the impact of sampler changes on the uncertainty related to geothermal water sampling
NASA Astrophysics Data System (ADS)
Wątor, Katarzyna; Mika, Anna; Sekuła, Klaudia; Kmiecik, Ewa
2018-02-01
The aim of this study is to assess the impact of a change of samplers on the uncertainty associated with the process of geothermal water sampling. The study was carried out on geothermal water exploited in the Podhale region, southern Poland (Małopolska province). To estimate the uncertainty associated with sampling, the results of determinations of metasilicic acid (H2SiO3) in normal and duplicate samples collected in two series were used (in each series the samples were collected by a qualified sampler). Chemical analyses were performed using the ICP-OES method in the certified Hydrogeochemical Laboratory of the Hydrogeology and Engineering Geology Department at the AGH University of Science and Technology in Krakow (Certificate of Polish Centre for Accreditation No. AB 1050). To evaluate the uncertainty arising from sampling, an empirical approach was implemented, based on double analysis of normal and duplicate samples taken from the same well in each series of testing. The results were analysed using ROBAN software, based on the robust analysis of variance (rANOVA) technique. The research showed that, in the case of qualified and experienced samplers, the uncertainty connected with sampling can be reduced, which results in a small measurement uncertainty.
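The duplicate-based empirical approach can be illustrated with the classical (non-robust) variance estimate from paired determinations; this is a simplified stand-in for the rANOVA computation in ROBAN, and the example concentrations are invented for illustration.

```python
import numpy as np

def duplicate_uncertainty(normal, duplicate):
    """Estimate measurement (sampling + analysis) uncertainty from paired
    normal/duplicate determinations. For a pair difference d, Var(d) is
    twice the per-sample variance, hence the division by 2. Returns the
    standard uncertainty and its value relative to the mean (%)."""
    normal = np.asarray(normal, dtype=float)
    duplicate = np.asarray(duplicate, dtype=float)
    d = normal - duplicate
    s_meas = np.sqrt(np.mean(d ** 2) / 2.0)
    rel = 100.0 * s_meas / np.mean((normal + duplicate) / 2.0)
    return s_meas, rel

# toy H2SiO3 determinations (mg/L), purely illustrative values
print(duplicate_uncertainty([52.1, 50.8, 51.5], [51.7, 51.0, 52.0]))
```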
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
Design considerations for case series models with exposure onset measurement error.
Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V
2013-02-28
The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show a good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rates.
Combined scanning transmission electron microscopy tilt- and focal series.
Dahmen, Tim; Baudoin, Jean-Pierre; Lupini, Andrew R; Kübel, Christian; Slusallek, Philipp; de Jonge, Niels
2014-04-01
In this study, a combined tilt- and focal series is proposed as a new recording scheme for high-angle annular dark-field scanning transmission electron microscopy (STEM) tomography. Three-dimensional (3D) data were acquired by mechanically tilting the specimen, and recording a through-focal series at each tilt direction. The sample was a whole-mount macrophage cell with embedded gold nanoparticles. The tilt-focal algebraic reconstruction technique (TF-ART) is introduced as a new algorithm to reconstruct tomograms from such combined tilt- and focal series. The feasibility of TF-ART was demonstrated by 3D reconstruction of the experimental 3D data. The results were compared with a conventional STEM tilt series of a similar sample. The combined tilt- and focal series led to smaller "missing wedge" artifacts, and a higher axial resolution than obtained for the STEM tilt series, thus improving on one of the main issues of tilt series-based electron tomography.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Occupational Education Curriculum Development.
Based on the New York State homemaking-family living curriculum, this collection of thirty-six sample food and nutrition modules are the fifth in a series of curriculum planning guides. Organized by instructional level (grades 9-12) and by food and nutrition content emphasis (management, buymanship, leisure, career, health and safety, and…
An application of sample entropy to precipitation in Paraíba State, Brazil
NASA Astrophysics Data System (ADS)
Xavier, Sílvio Fernando Alves; da Silva Jale, Jader; Stosic, Tatijana; dos Santos, Carlos Antonio Costa; Singh, Vijay P.
2018-05-01
A climate system is characterized as a complex non-linear system. In order to describe the complex characteristics of precipitation series in Paraíba State, Brazil, we use sample entropy, an entropy-based algorithm, to evaluate the complexity of precipitation series. Sixty-nine meteorological stations are distributed over four macroregions: Zona da Mata, Agreste, Borborema, and Sertão. The results of the analysis show that the intricacy of monthly average precipitation differs among the macroregions. Sample entropy is able to reflect the dynamic change of precipitation series, providing a new way to investigate the complexity of hydrological series. The complexity exhibits areal variation of local water resource systems, which can influence the basis for utilizing and developing resources in dry areas.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.
Novel Stool-Based Protein Biomarkers for Improved Colorectal Cancer Screening: A Case-Control Study.
Bosch, Linda J W; de Wit, Meike; Pham, Thang V; Coupé, Veerle M H; Hiemstra, Annemieke C; Piersma, Sander R; Oudgenoeg, Gideon; Scheffer, George L; Mongera, Sandra; Sive Droste, Jochim Terhaar; Oort, Frank A; van Turenhout, Sietze T; Larbi, Ilhame Ben; Louwagie, Joost; van Criekinge, Wim; van der Hulst, Rene W M; Mulder, Chris J J; Carvalho, Beatriz; Fijneman, Remond J A; Jimenez, Connie R; Meijer, Gerrit A
2017-12-19
The fecal immunochemical test (FIT) for detecting hemoglobin is used widely for noninvasive colorectal cancer (CRC) screening, but its sensitivity leaves room for improvement. To identify novel protein biomarkers in stool that outperform or complement hemoglobin in detecting CRC and advanced adenomas. Case-control study. Colonoscopy-controlled referral population from several centers. 315 stool samples from one series of 12 patients with CRC and 10 persons without colorectal neoplasia (control samples) and a second series of 81 patients with CRC, 40 with advanced adenomas, and 43 with nonadvanced adenomas, as well as 129 persons without colorectal neoplasia (control samples); 72 FIT samples from a third independent series of 14 patients with CRC, 16 with advanced adenomas, and 18 with nonadvanced adenomas, as well as 24 persons without colorectal neoplasia (control samples). Stool samples were analyzed by mass spectrometry. Classification and regression tree (CART) analysis and logistic regression analyses were performed to identify protein combinations that differentiated CRC or advanced adenoma from control samples. Antibody-based assays for 4 selected proteins were done on FIT samples. In total, 834 human proteins were identified, 29 of which were statistically significantly enriched in CRC versus control stool samples in both series. Combinations of 4 proteins reached sensitivities of 80% and 45% for detecting CRC and advanced adenomas, respectively, at 95% specificity, which was higher than that of hemoglobin alone (P < 0.001 and P = 0.003, respectively). Selected proteins could be measured in small sample volumes used in FIT-based screening programs and discriminated between CRC and control samples (P < 0.001). Lack of availability of antibodies prohibited validation of the top protein combinations in FIT samples. Mass spectrometry of stool samples identified novel candidate protein biomarkers for CRC screening. Several protein combinations outperformed hemoglobin in discriminating CRC or advanced adenoma from control samples. Proof of concept that such proteins can be detected with antibody-based assays in small sample volumes indicates the potential of these biomarkers to be applied in population screening. Center for Translational Molecular Medicine, International Translational Cancer Research Dream Team, Stand Up to Cancer (American Association for Cancer Research and the Dutch Cancer Society), Dutch Digestive Foundation, and VU University Medical Center.
Mars Sample Handling Protocol Workshop Series: Workshop 2a (Sterilization)
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Brunch, Carl W. (Editor); Setlow, Richard B. (Editor); DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
The Space Studies Board of the National Research Council provided a series of recommendations to NASA on planetary protection requirements for future Mars sample return missions. One of the Board's key findings suggested that, although current evidence of the martian surface suggests that life as we know it would not tolerate the planet's harsh environment, there remain 'plausible scenarios for extant microbial life on Mars.' Based on this conclusion, all samples returned from Mars should be considered potentially hazardous until it has been demonstrated that they are not. In response to the National Research Council's findings and recommendations, NASA has undertaken a series of workshops to address issues regarding NASA's proposed sample return missions. Work was previously undertaken at the Mars Sample Handling and Protocol Workshop 1 (March 2000) to formulate recommendations on effective methods for life detection and/or biohazard testing on returned samples. The NASA Planetary Protection Officer convened the Mars Sample Sterilization Workshop, the third in the Mars Sample Handling Protocol Workshop Series, on November 28-30, 2000 at the Holiday Inn Rosslyn Westpark, Arlington, Virginia. Because of the short timeframe between this Workshop and the second Workshop in the Series, which was convened in October 2000 in Bethesda, Maryland, the two were developed in parallel; the Sterilization Workshop and its report have therefore been designated '2a'. The focus of Workshop 2a was to make recommendations for effective sterilization procedures for all phases of Mars sample return missions, and to answer the question of whether samples can be sterilized in such a way that their geological characteristics are not significantly altered.
Spatial-dependence recurrence sample entropy
NASA Astrophysics Data System (ADS)
Pham, Tuan D.; Yan, Hong
2018-03-01
Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji
2016-01-01
The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contain characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series, which renders traditional distance metrics such as Euclidean distance ineffective in distinguishing between biological variance and time domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we found that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked into buffy coats and those in plain buffy coats.
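A self-contained sketch of the classifier described: a textbook dynamic-programming DTW distance plugged into a nearest-neighbor rule. The paper uses k-NN; k = 1 is assumed here for brevity.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nn_predict(train_series, train_labels, test_series):
    """1-nearest-neighbor classification under DTW distance."""
    d = [dtw_distance(test_series, s) for s in train_series]
    return train_labels[int(np.argmin(d))]

# toy example: same shapes, shifted in time -- Euclidean distance would fail
train = [np.sin(np.linspace(0, 6, 80)), np.ones(80)]
labels = ["signal", "flat"]
test = np.sin(np.linspace(0.8, 6.8, 80))           # time-shifted "signal"
print(nn_predict(train, labels, test))             # -> "signal"
```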
Wood, David B.
2018-03-14
Rock samples have been collected, analyzed, and interpreted from drilling and mining operations at the Nevada National Security Site for over one-half of a century. Records containing geologic and hydrologic analyses and interpretations have been compiled into a series of databases. Rock samples have been photographed and thin sections scanned. Records and images are preserved and available for public viewing and downloading at the U.S. Geological Survey ScienceBase, Mercury Core Library and Data Center Web site at https://www.sciencebase.gov/mercury/ and documented in U.S. Geological Survey Data Series 297. Example applications of these data and images are provided in this report.
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-state mapping-based time series neural network BP (back-propagation) model. Neural network models have commonly been applied in industry for time series prediction by training them in a supervised manner with the error back-propagation algorithm. However, such models still exhibit a residual error between the real value and the prediction result. Therefore, we designed a two-state neural network model that compensates for this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We found that most of the simulation cases were handled satisfactorily by the two-state mapping-based time series prediction model. In particular, predictions for small-sample time series were more accurate than those of the standard MLP model.
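A minimal sketch of the residual-compensation idea, with scikit-learn MLPs standing in for the paper's back-propagation networks; the window length and layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(x, w):
    """Turn a series into (lag-window, next-value) training pairs."""
    X = np.array([x[i:i + w] for i in range(len(x) - w)])
    return X, x[w:]

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.standard_normal(600)
X, y = make_windows(x, w=10)

# state 1: predict the signal itself
net1 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
resid = y - net1.predict(X)

# state 2: predict the residual of state 1, then add the two predictions
net2 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, resid)
y_hat = net1.predict(X) + net2.predict(X)

print("RMSE one-state:", np.sqrt(np.mean((y - net1.predict(X)) ** 2)))
print("RMSE two-state:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

(In-sample error necessarily shrinks here; a fair evaluation would hold out a test segment, as the paper's simulation cases do.)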
Composite Sampling of a Bacillus anthracis Surrogate with ...
A series of experiments were conducted to explore the utility of composite-based collection of surface samples for the detection of a Bacillus anthracis surrogate using cellulose sponge samplers on a stainless steel surface.
Comparison of estimators for rolling samples using Forest Inventory and Analysis data
Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski
2003-01-01
The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...
A window-based time series feature extraction method.
Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife
2017-10-01
This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classical time series. Predicting the storage volume of a data center is very valuable for business. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and make a great contribution to predicting the future volume value. PMID:28090205
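A sketch of such a cleaning-and-reconstruction pipeline, assuming a scalar random-walk Kalman filter with an innovation gate for outliers and SciPy's cubic spline for the trend; the noise variances q, r and the gate width are illustrative tuning parameters.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_clean(z, q=1e-3, r=1.0, gate=3.0):
    """Random-walk Kalman filter; measurements whose innovation exceeds
    `gate` standard deviations are treated as outliers and skipped."""
    x, p = z[0], 1.0
    keep, est = [0], [x]
    for k in range(1, len(z)):
        p += q                                   # predict step
        s = p + r                                # innovation variance
        innov = z[k] - x
        if abs(innov) <= gate * np.sqrt(s):      # gate out outliers
            g = p / s                            # Kalman gain
            x += g * innov
            p *= (1.0 - g)
            keep.append(k)
            est.append(x)
    return np.array(keep), np.array(est)

# toy volume series with noise and one spike outlier
t = np.arange(200.0)
z = 0.05 * t + np.random.default_rng(5).normal(0, 0.5, 200)
z[120] += 15.0
idx, xs = kalman_clean(z)
trend = CubicSpline(t[idx], xs)(t)               # reconstructed main trend
```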
NASA Astrophysics Data System (ADS)
Cao, Yuzhen; Cai, Lihui; Wang, Jiang; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing
2015-08-01
In this paper, experimental neurophysiologic recording and statistical analysis are combined to investigate the nonlinear characteristics and the cognitive function of the brain. Fuzzy approximate entropy and fuzzy sample entropy are applied to characterize the model-based simulated series and electroencephalograph (EEG) series of Alzheimer's disease (AD). The effectiveness and advantages of these two kinds of fuzzy entropy are first verified through the simulated EEG series generated by the alpha rhythm model, including stronger relative consistency and robustness. Furthermore, in order to detect the abnormality of irregularity and chaotic behavior in the AD brain, the complexity features based on these two fuzzy entropies are extracted in the delta, theta, alpha, and beta bands. It is demonstrated that, due to the introduction of fuzzy set theory, the fuzzy entropies can better distinguish EEG signals of AD from those of normal subjects than the approximate entropy and sample entropy. Moreover, the entropy values of AD are significantly decreased in the alpha band, particularly in the temporal brain region, such as at electrodes T3 and T4. In addition, fuzzy sample entropy achieves higher group differences in different brain regions and a higher average classification accuracy of 88.1% with a support vector machine classifier. The obtained results show that fuzzy sample entropy may be a powerful tool to characterize the complexity abnormalities of AD, which could be helpful in further understanding of the disease.
Wavelet-based tracking of bacteria in unreconstructed off-axis holograms.
Marin, Zach; Wallace, J Kent; Nadeau, Jay; Khalil, Andre
2018-03-01
We propose an automated wavelet-based method of tracking particles in unreconstructed off-axis holograms to provide rough estimates of the presence of motion and particle trajectories in digital holographic microscopy (DHM) time series. The wavelet transform modulus maxima segmentation method is adapted and tailored to extract Airy-like diffraction disks, which represent bacteria, from DHM time series. In this exploratory analysis, the method shows potential for estimating bacterial tracks in low-particle-density time series, based on a preliminary analysis of both living and dead Serratia marcescens, and for rapidly providing a single-bit answer to whether a sample chamber contains living or dead microbes or is empty. Copyright © 2017 Elsevier Inc. All rights reserved.
United States Forest Disturbance Trends Observed Using Landsat Time Series
NASA Technical Reports Server (NTRS)
Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan
2013-01-01
Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic, since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand-clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
Model-based Clustering of Categorical Time Series with Multinomial Logit Classification
NASA Astrophysics Data System (ADS)
Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea
2010-09-01
A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
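The per-series building block, a first-order transition matrix estimated from a categorical sequence, can be sketched as follows; the full method embeds these in a Bayesian finite mixture with multinomial logit group membership, which is not reproduced here.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized first-order transition counts for one categorical series."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # rows with no observed transitions stay all-zero instead of dividing by 0
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# example: a wage-category series for one individual, states coded 0..2
print(transition_matrix([0, 0, 1, 2, 1, 1, 0, 1], n_states=3))
```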
Burkhardt, M.R.; Zaugg, S.D.; Burbank, T.L.; Olson, M.C.; Iverson, J.L.
2005-01-01
Polycyclic aromatic hydrocarbons (PAH) are recognized as environmentally relevant for their potential adverse effects on human and ecosystem health. This paper describes a method to determine the distribution of PAH and alkylated homolog groups in sediment samples. Pressurized liquid extraction (PLE), coupled with solid-phase extraction (SPE) cleanup, was developed to decrease sample preparation time, to reduce solvent consumption, and to minimize background interferences for full-scan GC-MS analysis. Recoveries from spiked Ottawa sand, environmental stream sediment, and commercially available topsoil, fortified at 1.5-15 µg per compound, averaged 94.6 ± 7.8%, 90.7 ± 5.8% and 92.8 ± 12.8%, respectively. Initial method detection limits for single-component compounds ranged from 20 to 302 µg/kg, based on 25 g samples. Results from 28 environmental sediment samples, excluding homologs, show 35 of 41 compounds (85.4%) were detected in at least one sample with concentrations ranging from 20 to 100,000 µg/kg. The most frequently detected compound, 2,6-dimethylnaphthalene, was detected in 23 of the 28 (82%) environmental samples with a concentration ranging from 15 to 907 µg/kg. The results from the 28 environmental sediment samples for the homolog series showed that 27 of 28 (96%) samples had at least one homolog series present at concentrations ranging from 20 to 89,000 µg/kg. The most frequently detected homolog series, C2-alkylated naphthalene, was detected in 26 of the 28 (93%) environmental samples with a concentration ranging from 25 to 3900 µg/kg. Results for a standard reference material using dichloromethane Soxhlet-based extraction also are compared. © 2005 Elsevier B.V. All rights reserved.
Sellbom, Martin; Sansone, Randy A; Songer, Douglas A
2017-09-01
The current study evaluated the utility of the Self-Harm Inventory (SHI) as a proxy for and screening measure of borderline personality disorder (BPD), using several Diagnostic and Statistical Manual of Mental Disorders (DSM)-based BPD measures as criteria. We used a sample of 145 psychiatric inpatients, who completed the SHI and a series of well-validated, DSM-based self-report measures of BPD. Using a series of latent trait and latent class analyses, we found that the SHI was substantially associated with a latent construct representing BPD, and differentiated latent classes of 'high' vs. 'low' BPD with good accuracy. The SHI can serve as a proxy for and a good screening measure of BPD, but future research needs to replicate these findings using structured interview-based measurement of BPD.
NASA Astrophysics Data System (ADS)
Wier, Timothy P.; Moser, Cameron S.; Grant, Jonathan F.; Riley, Scott C.; Robbins-Wamsley, Stephanie H.; First, Matthew R.; Drake, Lisa A.
2017-10-01
Both L-shaped ("L") and straight ("Straight") sample probes have been used to collect water samples from a main ballast line in land-based or shipboard verification testing of ballast water management systems (BWMS). A series of experiments was conducted to quantify and compare the sampling efficiencies of L and Straight sample probes. The findings from this research-that both L and Straight probes sample organisms with similar efficiencies-permit increased flexibility for positioning sample probes aboard ships.
Local air temperature tolerance: a sensible basis for estimating climate variability
NASA Astrophysics Data System (ADS)
Kärner, Olavi; Post, Piia
2016-11-01
The customary representation of climate using sample moments is generally biased due to the noticeably nonstationary behaviour of many climate series. In this study, we introduce a moment-free climate representation based on a statistical model fitted to a long-term daily air temperature anomaly series. This model allows us to separate the climate and weather scale variability in the series. As a result, the climate scale can be characterized using the mean annual cycle of series and local air temperature tolerance, where the latter is computed using the fitted model. The representation of weather scale variability is specified using the frequency and the range of outliers based on the tolerance. The scheme is illustrated using five long-term air temperature records observed by different European meteorological stations.
W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang
2014-01-01
The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...
Detection of cystic fibrosis mutations in a GeneChip™ assay format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyada, C.G.; Cronin, M.T.; Kim, S.M.
1994-09-01
We are developing assays for the detection of cystic fibrosis mutations based on DNA hybridization. A DNA sample is amplified by PCR, labeled by incorporating a fluorescein-tagged dNTP, enzymatically treated to produce smaller fragments, and hybridized to a series of short (13-16 base) oligonucleotides synthesized on a glass surface via photolithography. The hybrids are detected by epifluorescence, and mutations are identified by the specific pattern of hybridization. In a GeneChip assay, the chip surface is composed of a series of subarrays, each being specific for a particular mutation. Each subarray is further subdivided into a series of probes (40 total), half based on the mutant sequence and the remainder based on the wild-type sequence. For each of the subarrays, there is redundancy in the number of probes that should hybridize to either a wild-type or a mutant target. The multiple-probe strategy provides sequence information for a short five-base region overlapping the mutation site. In addition, homozygous wild-type and mutant as well as heterozygous samples are each identified by a specific pattern of hybridization. The small size of each probe feature (250 × 250 µm²) permits the inclusion of additional probes required to generate sequence information by hybridization.
ERIC Educational Resources Information Center
Ipek, Hava; Calik, Muammer
2008-01-01
Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…
Indian Craniometric Variability and Affinities
Raghavan, Pathmanathan; Bulbeck, David; Pathmanathan, Gayathiri; Rathee, Suresh Kanta
2013-01-01
Recently published craniometric and genetic studies indicate a predominantly indigenous ancestry of Indian populations. We address this issue with a fuller coverage of Indian craniometrics than any done before. We analyse metrical variability within Indian series, Indians' sexual dimorphism, differences between northern and southern Indians, index-based differences of Indian males from other series, and Indians' multivariate affinities. The relationship between a variable's magnitude and its variability is log-linear. This relationship is strengthened by excluding cranial fractions and series with a sample size less than 30. Male crania are typically larger than female crania, but there are also shape differences. Northern Indians differ from southern Indians in various features including narrower orbits and less pronounced medial protrusion of the orbits. Indians resemble Veddas in having small crania and similar cranial shape. Indians' wider geographic affinities lie with “Caucasoid” populations to the northwest, particularly affecting northern Indians. The latter finding is confirmed from shape-based Mahalanobis-D distances calculated for the best sampled male and female series. Demonstration of a distinctive South Asian craniometric profile and the intermediate status of northern Indians between southern Indians and populations northwest of India confirm the predominantly indigenous ancestry of northern and especially southern Indians. PMID:24455409
Trépout, Sylvain; Bastin, Philippe; Marco, Sergio
2017-03-12
This report describes a protocol for preparing thick biological specimens for further observation using a scanning transmission electron microscope. It also describes an imaging method for studying the 3D structure of thick biological specimens by scanning transmission electron tomography. The sample preparation protocol is based on conventional methods in which the sample is fixed using chemical agents, treated with a heavy atom salt contrasting agent, dehydrated in a series of ethanol baths, and embedded in resin. The specific imaging conditions for observing thick samples by scanning transmission electron microscopy are then described. Sections of the sample are observed using a through-focus method involving the collection of several images at various focal planes. This enables the recovery of in-focus information at various heights throughout the sample. This particular collection pattern is performed at each tilt angle during tomography data collection. A single image is then generated, merging the in-focus information from all the different focal planes. A classic tilt-series dataset is then generated. The advantage of the method is that the tilt-series alignment and reconstruction can be performed using standard tools. The collection of through-focal images allows the reconstruction of a 3D volume that contains all of the structural details of the sample in focus.
Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.
Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J
2016-02-01
It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular sampling: missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
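For readers unfamiliar with Bandt-Pompe symbolization, the following is a minimal sketch of counting forbidden ordinal patterns; it is not the authors' code, and the pattern order d = 4 and the series lengths are arbitrary choices for illustration.

```python
import itertools
import numpy as np

def count_forbidden(x, d=4):
    """Count how many of the d! ordinal patterns of order d never occur
    in the series x (Bandt-Pompe symbolization)."""
    counts = {p: 0 for p in itertools.permutations(range(d))}
    for i in range(len(x) - d + 1):
        pattern = tuple(int(k) for k in np.argsort(x[i:i + d]))
        counts[pattern] += 1
    return sum(1 for c in counts.values() if c == 0)

# A deterministic (chaotic logistic map) series retains forbidden patterns,
# while white noise of the same length visits essentially all patterns.
rng = np.random.default_rng(0)
x = np.empty(5000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
print(count_forbidden(x))                 # > 0 for the deterministic map
print(count_forbidden(rng.random(5000)))  # typically 0 for noise
```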
Modular microfluidic system for biological sample preparation
Rose, Klint A.; Mariella, Jr., Raymond P.; Bailey, Christopher G.; Ness, Kevin Dean
2015-09-29
A reconfigurable modular microfluidic system for preparation of a biological sample, including a series of reconfigurable modules for automated sample preparation adapted to selectively include a) a microfluidic acoustic focusing filter module, b) a dielectrophoresis bacteria filter module, c) a dielectrophoresis virus filter module, d) an isotachophoresis nucleic acid filter module, e) a lysis module, and f) an isotachophoresis-based nucleic acid filter.
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nohara, D.
2017-12-01
The shorter temporal scale variation in the downward solar irradiance at the ground level (DSI) is not well understood, because research on shorter-scale variation in the DSI is based on ground observations, and ground observation stations are sparsely distributed. Use of datasets derived from satellite observation can overcome this limitation. DSI data and the MODIS cloud properties product are analyzed simultaneously. Three metrics (mean, standard deviation and sample entropy) are used to evaluate time-series properties of the DSI. The three metrics are computed from two-hour time series centered at the MODIS observation time over the ground observation stations. We apply regression methods to design models that predict each of the three metrics from cloud properties. Validation of model accuracy shows that mean and standard deviation are predicted with a high degree of accuracy, whereas the accuracy of prediction of sample entropy, which represents the complexity of a time series, is not high. One cause of the lower prediction skill for sample entropy is the resolution of the MODIS cloud properties: higher sample entropy corresponds to rapid fluctuations caused by small, unordered clouds, and it appears that such clouds are not retrieved well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashida, Misa; Malac, Marek; Egerton, Ray F.
Electron tomography is a method whereby a three-dimensional reconstruction of a nanoscale object is obtained from a series of projected images measured in a transmission electron microscope. We developed an electron-diffraction method to measure the tilt and azimuth angles, with Kikuchi lines used to align a series of diffraction patterns obtained with each image of the tilt series. Since it is based on electron diffraction, the method is not affected by sample drift and is not sensitive to sample thickness, whereas tilt angle measurement and alignment using fiducial-marker methods are affected by both sample drift and thickness. The accuracy of the diffraction method benefits reconstructions with a large number of voxels, where both high spatial resolution and a large field of view are desired. The diffraction method allows both the tilt and azimuth angle to be measured, while fiducial marker methods typically treat the tilt and azimuth angle as an unknown parameter. The diffraction method can also be used to estimate the accuracy of the fiducial marker method, and the sample-stage accuracy. A nano-dot fiducial marker measurement differs from a diffraction measurement by no more than ±1°.
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
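As a point of reference for the basic (non-repeating) technique, here is a minimal sketch of background/residual separation using a cubic spline on equidistant knots; the knot count, test signal, and library choice (SciPy) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(12)
t = np.linspace(0, 10, 500)
# Slow background wave plus a short-period superimposed fluctuation and noise
y = (np.sin(2 * np.pi * t / 10)
     + 0.3 * np.sin(2 * np.pi * t / 0.8)
     + 0.1 * rng.normal(size=t.size))

# Cubic spline with equidistant interior knots (the "spline sampling points");
# the residuals then carry the superimposed short-period fluctuations.
knots = np.linspace(t[0], t[-1], 12)[1:-1]
spline = LSQUnivariateSpline(t, y, knots, k=3)
background = spline(t)
residuals = y - background
print(residuals.std())   # amplitude of the filtered-out fluctuations
```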
Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan Huang
2015-01-01
We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
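To make the segmented-regression setup concrete, the sketch below fits an interrupted time series with level-change and trend-change terms and bootstraps a CI for the relative level change. It uses naive case resampling on synthetic data and omits the autocorrelation correction described in the abstract, so it is a schematic only; all variable names and numbers are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, t0 = 48, 24                           # 48 monthly points, intervention at 24
t = np.arange(n)
level = (t >= t0).astype(float)          # step change after the intervention
trend = np.where(t >= t0, t - t0, 0.0)   # slope change after the intervention
y = 50 + 0.2 * t - 6 * level - 0.3 * trend + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([t, level, trend]))
fit = sm.OLS(y, X).fit()

# Bootstrap CI for the relative change in level at the intervention.
# Naive iid case resampling; serially correlated data would need block
# resampling or the autocorrelation-corrected model used in the paper.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    b = sm.OLS(y[idx], X[idx]).fit().params
    counterfactual = b[0] + b[1] * t0    # predicted level absent intervention
    boot.append(b[2] / counterfactual)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"relative level change, 95% CI: [{lo:.3f}, {hi:.3f}]")
```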
Wardlaw, Bruce R.; Ellwood, Brooks B.; Lambert, Lance L.; Tomkin, Jonathan H.; Bell, Gordon L.; Nestell, Galina P.
2012-01-01
Here we establish a magnetostratigraphy susceptibility zonation for the three Middle Permian Global boundary Stratotype Sections and Points (GSSPs) that have recently been defined, located in Guadalupe Mountains National Park, West Texas, USA. These GSSPs, all within the Middle Permian Guadalupian Series, define (1) the base of the Roadian Stage (base of the Guadalupian Series), (2) the base of the Wordian Stage and (3) the base of the Capitanian Stage. Data from two additional stratigraphic successions in the region, equivalent in age to the Kungurian–Roadian and Wordian–Capitanian boundary intervals, are also reported. Based on low-field, mass specific magnetic susceptibility (χ) measurements of 706 closely spaced samples from these stratigraphic sections and time-series analysis of one of these sections, we (1) define the magnetostratigraphy susceptibility zonation for the three Guadalupian Series Global boundary Stratotype Sections and Points; (2) demonstrate that χ datasets provide a proxy for climate cyclicity; (3) give quantitative estimates of the time it took for some of these sediments to accumulate; (4) give the rates at which sediments were accumulated; (5) allow more precise correlation to equivalent sections in the region; (6) identify anomalous stratigraphic horizons; and (7) give estimates for timing and duration of geological events within sections.
Selected Characteristics of Persons in Environmental Science: 1978.
ERIC Educational Resources Information Center
Palumbo, Thomas J.; And Others
1982-01-01
This report is the third of a series of reports based on data collected in the 1978 National Sample of Scientists and Engineers survey. Profiled are the characteristics of 29,775 persons represented in the national sample's field of environmental scientists: 24,615 earth scientists, 3,481 atmospheric scientists, and 1,678 oceanographers.…
Blood oxygen saturation of frozen tissue determined by hyper spectral imaging
NASA Astrophysics Data System (ADS)
Braaf, Boy; Nadort, Annemarie; Faber, Dirk; ter Wee, Rene; van Leeuwen, Ton; Aalders, Maurice
2008-02-01
A method is proposed for determining blood oxygen saturation in frozen tissue. The method is based on a spectral camera system equipped with an acousto-optical tunable filter. The hyperspectral imaging (HSI) setup is validated by measuring series of unfrozen and frozen samples of a hemoglobin solution, a hemoglobin-intralipid mixture and whole blood with varying oxygen saturation. The theoretically predicted linear relation between oxygen saturation and absorbance was observed in both the frozen sample series and the unfrozen series. In a final proof of principle, frozen myocardial tissue was measured. Higher saturation values were recorded for ventricle and atria tissue compared to the septum and connective tissue. These results are not validated by measurements with another method. The formation of methemoglobin during freezing and the presence of myoglobin in the tissue turned out to be possible sources of error.
Biogeochemistry from Gliders at the Hawaii Ocean Time-series
NASA Astrophysics Data System (ADS)
Nicholson, D. P.; Barone, B.; Karl, D. M.
2016-02-01
At the Hawaii Ocean Time-series (HOT), autonomous underwater gliders equipped with biogeochemical sensors observe the ocean for months at a time, sampling spatiotemporal scales missed by the ship-based programs. Over the last decade, glider data augmented by a foundation of time-series observations have shed light on biogeochemical dynamics occurring spatially at meso- and submesoscales and temporally on scales from diel to annual. We present insights gained from the synergy between glider observations, time-series measurements and remote sensing in the subtropical North Pacific. We focus on diel variability observed in dissolved oxygen and bio-optics and on approaches to autonomously quantify net community production and gross primary production (GPP) as developed during the 2012 Hawaii Ocean Experiment - DYnamics of Light And Nutrients (HOE-DYLAN). Glider-based GPP measurements were extended to explore the relationship between GPP and mesoscale context over multiple years of Seaglider deployments.
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the discrete Fourier transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series, but the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
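The core ingredient, the Lomb-Scargle periodogram of an irregularly sampled series, is available in SciPy. This minimal sketch, with an arbitrary synthetic signal, shows the spectrum recovering an injected frequency despite irregular sampling; it illustrates only the LSP step, not the authors' full chaos-detection algorithm.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 800))         # irregular sample times
y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.normal(size=t.size)
y -= y.mean()

freqs = np.linspace(0.01, 2.0, 2000)          # cycles per unit time
power = lombscargle(t, y, 2 * np.pi * freqs)  # scipy expects angular freqs
print(freqs[np.argmax(power)])                # ~0.5, the injected frequency
```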
[Winter wheat area estimation with MODIS-NDVI time series based on parcel].
Li, Le; Zhang, Jin-shui; Zhu, Wen-quan; Hu, Tan-gao; Hou, Dong
2011-05-01
Several attributes of MODIS (moderate resolution imaging spectrometer) data, especially the short temporal intervals and the global coverage, provide an extremely efficient way to map cropland and monitor its seasonal change. However, the reliability of the resulting measurements is challenged by the limited spatial resolution. Parcel data have clear geo-location and boundary information for cropland, and the spectral differences and the complexity of mixed pixels are weak within parcels. All of this makes area estimation based on parcels more advantageous than estimation based on pixels. In the present study, winter wheat area estimation based on MODIS-NDVI time series was performed with the support of cultivated land parcels in Tongzhou, Beijing. To extract the regional winter wheat acreage, multiple regression methods were used to model the stable regression relationship between MODIS-NDVI time series data and TM samples within parcels. In this way, the consistency of the extraction results from MODIS and TM stably reached 96% when the samples accounted for 15% of the whole area. The results show that the use of parcel data can effectively reduce the recognition errors in MODIS-NDVI based estimates that are caused by low spatial resolution. Therefore, by combining moderate and low resolution data, winter wheat area estimation becomes feasible over large regions that lack complete medium resolution coverage or whose images are contaminated by clouds. The study also provides preliminary groundwork for area estimation of other crops.
Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have widely been used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; for long signals, however, MSE works well. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we have proposed multiscale normalized corrected Shannon entropy (MNCSE), in which, instead of sample entropy, the symbolic entropy measure NCSE is used as the entropy estimate. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features. PMID:29771977
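The coarse-graining plus sample-entropy pipeline that MSE builds on can be written compactly. The sketch below is a simplified illustration (the tolerance r, embedding dimension m, and white-noise test signal are arbitrary choices), not the MNCSE implementation proposed here.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r): -ln of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r*std) also match at m+1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, tau):
    """Non-overlapping averages of length tau (the MSE coarse-graining step)."""
    n = len(x) // tau
    return np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(3)
white = rng.normal(size=3000)
mse = [sample_entropy(coarse_grain(white, tau)) for tau in range(1, 11)]
print(np.round(mse, 2))   # decreases with tau for white noise
```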
Adaptive Sensing of Time Series with Application to Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.; Cabrol, Nathalie A.; Furlong, Michael; Hardgrove, Craig; Low, Bryan K. H.; Moersch, Jeffrey; Wettergreen, David
2013-01-01
We address the problem of adaptive, information-optimal data collection in time series. Here a remote sensor or explorer agent throttles its sampling rate in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility -- all collected datapoints lie in the past, but its resource allocation decisions require predicting far into the future. Our solution is to continually fit a Gaussian process model to the latest data and optimize the sampling plan on line to maximize information gain. We compare the performance characteristics of stationary and nonstationary Gaussian process models. We also describe an application based on geologic analysis during planetary rover exploration. Here adaptive sampling can improve coverage of localized anomalies and potentially benefit mission science yield of long autonomous traverses.
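A toy version of the fit-then-choose loop: fit a Gaussian process to the observations collected so far, then sample next where the predictive variance (a common proxy for information gain) is largest. The kernel, synthetic data, and the variance criterion are illustrative simplifications of the paper's constrained information-gain optimization, not its actual implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def next_sample_time(t_obs, y_obs, t_candidates):
    """Pick the candidate time with the largest predictive std,
    a simple stand-in for expected information gain."""
    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.1),
                                  normalize_y=True)
    gp.fit(np.atleast_2d(t_obs).T, y_obs)
    _, std = gp.predict(np.atleast_2d(t_candidates).T, return_std=True)
    return t_candidates[np.argmax(std)]

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 50, 12))             # sparse past observations
y = np.sin(t / 5.0) + 0.1 * rng.normal(size=t.size)
print(next_sample_time(t, y, np.linspace(0, 50, 500)))
```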
Society Membership Survey: 1986 Salaries.
ERIC Educational Resources Information Center
Skelton, W. Keith; And Others
The fourth in a series of reports produced by the Education and Employment Statistics division of the American Institute of Physics (AIP) is presented. Data are based on a stratified random sample survey of one-sixth of the U.S. and Canadian membership of the AIP member societies. In the spring of 1986, every individual in the sample received a…
Spanish Multicenter Normative Studies (NEURONORMA Project): methods and sample characteristics.
Peña-Casanova, Jordi; Blesa, Rafael; Aguilar, Miquel; Gramunt-Fombuena, Nina; Gómez-Ansón, Beatriz; Oliva, Rafael; Molinuevo, José Luis; Robles, Alfredo; Barquero, María Sagrario; Antúnez, Carmen; Martínez-Parra, Carlos; Frank-García, Anna; Fernández, Manuel; Alfonso, Verónica; Sol, Josep M
2009-06-01
This paper describes the methods and sample characteristics of a series of Spanish normative studies (The NEURONORMA project). The primary objective of our research was to collect normative and psychometric information on a sample of people aged over 49 years. The normative information was based on a series of selected, but commonly used, neuropsychological tests covering attention, language, visuo-perceptual abilities, constructional tasks, memory, and executive functions. A sample of 356 community dwelling individuals was studied. Demographics, socio-cultural, and medical data were collected. Cognitive normality was validated via informants and a cognitive screening test. Norms were calculated for midpoint age groups. Effects of age, education, and sex were determined. The use of these norms should improve neuropsychological diagnostic accuracy in older Spanish subjects. These data may also be of considerable use for comparisons with other normative studies. Limitations of these normative data are also commented on.
Giassi, Pedro; Okida, Sergio; Oliveira, Maurício G; Moraes, Raimes
2013-11-01
Short-term cardiovascular regulation mediated by the sympathetic and parasympathetic branches of the autonomic nervous system has been investigated by multivariate autoregressive (MVAR) modeling, providing insightful analysis. MVAR models employ, as inputs, heart rate (HR), systolic blood pressure (SBP) and respiratory waveforms. The ECG (from which the HR series is obtained) and the respiratory flow waveform (RFW) can be easily sampled from patients. Nevertheless, the available methods for acquisition of beat-to-beat SBP measurements during exams hamper the wider use of MVAR models in clinical research. Recent studies show an inverse correlation between pulse wave transit time (PWTT) series and SBP fluctuations. PWTT is the time interval between the ECG R-wave peak and the photoplethysmography waveform (PPG) base point within the same cardiac cycle. This study investigates the feasibility of using the inverse PWTT (IPWTT) series as an alternative input to SBP for MVAR modeling of cardiovascular regulation. For that, HR, RFW, and IPWTT series acquired from volunteers during postural changes and autonomic blockade were used as input of MVAR models. The obtained results show that IPWTT series can be used as input of MVAR models, replacing SBP measurements in order to overcome practical difficulties related to continuous sampling of the SBP during clinical exams.
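For orientation, fitting an MVAR model to three such series is straightforward with statsmodels; the synthetic HR/RFW/IPWTT data below are hypothetical stand-ins, and the lag-order selection by AIC is an assumption rather than the study's protocol.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical beat-to-beat series: HR, respiratory flow, inverse PWTT,
# with weak cross-couplings built in for the example.
rng = np.random.default_rng(5)
n = 600
resp = np.sin(np.arange(n) / 4.0) + 0.1 * rng.normal(size=n)
hr = 60 + 2 * np.roll(resp, 1) + rng.normal(0, 0.5, n)
ipwtt = 0.005 + 1e-4 * np.roll(hr, 2) + 1e-5 * rng.normal(size=n)

data = pd.DataFrame({"HR": hr, "RFW": resp, "IPWTT": ipwtt})
fit = VAR(data).fit(maxlags=8, ic="aic")   # order chosen by AIC
print(fit.k_ar)                            # selected MVAR model order
```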
Time-Resolved Transposon Insertion Sequencing Reveals Genome-Wide Fitness Dynamics during Infection.
Yang, Guanhua; Billings, Gabriel; Hubbard, Troy P; Park, Joseph S; Yin Leung, Ka; Liu, Qin; Davis, Brigid M; Zhang, Yuanxing; Wang, Qiyao; Waldor, Matthew K
2017-10-03
Transposon insertion sequencing (TIS) is a powerful high-throughput genetic technique that is transforming functional genomics in prokaryotes, because it enables genome-wide mapping of the determinants of fitness. However, current approaches for analyzing TIS data assume that selective pressures are constant over time and thus do not yield information regarding changes in the genetic requirements for growth in dynamic environments (e.g., during infection). Here, we describe structured analysis of TIS data collected as a time series, termed pattern analysis of conditional essentiality (PACE). From a temporal series of TIS data, PACE derives a quantitative assessment of each mutant's fitness over the course of an experiment and identifies mutants with related fitness profiles. In so doing, PACE circumvents major limitations of existing methodologies, specifically the need for artificial effect size thresholds and enumeration of bacterial population expansion. We used PACE to analyze TIS samples of Edwardsiella piscicida (a fish pathogen) collected over a 2-week infection period from a natural host (the flatfish turbot). PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a cutoff at a terminal sampling point, and it identified subpopulations of mutants with distinct fitness profiles, one of which informed the design of new live vaccine candidates. Overall, PACE enables efficient mining of time series TIS data and enhances the power and sensitivity of TIS-based analyses. IMPORTANCE Transposon insertion sequencing (TIS) enables genome-wide mapping of the genetic determinants of fitness, typically based on observations at a single sampling point. Here, we move beyond analysis of endpoint TIS data to create a framework for analysis of time series TIS data, termed pattern analysis of conditional essentiality (PACE). We applied PACE to identify genes that contribute to colonization of a natural host by the fish pathogen Edwardsiella piscicida. PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a terminal sampling point, and its clustering of mutants with related fitness profiles informed the design of new live vaccine candidates. PACE yields insights into patterns of fitness dynamics and circumvents major limitations of existing methodologies. Finally, the PACE method should be applicable to additional "omic" time series data, including screens based on clustered regularly interspaced short palindromic repeats with Cas9 (CRISPR/Cas9).
The Savannah River Site's groundwater monitoring program. Third quarter 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-05-06
The Environmental Protection Department/Environmental Monitoring Section (EPD/EMS) administers the Savannah River Site's (SRS) Groundwater Monitoring Program. During third quarter 1990 (July through September) EPD/EMS conducted routine sampling of monitoring wells and drinking water locations. EPD/EMS established two sets of flagging criteria in 1986 to assist in the management of sample results. The flagging criteria do not define contamination levels; instead they aid personnel in sample scheduling, interpretation of data, and trend identification. The flagging criteria are based on detection limits, background levels in SRS groundwater, and drinking water standards. All analytical results from third quarter 1990 are listed in this report, which is distributed to all site custodians. One or more analytes exceeded Flag 2 in 87 monitoring well series. Analytes exceeded Flag 2 for the first time since 1984 in 14 monitoring well series. In addition to groundwater monitoring, EPD/EMS collected drinking water samples from SRS drinking water systems supplied by wells. The drinking water samples were analyzed for radioactive constituents.
Li, Yue Ru; Marschilok, Amy C.; Takeuchi, Esther S.; ...
2015-11-24
This report describes the first detailed electrochemical examination of a series of copper birnessite samples under lithium-based battery conditions, allowing a structure/function analysis of the electrochemistry and related material properties. To obtain the series of copper birnessite samples, a novel synthetic approach for the preparation of copper birnessite, CuxMnOy·nH2O, is reported. The copper content (x) in CuxMnOy·nH2O, 0.28 ≥ x ≥ 0.20, was inversely proportional to crystallite size, which ranged from 12 to 19 nm. The electrochemistry under lithium-based battery conditions showed that the higher copper content (x = 0.28), small crystallite size (~12 nm) sample delivered ~194 mAh/g, about 20% higher capacity than the low copper content (x = 0.22), larger crystallite size (~19 nm) material. In addition, CuxMnOy·nH2O displays quasi-reversible electrochemistry in magnesium-based electrolytes, indicating that copper birnessite could be a candidate for future application in magnesium-ion batteries.
High School and Beyond First Follow-Up (1982). Sample Design Report.
ERIC Educational Resources Information Center
Tourangeau, Roger; And Others
This report documents the major technical aspects of the sample selection and implementation of the 1982 High School and Beyond First Follow-Up, the first in a series of planned resurveys of the students and schools in the 1980 High School and Beyond Base Year Survey. The First Follow-Up included subsamples of nearly 30,000 sophomore cohort and…
NASA Astrophysics Data System (ADS)
Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.
2018-02-01
River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
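To make the notion of spectral slope concrete, the sketch below estimates β from a log-log fit to the Lomb-Scargle periodogram of a gappy Brown-noise series; per the paper's findings, this naive LSP estimate should be expected to run low. The frequency grid and test signal are arbitrary example choices, not the authors' evaluation setup.

```python
import numpy as np
from scipy.signal import lombscargle

def spectral_slope(t, y, nfreq=500):
    """Estimate beta assuming P(f) ~ f^(-beta), via a straight-line fit to
    the log-log Lomb-Scargle periodogram of an irregularly sampled series."""
    y = y - y.mean()
    span = t.max() - t.min()
    freqs = np.logspace(np.log10(1.0 / span),
                        np.log10(0.5 * len(t) / span),  # pseudo-Nyquist
                        nfreq)
    power = lombscargle(t, y, 2 * np.pi * freqs)
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return -slope

# Brown noise (beta ~ 2) on a regular grid, then randomly decimated by half
rng = np.random.default_rng(6)
t_full = np.arange(4000.0)
y_full = np.cumsum(rng.normal(size=t_full.size))
keep = np.sort(rng.choice(t_full.size, 2000, replace=False))
print(spectral_slope(t_full[keep], y_full[keep]))  # typically < 2 (biased low)
```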
High-Temperature Cyclic Oxidation Data, Volume 1
NASA Technical Reports Server (NTRS)
Barrett, C. A.; Garlick, R. G.; Lowell, C. E.
1984-01-01
This first in a series of cyclic oxidation handbooks contains specific-weight-change-versus-time data and X-ray diffraction results derived from high-temperature cyclic tests on high-temperature, high-strength nickel-base gamma/gamma' and cobalt-base turbine alloys. Each page of data summarizes a complete test on a given alloy sample.
Forest habitat types of central Idaho
Robert Steele; Robert D. Pfister; Russell A. Ryker; Jay A. Kittams
1981-01-01
A land-classification system based upon potential natural vegetation is presented for the forests of central Idaho. It is based on reconnaissance sampling of about 800 stands. A hierarchical taxonomic classification of forest sites was developed using the habitat type concept. A total of eight climax series, 64 habitat types, and 55 additional phases of habitat types...
Need States Based on Eating Occasions Experienced by Midlife Women
ERIC Educational Resources Information Center
Vue, Houa; Degeneffe, Dennis; Reicks, Marla
2008-01-01
Objective: To identify a comprehensive set of distinct "need states" based on the eating occasions experienced by midlife women. Design: Series of 7 focus group interviews. Setting: Meeting room on a university campus. Participants: A convenience sample of 34 multi-ethnic women (mean age = 46 years). Phenomenon of Interest: Descriptions of eating…
Aur, Dorian; Vila-Rodriguez, Fidel
2017-01-01
Complexity measures for time series have been used in many applications to quantify the regularity of one-dimensional time series; however, many dynamical systems are spatially distributed multidimensional systems. We introduced Dynamic Cross-Entropy (DCE), a novel multidimensional complexity measure that quantifies the degree of regularity of EEG signals in selected frequency bands. Time series generated by discrete logistic equations with varying control parameter r are used to test DCE measures. Sliding window DCE analyses are able to reveal specific period doubling bifurcations that lead to chaos. A similar behavior can be observed in seizures triggered by electroconvulsive therapy (ECT). Sample entropy data show the level of signal complexity in different phases of the ictal ECT. The transition to irregular activity is preceded by the occurrence of cyclic regular behavior. A significant increase of DCE values in successive order, from high frequencies in the gamma band to low frequencies in the delta band, reveals several phase transitions into less ordered states, possibly chaos, in the human brain. To our knowledge there are no reliable techniques able to reveal the transition to chaos in the case of multidimensional time series. In addition, DCE based on sample entropy appears to be robust to EEG artifacts compared to DCE based on Shannon entropy. The applied technique may offer new approaches to better understand nonlinear brain activity.
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value presented correlations of 0.991, 0.843 and 0.978 for the training, verification and test series. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced, and although these included other input variables, their performance was not better than that of the selected best model.
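A GRNN is essentially Nadaraya-Watson kernel regression: the prediction is a Gaussian-kernel-weighted mean of training targets. The following minimal sketch uses a hypothetical six-predictor input (echoing the six variables found significant here) and an arbitrary bandwidth sigma; it is an illustration of the model class, not the study's forecasting system.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction = Gaussian-kernel-weighted mean of training targets."""
    X_train = np.atleast_2d(X_train)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel weights
        preds.append(np.dot(w, y_train) / w.sum())
    return np.array(preds)

# Hypothetical standardized predictors (e.g. ammonia, phosphates, DO,
# temperature, pH, evaporation) and a synthetic target for the example.
rng = np.random.default_rng(10)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 3] + np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(grnn_predict(X[:150], y[:150], X[150:155], sigma=0.8))
```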
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
A.R. Mason; H.G. Paul
1994-01-01
Procedures for monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm are recommended based on many years' experience in sampling these species in eastern Oregon and Washington. It is shown that statistically reliable estimates of larval density can be made for a population by sampling host trees in a series of permanent plots in a...
NASA Astrophysics Data System (ADS)
Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.
2017-11-01
In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to the case of short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
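The quantity at the heart of these methods is the zero-crossing rate of the fluctuating velocity. The sketch below shows only that counting step on a synthetic signal (the sampling frequency and the signal itself are assumptions); the actual dissipation-rate retrieval requires the additional scaling relations developed in the paper.

```python
import numpy as np

def crossings_per_second(u, fs):
    """Rate of mean-level crossings of a velocity signal u sampled at fs Hz."""
    u = u - u.mean()                       # count crossings of the mean level
    n_cross = np.count_nonzero(np.signbit(u[:-1]) != np.signbit(u[1:]))
    return n_cross * fs / (len(u) - 1)

rng = np.random.default_rng(7)
fs = 100.0                                 # assumed sampling frequency, Hz
u = rng.normal(size=10_000)                # stand-in for a velocity record
print(crossings_per_second(u, fs))         # ~fs/2 for white noise
```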
Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P
2016-06-01
Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend in the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels and one unfortified series. Following extraction, analysis by ultra performance liquid chromatography high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11 % of the peak lists. Compared to initial peak lists, a combination of both approaches reduced the size of datasets by 80-85 %. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples. Graphical abstract: Using time trends to filter out emerging contaminants from large peak lists.
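The Spearman-based filtering step can be pictured as ranking every aligned peak by the rank correlation of its intensity against sampling time and keeping only monotonically increasing peaks. The thresholds and synthetic peak table below are arbitrary assumptions for illustration, not the study's actual cutoffs.

```python
import numpy as np
from scipy.stats import spearmanr

def flag_increasing(peak_table, times, rho_min=0.8, p_max=0.05):
    """Return indices of peaks whose intensities rise over the time series.
    peak_table: (n_peaks, n_timepoints) intensity matrix."""
    keep = []
    for i, intensities in enumerate(peak_table):
        rho, p = spearmanr(times, intensities)
        if rho >= rho_min and p <= p_max:
            keep.append(i)
    return keep

rng = np.random.default_rng(8)
times = np.arange(10)
noise = rng.lognormal(0, 0.3, (500, 10))           # randomly fluctuating peaks
emerging = np.linspace(1, 5, 10) * rng.lognormal(0, 0.1, (5, 10))
table = np.vstack([noise, emerging])
print(flag_increasing(table, times))               # mostly indices >= 500
```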
ERIC Educational Resources Information Center
Gafoor, K. Abdul; Narayan, Smitha
2012-01-01
In view of student shift away from science at advanced levels, and gender and locale based divergence in interest in studying physics, chemistry and biology, this study explores experience categories that significantly contribute to interest in science on a sample of upper primary school students from Kerala, India. A series of multiple regression…
The role of skeletal micro-architecture in diagenesis and dating of Acropora palmata
NASA Astrophysics Data System (ADS)
Tomiak, P. J.; Andersen, M. B.; Hendy, E. J.; Potter, E. K.; Johnson, K. G.; Penkman, K. E. H.
2016-06-01
Past variations in global sea-level reflect continental ice volume, a crucial factor for understanding the Earth's climate system. The Caribbean coral Acropora palmata typically forms dense stands in very shallow water and therefore fossil samples mark past sea-level. Uranium-series methods are commonly used to establish a chronology for fossil coral reefs, but are compromised by post mortem diagenetic changes to coral skeleton. Current screening approaches are unable to identify all altered samples, whilst models that attempt to correct for 'open-system' behaviour are not applicable across all diagenetic scenarios. In order to better understand how U-series geochemistry varies spatially with respect to diagenetic textures, we examine these aspects in relation to skeletal micro-structure and intra-crystalline amino acids, comparing an unaltered modern coral with a fossil A. palmata colony containing zones of diagenetic alteration (secondary overgrowth of aragonite, calcite cement and dissolution features). We demonstrate that the process of skeletogenesis in A. palmata causes heterogeneity in porosity, which can account for the observed spatial distribution of diagenetic features; this in turn explains the spatially-systematic trends in U-series geochemistry and consequently, U-series age. We propose a scenario that emphasises the importance of through-flow of meteoric waters, invoking both U-loss and absorption of mobilised U and Th daughter isotopes. We recommend selective sampling of low porosity A. palmata skeleton to obtain the most reliable U-series ages. We demonstrate that intra-crystalline amino acid racemisation (AAR) can be applied as a relative dating tool in Pleistocene A. palmata samples that have suffered heavy dissolution and are therefore unsuitable for U-series analyses. Based on relatively high intra-crystalline concentrations and appropriate racemisation rates, glutamic acid and valine are most suited to dating mid-late Pleistocene A. palmata. Significantly, the best-preserved material in the fossil specimen yields a U-series age of 165 ± 8 ka, recording a paleo sea-level of -35 ± 7 msl during the MIS 6.5 interstadial on Barbados.
Forest habitat types of eastern Idaho-western Wyoming
Robert Steele; Stephen V. Cooper; David M. Ondov; David W. Roberts; Robert D. Pfister
1983-01-01
A land-classification system based upon potential natural vegetation is presented for the forests of eastern Idaho and western Wyoming. It is based on reconnaissance sampling of about 980 stands. A hierarchical taxonomic classification of forest sites was developed using the habitat type concept. A total of six climax series, 58 habitat types, and 24 additional phases of habitat types are...
Coniferous forest habitat types of central and southern Utah
Andrew P. Youngblood; Ronald L. Mauk
1985-01-01
A land-classification system based upon potential natural vegetation is presented for the coniferous forests of central and southern Utah. It is based on reconnaissance sampling of about 720 stands. A hierarchical taxonomic classification of forest sites was developed using the habitat type concept. Seven climax series, 37 habitat types, and six additional phases of...
Forest habitat types of Montana
Robert D. Pfister; Bernard L. Kovalchik; Stephen F. Arno; Richard C. Presby
1977-01-01
A land-classification system based upon potential natural vegetation is presented for the forests of Montana. It is based on an intensive 4-year study and reconnaissance sampling of about 1,500 stands. A hierarchical classification of forest sites was developed using the habitat type concept. A total of 9 climax series, 64 habitat types, and 37 additional phases of...
Routine sampling and the control of Legionella spp. in cooling tower water systems.
Bentham, R H
2000-10-01
Cooling water samples from 31 cooling tower systems were cultured for Legionella over a 16-week summer period. The selected systems were known to be colonized by Legionella. Mean Legionella counts and standard deviations were calculated and time series correlograms prepared for each system. The standard deviations of Legionella counts in all the systems were very large, indicating great variability in the systems over the time period. Time series analyses demonstrated that in the majority of cases there was no significant relationship between the Legionella counts in the cooling tower at time of collection and the culture result once it was available. In the majority of systems (25/28), culture results from Legionella samples taken from the same systems 2 weeks apart were not statistically related. The data suggest that determinations of health risks from cooling towers cannot be reliably based upon single or infrequent Legionella tests.
Potential and Pitfalls of High-Rate GPS
NASA Astrophysics Data System (ADS)
Smalley, R.
2008-12-01
With completion of the Plate Boundary Observatory (PBO), we are poised to capture a dense sampling of strong motion displacement time series from significant earthquakes in western North America with High-Rate GPS (HRGPS) data collected at 1 and 5 Hz. These data will provide displacement time series at potentially zero epicentral distance that, if valid, have great potential to contribute to understanding earthquake rupture processes. The caveat relates to whether or not the data are aliased: is the sampling rate fast enough to accurately capture the displacement's temporal history? Using strong motion recordings in the immediate epicentral area of several 6.7-7.5 events, which can reasonably be expected in the PBO footprint, even the 5 Hz data may be aliased. Some sort of anti-alias processing, currently not applied, will therefore be necessary at the closest stations to guarantee the veracity of the displacement time series. We discuss several solutions based on a-priori knowledge of the expected ground motion and practicality of implementation.
Wang, Zhuo; Jin, Shuilin; Liu, Guiyou; Zhang, Xiurui; Wang, Nan; Wu, Deliang; Hu, Yang; Zhang, Chiping; Jiang, Qinghua; Xu, Li; Wang, Yadong
2017-05-23
The development of single-cell RNA sequencing has enabled profound discoveries in biology, ranging from the dissection of the composition of complex tissues to the identification of novel cell types and dynamics in some specialized cellular environments. However, the large-scale generation of single-cell RNA-seq (scRNA-seq) data collected at multiple time points remains a challenge for effectively measuring gene expression patterns in transcriptome analysis. We present an algorithm based on the Dynamic Time Warping score (DTWscore) combined with time-series data that enables the detection of gene expression changes across scRNA-seq samples and the recovery of potential cell types from complex mixtures of multiple cell types. The DTWscore successfully classifies cells of different types using the most highly variable genes from time-series scRNA-seq data. The study was confined to methods that are implemented and available within the R framework. Sample datasets and R packages are available at https://github.com/xiaoxiaoxier/DTWscore .
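Although the DTWscore package itself is distributed in R, the underlying dynamic-time-warping distance is simple to state. Here is a language-agnostic sketch in Python with toy expression profiles (the gene vectors are invented for illustration, and this is the classic DTW recurrence, not the package's scoring code).

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two expression profiles with the same shape but shifted in time score
# closer to each other than to a flat profile.
g1 = np.array([0.1, 0.2, 1.0, 2.0, 1.0, 0.2])
g2 = np.array([0.1, 0.2, 0.3, 1.0, 2.0, 1.0])
flat = np.full(6, 0.6)
print(dtw_distance(g1, g2), dtw_distance(g1, flat))
```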
Focant, Jean-François; Eppe, Gauthier; Massart, Anne-Cécile; Scholl, Georges; Pirard, Catherine; De Pauw, Edwin
2006-10-13
We report on the use of a state-of-the-art method for the measurement of selected polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and polychlorinated biphenyls in human serum specimens. The sample preparation procedure is based on manual small-size solid-phase extraction (SPE) followed by automated clean-up and fractionation using multi-sorbent liquid chromatography columns. SPE cartridges and all clean-up columns are disposable. Samples are processed in batches of 20 units, including one blank control (BC) sample and one quality control (QC) sample. The analytical measurement is performed using gas chromatography coupled to isotope dilution high-resolution mass spectrometry. Once the procedure has been started and successive series of samples are being produced, the sample throughput corresponds to one series of 20 samples per day, from sample reception to data quality cross-check and reporting. Four analysts are required to ensure proper performance of the procedure. The entire procedure has been validated under International Organization for Standardization (ISO) 17025 criteria and further tested on more than 1500 unknown samples during various epidemiological studies. The method is further discussed in terms of reproducibility, efficiency and long-term stability with regard to the 35 target analytes. Data related to quality control and limit of quantification (LOQ) calculations are also presented and discussed.
Characterizing and estimating noise in InSAR and InSAR time series with MODIS
Barnhart, William D.; Lohman, Rowena B.
2013-01-01
InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibit seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precisions desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisitions times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short-spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates, and find we are capable of resolving velocities as low as 1.5 mm/yr with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.
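One way to picture the stochastic error estimate: fit a displacement rate to each of the 100+ proxy time series built from MODIS water-vapour delays sampled near the SAR acquisition dates, then take the spread of those fitted rates as the confidence bound. The sketch below is a schematic of that resampling logic on synthetic numbers, not the authors' processing chain; the dates, noise level, and rate are invented.

```python
import numpy as np

def rate_confidence_bounds(dates, proxy_stacks):
    """Fit a linear rate (mm/yr) to each proxy displacement series and
    return the 2.5th/97.5th percentiles of the fitted rates."""
    rates = [np.polyfit(dates, stack, 1)[0] for stack in proxy_stacks]
    return np.percentile(rates, [2.5, 97.5])

rng = np.random.default_rng(11)
dates = np.linspace(0.0, 5.0, 30)     # acquisition times, years
true_rate = 3.0                       # mm/yr, synthetic
proxy_stacks = [true_rate * dates + rng.normal(0, 4, dates.size)
                for _ in range(100)]  # 100 MODIS-style proxy series
print(rate_confidence_bounds(dates, proxy_stacks))
```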
Ram Deo; Matthew Russell; Grant Domke; Hans-Erik Andersen; Warren Cohen; Christopher Woodall
2017-01-01
Large-area assessment of aboveground tree biomass (AGB) to inform regional or national forest monitoring programs can be efficiently carried out by combining remotely sensed data and field sample measurements through a generic statistical model, in contrast to site-specific models. We integrated forest inventory plot data with spatial predictors from Landsat time-...
Stem Cubic-Foot Volume Tables for Tree Species in the South
Alexander Clark; Ray A. Souter
1994-01-01
Stemwood cubic-foot volume tables were presented for 44 species and 10 species groups based on equations used to estimate timber sale volumes on national forests in the South. Tables are based on taper data for 13,469 trees sampled from Virginia to Texas. A series of tables are presented for each species based on diameter at breast height (d.b.h.) in combination with...
Scanning transmission electron microscopy through-focal tilt-series on biological specimens.
Trepout, Sylvain; Messaoudi, Cédric; Perrot, Sylvie; Bastin, Philippe; Marco, Sergio
2015-10-01
Since scanning transmission electron microscopy can produce high signal-to-noise ratio bright-field images of thick (≥500 nm) specimens, this tool is emerging as the method of choice to study thick biological samples via tomographic approaches. However, in a convergent-beam configuration, the depth of field is limited because only a thin portion of the specimen (from a few nanometres to tens of nanometres depending on the convergence angle) can be imaged in focus. A method known as through-focal imaging enables recovery of the full depth of information by combining images acquired at different levels of focus. In this work, we compare tomographic reconstruction with the through-focal tilt-series approach (a multifocal series of images per tilt angle) with reconstruction with the classic tilt-series acquisition scheme (one single-focus image per tilt angle). We visualised the base of the flagellum in the protist Trypanosoma brucei via an acquisition and image-processing method tailored to obtain quantitative and qualitative descriptors of reconstruction volumes. Reconstructions using through-focal imaging contained more contrast and more details for thick (≥500 nm) biological samples.
Cross-sample entropy of foreign exchange time series
NASA Astrophysics Data System (ADS)
Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao
2010-11-01
The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for a period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method of the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the crisis. Comparison with the correlation coefficient shows that cross-SampEn is better suited to describing the correlation between time series.
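For readers unfamiliar with the measure, a compact implementation of cross-sample entropy (in the spirit of Richman and Moorman's definition) is sketched below; the embedding dimension m, tolerance r, and the toy return series are illustrative choices, not values from the study.

```python
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy of two series (template matching across series);
    higher values indicate greater asynchrony."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()

    def matches(mm):
        xu = np.array([u[i:i + mm] for i in range(len(u) - mm)])
        xv = np.array([v[j:j + mm] for j in range(len(v) - mm)])
        # Chebyshev distance between every template of u and every template of v
        d = np.max(np.abs(xu[:, None, :] - xv[None, :, :]), axis=2)
        return np.sum(d <= r)

    return -np.log(matches(m + 1) / matches(m))

# Toy usage: two coupled return series standing in for exchange-rate returns
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.7 * x + 0.3 * rng.normal(size=500)
print(cross_sampen(x, y))   # lower than for two independent series
```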
US forests are showing increased rates of decline in response to a changing climate
Warren B. Cohen; Zhiqiang Yang; David M. Bell; Stephen V. Stehman
2015-01-01
How vulnerable are US forests to a changing climate? We answer this question using Landsat time series data and a unique interpretation approach, TimeSync, a plot-based Landsat visualization and data collection tool. Original analyses were based on a stratified two-stage cluster sample design that included interpretation of 3858 forested plots. From these data, we...
ERIC Educational Resources Information Center
Wheldall, Kevin; Wheldall, Robyn; Madelaine, Alison; Reynolds, Meree; Arakelian, Sarah
2017-01-01
An earlier series of pilot studies and small-scale experimental studies had previously provided some evidence for the efficacy of a small group early literacy intervention program for young struggling readers. The present paper provides further evidence for efficacy based on a much larger sample of young, socially disadvantaged, at-risk readers.…
ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
Spatio-temporal representativeness of ground-based downward solar radiation measurements
NASA Astrophysics Data System (ADS)
Schwarz, Matthias; Wild, Martin; Folini, Doris
2017-04-01
Surface solar radiation (SSR) is most directly observed with ground-based pyranometer measurements. In addition to measurement uncertainties arising from the pyranometer instrument itself, errors attributable to the limited spatial representativeness of single-site observations for their large-scale surroundings have to be taken into account when using such measurements for energy balance studies. In this study, the spatial representativeness of 157 homogeneous European downward surface solar radiation time series from the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN) was examined for the period 1983-2015, using the high-resolution (0.05°) surface solar radiation data set from the Satellite Application Facility on Climate Monitoring (CM-SAF SARAH) as a proxy for the spatiotemporal variability of SSR. By correlating deseasonalized monthly SSR time series from surface observations against single collocated satellite-derived SSR time series, a mean spatial correlation pattern was calculated and validated against purely observation-based patterns. Correlations generally decrease with increasing distance from the station, with high correlations (R2 = 0.7) in proximity to the observational sites (±0.5°). When correlating surface observations against time series from spatially averaged satellite-derived SSR data (thereby simulating coarser and coarser grids), very high correspondence between sites and the collocated pixels was found for pixel sizes up to several degrees. Moreover, special focus was put on quantifying the errors that arise in conjunction with spatial sampling when estimating the temporal variability and trends for a larger region from a single surface observation site. For 15-year trends on a 1° grid, errors due to spatial sampling on the order of half the measurement uncertainty of monthly mean values were found.
Antarctic Testing of the European Ultrasonic Planetary Core Drill (UPCD)
NASA Astrophysics Data System (ADS)
Timoney, R.; Worrall, K.; Li, X.; Firstbrook, D.; Harkness, P.
2018-04-01
An overview of a series of field tests in Antarctica in which the Ultrasonic Planetary Core Drill (UPCD) architecture was evaluated. The UPCD system is the product of an EC FP7 award to develop a Mars Sample Return architecture based around the ultrasonic technique.
Use of Naturally Available Reference Targets to Calibrate Airborne Laser Scanning Intensity Data
Vain, Ants; Kaasalainen, Sanna; Pyysalo, Ulla; Krooks, Anssi; Litkey, Paula
2009-01-01
We have studied the possibility of calibrating airborne laser scanning (ALS) intensity data using land targets typically available in urban areas. For this purpose, a test area around Espoonlahti Harbor, Espoo, Finland, for which a long time series of ALS campaigns is available, was selected. Different target samples (beach sand, concrete, asphalt, different types of gravel) were collected and measured in the laboratory. Using tarps with known backscattering properties, the natural samples were calibrated and studied, taking into account the atmospheric effect, incidence angle and flying height. Using data from different flights and altitudes, a time series for the natural samples was generated. By studying the stability of the samples, we could obtain information on the most suitable types of natural targets for ALS radiometric calibration. Using the selected natural samples as reference, the ALS points of typical land targets were calibrated again and examined. Results showed the need for more accurate ground reference data before using natural samples in ALS intensity data calibration. An NIR camera-based field system was also used for collecting ground reference data. This system proved to be a good means of collecting in situ reference data, especially for targets with inhomogeneous surface reflection properties. PMID:22574045
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on subset selection in a large basis of climatic series, using an ad hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e., transposable to most models with climatic data inputs), and can be combined with most "off-the-shelf" optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
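The subset-selection step lends itself to a brief sketch: featurize each climatic series, cluster, and keep one representative per cluster. The summary features and KMeans clustering below are stand-ins for the paper's ad hoc similarity function, chosen only to show the shape of the procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical basis of 200 annual climatic series (365 daily values each)
climate = rng.normal(size=(200, 365)).cumsum(axis=1)

# Stand-in similarity: Euclidean distance between simple summary features
# (mean level, spread, start-to-end trend); the paper's function differs.
features = np.column_stack([
    climate.mean(axis=1),
    climate.std(axis=1),
    climate[:, -30:].mean(axis=1) - climate[:, :30].mean(axis=1),
])

k = 10  # size of the representative subset
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# One representative per cluster: the series closest to its cluster centroid
subset = []
for c in range(k):
    idx = np.where(labels == c)[0]
    centroid = features[idx].mean(axis=0)
    subset.append(idx[np.argmin(np.linalg.norm(features[idx] - centroid, axis=1))])
print("representative series indices:", sorted(subset))
```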
Hao, Pengyu; Wang, Li; Niu, Zheng
2015-01-01
A range of single classifiers have been proposed to classify crop types using time series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, and two representative counties in northern Xinjiang were selected as the study area. The single classifiers employed in this research included Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with the number of training samples (mean overall accuracy increased by 5%~10%, and its standard deviation decreased by around 1%), and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with higher mean overall accuracy (1%~2%). However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performance. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10-3, 1.11×10-7, and 5.50×10-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
Jasper Seamount: seven million years of volcanism
Pringle, M.S.; Staudigel, H.; Gee, J.
1991-01-01
Jasper Seamount is a young, mid-sized (690 km3) oceanic intraplate volcano located about 500 km west-southwest of San Diego, California. Reliable 40Ar/39Ar age data were obtained for several milligram-sized samples of 4 to 10 Ma plagioclase by using a defocused laser beam to clean the samples before fusion. Gee and Staudigel suggested that Jasper Seamount consists of a transitional to tholeiitic shield volcano formed by flank transitional series lavas, overlain by flank alkalic series lavas and summit alkalic series lavas. Twenty-nine individual 40Ar/39Ar laser fusion analyses on nine samples confirm the stratigraphy: 10.3-10.0 Ma for the flank transitional series, 8.7-7.5 Ma for the flank alkalic series, and 4.8-4.1 Ma for the summit alkalic series. The alkalinity of the lavas clearly increases with time, and there appear to be 1 to 3 m.y. hiatuses between each series. -from Authors
NASA Astrophysics Data System (ADS)
Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao
2017-04-01
Spartina alterniflora is an aggressive invasive plant species that replaces native species, changes the structure and function of the ecosystem across coastal wetlands in China, and is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach for S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined as important for S. alterniflora detection in the study area based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) the single-date imagery acquired within the optimal phenological window, (2) the multitemporal imagery, including four images from the two important phenological windows, and (3) the monthly NDVI time series imagery. Support vector machines and maximum likelihood classifiers were applied to each phenology feature set at different training sample sizes. For all phenology feature sets, the overall results were produced consistently with high mapping accuracies under sufficient training sample sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies. The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the use of monthly NDVI time series imagery. These results show the importance of considering the phenological stage for image selection for mapping S. alterniflora using GF-1 WFV imagery. Furthermore, in light of the better tradeoff between the number of images and classification accuracy when using multitemporal GF-1 WFV imagery, we suggest using multitemporal imagery acquired at appropriate phenological windows for S. alterniflora mapping at regional scales.
Alaska Geochemical Database - Mineral Exploration Tool for the 21st Century - PDF of presentation
Granitto, Matthew; Schmidt, Jeanine M.; Labay, Keith A.; Shew, Nora B.; Gamble, Bruce M.
2012-01-01
The U.S. Geological Survey has created a geochemical database of geologic material samples collected in Alaska. This database is readily accessible to anyone with access to the Internet. Designed as a tool for mineral or environmental assessment, land management, or mineral exploration, the initial version of the Alaska Geochemical Database - U.S. Geological Survey Data Series 637 - contains geochemical, geologic, and geospatial data for 264,158 samples collected from 1962-2009: 108,909 rock samples; 92,701 sediment samples; 48,209 heavy-mineral-concentrate samples; 6,869 soil samples; and 7,470 mineral samples. In addition, the Alaska Geochemical Database contains mineralogic data for 18,138 nonmagnetic-fraction heavy mineral concentrates, making it the first U.S. Geological Survey database of this scope that contains both geochemical and mineralogic data. Examples from the Alaska Range will illustrate potential uses of the Alaska Geochemical Database in mineral exploration. Data from the Alaska Geochemical Database have been extensively checked for accuracy of sample media description, sample site location, and analytical method using U.S. Geological Survey sample-submittal archives and U.S. Geological Survey publications (plus field notebooks and sample site compilation base maps from the Alaska Technical Data Unit in Anchorage, Alaska). The database is also the repository for nearly all previously released U.S. Geological Survey Alaska geochemical datasets. Although the Alaska Geochemical Database is a fully relational database in Microsoft® Access 2003 and 2010 formats, these same data are also provided as a series of spreadsheet files in Microsoft® Excel 2003 and 2010 formats, and as ASCII text files. A DVD version of the Alaska Geochemical Database was released in October 2011, as U.S. Geological Survey Data Series 637, and data downloads are available at http://pubs.usgs.gov/ds/637/. Also, all Alaska Geochemical Database data have been incorporated into the interactive U.S. Geological Survey Mineral Resource Data web portal, available at http://mrdata.usgs.gov/.
Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A
2016-08-01
The influence of experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis (PCA) and Cluster Analysis (CA)), as well as a new algorithm based on linear regression, was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed-phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extraction in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were applied directly to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by linear regression analysis applied to pairs of very large experimental data series successfully retain information resulting from high-frequency instrumental acquisition rates, better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between the compared data. Instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origin provided an excellent data basis for multivariate analysis methods, equivalent to data resulting from chromatographic separations. The alternative evaluation of very large data series based on linear regression analysis produced information equivalent to the results obtained through application of PCA and CA.
Uranium series dating of human skeletal remains from the Del Mar and Sunnyvale sites, California
Bischoff, J.L.; Rosenbauer, R.J.
1981-01-01
Uranium series analyses of human bone samples from the Del Mar and Sunnyvale sites indicate ages of 11,000 and 8,300 years, respectively. The dates are supported by internal concordancy between thorium-230 and protactinium-231 decay systems. These ages are significantly younger than the estimates of 48,000 and 70,000 years based on amino acid racemization, and indicate that the individuals could derive from the population waves that came across the Bering Strait during the last sea-level low.
Warren B. Cohen; Sean P. Healey; Samuel Goward; Gretchen G. Moisen; Jeffrey G. Masek; Robert E. Kennedy; Scott L. Powell; Chengquan Huang; Nancy Thomas; Karen Schleeweis; Michael A. Wulder
2007-01-01
The exchange of carbon between forests and the atmosphere is a function of forest type, climate, and disturbance history, with previous studies illustrating that forests play a key role in the terrestrial carbon cycle. The North American Carbon Program (NACP) has supported the acquisition of biennial Landsat image time-series for sample locations throughout much of...
Hidden discriminative features extraction for supervised high-order time series modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2016-11-01
In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information to maximize the between-class scatter and minimize the within-class scatter, extracting optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features; ii) it results in effective, interpretable features that lead to improved classification and visualization; and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternating step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) modeled as channel×frequency bin×time frame, and a microarray dataset modeled as gene×sample×time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method, with averages of 98.26% and 89.63% for the classification accuracies on the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples used in practice.
NASA Astrophysics Data System (ADS)
Minaudo, Camille; Dupas, Rémi; Moatar, Florentina; Gascuel-Odoux, Chantal
2016-04-01
Phosphorus fluxes in streams are subject to high temporal variations, calling into question whether the monitoring strategies (generally monthly sampling) chosen to support EU Directives can capture phosphorus fluxes and their variations over time. The objective of this study was to estimate the annual and seasonal P flux uncertainties associated with several monitoring strategies with varying sampling frequencies, also taking into account simultaneous and continuous time series of parameters such as turbidity, conductivity, groundwater level and precipitation. Total Phosphorus (TP), Soluble Reactive Phosphorus (SRP) and Total Suspended Solids (TSS) concentrations were surveyed at a fine temporal frequency between 2007 and 2015 at the outlet of a small agricultural catchment in Brittany (Naizin, 5 km2). Sampling occurred every 3 to 6 days between 2007 and 2012 and daily between 2013 and 2015. Additionally, 61 storms were intensively surveyed (1 sample every 30 minutes) since 2007. Water discharge, turbidity, conductivity, groundwater level and precipitation were monitored on a sub-hourly basis. A strong temporal decoupling between SRP and particulate P (PP) was found (Dupas et al., 2015). The phosphorus-discharge relationships displayed two types of hysteretic patterns (clockwise and counterclockwise). For both cases, time series of PP and SRP were estimated continuously for the whole period using an empirical model linking P concentrations with the hydrological and physico-chemical variables. The associated errors of the estimated P concentrations were also assessed. These "synthetic" PP and SRP time series allowed us to discuss the most efficient monitoring strategies, first taking into account different sampling strategies based on Monte Carlo random simulations, and then adding the information from continuous data such as turbidity, conductivity and groundwater depth based on empirical modelling. Dupas et al. (2015), Distinct export dynamics for dissolved and particulate phosphorus reveal independent transport mechanisms in an arable headwater catchment, Hydrological Processes, 29(14), 3162-3178.
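The Monte Carlo comparison of sampling strategies can be illustrated with a toy flux calculation: build a fine-resolution synthetic series, subsample it at a candidate frequency with random start offsets, and summarize the resulting flux errors. All series and parameters below are synthetic stand-ins, not the Naizin data:

```python
import numpy as np

rng = np.random.default_rng(7)
hours = 365 * 24

# Synthetic hourly series: flashy discharge with storm peaks, and a TP
# concentration that rises with flow (a common, but assumed, relationship).
base = 1 + 0.5 * np.sin(2 * np.pi * np.arange(hours) / hours)
storms = rng.gamma(0.1, 20, hours)
q = base + storms                                      # discharge
c = 0.05 * q**0.8 * np.exp(rng.normal(0, 0.3, hours))  # TP concentration

true_flux = np.sum(c * q)  # reference annual flux from the full record

def flux_error(step_hours, trials=500):
    """90th-percentile relative flux error for a fixed sampling interval,
    Monte Carlo over random start offsets."""
    errors = []
    for _ in range(trials):
        idx = np.arange(rng.integers(step_hours), hours, step_hours)
        # discharge-weighted mean concentration from the samples,
        # scaled by the (continuously monitored) total discharge
        est = np.sum(c[idx] * q[idx]) / np.sum(q[idx]) * np.sum(q)
        errors.append((est - true_flux) / true_flux)
    return np.percentile(np.abs(errors), 90)

for days in (1, 7, 30):
    print(f"{days:>2}-day sampling: 90th-pct |error| = {flux_error(days * 24):.1%}")
```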
NASA Astrophysics Data System (ADS)
Aka, Festus T.; Yokoyama, Tetsuya; Kusakabe, Minoru; Nakamura, Eizo; Tanyileke, Gregory; Ateba, Bekoa; Ngako, Vincent; Nnange, Joseph; Hell, Joseph
2008-09-01
From previously published 14C and K-Ar data, the age of formation of the Lake Nyos maar in Cameroon is still in dispute. Lake Nyos exploded in 1986, releasing CO2 that killed 1750 people and over 3000 cattle. Here we report results of the first measurements of major elements, trace elements and U-series disequilibria in ten basanites/trachy-basalts and two olivine tholeiites from Lake Nyos. This is the first time tholeiites have been described at Lake Nyos. Except for the tholeiites, which are in 238U-230Th equilibrium, all the other samples possess 238U-230Th disequilibrium, with 15 to 28% enrichment of 230Th over 238U. The (226Ra/230Th) activity ratios of these samples indicate small (2 to 4%) but significant 226Ra excesses. U-Th systematics and evidence from oxygen isotopes of the basalts and Lake Nyos granitic quartz separates show that the U-series disequilibria in these samples are source-based and not due to crustal contamination or post-eruptive alteration. Enrichment of 230Th is strong prima facie evidence that Lake Nyos is younger than 350 ka. The 230Th-226Ra age of the Nyos samples, calculated with the (226Ra/230Th) ratio for zero-age Mt. Cameroon samples, is 3.7 ± 0.5 ka, although this is a lower limit, as the actual age is estimated to be older than 5 ka based on the measured mean 230Th/238U activity ratio. The general stability of the Lake Nyos pyroclastic dam is a cause for concern, but judging from its 230Th-226Ra formation age, we do not think that, in the absence of a large rock fall or landslide into the lake, or a large earthquake or volcanic eruption close to the lake, collapse of the dam from erosion alone is as imminent and alarming as has been suggested.
The rationale for chemical time-series sampling has its roots in the same fundamental relationships as govern well hydraulics. Samples of ground water are collected as a function of increasing time of pumpage. The most efficient pattern of collection consists of logarithmically s...
Stem Cubic-Foot Volume Tables for Tree Species in the Piedmont
Alexander Clark; Ray A. Souter
1996-01-01
Stemwood cubic-foot volume inside bark tables are presented for 16 species and 8 species groups based on equations used to estimate timber sale volumes on national forests in the Piedmont. Tables are based on form class measurement data for 2,753 trees sampled in the Piedmont and taper data collected across the South. A series of tables is presented for each species...
Stem Cubic-Foot Volume Tables for Tree Species in the Upper Coastal Plain
Alexander Clark; Ray A. Souter
1996-01-01
Stemwood cubic-foot volume inside bark tables are presented for 11 species and 8 species groups based on equations used to estimate timber sale volumes on national forests in the Upper Coastal Plain. Tables are based on form class measurement data for 521 trees sampled in the Upper Coastal Plain and taper data collected across the South. A series of tables is...
Stem Cubic-Foot Volume Tables for Tree Species in the Appalachian Area
Alexander Clark; Ray A. Souter
1996-01-01
Stemwood cubic-foot volume inside bark tables are presented for 20 species and 8 species groups based on equations used to estimate timber sale volumes on national forests in the Appalachian Area. Tables are based on form class measurement data for 2,870 trees sampled in the Appalachian Area and taper data collected across the South. A series of tables is presented...
pH Testing. Youth Training Scheme. Core Exemplar Work Based Project.
ERIC Educational Resources Information Center
Further Education Staff Coll., Blagdon (England).
This trainer's guide is intended to assist supervisors of work-based career training projects in teaching students how to sample and analyze soil to determine its pH value. The guide is one in a series of core curriculum modules that is intended for use in combination on- and off-the-job programs to familiarize youth with the skills, knowledge,…
ERIC Educational Resources Information Center
Breland, Hunter M.; Carlton, Sydell T.; Taylor, Susan
Based on the results of a Phase 1 investigation into the nature of legal writing, a prototype writing assessment, the Diagnostic Writing Skills Test (DWST) for entering law students was developed. The DWST is composed of two multiple-choice testlets based on prompts and responses to the Law School Admission Test (LSAT) Writing Sample. It contains…
Wood, Henry M; Belvedere, Ornella; Conway, Caroline; Daly, Catherine; Chalkley, Rebecca; Bickerdike, Melissa; McKinley, Claire; Egan, Phil; Ross, Lisa; Hayward, Bruce; Morgan, Joanne; Davidson, Leslie; MacLennan, Ken; Ong, Thian K; Papagiannopoulos, Kostas; Cook, Ian; Adams, David J; Taylor, Graham R; Rabbitts, Pamela
2010-08-01
The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, rely on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regard to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.
Developmental validation of the PowerPlex(®) Fusion 6C System.
Ensenberger, Martin G; Lenz, Kristy A; Matthies, Learden K; Hadinoto, Gregory M; Schienman, John E; Przech, Angela J; Morganti, Michael W; Renstrom, Daniel T; Baker, Victoria M; Gawrys, Kori M; Hoogendoorn, Marlijn; Steffen, Carolyn R; Martín, Pablo; Alonso, Antonio; Olson, Hope R; Sprecher, Cynthia J; Storts, Douglas R
2016-03-01
The PowerPlex(®) Fusion 6C System is a 27-locus, six-dye multiplex that includes all markers in the expanded CODIS core loci and increases overlap with STR database standards throughout the world. Additionally, it contains two rapidly mutating Y-STRs and is capable of both casework and database workflows, including direct amplification. A multi-laboratory developmental validation study was performed on the PowerPlex(®) Fusion 6C System. Here, we report the results of that study, which followed SWGDAM guidelines and includes data for: species specificity, sensitivity, stability, precision, reproducibility and repeatability, case-type samples, concordance, stutter, DNA mixtures, and PCR-based procedures. Where appropriate, we report data from both extracted DNA samples and direct amplification samples from various substrates and collection devices. Samples from all studies were separated on both Applied Biosystems 3500 series and 6-dye capable 3130 series Genetic Analyzers, and data are reported for each. Together, the data validate the design and demonstrate the performance of the PowerPlex(®) Fusion 6C System.
Technology Acceptance among Pre-Service Teachers: Does Gender Matter?
ERIC Educational Resources Information Center
Teo, Timothy; Fan, Xitao; Du, Jianxia
2015-01-01
This study examined possible gender differences in pre-service teachers' perceived acceptance of technology in their professional work under the framework of the technology acceptance model (TAM). Based on a sample of pre-service teachers, a series of progressively more stringent measurement invariance tests (configural, metric, and scalar…
The High School & Beyond Data Set: Academic Self-Concept Measures.
ERIC Educational Resources Information Center
Strein, William
A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…
Formation of Organic Tracers for Isoprene SOA under Acidic Conditions
The chemical compositions of a series of secondary organic aerosol (SOA) samples, formed by irradiating mixtures of isoprene and NO in a smog chamber in the absence or presence of acidic aerosols, were analyzed using derivatization-based GC-MS methods. In addition to the known is...
Krstacic, Goran; Krstacic, Antonija; Smalcelj, Anton; Milicic, Davor; Jembrek-Gostovic, Mirjana
2007-04-01
Dynamic analysis techniques may quantify abnormalities in heart rate variability (HRV) based on nonlinear and fractal analysis (chaos theory). This article emphasizes the clinical and prognostic significance of dynamic changes in short-time series applied to patients with coronary heart disease (CHD) during the exercise electrocardiograph (ECG) test. Subjects were included in the series after complete cardiovascular diagnostic workup. Series of R-R and ST-T intervals were obtained from exercise ECG data after digital sampling. The rescaled range analysis method determined the fractal dimension of the intervals. To quantify the fractal long-range correlation properties of heart rate variability, the detrended fluctuation analysis technique was used. Approximate entropy (ApEn) was applied to quantify the regularity and complexity of the time series, as well as the unpredictability of fluctuations in the time series. It was found that the short-term fractal scaling exponent (alpha(1)) is significantly lower in patients with CHD (0.93 +/- 0.07 vs 1.09 +/- 0.04; P < 0.001). Patients with CHD had a higher fractal dimension in each exercise test program separately, as well as in the exercise program overall. ApEn was significantly lower in the CHD group in both R-R and ST-T ECG intervals (P < 0.001). The nonlinear dynamic methods could have clinical and prognostic applicability also in short-time ECG series. Dynamic analysis based on chaos theory during the exercise ECG test points to multifractal time series in CHD patients, who lose normal fractal characteristics and regularity in HRV.
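The short-term scaling exponent alpha(1) quoted above comes from detrended fluctuation analysis. A minimal DFA sketch follows; the window sizes and the white-noise test input are illustrative choices (the study applied the method to R-R interval series):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis scaling exponent (minimal sketch)."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) vs log s is the scaling exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(3)
print("white noise, alpha ~0.5:", round(dfa_alpha(rng.normal(size=2000)), 2))
```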
Heart rate time series characteristics for early detection of infections in critically ill patients.
Tambuyzer, T; Guiza, F; Boonen, E; Meersseman, P; Vervenne, H; Hansen, T K; Bjerre, M; Van den Berghe, G; Berckmans, D; Aerts, J M; Meyfroidt, G
2017-04-01
It is difficult to make a distinction between inflammation and infection. Therefore, new strategies are required to allow accurate detection of infection. Here, we hypothesize that we can distinguish infected from non-infected ICU patients based on dynamic features of serum cytokine concentrations and heart rate time series. Serum cytokine profiles and heart rate time series of 39 patients were available for this study. The serum concentrations of ten cytokines were measured in blood sampled every 10 min between 2100 and 0600 hours. Heart rate was recorded every minute. Ten metrics were used to extract features from these time series to obtain an accurate classification of infected patients. The predictive power of the metrics derived from the heart rate time series was investigated using decision tree analysis. Finally, logistic regression methods were used to examine whether classification performance improved with the inclusion of features derived from the cytokine time series. The AUC of a decision tree based on two heart rate features was 0.88. The model had good calibration, with a Hosmer-Lemeshow p value of 0.09. There was no significant additional value in adding static cytokine levels or cytokine time series information to the generated decision tree model. The results suggest that heart rate is a better marker for infection than the information captured by cytokine time series when the exact stage of infection is not known. The predictive value of (expensive) biomarkers should always be weighed against routinely monitored data, and such biomarkers have to demonstrate added value.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is addressed in this paper. Time series of the F2-layer critical frequency, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order were calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At levels of sufficiently small probability, there are arbitrarily large deviations from the model of the normal process. We therefore attempt to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of the coincidence of the a posteriori distributions with the theoretical ones are P = 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even if the time series embraces only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, thus increasing the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of time series analysis based topology and traffic congestion control techniques. PMID:28383496
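The contrast between a hard 0/1 match and a graded similarity can be sketched in a few lines. The tanh-based similarity below is an assumption standing in for the paper's actual similarity function, and m, r and the width parameter are illustrative:

```python
import numpy as np

def entropy_graded(x, m=2, r=0.15, width=0.02, hard=False):
    """Sample-entropy variant: template matches scored either 0/1 (classic)
    or by a smooth similarity in [0, 1] (FMSE-style; the paper's exact
    similarity function may differ from this smooth form)."""
    x = (x - x.mean()) / x.std()

    def mean_similarity(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        n = d.shape[0]
        dd = d[~np.eye(n, dtype=bool)]                    # drop self-matches
        if hard:
            sim = (dd <= r).astype(float)                 # abrupt 0/1 cutoff
        else:
            sim = 0.5 * (1.0 - np.tanh((dd - r) / width)) # smooth transition
        return sim.mean()

    return -np.log(mean_similarity(m + 1) / mean_similarity(m))

rng = np.random.default_rng(5)
x = rng.normal(size=300)                                  # deliberately short
print("hard:", entropy_graded(x, hard=True), "graded:", entropy_graded(x))
```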
Progress in tropical isotope dendroclimatology
NASA Astrophysics Data System (ADS)
Evans, M. N.; Schrag, D. P.; Poussart, P. F.; Anchukaitis, K. J.
2005-12-01
The terrestrial tropics remain an important gap in the growing high resolution proxy network used to characterize the mean state and variability of the hydrological cycle. Here we review early efforts to develop a new class of proxy paleorainfall/humidity indicators using intraseasonal to interannual-resolution stable isotope data from tropical trees. The approach invokes a recently published model of oxygen isotopic composition of alpha-cellulose, rapid methods for cellulose extraction from raw wood, and continuous flow isotope ratio mass spectrometry to develop proxy chronological, rainfall and growth rate estimates from tropical trees, even those lacking annual rings. Isotopically-derived age models may be confirmed for modern intervals using trees of known age, radiocarbon measurements, direct measurements of tree diameter, and time series replication. Studies are now underway at a number of laboratories on samples from Costa Rica, northwestern coastal Peru, Indonesia, Thailand, New Guinea, Paraguay, Brazil, India, and the South American Altiplano. Improved sample extraction chemistry and online pyrolysis techniques should increase sample throughput, precision, and time series replication. Statistical calibration together with simple forward modeling based on the well-observed modern period can provide for objective interpretation of the data. Ultimately, replicated data series with well-defined uncertainties can be entered into multiproxy efforts to define aspects of tropical hydrological variability associated with ENSO, the meridional overturning circulation, and the monsoon systems.
Methodology Series Module 5: Sampling Strategies
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, or on the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438
Twenty-four cases of imported Zika virus infection diagnosed by molecular methods.
Alejo-Cancho, Izaskun; Torner, Nuria; Oliveira, Inés; Martínez, Ana; Muñoz, José; Jane, Mireia; Gascón, Joaquim; Requena-Méndez, Ana; Vilella, Anna; Marcos, M Ángeles; Pinazo, María Jesús; Gonzalo, Verónica; Rodriguez, Natalia; Martínez, Miguel J
2016-10-01
Zika virus is an emerging flavivirus spreading widely through Latin America. Molecular diagnosis of the infection can be performed using serum, urine and saliva samples, although a well-defined diagnostic algorithm has not yet been established. We describe a series of 24 cases of Zika virus infection imported into Catalonia (northeastern Spain). Based on our findings, testing of paired serum and urine samples is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Z.; Klann, R. T.; Nuclear Engineering Division
2007-08-03
An initial series of calculations of the reactivity worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.
Predicting disease progression from short biomarker series using expert advice algorithm
NASA Astrophysics Data System (ADS)
Morino, Kai; Hirata, Yoshito; Tomioka, Ryota; Kashima, Hisashi; Yamanishi, Kenji; Hayashi, Norihiro; Egawa, Shin; Aihara, Kazuyuki
2015-05-01
Well-trained clinicians may be able to provide diagnosis and prognosis from very short biomarker series using information and experience gained from previous patients. Although mathematical methods can potentially help clinicians to predict the progression of diseases, there is no method so far that estimates the patient state from very short time-series of a biomarker for making diagnosis and/or prognosis by employing the information of previous patients. Here, we propose a mathematical framework for integrating other patients' datasets to infer and predict the state of the disease in the current patient based on their short history. We extend a machine-learning framework of "prediction with expert advice" to deal with unstable dynamics. We construct this mathematical framework by combining expert advice with a mathematical model of prostate cancer. Our model predicted well the individual biomarker series of patients with prostate cancer that are used as clinical samples.
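The "prediction with expert advice" core that the authors extend is the classic exponentially weighted forecaster: each previous patient's trajectory acts as an expert, and experts are reweighted by their loss. The sketch below shows only that core, under squared loss with made-up trajectories; the paper's extension to unstable dynamics and its prostate-cancer model are not reproduced:

```python
import numpy as np

def exponential_weights(expert_preds, outcomes, eta=0.5):
    """Prediction with expert advice via exponentially weighted averaging.
    expert_preds: (n_steps, n_experts) predictions; outcomes: (n_steps,)."""
    n_steps, n_experts = expert_preds.shape
    w = np.ones(n_experts)
    forecasts = np.empty(n_steps)
    for t in range(n_steps):
        forecasts[t] = w @ expert_preds[t] / w.sum()   # weighted forecast
        loss = (expert_preds[t] - outcomes[t]) ** 2    # squared loss per expert
        w *= np.exp(-eta * loss)                       # downweight bad experts
    return forecasts

# Toy usage: "experts" are previous patients' biomarker trajectories
rng = np.random.default_rng(2)
experts = np.stack(
    [np.linspace(0, k, 30) + rng.normal(0, 0.1, 30) for k in (1, 2, 3)], axis=1
)
truth = np.linspace(0, 2, 30)       # the current patient's true trajectory
pred = exponential_weights(experts, truth)
print("final forecast error:", abs(pred[-1] - truth[-1]))
```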
Adaptive Associative Scale-Free Maps for Fusing Human and Robotic Intelligences
2006-06-01
[List-of-figures fragment from the original report: sample runs plotting accuracy (blue), recall (red), and the ratio of new documents to all documents, with true-negative documents left out of the series; sample topic maps.]
NASA Astrophysics Data System (ADS)
Song, Biao; Lu, Dan; Peng, Ming; Li, Xia; Zou, Ye; Huang, Meizhen; Lu, Feng
2017-02-01
Raman spectroscopy is developed as a fast and non-destructive method for the discrimination and classification of hydroxypropyl methyl cellulose (HPMC) samples. 44 E-series and 41 K-series HPMC samples are measured by a self-developed portable Raman spectrometer (Hx-Raman), which is excited by a 785 nm diode laser and covers the spectral range 200-2700 cm-1 with a resolution (FWHM) of 6 cm-1. Multivariate analysis is applied to discriminate the E series from the K series. Using principal component analysis (PCA) and Fisher discriminant analysis (FDA), a discrimination result with a sensitivity of 90.91% and a specificity of 95.12% is achieved. The corresponding area under the receiver operating characteristic (ROC) curve is 0.99, indicating the accuracy of the predictive model. This result demonstrates the promise of portable Raman spectrometers for rapid, non-destructive classification and discrimination of E-series and K-series HPMC samples.
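The PCA-plus-discriminant workflow is standard enough to sketch. Below, a PCA step feeds a linear discriminant classifier, evaluated by cross-validation; the synthetic spectra, component count and class offsets are placeholders, not the study's data or settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-ins for 44 E-series and 41 K-series spectra (500 channels)
E = rng.normal(0.0, 1.0, (44, 500)) + np.linspace(0, 1, 500)
K = rng.normal(0.3, 1.0, (41, 500)) + np.linspace(0, 1, 500)
X = np.vstack([E, K])
y = np.array([0] * 44 + [1] * 41)   # 0 = E series, 1 = K series

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = cross_val_predict(model, X, y, cv=5)

sensitivity = np.mean(pred[y == 0] == 0)   # E-series correctly identified
specificity = np.mean(pred[y == 1] == 1)   # K-series correctly identified
print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}")
```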
Johnson, Alicia S.; Anderson, Kari B.; Halpin, Stephen T.; Kirkpatrick, Douglas C.; Spence, Dana M.; Martin, R. Scott
2012-01-01
In Part I of a two-part series, we describe a simple and inexpensive approach to fabricating polystyrene devices, based upon melting polystyrene (from either a Petri dish or powder form) against PDMS molds or around electrode materials. The ability to incorporate microchannels in polystyrene and integrate the resulting device with standard laboratory equipment, such as an optical plate reader for analyte readout and micropipettors for fluid propulsion, is first described. A simple approach for sample and reagent delivery to the device channels using a standard multi-channel micropipette and a PDMS-based injection block is detailed. Integration of the microfluidic device with these off-chip functions (sample delivery and readout) enables high-throughput screens and analyses. An approach to fabricating polystyrene-based devices with embedded electrodes is also demonstrated, thereby enabling the integration of microchip electrophoresis with electrochemical detection through the use of a palladium electrode (for a decoupler) and a carbon-fiber bundle (for detection). The device was sealed against a PDMS-based microchannel and used for the electrophoretic separation and amperometric detection of dopamine, epinephrine, catechol, and 3,4-dihydroxyphenylacetic acid. Finally, these devices were compared against PDMS-based microchips in terms of their optical transparency and absorption of an anti-platelet drug, clopidogrel. Part I of this series lays the foundation for Part II, where these devices are utilized for various on-chip cellular analyses. PMID:23120747
Time Series Data Analysis of Wireless Sensor Network Measurements of Temperature.
Bhandari, Siddhartha; Bergmann, Neil; Jurdak, Raja; Kusy, Branislav
2017-05-26
Wireless sensor networks have gained significant traction in environmental signal monitoring and analysis. The cost or lifetime of the system typically depends on the frequency at which environmental phenomena are monitored. If sampling rates are reduced, energy is saved. Using empirical datasets collected from environmental monitoring sensor networks, this work performs time series analyses of measured temperature time series. Unlike previous works, which have concentrated on suppressing the transmission of some data samples through time-series analysis while still maintaining high sampling rates, this work investigates reducing the sampling rate (and sensor wake-up rate) and examines the effects on accuracy. Results show that the sampling period of the sensor can be increased up to one hour while still allowing intermediate and future states to be estimated, with interpolation RMSE less than 0.2 °C and forecasting RMSE less than 1 °C.
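The accuracy-versus-sampling-rate trade can be illustrated directly: thin a dense temperature record to a candidate wake-up period, linearly interpolate the gaps, and measure the RMSE against the full record. The diurnal synthetic series below stands in for the empirical datasets used in the study:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 14 * 24 * 60, 5)                   # two weeks, 5-min readings
temp = (20 + 5 * np.sin(2 * np.pi * t / (24 * 60))  # diurnal cycle
        + rng.normal(0, 0.1, t.size))               # sensor noise

def interp_rmse(period_minutes):
    """RMSE of linearly interpolating between sparser wake-ups."""
    keep = t % period_minutes == 0
    est = np.interp(t, t[keep], temp[keep])
    return np.sqrt(np.mean((est - temp) ** 2))

for minutes in (15, 60, 240):
    print(f"sample every {minutes:>3} min -> RMSE {interp_rmse(minutes):.3f} degC")
```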
NASA Astrophysics Data System (ADS)
Zhang, Caiyun; Smith, Molly; Lv, Jie; Fang, Chaoyang
2017-05-01
Mapping plant communities and documenting their changes is critical to the ongoing Florida Everglades restoration project. In this study, a framework was designed to map dominant vegetation communities and inventory their changes in the Florida Everglades Water Conservation Area 2A (WCA-2A) using time series Landsat images spanning 1996-2016. The object-based change analysis technique was incorporated in the framework. A hybrid pixel/object-based change detection approach was developed to effectively collect training samples for historical images with sparse reference data. An object-based quantification approach was also developed to assess the expansion/reduction of a specific class, such as cattail (an invasive species in the Everglades), from the object-based classifications of two dates of imagery. The study confirmed results in the literature that cattail expanded substantially during 1996-2007. It also revealed that cattail expansion was constrained after 2007. Application of time series Landsat data is valuable for documenting vegetation changes in the WCA-2A impoundment. The digital techniques developed will benefit global wetland mapping and change analysis in general, and the Florida Everglades WCA-2A in particular.
Stress corrosion in silica optical fibers: Review of fatigue testing procedures
NASA Astrophysics Data System (ADS)
Severin, Irina; Borda, Claudia; Dumitrache-Rujinski, Alexandru; Caramihai, Mihai; Abdi, Rochdi El
2018-02-01
The expected lifetime of optical fibers used either in telecommunication technologies or in smart applications is closely related to chemical reactions on the silica network. Due to manufacturing processes and handling procedures, flaws spread over the fiber surface are inherently present. The aging mechanism is assumed to enlarge or extend these flaws. Based on systematic experiments, one may notice that water may induce a certain curing effect. Silica optical fibers were aged in water; series of samples were subjected to overlapped stretching or bending, and other series to the overlapped aging effects of microwaves and hot water. Finally, samples were submitted to dynamic tensile testing. Weibull diagram analysis shows mono- or bimodal dispersions of flaws on the fiber surface, but the polymer coating appears vital for fiber lifetime. While humidity usually degrades fiber strength, this series of tests revealed that under controlled conditions of chemical environment and applied stress, fiber strength may be increased. A similar effect may also be obtained through external factors such as microwaves or prior elongation.
Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs
Stewart, David R.; Long, James M.
2012-01-01
Hoop nets are rapidly becoming the preferred gear type used to sample channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Hoop net series were fished once and set for 3 d; we then used Monte Carlo bootstrapping techniques to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often less feasible, requiring 18-96 hoop net series depending on the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that 16 hoop net series per reservoir could be set and retrieved within a day. The estimated number of net series needed to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the CV of the sample reported herein can be used to determine the sampling effort for a desired level of precision.
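The bootstrap logic described above can be sketched in a few lines; the catch counts below are hypothetical stand-ins, whereas the study resampled observed catches per net series:

```python
import numpy as np

rng = np.random.default_rng(1)
catches = rng.negative_binomial(2, 0.1, size=16)    # hypothetical catch per net series

def bootstrap_rse(n, reps=5000):
    # RSE (%) of the mean catch when n net series are fished
    means = rng.choice(catches, size=(reps, n), replace=True).mean(axis=1)
    return 100 * means.std(ddof=1) / means.mean()

n_needed = next(n for n in range(2, 200) if bootstrap_rse(n) <= 25)
print("net series needed for RSE <= 25:", n_needed)
```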
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, Orville T.; Olsen, Khris B.; Thomas, May-Lin P.
2008-05-01
A method for the separation and determination of total and isotopic uranium and plutonium by ICP-MS was developed for IAEA samples on cellulose-based media. Preparation of the IAEA samples involved a series of redox chemistries and separations using TRU® resin (Eichrom). The sample introduction system, an APEX nebulizer (Elemental Scientific, Inc.), provided enhanced nebulization for a several-fold increase in sensitivity and a reduction in background. Application of mass bias (ALPHA) correction factors greatly improved the precision of the data. By combining these enhancements in chemical separation, instrumentation, and data processing, detection limits for uranium and plutonium approached the high-attogram range.
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and the loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
Gas Emissions Acquired during the Aircraft Particle Emission Experiment (APEX) Series
NASA Technical Reports Server (NTRS)
Wey, Changlie; Wey, Chowen Chou
2007-01-01
NASA, in collaboration with other US federal agencies, engine/airframe manufacturers, airlines, and airport authorities, recently sponsored a series of three ground-based field investigations to examine the particle and gas emissions from a variety of in-use commercial aircraft. Emissions parameters were measured at multiple engine power settings, ranging from idle to maximum thrust, in samples collected at three different downstream locations in the exhaust. Sampling rakes at nominally 1 meter downstream contained multiple probes to facilitate a study of the spatial variation of emissions across the engine exhaust plane. Emission indices measured at 1 m were in good agreement with the engine certification data as well as with predictions provided by the engine company. At low power settings, however, trace species emissions were observed to be highly dependent on ambient conditions and engine temperature.
Micklash II, Kenneth James; Dutton, Justin James; Kaye, Steven
2014-06-03
An apparatus for testing multiple material samples includes a gas delivery control system operatively connectable to the multiple material samples and configured to provide gas to them. Both a gas composition measurement device and pressure measurement devices are included in the apparatus. The apparatus includes multiple selectively openable and closable valves and a series of conduits configured to selectively connect the material samples individually to the gas composition measurement device and the pressure measurement devices by operation of the valves. A mixing system is selectively connectable to the series of conduits and is operable to cause forced mixing of the gas within the conduits to achieve a predetermined uniformity of gas composition within the conduits and passages.
NASA Astrophysics Data System (ADS)
Aptikaeva, O. I.; Gamburtsev, A. G.; Martyushov, A. N.
2012-12-01
We have investigated the numbers of emergency hospitalizations in mental and drug-treatment hospitals in Kazan in 1996-2006 and in Moscow in 1984-1996. Samples were analyzed by disease type, sex, age, and place of residence (city or village). This study aims to discover differences and common traits in the structures of the hospitalization series in these samples and their possible relationships with changing parameters of the environment. We found similar structures of series of samples of the same type in both Moscow and Kazan. In some cases, cyclic structures in the series of hospitalization numbers change simultaneously with series of changes in solar activity and in the rate of rotation of the Earth.
Stem Cubic-Foot Volume Tables for Tree Species in the Deep South Area
Alexander Clark; Ray A. Souter
1996-01-01
Stemwood cubic-foot volume inside bark tables are presented for 21 species and 8 species groups based on equations used to estimate timber sale volumes on national forests in the Deep South Area. Tables are based on form class measurement data for 2,390 trees sampled in the Deep South Area and taper data collected across the South. A series of tables is presented for...
Stem Cubic-Foot Volume Tables for Tree Species in the Arkansas Area
Alexander Clark; Ray A. Souter
1996-01-01
Stemwood cubic-foot volume inside bark tables are presented for 9 species and 6 species groups based on equations used to estimate timber sale volumes on national forests in the Arkansas Area. Tables are based on form class measurement data for 1,417 trees sampled in the Arkansas Area and taper data collected across the South. A series of tables is presented for each...
Stem Cubic-Foot Volume Tables for Tree Species in the Delta Area
Alexander Clark; Ray A. Souter
1996-01-01
Stemwood cubic-foot volume inside bark tables are presented for 13 species and 8 species groups based on equations used to estimate timber sale volumes on national forests in the Delta Area. Tables are based on form class measurement data for 990 trees sampled in the Delta Area and taper data collected across the South. A series of tables is presented for each species...
Ortiz, M C; Sarabia, L A; Sánchez, M S; Giménez, D
2009-05-29
Due to the second-order advantage, calibration models based on parallel factor analysis (PARAFAC) decomposition of three-way data are becoming important in routine analysis. This work studies the possibility of fitting PARAFAC models with excitation-emission fluorescence data for the determination of ciprofloxacin in human urine. The final PARAFAC decomposition is built with calibration samples spiked with ciprofloxacin, together with other series of urine samples that were also spiked. One of these series also contained another drug, mesalazine, because the patient was taking it; mesalazine is a fluorescent substance that interferes with ciprofloxacin. Finally, the procedure is applied to samples from a patient who was being treated with ciprofloxacin. Trueness has been established by the regression of predicted concentration versus added concentration. The recovery factor is 88.3% for ciprofloxacin in urine, and the mean of the absolute value of the relative errors is 4.2% for 46 test samples. The multivariate sensitivity of the fitted calibration model is evaluated by a regression of the PARAFAC loadings linked to ciprofloxacin versus the true concentration in spiked samples. The multivariate capability of discrimination is near 8 microg L(-1) when the probabilities of false non-compliance and false compliance are fixed at 5%.
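For readers unfamiliar with PARAFAC, the following hedged sketch (using the tensorly library and a synthetic samples x excitation x emission tensor, not the paper's urine data) shows the trilinear decomposition step and checks that the analyte component's sample-mode loadings track the spiked concentrations:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(2)
conc = np.linspace(0.1, 1.0, 10)                     # hypothetical spike levels
ex = np.exp(-0.5 * ((np.arange(30) - 12) / 4) ** 2)  # excitation profile
em = np.exp(-0.5 * ((np.arange(40) - 25) / 6) ** 2)  # emission profile
X = conc[:, None, None] * ex[None, :, None] * em[None, None, :]
X += rng.normal(0, 0.01, X.shape)                    # measurement noise

weights, factors = parafac(tl.tensor(X), rank=1, normalize_factors=True)
scores = factors[0][:, 0]                            # sample-mode loadings
print("loading-concentration correlation:", np.corrcoef(conc, scores)[0, 1])
```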
NASA Astrophysics Data System (ADS)
Helama, S.; Lindholm, M.; Timonen, M.; Eronen, M.
2004-12-01
Tree-ring standardization methods were compared. Traditional methods along with the recently introduced approaches of regional curve standardization (RCS) and power-transformation (PT) were included. The difficulty in removing non-climatic variation (noise) while simultaneously preserving the low-frequency variability in the tree-ring series was emphasized. The potential risk of obtaining inflated index values was analysed by comparing methods to extract tree-ring indices from the standardization curve. The material for the tree-ring series, previously used in several palaeoclimate predictions, came from living and dead wood of high-latitude Scots pine in northernmost Europe. This material provided a useful example of a long composite tree-ring chronology with the typical strengths and weaknesses of such data, particularly in the context of standardization. PT stabilized the heteroscedastic variation in the original tree-ring series more efficiently than any other standardization practice expected to preserve the low-frequency variability. RCS showed great potential in preserving variability in tree-ring series at centennial time scales; however, this method requires a homogeneous sample for reliable signal estimation. It is not recommended to derive indices by subtraction without first stabilizing the variance in the case of series of forest-limit tree-ring data. Index calculation by division did not seem to produce inflated chronology values for the past one and a half centuries of the chronology (where mean sample cambial age is high). On the other hand, potential bias of high RCS chronology values was observed during the period of anomalously low mean sample cambial age. An alternative technique for chronology construction was proposed based on series age decomposition, where indices in the young vigorously behaving part of each series are extracted from the curve by division and in the mature part by subtraction. Because of their specific nature, the dendrochronological data here should not be generalized to all tree-ring records. The examples presented should be used as guidelines for detecting potential sources of bias and as illustrations of the usefulness of tree-ring records as palaeoclimate indicators.
On the construction of a time base and the elimination of averaging errors in proxy records
NASA Astrophysics Data System (ADS)
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information which are stored in natural archives (e.g. ice cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: natural archives are sampled equidistantly on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: a typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it is averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest in the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is an obvious assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The measured averaged proxy signal is modeled by the following signal model: $\bar{y}(n,\theta) = \frac{1}{\delta}\int_{n\Delta-\delta/2}^{n\Delta+\delta/2} y(m,\theta)\,dm$, where $m$ is the position, $x(m) = \Delta m$, $\theta$ are the unknown parameters, and $y(m,\theta)$ is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as $y(m,\theta) = A_0 + \sum_{k=1}^{H}\left[A_k \sin(k\omega t(m)) + A_{k+H}\cos(k\omega t(m))\right]$ with $t(m) = m T_S + g(m) T_S$. Here $T_S = 1/f_S$ is the sampling period, $f_S$ the sampling frequency, and $g(m)$ the unknown time base distortion (TBD). In this work a spline approximation of the TBD is chosen: $g(m) = \sum_{l=1}^{b} b_l \varphi_l(m)$, where $b$ is a vector of unknown time base distortion parameters and $\varphi$ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method; vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, as expected, and the correction for the averaging effect increased the amplitude by 11.18%.
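A schematic sketch of the identification step, under simplifying assumptions (a single harmonic, a coarse cubic B-spline TBD, and the volume averaging ignored), might look like this:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares

m = np.arange(200)                                   # sample index along distance axis
Ts, w = 1.0, 2 * np.pi / 50.0                        # sampling period, angular frequency
knots = np.linspace(0, m[-1], 8)
tk = np.r_[[knots[0]] * 3, knots, [knots[-1]] * 3]   # cubic B-spline knot vector (10 coefs)

def model(p):
    a0, a1, a2, *b = p
    g = BSpline(tk, np.asarray(b), 3)(m)             # time base distortion g(m)
    t = m * Ts + g * Ts                              # t(m) = m*Ts + g(m)*Ts
    return a0 + a1 * np.sin(w * t) + a2 * np.cos(w * t)

rng = np.random.default_rng(3)
true = np.r_[1.0, 2.0, 0.5, rng.normal(0, 0.5, 10)]
y = model(true) + rng.normal(0, 0.05, m.size)        # "measured" proxy record

fit = least_squares(lambda p: model(p) - y, x0=np.r_[0, 1, 1, np.zeros(10)])
print("estimated harmonic amplitudes:", fit.x[1:3])
```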
Population Education in Mathematics: Some Sample Lessons for the Secondary Level.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.
This booklet consists of five sample lessons integrating population education into mathematics instruction. It is one of four in a series. Materials differ from those in an earlier series (1980) in that lessons are presented at the secondary level only; there is no duplication of lessons from the earlier series in content and teaching strategies.…
Population Education in Science: Some Sample Lessons for the Secondary Level.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.
This booklet consists of six sample lessons integrating population education into science instruction. It is one of four in a series. Materials differ from those in an earlier series (1980) in that lessons are presented at the secondary level only; there is no duplication of lessons from the earlier series in terms of content and teaching…
NASA Astrophysics Data System (ADS)
Lewis, J.; Perfit, M. R.; Kamenov, G.
2006-12-01
Several eruptive centers of Pliocene-Quaternary age occur across southern Hispaniola that constitute the youngest land-based magmatic activity in the Greater Antilles. Two main rock suites can be delineated based on petrography, geochemistry and location. The older, larger centers in the Dominican Republic (DR) consist of basalts (45.81-53% SiO2 with TiO2 <1.2%), basaltic andesites and trachybasalts (54-55% SiO2) and trachyandesites (56-62% SiO2). These constitute a consanguineous high-K calc-alkaline (CA) series. Younger centers of Quaternary age (all probably <1.0 Ma) occur to the west in Haiti, at San Juan de la Maguana (DR) and at two small centers south of Yayas de Viajama (DR). The rocks are alkali-olivine basalts, limburgites and nephelinites (38.6-47.6% SiO2 with TiO2 >1.7% at MgO <12%) and are termed the mafic alkaline (MA) series. Although there is an overall similarity in the trace and minor element patterns of normalized multi-element plots of the rock samples, the CA series shows distinct depletions in the HFS elements Ta, Nb, Hf, Zr, and Ti compared to lavas in the MA series. MA series samples exhibit strong enrichment in LREE (Ce/Ybn > 30) compared to the CA series basalts (Ce/Ybn < 30) and greater HREE depletions. The CA suite has higher 143Nd/144Nd (0.51286-0.5126) and lower 87Sr/86Sr (0.7040-0.7053) than the MA suite (0.5126-0.51196; 0.7063-0.7078). MA series lavas have unusually non-radiogenic Pb isotopic values (206Pb/204Pb < 17.9), whereas the CA suite has low values that are more typical of the Greater Antilles. Incompatible trace element ratios such as Ba/Nb, Sr/Nd, Ce/Yb and Ba/La are well correlated with isotopes, but the data form near-continuous arrays suggesting mixing between sources. The data suggest the young alkaline lavas are derived from an enriched mantle source similar to EM1, but that they are also mixing with a component reflected in the composition of the CA series that is related to previous subduction-related enrichment of the sub-arc mantle beneath Hispaniola. The presence of an EM1 component in the Greater Antilles has not been previously recognized and is unusual for an arc environment.
Generalization bounds of ERM-based learning processes for continuous-time Markov chains.
Zhang, Chao; Tao, Dacheng
2012-12-01
Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
Adult Literacy: An International Perspective. Working Paper Series.
ERIC Educational Resources Information Center
Binkley, Marilyn; Matheson, Nancy; Williams, Trevor
The comparison of adult literacy in the United States and in other countries is based on data gathered in interviews with a sample of individuals representative of the population aged 16-65 in twelve countries: Sweden, the Netherlands, Canada, Germany, New Zealand, Australia, the United States, Belgium, the United Kingdom, Ireland, Switzerland,…
Differences in Psychological Distress and Esteem Based on Sexual Identity Development
ERIC Educational Resources Information Center
Shepler, Dustin; Perrone-McGovern, Kristin
2016-01-01
A sample of 791 college students between the ages of 18 and 25 years were administered a series of measures to determine their sexual identity development status, global self-esteem, global psychological distress, sexual-esteem and sexual distress. As hypothesized, results indicated no significant difference in terms of psychological distress,…
Preparation and Analysis of Solid Solutions in the Potassium Perchlorate-Permanganate System.
ERIC Educational Resources Information Center
Johnson, Garrett K.
1979-01-01
Describes an experiment, designed for and tested in an advanced inorganic laboratory methods course for college seniors and graduate students, that prepares and analyzes several samples in the nearly ideal potassium perchlorate-permanganate solid solution series. The results are accounted for by a theoretical treatment based upon aqueous…
ERIC Educational Resources Information Center
Hatcher, Robert L.; Rogers, Daniel T.
2009-01-01
An Inventory of Interpersonal Strengths (IIS) was developed and validated in a series of large college student samples. Based on interpersonal theory and associated methods, the IIS was designed to assess positive characteristics representing the full range of interpersonal domains, including those generally thought to have negative qualities…
Problem-Based Learning Pedagogy Fosters Students' Critical Thinking about Writing
ERIC Educational Resources Information Center
Kumar, Rita; Refaei, Brenda
2017-01-01
Convinced of the power of PBL to promote students' critical thinking as demonstrated by its application across disciplines, we designed a series of problems for students in a second-year writing course. We collected samples of their writing before and after implementation of the problems. We were concerned about whether PBL pedagogy would…
Barber, Jessica; Palmese, Laura; Reutenauer, Erin L.; Grilo, Carlos; Tek, Cenk
2011-01-01
Obesity has been associated with significant stigma and weight-related self-bias in community and clinical studies, but these issues have not been studied among individuals with schizophrenia. A consecutive series of 70 obese individuals with schizophrenia or schizoaffective disorder underwent assessment for perceptions of weight-based stigmatization, self-directed weight-bias, negative affect, medication compliance, and quality of life. Levels of weight-based stigmatization and self-bias were compared to levels reported for non-psychiatric overweight/obese samples. Weight measures were unrelated to stigma, self-bias, affect, and quality of life. Weight-based stigmatization was lower than published levels for non-psychiatric samples, whereas levels of weight-based self-bias did not differ. After controlling for negative affect, weight-based self-bias predicted an additional 11% of the variance in the quality of life measure. Individuals with schizophrenia and schizoaffective disorder reported weight-based self-bias to the same extent as non-psychiatric samples despite reporting less weight stigma. Weight-based self-bias was associated with poorer quality of life after controlling for negative affect. PMID:21716053
NASA Astrophysics Data System (ADS)
Magri, Alphonso William
This study was undertaken to develop a nonsurgical breast biopsy based on Gd-DTPA contrast-enhanced magnetic resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartment Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion, the rate of Gd exiting a blood compartment, and a parameter that characterizes the strength of signal intensities. Curve fitting for the PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was applied to registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD- and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
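As an illustration of the curve-fitting steps (3) and (4), the sketch below fits a generic two-compartment signal model per voxel with the Levenberg-Marquardt algorithm; the model form and parameter values are stand-ins, not the exact Patlak or Brix parameterizations:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, amp, k_in, k_out):
    # uptake into a tissue compartment with washout back to blood (stand-in model)
    return amp * (np.exp(-k_out * t) - np.exp(-k_in * t))

t = np.linspace(0.1, 10, 40)                         # minutes after injection
rng = np.random.default_rng(4)
signal = two_compartment(t, 3.0, 2.0, 0.15) + rng.normal(0, 0.05, t.size)

p, _ = curve_fit(two_compartment, t, signal, p0=(1.0, 1.0, 0.1), method="lm")
print("fitted (amp, k_in, k_out):", p)               # one voxel of a parametric image
```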
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series: higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. However, SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series, as well as to certain pathological time series, which is a misleading observation. This incorrect estimation of complexity may stem from the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of a time series. However, time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations, so the SD of the first-order difference (short-term SD) of the time series should be considered when updating the threshold value, to account for the period-to-period variations inherent in a time series. In the present work, improved sample entropy (I-SampEn), a new methodology, is proposed in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique assigns higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
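A compact sketch of the idea, i.e., sample entropy with the tolerance r tied to the SD of the first-order differences rather than the long-term SD (a simplified reading of I-SampEn, not the authors' exact algorithm):

```python
import numpy as np

def sampen_short_sd(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    r = r_factor * np.diff(x).std()          # tolerance from short-term SD (key change)
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(-1)
        return (d <= r).sum() - len(templ)   # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

rr = np.random.default_rng(5).normal(0.8, 0.05, 500)   # synthetic RR intervals (s)
print("entropy estimate:", sampen_short_sd(rr))
```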
Population Education in Health and Home Economics: Some Sample Lessons for the Secondary Level.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.
This booklet contains five sample lessons integrating population education into health and home economics instruction. It is one of four in a series. Materials differ from those in an earlier series (1980) in that lessons are presented at the secondary level only; there is no duplication of lessons from the earlier series in content and teaching…
Population Education in Social Studies: Some Sample Lessons for the Secondary Level.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.
This booklet consists of 10 sample lessons integrating population education into the social studies. It is one of four in a series. Materials differ from those in an earlier series (1980) in that lessons are presented at the secondary level only; there is no duplication of lessons from the earlier series in terms of content and teaching…
An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1996-01-01
Autocorrelation-based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function, along with either an assumed form of the autocorrelation function (e.g., Gaussian) or a generic complex form and properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series produces a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates are themselves random variables and exhibit a bias and variance that are functions of the number of samples used in the estimates and the operational signal-to-noise ratio; this contributes to a degradation in the performance of the moment estimators. This dissertation investigates the use of autocorrelation function estimates at higher-order lags to reduce the bias and standard deviation of spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher-order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation-based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns, which are characterized by a Gaussian-shaped power spectrum. The performance of the variance estimators determined from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region, where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system lends robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform, compared to the closed system.
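The overdetermined idea can be illustrated for the Gaussian-spectrum case: ln|R(l)| is linear in l^2, so autocorrelation estimates at several lags form an overdetermined linear system whose least-squares solution yields the spectral width. The sketch below uses synthetic data and is a simplified stand-in for the dissertation's estimators:

```python
import numpy as np

rng = np.random.default_rng(6)
sigma = 0.04                                          # true normalized spectral width
n = 2048
noise = rng.normal(size=n) + 1j * rng.normal(size=n)
H = np.exp(-((np.fft.fftfreq(n) - 0.1) ** 2) / (4 * sigma ** 2))
z = np.fft.ifft(np.fft.fft(noise) * H)                # signal with Gaussian-shaped spectrum

lags = np.arange(1, 6)                                # more lags than unknowns
R = np.array([np.mean(z[l:] * np.conj(z[:-l])) for l in lags])
A = np.c_[np.ones(lags.size), -2 * np.pi ** 2 * lags ** 2]   # ln|R(l)| = c0 - 2*pi^2*s^2*l^2
coef, *_ = np.linalg.lstsq(A, np.log(np.abs(R)), rcond=None)
print("least-squares width estimate:", np.sqrt(coef[1]), "(true:", sigma, ")")
```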
Seiwert, Bettina; Karst, Uwe
2007-09-15
A method for the simultaneous determination of a series of thiols and disulfides in urine samples has been developed based on the sequential labeling of free and bound thiol functionalities with two ferrocene-based maleimide reagents. The sample is first exposed to N-(2-ferroceneethyl)maleimide, thus leading to the derivatization of free thiol groups in the sample. After quantitative reaction and subsequent reduction of the disulfide-bound thiols by tris(2-carboxyethyl)phosphine, the newly formed thiol functionalities are reacted with ferrocenecarboxylic acid-(2-maleimidoyl)ethylamide. The reaction products are determined by LC/MS/MS in the multiple reaction mode, and precursor ion scan as well as neutral loss scan is applied to detect unknown further thiols. The method was successfully applied to the analysis of free and disulfide-bound thiols in urine samples. Limits of detection are 30 to 110 nM, and the linear range comprises two decades of concentration, thus covering the relevant concentration range of thiols in urine samples. The thiol and disulfide concentrations were referred to the creatinine content to compensate for different sample volumes. As some calibration standards for the disulfides are not commercially available, they were synthesized in an electrochemical flow-through cell. This allowed the synthesis of hetero- and homodimeric disulfides.
Zhao, Yan-Hong; Zhang, Xue-Fang; Zhao, Yan-Qiu; Bai, Fan; Qin, Fan; Sun, Jing; Dong, Ying
2017-08-01
Chronic myeloid leukemia (CML) is characterized by the accumulation of active BCR-ABL protein. Imatinib is the first-line treatment of CML; however, many patients are resistant to this drug. In this study, we aimed to compare the differences in expression patterns and functions of time-series genes in imatinib-resistant CML cells under different drug treatments. GSE24946 was downloaded from the GEO database, which included 17 samples of K562-r cells with (n=12) or without drug administration (n=5). Three drug treatment groups were considered for this study: arsenic trioxide (ATO), AMN107, and ATO+AMN107. Each group had one sample at each time point (3, 12, 24, and 48 h). Time-series genes with a ratio of standard deviation/average (coefficient of variation) >0.15 were screened, and their expression patterns were revealed based on Short Time-series Expression Miner (STEM). Then, the functional enrichment analysis of time-series genes in each group was performed using DAVID, and the genes enriched in the top ten functional categories were extracted to detect their expression patterns. Different time-series genes were identified in the three groups, and most of them were enriched in the ribosome and oxidative phosphorylation pathways. Time-series genes in the three treatment groups had different expression patterns and functions. Time-series genes in the ATO group (e.g. CCNA2 and DAB2) were significantly associated with cell adhesion, those in the AMN107 group were related to cellular carbohydrate metabolic process, while those in the ATO+AMN107 group (e.g. AP2M1) were significantly related to cell proliferation and antigen processing. In imatinib-resistant CML cells, ATO could influence genes related to cell adhesion, AMN107 might affect genes involved in cellular carbohydrate metabolism, and the combination therapy might regulate genes involved in cell proliferation.
Comparison of Methods for Estimating Low Flow Characteristics of Streams
Tasker, Gary D.
1987-01-01
Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared using the bootstrap method. The bootstrap is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distributions (log-Pearson Type III and Weibull) had lower mean square errors than the G. E. P. Box-D. R. Cox transformation method or the log-W. C. Boughton method, which is based on a fit of plotting positions.
Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong
2017-12-01
Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
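A hedged sketch of DE-driven wavelength selection, using SciPy's differential_evolution with real-valued genes thresholded into a selection mask and a cross-validated calibration error as the cost; all data below are synthetic stand-ins for THz absorbance spectra:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
conc = rng.uniform(0, 1, 25)                             # mixture fractions
spectra = np.outer(conc, np.sin(np.linspace(0, 3, 40)))  # analyte contribution
spectra += rng.normal(0, 0.05, spectra.shape)            # "scattering"/noise

def cost(genes):
    mask = genes > 0.5                                   # threshold genes into a mask
    if mask.sum() < 2:
        return 1e3
    return -cross_val_score(LinearRegression(), spectra[:, mask], conc,
                            scoring="neg_root_mean_squared_error", cv=5).mean()

res = differential_evolution(cost, [(0, 1)] * 40, maxiter=15, popsize=8,
                             seed=7, polish=False)
print("selected spectral points:", np.flatnonzero(res.x > 0.5))
```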
Test Series 2. 4: detailed test plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Test Series 2.4 comprises the fourth sub-series of tests to be scheduled as a part of Test Series 2, the second stage of the combustion research program to be carried out at the Grimethorpe Experimental Pressurized Fluidized Bed Combustion Facility. Test Series 2.1, the first sub-series of tests, was completed in February 1983, and the first part of the second sub-series, Test Series 2.3, in October 1983. Test Series 2.2 was completed in February 1984, after which the second part of Test Series 2.3 commenced. The plan for Test Series 2.4 consists of 350 data-gathering hours to be completed within 520 coal-burning hours. This document provides a brief description of the Facility and the modifications made following the completion of Test Series 2.1. No further modifications were made following the completion of the first part of Test Series 2.3 or of Test Series 2.2. The operating requirements for Test Series 2.4 are specified. The tests will be performed using a UK coal (Lady Windsor) and a UK limestone (Middleton), both nominated by the FRG. Seven objectives are proposed, to be fulfilled by thirteen test conditions. Six part-load tests based on input supplied by Kraftwerk Union AG are included. The cascade is expected to be on line for each test condition, and total cascade exposure is expected to exceed 450 hours. Details of sampling and special measurements are given. A test plan schedule envisages the full test series being completed within a two-month calendar period. Finally, a number of contingency strategies are proposed. 3 figures, 14 tables.
W. Miller; D. Weise; S. Mahalingam; M. Princevac; R. Yokelson; W. Hao; D. Cocker; H. Jung; G. Tonnesen; S. Urbanski; I. Burling; S. Hosseini; S. Akagi
2013-01-01
Gaseous and particulate emissions were measured for a variety of chaparral and Madrean oak woodland fuel types in a series of laboratory and field experiments in California and Arizona. Emissions were measured using state of the art ground-based and aircraft-based sampling systems. Emission factors were determined for many new chemical species for the fuels....
NASA Astrophysics Data System (ADS)
Tay, J.; Hood, R. R.
2016-02-01
Although jellyfish exert strong control over marine plankton dynamics (Richardson et al. 2009, Robison et al. 2014) and negatively impact human commercial and recreational activities (Purcell et al. 2007, Purcell 2012), jellyfish biomass is not well quantified, due primarily to sampling difficulties with plankton nets or fisheries trawls (Haddock 2004). As a result, some of the longest records of jellyfish are visual shore-based surveys, such as the fixed-station time series of Chrysaora quinquecirrha that began in 1960 in the Patuxent River in Chesapeake Bay, USA (Cargo and King 1990). Time series counts from fixed-station surveys capture two signals: 1) demographic change at timescales on the order of reproductive processes, and 2) spatial patchiness at shorter timescales as different parcels of water move in and out of the survey area through tidal and estuarine advection and turbulent mixing (Lee and McAlice 1979). In this study, our goal was to separate these two signals using a 4-year time series of C. quinquecirrha medusa counts from a fixed station in the Choptank River, Chesapeake Bay. Idealized modeling of tidal and estuarine advection was used to conceptualize the sampling scheme. Change-point and time series analysis was used to detect demographic changes. Indices of aggregation (the negative binomial coefficient, Taylor's power law coefficient, and Morisita's index) were calculated to describe the spatial patchiness of the medusae. Abundance estimates revealed a bloom cycle that differed in duration and magnitude in each study year. Indices of aggregation indicated that medusae were aggregated and that patches grew in the number of individuals, and likely in size, as abundance increased. Further inference from the conceptual modeling suggested that medusae patch structure was generally homogeneous over the tidal extent. This study highlights the benefits of using fixed-station shore-based surveys for understanding the biology and ecology of jellyfish.
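Of the three aggregation indices named above, Morisita's index is the simplest to illustrate; values above 1 indicate patchiness. A quick sketch with hypothetical fixed-station counts:

```python
import numpy as np

def morisita(counts):
    # I_d = n * sum(x_i * (x_i - 1)) / (N * (N - 1)); > 1 means aggregated
    counts = np.asarray(counts)
    n, N = counts.size, counts.sum()
    return n * (counts * (counts - 1)).sum() / (N * (N - 1))

daily_counts = np.array([0, 2, 11, 5, 0, 0, 9, 1, 0, 3])   # hypothetical medusa counts
print("Morisita's index:", morisita(daily_counts))
```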
ODM2 (Observation Data Model): The EarthChem Use Case
NASA Astrophysics Data System (ADS)
Lehnert, Kerstin; Song, Lulin; Hsu, Leslie; Horsburgh, Jeffrey S.; Aufdenkampe, Anthony K.; Mayorga, Emilio; Tarboton, David; Zaslavsky, Ilya
2014-05-01
PetDB is an online data system that was created in the late 1990's to serve online a synthesis of published geochemical and petrological data of igneous and metamorphic rocks. PetDB has today reached a volume of 2.5 million analytical values for nearly 70,000 rock samples. PetDB's data model (Lehnert et al., G-Cubed 2000) was designed to store sample-based observational data generated by the analysis of rocks, together with a wide range of metadata documenting provenance of the samples, analytical procedures, data quality, and data source. Attempts to store additional types of geochemical data such as time-series data of seafloor hydrothermal springs and volcanic gases, depth-series data for marine sediments and soils, and mineral or mineral inclusion data revealed the limitations of the schema: the inability to properly record sample hierarchies (for example, a garnet that is included in a diamond that is included in a xenolith that is included in a kimberlite rock sample), inability to properly store time-series data, inability to accommodate classification schemes other than rock lithologies, deficiencies of identifying and documenting datasets that are not part of publications. In order to overcome these deficiencies, PetDB has been developing a new data schema using the ODM2 information model (ODM=Observation Data Model). The development of ODM2 is a collaborative project that leverages the experience of several existing information representations, including PetDB and EarthChem, and the CUAHSI HIS Observations Data Model (ODM), as well as the general specification for encoding observational data called Observations and Measurements (O&M) to develop a uniform information model that seamlessly manages spatially discrete, feature-based earth observations from environmental samples and sample fractions as well as in-situ sensors, and to test its initial implementation in a variety of user scenarios. The O&M model, adopted as an international standard by the Open Geospatial Consortium, and later by ISO, is the foundation of several domain markup languages such as OGC WaterML 2, used for exchanging hydrologic time series. O&M profiles for samples and sample fractions have not been standardized yet, and there is a significant variety in sample data representations used across agencies and academic projects. The intent of the ODM2 project is to create a unified relational representation for different types of spatially discrete observational data, ensuring that the data can be efficiently stored, transferred, catalogued and queried within a variety of earth science applications. We will report on the initial design and implementation of the new model for PetDB, and results of testing the model against a set of common queries. We have explored several aspects of the model, including: semantic consistency, validation and integrity checking, portability and maintainability, query efficiency, and scalability. The sample datasets from PetDB have been loaded in the initial physical implementation for testing. The results of the experiments point to both benefits and challenges of the initial design, and illustrate the key trade-off between the generality of design, ease of interpretation, and query efficiency, especially as the system needs to scale to millions of records.
NASA Astrophysics Data System (ADS)
Zhou, L. P.; McDermott, F.; Rhodes, E. J.; Marseglia, E. A.; Mellars, P. A.
The age of the Channel Deposits at Stanton Harcourt, Oxfordshire, England, has been a topic of debate with important implications for British Pleistocene stratigraphy. Recent excavations led by K. Scott reveal ample evidence for ancient environmental conditions characteristic of an interglacial; however, the question of the deposits' age remains. At present they are thought to represent an interglacial corresponding to either marine OI Stage 7 or 5e. In an attempt to constrain the chronology of the site, and to assess the techniques' reliability, we have made electron spin resonance (ESR) measurements on enamel and mass-spectrometric U-series measurements on both enamel and dentine from a mammoth tooth buried in the Channel Deposits at Stanton Harcourt. Four dentine samples gave U-series dates between 65.4±0.4 and 146.5±1.0 ka, and two enamel samples between these dentine layers were dated to 53.3±0.2 and 61.1±0.6 ka. The corresponding ESR age estimates for the enamel samples are 59±6 and 62±4 ka (early U-uptake, EU) and 95±11 and 98±7 ka (linear U-uptake, LU). The recent U-uptake (RU) dates are 245±38 and 238±31 ka, but in light of the U-series data we would not expect these to represent realistic age estimates. Similar ESR results were obtained from two other adjacent enamel samples. The effect of the large size of the mammoth tooth on the external gamma dose, and the internal gamma contribution from the high U content of the dentine, are considered. While the recent-uptake ESR dates appear to coincide with OI Stage 7, all the early- and linear-uptake ESR and mass-spectrometric U-series dates are younger than the age expected from recent geological interpretation and amino acid racemisation measurements (>200 ka) and from optical dating studies (200-450 ka). Possible causes of the unexpected dating results are discussed. We conclude that our mass-spectrometric U-series and EU and LU ESR measurements represent minimum age estimates for the tooth analysed. Our results seem to suggest that the tooth, and hence the Channel Deposits, are at least 147 ka in age, i.e. predating the last interglacial.
Mahoney, Christine M; Kelly, Ryan T; Alexander, Liz; Newburn, Matt; Bader, Sydney; Ewing, Robert G; Fahey, Albert J; Atkinson, David A; Beagley, Nathaniel
2016-04-05
Time-of-flight-secondary ion mass spectrometry (TOF-SIMS) and laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) were used for characterization and identification of unique signatures from a series of 18 Composition C-4 plastic explosives. The samples were obtained from various commercial and military sources around the country. Positive and negative ion TOF-SIMS data were acquired directly from the C-4 residue on Si surfaces, where the positive ion mass spectra obtained were consistent with the major composition of organic additives, and the negative ion mass spectra were more consistent with explosive content in the C-4 samples. Each series of mass spectra was subjected to partial least squares-discriminant analysis (PLS-DA), a multivariate statistical analysis approach which serves to first find the areas of maximum variance within different classes of C-4 and subsequently to classify unknown samples based on correlations between the unknown data set and the original data set (often referred to as a training data set). This method was able to successfully classify test samples of C-4, though with a limited degree of certainty. The classification accuracy of the method was further improved by integrating the positive and negative ion data using a Bayesian approach. The TOF-SIMS data was combined with a second analytical method, LA-ICPMS, which was used to analyze elemental signatures in the C-4. The integrated data were able to classify test samples with a high degree of certainty. Results indicate that this Bayesian integrated approach constitutes a robust classification method that should be employable even in dirty samples collected in the field.
NASA Astrophysics Data System (ADS)
Baisden, W. T.
2011-12-01
Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in its ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times of less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions for isolating unequivocal pools, and the utility of additional C flux data to provide further constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and to reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970; three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth at different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of the fast-cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within the stabilized and passive pools from total estimates of NPP. In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs and the resulting dynamic changes in rapid and decadal soil C pools. In sites with good time-series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.
Complexity analysis of the turbulent environmental fluid flow time series
NASA Astrophysics Data System (ADS)
Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.
2014-02-01
We have used the Kolmogorov complexity, sample entropy and permutation entropy to quantify the degree of randomness in the river flow time series of two mountain rivers in Bosnia and Herzegovina, representing a turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series of two rivers (the Miljacka and the Bosnia) in the mountain part of their flow, and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv algorithm (LZA) (lower bound KLL and upper bound KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values of the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of the considered measures as a function of the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals, (a) 1926-1945, (b) 1946-1965, and (c) 1966-1990, and calculated the KLL, KLU, SE and PE values for the various time series in these subintervals. It is found that during the period 1946-1965 there is a decrease in complexity, with corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions on these two rivers after the Second World War, because of their use for water consumption, and (ii) climate change in recent times.
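For reference, permutation entropy, one of the measures used above, reduces to the Shannon entropy of ordinal-pattern frequencies. A short sketch on a synthetic flow-like series:

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, d=3):
    # frequency of each ordinal pattern of length d
    patterns = [tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)]
    counts = np.array([patterns.count(p) for p in permutations(range(d))], float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum() / math.log(math.factorial(d))   # normalized to [0, 1]

flow = np.sin(np.linspace(0, 20, 400)) + np.random.default_rng(9).normal(0, 0.2, 400)
print("normalized permutation entropy:", permutation_entropy(flow))
```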
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrivals). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed for two time series that are not even sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, for determining the value of a constant component of the data, and for treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
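The basic object is a DFT evaluated directly on the irregular sample times. A minimal sketch (in Python rather than the paper's FORTRAN, and omitting the paper's corrections for sampling distortion):

```python
import numpy as np

def uneven_dft(t, x, freqs):
    # X(f) = sum_k x_k * exp(-2*pi*i*f*t_k), one value per requested frequency
    return np.exp(-2j * np.pi * np.outer(freqs, t)) @ x

rng = np.random.default_rng(10)
t = np.sort(rng.uniform(0, 100, 300))              # irregular sampling times
x = np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.3, t.size)
freqs = np.linspace(0.001, 0.2, 500)
power = np.abs(uneven_dft(t, x, freqs)) ** 2
print("spectral peak near f =", freqs[power.argmax()])   # expect ~0.05
```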
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or of long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph of Gaussian shape with the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each parameter set. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
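For contrast with FORM, the Monte Carlo direct-sampling benchmark is straightforward to sketch; the parameter distributions and the failure criterion below are toy stand-ins for the calibrated distributions and the hydrodynamic model:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000                                             # simulated rainstorms
depth = rng.lognormal(mean=2.0, sigma=0.6, size=n)      # storm depth (mm)
duration = rng.gamma(2.0, 3.0, size=n)                  # storm duration (h)
peak = depth / duration * rng.uniform(1.5, 3.0, n)      # peak intensity (mm/h)

fails = (peak > 40) & (depth > 25)                      # toy surcharge criterion
p_f = fails.mean()
print(f"failure probability per storm: {p_f:.4f}")
print(f"return period: {1 / p_f:.0f} storms")
```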
Li, Xin; Kaattari, Stephen L; Vogelbein, Mary A; Vadas, George G; Unger, Michael A
2016-03-01
Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common methylated environmental derivatives, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3-5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses required only 10 min per sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by the biosensor and by GC-MS were compared. In most cases, the results yielded a good correlation between the methods. This indicates that the generic-antibody 2G8-based biosensor holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples.
Complexity multiscale asynchrony measure and behavior for interacting financial dynamics
NASA Astrophysics Data System (ADS)
Yang, Ge; Wang, Jun; Niu, Hongli
2016-08-01
A stochastic financial price process is proposed and investigated by means of a finite-range multitype contact dynamical system, in an attempt to study the nonlinear behaviors of real asset markets. A virus-spreading process in a finite-range multitype system is used to imitate the interacting behaviors of diverse investment attitudes in a financial market, and empirical research on descriptive statistics and autocorrelation behaviors of the return time series is performed for different values of the propagation rates. Multiscale entropy analysis is then applied to the original return series and to several transformed versions of it: the corresponding reversed series, the randomly shuffled series, the volatility shuffled series and the Zipf-type shuffled series. Furthermore, we propose and compare the multiscale cross-sample entropy and its modification, called composite multiscale cross-sample entropy. We apply them to study the asynchrony of pairs of time series at different time scales.
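A minimal sketch of multiscale cross-sample entropy under common conventions (embedding dimension m, tolerance r times the pooled standard deviation); the composite variant additionally averages the estimate over the possible coarse-graining offsets at each scale. Parameter values and the white-noise inputs are illustrative, not the paper's settings.

```python
import numpy as np

def cross_sample_entropy(x, y, m=2, r=0.15):
    """Cross-SampEn of two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    tol = r * np.std(np.concatenate([x, y]))
    def match_fraction(dim):
        xe = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        ye = np.array([y[j:j + dim] for j in range(len(y) - dim)])
        # Chebyshev distance between every x-template and every y-template
        d = np.max(np.abs(xe[:, None, :] - ye[None, :, :]), axis=2)
        return np.mean(d <= tol)
    b, a = match_fraction(m), match_fraction(m + 1)
    return -np.log(a / b)

def coarse_grain(x, scale):
    """Non-overlapping means: the coarse-graining step of multiscale entropy."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], float).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
x, y = rng.standard_normal(600), rng.standard_normal(600)  # stand-ins for return series
for scale in (1, 2, 5):
    print(scale, cross_sample_entropy(coarse_grain(x, scale), coarse_grain(y, scale)))
```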
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiaoran, E-mail: sxr0806@gmail.com; School of Mathematics and Statistics, The University of Western Australia, Crawley WA 6009; Small, Michael, E-mail: michael.small@uwa.edu.au
In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute-amplitude information with an ordinal-pattern characterization. Based on this construction, a network can be built directly from the given time series: segments corresponding to different symbol pairs are mapped to network nodes, and the temporal succession between nodes is represented by directed links. With this conversion, the dynamics underlying the time series is encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibit distinct cycle structure. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.
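A minimal sketch of the doubly symbolic construction: each sliding window is labeled by the pair (amplitude bin of its mean, ordinal pattern), and the temporal succession of labels defines weighted directed edges. The window length, bin count, use of the window mean for the amplitude symbol, and the logistic-map benchmark are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def symbolize(series, win=4, n_bins=3):
    s = np.asarray(series, float)
    edges = np.quantile(s, np.linspace(0, 1, n_bins + 1)[1:-1])  # interior bin edges
    labels = []
    for i in range(len(s) - win + 1):
        w = s[i:i + win]
        amp = int(np.searchsorted(edges, w.mean()))   # absolute-amplitude symbol
        pat = tuple(np.argsort(w))                    # ordinal-pattern symbol
        labels.append((amp, pat))
    return labels

def build_network(labels):
    """Weighted directed edges: transitions (node_i -> node_{i+1}) with counts."""
    return Counter(zip(labels[:-1], labels[1:]))

x = [0.4]
for _ in range(2000):                                 # chaotic logistic map as benchmark
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
edges = build_network(symbolize(x))
print(len(edges), edges.most_common(3))
```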
Wild, Lauren A; Chenoweth, Ellen M; Mueter, Franz J; Straley, Janice M
2018-05-18
Stable isotope analysis integrates diet information over a time period specific to the type of tissue sampled. For the metabolically active skin of free-ranging cetaceans, cells are generated at the basal layer of the skin and migrate outward until they eventually slough off, suggesting the potential for a dietary time series. Skin samples from cetaceans were analyzed using continuous-flow elemental analyzer isotope ratio mass spectrometry (EA-IRMS). We used ANOVAs to compare the variability of δ13C and δ15N values within and among layers and columns ("cores") of the skin of a fin, a humpback, and a sperm whale. We then used mixed-effects models to analyze isotopic variability among layers of 28 sperm whale skin samples, over the course of a season and among years. We found layer to be a significant predictor of δ13C values in the sperm whale's skin, and of δ15N values in the humpback whale's skin. There was no evidence for significant differences in δ15N or δ13C values among cores for any species. Mixed-effects models selected layer and day of the year as significant predictors of δ13C and δ15N values in sperm whale skin across individuals sampled during the summer months in the Gulf of Alaska. These results suggest that skin samples from cetaceans may be subsampled to reflect diet during a narrower time period; specifically, different layers of skin may contain a dietary time series. This underscores the importance of selecting an appropriate portion of skin to analyze based on the species and the objectives of the study.
Zhao, Yan; Bai, Linyan; Feng, Jianzhong; Lin, Xiaosong; Wang, Li; Xu, Lijun; Ran, Qiyun; Wang, Kui
2016-04-19
Multiple cropping provides China with a very important system of intensive cultivation, and can effectively enhance the efficiency of farmland use while improving regional food production and security. A multiple cropping index (MCI), which represents the intensity of multiple cropping and reflects the effects of climate change on agricultural production and cropping systems, often serves as a useful parameter. Therefore, monitoring the dynamic changes in the MCI of farmland over a large area using remote sensing data is essential. For this purpose, nearly 30 years of MCIs related to dry land in the North China Plain (NCP) were efficiently extracted from remotely sensed leaf area index (LAI) data from the Global LAnd Surface Satellite (GLASS). Next, the characteristics of the spatial-temporal change in MCI were analyzed. First, 2162 typical arable sample sites were selected based on a gridded spatial sampling strategy, and the LAI information was extracted for the samples. Second, the Savitzky-Golay filter was used to smooth the LAI time-series data of the samples, and the MCIs of the samples were obtained using a second-order difference algorithm. Finally, the geo-statistical Kriging method was employed to map the spatial distribution of the MCIs and to obtain a time-series dataset of the MCIs of dry land over the NCP. The results showed that all of the MCIs in the NCP exhibited an increasing trend over the entire study period and increased most rapidly from 1982 to 2002. Spatially, MCIs decreased from south to north; high MCIs were mainly concentrated in the relatively flat areas. In addition, the partial spatial changes of MCIs had clear geographical characteristics, with the largest change in Henan Province.
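A minimal sketch of the smoothing and cropping-cycle counting steps: smooth a synthetic annual LAI curve with a Savitzky-Golay filter, then locate growth peaks from sign changes of the first difference (a discrete second-order difference criterion). Window length, polynomial order and the two-cycle curve are illustrative, not the study's calibrated settings.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(46)                                   # ~8-day composites over one year
lai = (np.exp(-0.5 * ((t - 12) / 3.0) ** 2)         # two synthetic crop cycles
       + 0.8 * np.exp(-0.5 * ((t - 32) / 3.0) ** 2)
       + 0.05 * np.random.default_rng(3).standard_normal(t.size))

smooth = savgol_filter(lai, window_length=9, polyorder=2)
d1 = np.diff(smooth)
peaks = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0] + 1   # maxima: rising then falling
peaks = peaks[smooth[peaks] > 0.2]                  # ignore noise-level maxima
print(len(peaks))                                   # -> 2, a double-cropping signal
```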
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboul, S. H.; King, W. D.; Coleman, C. J.
2017-05-09
Two March 2017 Tank 15 slurry samples (HTF-15-17-28 and HTF-15-17-29) were collected during the second bulk waste removal campaign and submitted to SRNL for characterization. At SRNL, the two samples were combined and then characterized by a series of physical, elemental, radiological, and ionic analysis methods. Sludge settling as a function of time was also quantified. The characterization results reported in this document are consistent with expectations based upon waste type, process knowledge, comparisons between alternate analysis techniques, and comparisons with the characterization results obtained for the November 2016 Tank 15 slurry sample (the sample collected during the first bulk waste removal campaign).
Actinomycetal complex of light sierozem on the Kopet-Dag piedmont plain
NASA Astrophysics Data System (ADS)
Zenova, G. M.; Zvyagintsev, D. G.; Manucharova, N. A.; Stepanova, O. A.; Chernov, I. Yu.
2016-10-01
The population density of actinomycetes in the samples of light sierozem from the Kopet Dag piedmont plain (75 km from Ashkhabad, Turkmenistan) reaches hundreds of thousands of CFU per gram of soil. The actinomycetal complex is represented by two genera: Streptomyces and Micromonospora. Representatives of the Streptomyces genus predominate and comprise 73 to 87% of the actinomycetal complex. In one sample, representatives of the Micromonospora genus predominated in the complex (75%). The Streptomyces genus in the studied soil samples is represented by species from several sections and series: the species of the Helvolo-Flavus section, Helvolus series, represent the dominant component of the streptomycetal complex, and their portion is up to 77% of all isolated actinomycetes. The species of other sections and series are much less abundant. Thus, the percentage of the Cinereus Achromogenes section in the actinomycetal complex does not exceed 28%; representatives of the Albus section Albus series, Roseus section Lavendulae-Roseus series, and Imperfectus section are rare species; they have not been isolated from all the studied samples of light sierozem, and their portion does not exceed 10% of the actinomycetal complex.
Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.
Jiang, Zhixing; Zhang, David; Lu, Guangming
2018-04-19
Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also holds great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists and make diagnoses based on non-objective personal experience. With the development of pulse acquisition platforms and computerized analysis methods, objective study of pulse diagnosis can help TCM keep pace with modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal consisting of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with a fitting method using Gaussian mixture functions, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as auto-regression and Gaussian mixture models. The coefficients of the optimized DFS function, which is used to fit the arterial pressure waveforms, achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states.
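A minimal sketch of the least-squares DFS fit: build a design matrix of sine and cosine sub-components over one period and solve for the coefficients, which then serve as the feature vector. The harmonic count K and the synthetic pulse are illustrative assumptions, not the paper's preprocessing pipeline.

```python
import numpy as np

def fit_fourier_series(y, K=8):
    """Return DFS coefficients [a0, a1, b1, ..., aK, bK] fitted to one period."""
    n = len(y)
    t = 2 * np.pi * np.arange(n) / n
    cols = [np.ones(n)]
    for k in range(1, K + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    A = np.column_stack(cols)                 # design matrix of SC sub-components
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef                     # feature vector and fitted waveform

rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pulse = np.exp(-((t - 1.0) ** 2) / 0.1) + 0.4 * np.exp(-((t - 2.5) ** 2) / 0.3)
coef, fitted = fit_fourier_series(pulse + 0.01 * rng.standard_normal(t.size))
print(coef.shape, np.sqrt(np.mean((fitted - pulse) ** 2)))  # 17 features, small RMS error
```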
ERIC Educational Resources Information Center
Kelley, Todd R.; Wicklein, Robert C.
2009-01-01
Based on the efforts to infuse engineering practices within the technology education curriculum, it is appropriate now to investigate how technology education teachers are assessing engineering design activities within their classrooms. This descriptive study drew a full sample of high school technology teachers from the current International…
ERIC Educational Resources Information Center
Heilmann, John J.; Rojas, Raúl; Iglesias, Aquiles; Miller, Jon F.
2016-01-01
Background: Language sampling, recognized as a gold standard for expressive language assessment, is often elicited using wordless picture storybooks. A series of wordless storybooks, commonly referred to as "Frog" stories, have been frequently used in language-based research with children from around the globe. Aims: To examine the…
ERIC Educational Resources Information Center
Li, Lijuan; Hallinger, Philip; Kennedy, Kerry John; Walker, Allan
2017-01-01
This study tests mediated principal leadership effects on teacher professional learning through collegial trust, communication and collaboration in Hong Kong primary schools. It is based on a series of single mediator studies, and uses the same convenience sample of 970 teachers from 32 local primary schools. It also adopts regression-based…
Mars Sample Handling Protocol Workshop Series: Workshop 4
NASA Technical Reports Server (NTRS)
Race Margaret S. (Editor); DeVincenzi, Donald L. (Editor); Rummel, John D. (Editor); Acevedo, Sara E. (Editor)
2001-01-01
In preparation for missions to Mars that will involve the return of samples to Earth, it will be necessary to prepare for the receiving, handling, testing, distributing, and archiving of martian materials here on Earth. Previous groups and committees have studied selected aspects of sample return activities, but specific detailed protocols for the handling and testing of returned samples must still be developed. To further refine the requirements for sample hazard testing and to develop the criteria for subsequent release of sample materials from quarantine, the NASA Planetary Protection Officer convened a series of workshops in 2000-2001. The overall objective of the Workshop Series was to produce a Draft Protocol by which returned martian sample materials can be assessed for biological hazards and examined for evidence of life (extant or extinct) while safeguarding the purity of the samples from possible terrestrial contamination. This report also provides a record of the proceedings of Workshop 4, the final Workshop of the Series, which was held in Arlington, Virginia, June 5-7, 2001. During Workshop 4, the sub-groups were provided with a draft of the protocol compiled in May 2001 from the work done at prior Workshops in the Series. Then eight sub-groups were formed to discuss the following assigned topics:
- Review and Assess the Draft Protocol for Physical/Chemical Testing
- Review and Assess the Draft Protocol for Life Detection Testing
- Review and Assess the Draft Protocol for Biohazard Testing
- Environmental and Health/Monitoring and Safety Issues
- Requirements of the Draft Protocol for Facilities and Equipment
- Contingency Planning for Different Outcomes of the Draft Protocol
- Personnel Management Considerations in Implementation of the Draft Protocol
- Draft Protocol Implementation Process and Update Concepts
This report provides the first complete presentation of the Draft Protocol for Mars Sample Handling to meet planetary protection needs. This Draft Protocol, which was compiled from deliberations and recommendations from earlier Workshops in the Series, represents a consensus that emerged from the discussions of all the sub-groups assembled over the course of the five Workshops of the Series. These discussions converged on a conceptual approach to sample handling, as well as on specific analytical requirements. Discussions also identified important issues requiring attention, as well as research and development needed for protocol implementation.
Determining an Appropriate Sampling Method. School Accountability Series. Monograph 3.
ERIC Educational Resources Information Center
McCallon, Earl; McClaran, Rutledge
This is one of a series of eight short monographs intended to aid practicing educators in planning and conducting accountability programs in schools. This booklet discusses how to determine a sampling method that is appropriate to the objectives of a particular research or evaluation effort. Short sections focus in turn on why and when to sample,…
Assessing paleo-biodiversity using low proxy influx.
Blarquez, Olivier; Finsinger, Walter; Carcaillet, Christopher
2013-01-01
We developed an algorithm to improve richness assessment based on paleoecological series, taking into account sample features such as temporal resolution and volume. The new method can be applied to both high- and low-count-size proxies, i.e. pollen and plant macroremain records, respectively. While pollen is generally abundant in sediments, plant macroremains are generally rare, making it difficult to compute paleo-biodiversity indices. Our approach uses resampled macroremain influxes that enable the computation of the rarefaction index for low-influx records. The raw counts are resampled to a constant resolution and sample volume by interpolating initial sample ages at a constant time interval using the age-depth model. Then, the contribution of the initial counts and volumes to each interpolated sample is determined by calculating a proportion matrix, which is in turn used to obtain regularly spaced time series of pollen and macroremain influx. We applied this algorithm to sedimentary data from a subalpine lake in the European Alps. The reconstructed total floristic richness at the study site increased gradually as both pollen and macroremain records indicated a decrease in the relative abundance of shrubs and an increase in trees from 11,000 to 7,000 cal BP. This points to an ecosystem change that favored trees over shrubs, whereas herb abundance remained stable. Since 6,000 cal BP, local richness decreased based on plant macroremains, while pollen-based richness was stable. The reconstructed richness and evenness are interrelated, confirming the difficulty of distinguishing these two aspects in paleo-biodiversity studies. The present study shows that low-influx bio-proxy records (here, macroremains) can be used to reconstruct stand diversity and address ecological issues. These developments on macroremain and pollen records may help bridge the gap between paleoecology and biodiversity studies.
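Once influxes are resampled to a common basis, richness can be standardized by rarefaction. A minimal sketch of the classical rarefaction index (expected taxon count in a subsample of fixed size); the counts and subsample size are illustrative.

```python
import numpy as np
from math import comb

def rarefied_richness(counts, n):
    """Hurlbert's rarefaction: expected number of taxa in a subsample of size n."""
    counts = np.asarray(counts, dtype=int)
    N = counts.sum()
    return sum(1.0 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

sample = [12, 5, 3, 1, 1]               # taxon counts in one interpolated level
print(rarefied_richness(sample, n=10))  # richness standardized to 10 counted items
```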
Reversible susceptibility studies of magnetization switching in FeCoB synthetic antiferromagnets
NASA Astrophysics Data System (ADS)
Radu, Cosmin; Cimpoesu, Dorin; Girt, Erol; Ju, Ganping; Stancu, Alexandru; Spinu, Leonard
2007-05-01
In this paper we present a study of the switching characteristics of a series of synthetic antiferromagnet (SAF) structures using reversible susceptibility experiments. Three series of SAF samples were considered in our study, with (t1, t2), the thicknesses of the FeCoB layers, of (80 nm, 80 nm), (50 nm, 50 nm), and (80 nm, 20 nm), and with the Ru interlayer ranging from 0 to 2 nm. A vector vibrating sample magnetometer was used to measure the hysteresis loops along different directions in the plane of the samples. The reversible susceptibility experiments were performed using a resonant method based on a tunnel diode oscillator. We showed that the switching peaks in the susceptibility versus field plots obtained for different orientations of the applied dc field can be used to construct the switching diagram of the SAF structure. The critical curve constitutes the fingerprint of the switching behavior and provides information about the micromagnetic and structural properties of the SAF, which is an essential component of modern magnetic random access memories.
Isotopic composition of atmospheric moisture from pan water evaporation measurements.
Devi, Pooja; Jain, Ashok Kumar; Rao, M Someshwer; Kumar, Bhishm
2015-01-01
A continuous and reliable time series of the stable isotopic composition of atmospheric moisture is an important requirement for the wider applicability of isotope mass balance methods in atmospheric and water balance studies. This requires routine sampling of atmospheric moisture by an appropriate technique and analysis of the moisture for its isotopic composition. We have therefore used a much simpler method, based on an isotope mass balance approach, to derive the isotopic composition of atmospheric moisture using a class-A drying evaporation pan. We carried out the study by collecting water samples from a class-A drying evaporation pan and by collecting atmospheric moisture using the cryogenic trap method at the National Institute of Hydrology, Roorkee, India, during a pre-monsoon period. We compared the isotopic composition of atmospheric moisture obtained by the class-A drying evaporation pan method with that from the cryogenic trap method. The results obtained from the evaporation pan water compare well with the cryogenic-based method. Thus, the study establishes a cost-effective means of maintaining time series of the isotopic composition of atmospheric moisture at meteorological observatories. The conclusions drawn in the present study are based on experiments conducted at Roorkee, India, and may be examined in other regions for general applicability.
Regolith Volatile Recovery at Simulated Lunar Environments
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Paulsen, Gale; Zacny, Kris; Schmidt, Sherry; Boucher, Dale
2016-01-01
Lunar Polar Volatiles: Permanently shadowed craters at the lunar poles contain water, about 5 wt% according to LCROSS. There is interest in this water for ISRU applications, and a desire to ground-truth it using surface prospecting (e.g., Resource Prospector and RESOLVE). The open question is how to access subsurface water resources and accurately measure their quantity, since excavation operations and exposure to the lunar environment may affect the results. Volatile capture tests: A series of ground-based dirty thermal vacuum tests are being conducted to better understand subsurface sampling operations, covering sample removal and transfer, volatiles loss during sampling operations, concept of operations, and instrumentation. This presentation is a progress report on volatiles-capture results from these tests with lunar polar drill prototype hardware.
Holocene monsoon variability as resolved in small complex networks from palaeodata
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Breitenbach, S.; Kurths, J.
2012-04-01
To understand the impacts of Holocene precipitation and/or temperature changes in the spatially extensive and complex region of Asia, it is promising to combine the information from palaeo archives such as stalagmites, tree rings and marine sediment records from India and China. To this end, complex networks present a powerful and increasingly popular tool for the description and analysis of interactions within complex, spatially extended systems in the geosciences, and therefore appear well suited to this task. Such a network is typically constructed by thresholding a similarity matrix, which in turn is based on a set of time series representing the (Earth) system dynamics at different locations. Looking into the pre-instrumental past, information about the system's processes, and thus its state, is available only through the reconstructed time series, which most often are irregularly sampled in time and space. Interpolation techniques are often used for signal reconstruction, but they introduce additional errors, especially when records have large gaps. We have recently developed and extensively tested methods to quantify linear (Pearson correlation) and non-linear (mutual information) similarity in the presence of heterogeneous and irregular sampling. To illustrate our approach, we derive small networks from significantly correlated, linked time series that are supposed to capture the underlying Asian monsoon dynamics. We assess and discuss whether and where links and directionalities in these networks from irregularly sampled time series can be soundly detected. Finally, we investigate the role of Northern Hemisphere temperature with respect to the correlation patterns and find that those derived from warm phases (e.g. the Medieval Warm Period) are significantly different from patterns found in cold phases (e.g. the Little Ice Age).
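A minimal sketch of kernel-based correlation for irregularly sampled series, in the spirit of the similarity estimators mentioned above: products of observation pairs are weighted by a Gaussian kernel on their time separation, avoiding interpolation. The kernel width h and the synthetic records are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel_correlation(tx, x, ty, y, h):
    """Pearson-like correlation for two irregularly sampled series."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]                 # all pairwise time separations
    w = np.exp(-0.5 * (dt / h) ** 2)               # weights concentrated near lag zero
    return np.sum(w * x[:, None] * y[None, :]) / np.sum(w)

rng = np.random.default_rng(5)
tx, ty = np.sort(rng.uniform(0, 100, 80)), np.sort(rng.uniform(0, 100, 60))
signal = lambda t: np.sin(2 * np.pi * t / 25.0)
x = signal(tx) + 0.3 * rng.standard_normal(tx.size)
y = signal(ty) + 0.3 * rng.standard_normal(ty.size)
print(gaussian_kernel_correlation(tx, x, ty, y, h=2.0))  # approaches the true correlation
```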
A probabilistic seismic risk assessment procedure for nuclear power plants: (II) Application
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
This paper presents the procedures and results of intensity- and time-based seismic risk assessments of a sample nuclear power plant (NPP) to demonstrate the risk-assessment methodology proposed in its companion paper. The intensity-based assessments include three sets of sensitivity studies to identify the impact of the following factors on the seismic vulnerability of the sample NPP, namely: (1) the description of fragility curves for primary and secondary components of NPPs, (2) the number of simulations of NPP response required for risk assessment, and (3) the correlation in responses between NPP components. The time-based assessment is performed as a series of intensity-based assessments. The studies illustrate the utility of the response-based fragility curves and the inclusion of the correlation in the responses of NPP components directly in the risk computation.
Burby, Joshua W.; Lacker, Daniel
2016-01-01
Systems as diverse as the interacting species in a community, alleles at a genetic locus, and companies in a market are characterized by competition (over resources, space, capital, etc.) and adaptation. Neutral theory, built around the hypothesis that individual performance is independent of group membership, has found utility across the disciplines of ecology, population genetics, and economics, both because of the success of the neutral hypothesis in predicting system properties and because deviations from these predictions provide information about the underlying dynamics. However, most tests of neutrality are weak, based on static system properties such as species-abundance distributions or the number of singletons in a sample. Time-series data provide a window onto a system's dynamics, and should furnish tests of the neutral hypothesis that are more powerful in detecting deviations from neutrality and more informative about the type of competitive asymmetry that drives the deviation. Here, we present a neutrality test for time-series data. We apply this test to several microbial time series and financial time series and find that most of these systems are not neutral. Our test isolates the covariance structure of neutral competition, thus facilitating further exploration of the nature of asymmetry in the covariance structure of competitive systems. Much as neutrality tests from population genetics that use relative abundance distributions have enabled researchers to scan entire genomes for genes under selection, we anticipate our time-series test will be useful for quick significance tests of neutrality across a range of ecological, economic, and sociological systems for which time-series data are available. Future work can use our test to categorize and compare the dynamic fingerprints of particular competitive asymmetries (frequency dependence, volatility smiles, etc.) to improve forecasting and management of complex adaptive systems. PMID:27689714
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and for this reason the procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
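A minimal sketch of the decomposition step using the STL implementation in statsmodels (assumed available); the synthetic daily pollen series and the period choice are illustrative, and the PLSR modeling of the residuals is omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(6)
days = pd.date_range("2006-01-01", "2013-12-31", freq="D")
doy = days.dayofyear.to_numpy()
pollen = (200 * np.exp(-0.5 * ((doy - 150) / 20.0) ** 2)  # late-spring pollen peak
          + rng.gamma(2.0, 2.0, days.size))               # stochastic residual part

res = STL(pd.Series(pollen, index=days), period=365).fit()
print(res.seasonal.max(), res.resid.std())   # seasonal component vs residual spread
```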
Solving satisfiability problems using a novel microarray-based DNA computer.
Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning
2007-01-01
An algorithm based on a modified sticker model, combined with an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause per step, and eventually solve the entire Boolean formula step by step. Neither time-consuming sample preparation procedures nor delicate sample-application equipment were required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing process, so the proposed method should be useful in dealing with large-scale problems.
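A minimal in-silico sketch of the build-by-clauses idea (a plain Python analogue of the logic, not the DNA chemistry): partial assignments are extended one clause at a time, so only clause-satisfying combinations are ever constructed.

```python
def solve_by_clauses(clauses):
    """Clauses as lists of signed ints: 3 means x3=True, -3 means x3=False."""
    partials = [()]                              # partial assignments as sorted tuples
    for clause in clauses:                       # one clause satisfied per step
        extended = set()
        for p in partials:
            d = dict(p)
            for lit in clause:
                var, val = abs(lit), lit > 0
                if d.get(var, val) == val:       # only consistent extensions survive
                    q = dict(d)
                    q[var] = val
                    extended.add(tuple(sorted(q.items())))
        partials = list(extended)                # unwanted combinations are never built
    return [dict(p) for p in partials]           # unassigned variables remain free

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve_by_clauses([[1, 2], [-1, 3], [-2, -3]]))
```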
Sampling rare fluctuations of discrete-time Markov chains
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-03-01
We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
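A minimal direct-sampling sketch (not the paper's scheme) of what "fluctuations of a dynamical quantity extensive in the chain length" means in practice: estimate the distribution, and hence an empirical rate function, of the fraction of steps a two-state chain spends in state 1. Transition probabilities and lengths are illustrative.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                     # row-stochastic transition matrix
N, trials = 200, 100_000
rng = np.random.default_rng(7)

states = np.zeros(trials, dtype=int)           # one trajectory per entry
counts = np.zeros(trials)
for _ in range(N):                             # advance all trajectories one step
    u = rng.random(trials)
    states = (u < P[states, 1]).astype(int)    # jump to state 1 with prob P[s, 1]
    counts += states

a = counts / N                                 # time-averaged occupation of state 1
hist, edges = np.histogram(a, bins=40, density=True)
widths, centers = np.diff(edges), 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
rate = -np.log(hist[mask] * widths[mask]) / N  # J(a) ~ -(1/N) ln P_N(a)
print(centers[mask][np.argmin(rate)])          # minimum near the steady state, pi_1 = 1/3
```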
Aydın, Ahmet Alper; Ilberg, Vladimir
2016-01-20
A series of gelatinized polyvinyl alcohol (PVA):starch blends were prepared with various polyol-based plasticizers at 5 wt%, 15 wt% and 25 wt% ratios via the solution casting method. The obtained films were analyzed by Fourier transform infrared (FT-IR) spectroscopy, differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). Remarkable changes were observed in the glass-transition temperature (Tg) and thermal stability of the samples containing varying concentrations of different plasticizers, and these are discussed in detail with respect to the thermal and chemical analyses conducted. The observed order of Tg depression for the samples containing 15 wt% plasticizer is 1,4-butanediol, followed by 1,2,6-hexanetriol, pentaerythritol, xylitol and mannitol, which is similar to the sequence of the thermal stability changes of the samples.
Alchemical prediction of hydration free energies for SAMPL
Mobley, David L.; Liu, Shaui; Cerutti, David S.; Swope, William C.; Rice, Julia E.
2013-01-01
Hydration free energy calculations have become important tests of force fields. Alchemical free energy calculations based on molecular dynamics simulations provide a rigorous way to calculate these free energies for a particular force field, given sufficient sampling. Here, we report results of alchemical hydration free energy calculations for the set of small molecules comprising the 2011 Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) challenge. Our calculations are largely based on the Generalized Amber Force Field (GAFF) with several different charge models, and we achieved RMS errors in the 1.4-2.2 kcal/mol range depending on charge model, marginally higher than what we typically observed in previous studies [1-5]. The test set consists of ethane, biphenyl, and a dibenzyl dioxin, as well as a series of chlorinated derivatives of each. We found that, for this set, using high-quality partial charges from MP2/cc-PVTZ SCRF RESP fits provided marginally improved agreement with experiment over using AM1-BCC partial charges as we have more typically done, in keeping with our recent findings [5]. Switching to OPLS Lennard-Jones parameters with AM1-BCC charges also improves agreement with experiment. We also find a number of chemical trends within each molecular series which we can explain, but there are also some surprises, including some that are captured by the calculations and some that are not. PMID:22198475
Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory
2016-05-12
…proved. Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes… Keywords: mathematical statistics; time series; Markov chains; random…
Multivariate exploration of non-intrusive load monitoring via spatiotemporal pattern network
Liu, Chao; Akintayo, Adedotun; Jiang, Zhanhong; ...
2017-12-18
Non-intrusive load monitoring (NILM) of electrical demand for the purpose of identifying load components has thus far mostly been studied using univariate data, e.g., using only whole-building electricity consumption time series to identify a certain type of end-use such as lighting load. However, using additional variables in the form of multivariate time-series data may provide more information in terms of extracting distinguishable features in the context of energy disaggregation. In this work, a novel probabilistic graphical modeling approach, namely the spatiotemporal pattern network (STPN), is proposed for energy disaggregation using multivariate time-series data. The STPN framework is shown to be capable of handling diverse types of multivariate time series to improve energy disaggregation performance. The technique outperforms the state-of-the-art factorial hidden Markov model (FHMM) and combinatorial optimization (CO) techniques in multiple real-life test cases. Furthermore, based on two homes' aggregate electric consumption data, a similarity metric is defined for the energy disaggregation of one home using a trained model based on the other home (i.e., the out-of-sample case). The proposed similarity metric allows us to enhance scalability via learning supervised models for a few homes and deploying such models to many other similar but unmodeled homes with significantly high disaggregation accuracy.
Liu, Datong; Peng, Yu; Peng, Xiyuan
2018-01-01
Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide an uncertainty representation, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially well suited to anomaly detection in sensing series. Generally, one key parameter of such prediction models is the coverage probability (CP), which controls the judgement threshold for the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; the optimal CP is then derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on a sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
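A minimal sketch of choosing the CP by optimizing a Youden-style index over a grid of cut-offs; a plain grid search stands in for the paper's simulated-annealing step, and the labels, scores and index weighting are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
normal = rng.normal(0.0, 1.0, 500)    # standardized residuals of normal samples
anomal = rng.normal(4.0, 1.5, 25)     # standardized residuals of anomalies

def rates(cp):
    half = norm.ppf(0.5 + cp / 2.0)   # half-width of a CP prediction interval
    tpr = np.mean(np.abs(anomal) > half)   # true anomalies caught
    fpr = np.mean(np.abs(normal) > half)   # normal samples falsely flagged
    return tpr, fpr

cps = np.linspace(0.80, 0.999, 200)
youden = [rates(cp)[0] - rates(cp)[1] for cp in cps]   # TPR - FPR at each CP
print(cps[int(np.argmax(youden))])    # CP with the best detection trade-off
```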
Point detection of bacterial and viral pathogens using oral samples
NASA Astrophysics Data System (ADS)
Malamud, Daniel
2008-04-01
Oral samples, including saliva, offer an attractive alternative to serum or urine for diagnostic testing. This is particularly true for point-of-use detection systems. The various types of oral samples that have been reported in the literature are presented here along with the wide variety of analytes that have been measured in saliva and other oral samples. The paper focuses on utilizing point-detection of infectious disease agents, and presents work from our group on a rapid test for multiple bacterial and viral pathogens by monitoring a series of targets. It is thus possible in a single oral sample to identify multiple pathogens based on specific antigens, nucleic acids, and host antibodies to those pathogens. The value of such a technology for detecting agents of bioterrorism at remote sites is discussed.
Lim, C S; Shaharuddin, M S; Sam, W Y
2012-11-21
A cross-sectional study was conducted to estimate the risk of exposure to lead via the tap water ingestion pathway for the population of Seri Kembangan (SK). Using a purposive sampling method, 100 respondents who fulfilled the inclusion criteria were selected from different housing areas of SK based on the geographical population distribution. Residents with filtration systems installed were excluded from the study. Questionnaires were administered to determine water consumption-related information and demographics. Two water samples (first-flushed and fully-flushed) were collected from the kitchen tap of each household using HDPE bottles. A total of 200 water samples were collected, and lead concentrations were determined using a Graphite Furnace Atomic Absorption Spectrophotometer (GFAAS). The mean lead concentration was 3.041 ± SD 6.967 µg/L in first-flushed samples and 1.064 ± SD 1.103 µg/L in fully-flushed samples. Of the first-flushed samples, four (4) exceeded the National Drinking Water Quality Standard (NDWQS) lead limit of 10 µg/L, while none of the fully-flushed samples had a lead concentration exceeding the limit. There was a significant difference between first-flushed and fully-flushed samples, and flushing elicited a significant change in the lead concentration in the water (Z = -5.880, p<0.05). It was also found that the lead concentration in both first-flushed and fully-flushed samples was not significantly different across the nine (9) areas of Seri Kembangan (p>0.05). Serdang Jaya was found to have the highest lead concentration in first-flushed water (mean = 10.44 ± SD 17.83 µg/L), while Taman Universiti Indah had the highest lead concentration in fully-flushed water (mean = 1.45 ± SD 1.83 µg/L). The exposure assessment found that the mean chronic daily intake (CDI) was 0.028 ± SD 0.034 µg day-1 kg-1. None of the hazard quotient (HQ) values was found to be greater than 1. The overall quality of the water supply in SK was satisfactory, because most of the parameters tested in this study were within the permissible range and only a few samples exceeded the standard values for lead and pH. The non-carcinogenic risk attributed to ingestion of lead in SK tap water was found to be negligible.
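A minimal sketch of the exposure arithmetic: chronic daily intake (CDI) from a tap-water lead concentration, and a hazard quotient (HQ) against an oral reference dose. The intake rate, body weight and reference dose below are illustrative assumptions, not the study's surveyed values.

```python
def chronic_daily_intake(conc_ug_per_L, intake_L_per_day, body_weight_kg):
    """CDI in ug/kg/day = concentration x ingestion rate / body weight."""
    return conc_ug_per_L * intake_L_per_day / body_weight_kg

cdi = chronic_daily_intake(conc_ug_per_L=1.064,   # mean fully-flushed Pb level
                           intake_L_per_day=2.0,  # assumed adult ingestion rate
                           body_weight_kg=60.0)   # assumed body weight
rfd = 3.5                                         # assumed oral reference dose, ug/kg/day
print(cdi, cdi / rfd < 1.0)                       # HQ < 1 -> negligible risk
```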
Occupational Commonalities: A Base for Course Construction. Paper No. 2219, Journal Series.
ERIC Educational Resources Information Center
Dillon, Roy D.; Horner, James T.
To determine competencies and activities used by workers in a cross section of the statewide labor force, data were obtained from a random sample of 1,500 employed persons drawn from 14 purposively selected index counties in Nebraska. An interview-questionnaire procedure yielded an 87.7 percent response to a checklist of 144 activities, duties,…
We the People: Women and Men in the United States. Census 2000 Special Reports. CENSR-20.
ERIC Educational Resources Information Center
Spraggins, Renee E.
2005-01-01
This report provides a portrait of women in the United States and highlights comparisons with men at the national level. It is part of the Census 2000 Special Reports series that presents several demographic, social, and economic characteristics collected from Census 2000. The data contained in this report are based on the samples of households…
ERIC Educational Resources Information Center
Chapman, Anne
A 2-year curriculum transformation project for 12 humanities teachers from seven independent schools sought to help pre-college teachers integrate new information and insights based on women's studies and gender scholarship into their teaching. Topics covered during the workshops included the history of concern with women and gender; engenderment…
Practical Issues in Field Based Testing of Oral Reading Fluency at Upper Elementary Grades
ERIC Educational Resources Information Center
Duesbery, Luke; Braun-Monegan, Jenelle; Werblow, Jacob; Braun, Drew
2012-01-01
In this series of studies, we explore the ideal frequency, duration, and relative effectiveness of measuring oral reading fluency. In study one, a sample of 389 fifth graders read out loud for 1 min and then took a traditional state-level standardized reading test. Results suggest administering three passages and using the median yields the…
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series, based on measures of randomness or complexity of the data, is introduced in the context of quasar reverberation mapping. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size-luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
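A minimal sketch of the von Neumann idea: shift one light curve by a trial lag, merge the two curves in time order, and choose the lag that minimizes the mean-square successive difference of the merged series (the lag making it smoothest). The synthetic random-walk light curves and the 5-unit delay are illustrative, and the paper's optimized scheme adds refinements not shown here.

```python
import numpy as np

def von_neumann(t, f):
    order = np.argsort(t)
    d = np.diff(f[order])
    return np.mean(d ** 2)                       # mean-square successive difference

def estimate_lag(t1, f1, t2, f2, lags):
    f1s = (f1 - f1.mean()) / f1.std()            # put both curves on one scale
    f2s = (f2 - f2.mean()) / f2.std()
    scores = [von_neumann(np.concatenate([t1, t2 - lag]),
                          np.concatenate([f1s, f2s])) for lag in lags]
    return lags[int(np.argmin(scores))]          # best lag = smoothest merged curve

rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 200, 120))            # irregular sampling times
drive = np.cumsum(rng.standard_normal(2000))     # random-walk continuum signal
grid = np.arange(2000) * 0.1
cont = np.interp(t, grid, drive) + 0.1 * rng.standard_normal(t.size)
line = np.interp(t, grid + 5.0, drive) + 0.1 * rng.standard_normal(t.size)  # delayed echo
print(estimate_lag(t, cont, t, line, np.linspace(-15, 15, 121)))  # recovers ~5
```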
Ratio-based estimators for a change point in persistence.
Halunga, Andreea G; Osborn, Denise R
2012-11-01
We study estimation of the date of change in persistence, from I(0) to I(1) or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with an upper bound given by the true break date when persistence changes from I(0) to I(1). A Monte Carlo study confirms the large-sample downward bias and also finds substantial biases in moderately sized samples, partly due to properties at the end points of the search interval.
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
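A minimal sketch of nested aleatory-epistemic propagation: an outer loop samples epistemic parameters (here, an uncertain-but-fixed coefficient), an inner loop samples aleatory variability, and the spread of inner-loop statistics reflects the epistemic uncertainty. The toy model g and all parameter ranges are illustrative stand-ins, not the challenge-problem models.

```python
import numpy as np

rng = np.random.default_rng(10)

def g(theta, x):
    """Toy output quantity depending on epistemic theta and aleatory x."""
    return theta * x + x ** 2

outer_stats = []
for _ in range(200):                                # epistemic (outer) loop
    theta = rng.uniform(0.5, 1.5)                   # epistemic interval, no distribution claimed
    x = rng.normal(0.0, 1.0, 2000)                  # aleatory (inner) samples
    outer_stats.append(np.mean(g(theta, x) > 3.0))  # inner-loop exceedance probability
print(min(outer_stats), max(outer_stats))           # epistemic bounds on the probability
```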
A Comparison of the Plastic Flow Response of a Powder Metallurgy Nickel Base Superalloy (Postprint)
2017-04-01
…average diameter of 315 nm. The γ′-solvus temperature, Tγ′, was 1430 K (1157 °C). As determined by a series of long-time heat treatments followed… obtained in a mode of simple shear via the torsion of tubular samples. Similar in design to that employed by various researchers in the 1980s, [28,29] the…
[A capillary blood flow velocity detection system based on linear array charge-coupled devices].
Zhou, Houming; Wang, Ruofeng; Dang, Qi; Yang, Li; Wang, Xiang
2017-12-01
In order to detect the flow characteristics of blood samples in a capillary, this paper introduces a blood flow velocity measurement system based on a field-programmable gate array (FPGA), a linear charge-coupled device (CCD) and personal computer (PC) software. Based on analysis of the TCD1703C and AD9826 device data sheets, the Verilog hardware description language was used to design and simulate the driver. Image signal acquisition and extraction of the real-time edge information of the blood sample were carried out synchronously in the FPGA. A differential operation over a series of discrete displacements was then performed to track the displacement of each blood sample, so that the sample flow velocity could be obtained. Finally, the feasibility of the blood flow velocity detection system was verified by simulation and debugging. After plotting the flow velocity curve and examining the velocity characteristics, the significance of measuring blood flow velocity is discussed. The results show that the measurement by this system is less time-consuming and less complex than other flow-rate monitoring schemes.
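A minimal software sketch of the displacement-tracking idea: estimate the shift between two successive CCD line scans by cross-correlation, then convert shift per line interval to velocity. The pixel pitch, line rate and synthetic edge profile are illustrative assumptions, not the instrument's specification.

```python
import numpy as np

def shift_between_scans(scan_a, scan_b):
    """Displacement (in pixels) maximizing the cross-correlation of two scans."""
    a = scan_a - scan_a.mean()
    b = scan_b - scan_b.mean()
    corr = np.correlate(b, a, mode="full")
    return np.argmax(corr) - (len(a) - 1)        # lag of the correlation peak

rng = np.random.default_rng(11)
edge = np.clip(np.linspace(-5, 5, 512), 0, 1)    # synthetic sample edge profile
scan1 = edge + 0.02 * rng.standard_normal(512)
scan2 = np.roll(edge, 7) + 0.02 * rng.standard_normal(512)

pixels = shift_between_scans(scan1, scan2)       # -> 7 pixels between scans
pixel_pitch_um, line_rate_hz = 7.0, 1000.0       # assumed CCD parameters
print(pixels * pixel_pitch_um * line_rate_hz * 1e-3, "mm/s")
```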
Dai, Liping; Cheng, Jing; Matsadiq, Guzalnur; Liu, Lu; Li, Jun-Kai
2010-08-03
In the proposed method, an extraction solvent with lower toxicity and density than the solvents typically used in dispersive liquid-liquid microextraction was used to extract seven polychlorinated biphenyls (PCBs) from aqueous samples. Because of the density and melting point of the extraction solvent, the extract, which forms a layer on top of the aqueous sample, can be collected by solidifying it at low temperature, and the solidified phase can then be easily removed from the aqueous phase. Based on preliminary studies, 1-undecanol was selected as the extraction solvent, and a series of parameters affecting the extraction efficiency were systematically investigated. Under the optimized conditions, enrichment factors for the PCBs ranged between 494 and 606. Based on a signal-to-noise ratio of 3, the limits of detection for the method ranged between 3.3 and 5.4 ng/L. Good linearity, reproducibility and recovery were also obtained.
A Draft Test Protocol for Detecting Possible Biohazards in Martian Samples Returned to Earth
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Race, Margaret S.; DeVincenzi, Donald L.; Schad, P. Jackson; Stabekis, Pericles D.; Viso, Michel; Acevedo, Sara E.
2002-01-01
This document presents the first complete draft of a protocol for detecting possible biohazards in Mars samples returned to Earth; it is the final product of the Mars Sample Handling Protocol Workshop Series, convened in 2000-2001 by NASA's Planetary Protection Officer. The goal of the five-workshop Series was to develop a comprehensive protocol by which returned martian sample materials could be assessed for the presence of any biological hazard(s) while safeguarding the purity of the samples from possible terrestrial contamination.
Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study
NASA Technical Reports Server (NTRS)
Michaels, Anthony F.; Knap, Anthony H.
1992-01-01
Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.
The US Geological Survey, digital spectral reflectance library: version 1: 0.2 to 3.0 microns
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; King, Trude V. V.; Gallagher, Andrea J.; Calvin, Wendy M.
1993-01-01
We have developed a digital reflectance spectral library, with management and spectral analysis software. The library includes 500 spectra of 447 samples (some samples include a series of grain sizes) measured from approximately 0.2 to 3.0 microns. The spectral resolution (Full Width Half Maximum) of the reflectance data is less than or equal to 4 nm in the visible (0.2-0.8 microns) and less than or equal to 10 nm in the NIR (0.8-2.35 microns). All spectra were corrected to absolute reflectance using an NBS Halon standard. Library management software lets users search on parameters (e.g. chemical formulae, chemical analyses, purity of samples, mineral groups, etc.) as well as spectral features. Minerals from the sulfide, oxide, hydroxide, halide, carbonate, nitrate, borate, phosphate, and silicate groups are represented. X-ray and chemical analyses are tabulated for many of the entries, and all samples have been evaluated for spectral purity. The library also contains end and intermediate members for the olivine, garnet, scapolite, montmorillonite, muscovite, jarosite, and alunite solid-solution series. We have included representative spectra of H2O ice, kerogen, ammonium-bearing minerals, rare-earth oxides, desert varnish coatings, a kaolinite crystallinity series, a kaolinite-smectite series, a zeolite series, and an extensive evaporite series. Because of the importance of vegetation to climate-change studies, we have included 17 spectra of tree leaves, bushes, and grasses.
Characterization of Volatiles Loss from Soil Samples at Lunar Environments
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Smith, Jim; Roush, Ted; Colaprete, Anthony; Zacny, Kris; Paulsen, Gale; Wang, Alex; Paz, Aaron
2017-01-01
Resource Prospector (RP) Integrated Thermal Vacuum Test Program: a series of ground-based dirty thermal vacuum tests is being conducted to better understand subsurface sampling operations for RP, including volatiles loss during sampling operations, hardware performance, sample removal and transfer, concept of operations, and instrumentation. Five test campaigns over five years have been conducted with RP hardware, with advancing hardware designs and additional RP subsystems. Four years of volatiles sampling have used flight-forward regolith sampling hardware to empirically determine volatile retention at lunar-relevant conditions, to improve theoretical predictions with the data, to determine the driving variables for retention, and to bound the water loss potential so as to define measurement uncertainties. The main goal of this talk is to introduce our approach to characterizing volatiles loss for RP: introduce the facility and its capabilities, give an overview of the RP hardware used in integrated testing (most recent iteration), summarize the test variables used thus far, and review a sample of the results.
A new device for dynamic sampling of radon in air
NASA Astrophysics Data System (ADS)
Lozano, J. C.; Escobar, V. Gómez; Tomé, F. Vera
2000-08-01
A new system is proposed for the active sampling of radon in air, based on the well-known property of activated charcoal to retain radon. Two identical activated-charcoal cartridges arranged in series remove the radon from the air being sampled. The air passes first through a desiccant cell and then through the charcoal cartridges for short sampling times using a low-flow pump. The alpha activity of each cartridge is determined by liquid scintillation counting. Each cartridge is placed in a holder inside a vial that also contains the appropriate amount of scintillation cocktail, arranged so as to avoid direct contact between cocktail and charcoal. Once dynamic equilibrium between the phases has been reached, the vials can be counted. Optimum sampling conditions concerning flow rates and sampling times are determined. Under those conditions, the method was applied to environmental samples, straightforwardly providing good results for very different levels of activity.
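For two identical adsorption cartridges in series, a standard back-of-envelope argument (not spelled out in the abstract) estimates the single-cartridge trapping efficiency from the ratio of downstream to upstream activity. A hedged sketch with made-up numbers:

```python
# Illustrative estimate, assuming each cartridge retains the same fraction
# `eff` of the radon reaching it, so the second cartridge sees (1 - eff)
# of the inflow. Values are invented for the example.
def two_cartridge_radon(a1_bq, a2_bq):
    """a1_bq, a2_bq: net alpha activities of the first and second cartridge."""
    eff = 1.0 - a2_bq / a1_bq    # estimated single-cartridge efficiency
    total = a1_bq / eff          # activity that entered the sampler
    return eff, total

eff, total = two_cartridge_radon(a1_bq=52.0, a2_bq=4.5)
print(f"efficiency ~ {eff:.2f}, corrected total activity ~ {total:.1f} Bq")
```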
Li, Xin; Kaattari, Stephen L.; Vogelbein, Mary A.; Vadas, George G.; Unger, Michael A.
2016-01-01
Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common environmental methylated PAHs, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3-5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses required only 10 min per sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by the biosensor and by GC-MS were compared. In most cases, the results yielded a good correlation between methods. This indicates that the generic-antibody-2G8-based biosensor holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples. PMID:26925369
Gesser-Edelsburg, Anat; Hijazi, Rana
2018-01-01
Product placement can be presented through edutainment. A drug such as Viagra is introduced or impotence is branded in movies and TV series in different ways to raise awareness of impotence disorder and Viagra as a solution. This study aims to analyze strategies of framing and branding Viagra and impotence disorder, based on a qualitative method analysis of 40 movies and TV series. Findings show that Viagra is shown as not only for older men but also for young and healthy men. Out of 40 movies and TV series in the study sample, in 14 (32.5%), the age of the target audience ranged from 20 to 40 years, in 12 (31.6%) movies and series, the age of the target audience was over 40, and in 12 (31.6%) movies and series, the target audience was very old (over 70). Viagra is shown as not only treating impotence but is presented as a wonder drug that provides a solution for psychological and social needs. The movies show usage instructions, side effects, and risks, and how to store the drug. We recommend that the viewing audience be educated for critical viewing of movies/series in order to empower viewers and give them tools for their decision-making processes concerning their health.
Analysis of HD 73045 light curve data
NASA Astrophysics Data System (ADS)
Das, Mrinal Kanti; Bhatraju, Naveen Kumar; Joshi, Santosh
2018-04-01
In this work we analyzed the Kepler light curve data of HD 73045. The raw data have been smoothed using standard filters. The power spectrum was obtained using a fast Fourier transform routine; it shows the presence of more than one period. To account for any non-stationary behavior, we carried out a wavelet analysis to obtain the wavelet power spectrum. In addition, to identify scale-invariant structure, the data were analyzed using multifractal detrended fluctuation analysis. Further, to characterize the diversity of embedded patterns in the HD 73045 flux time series, we computed various entropy-based complexity measures, e.g. sample entropy, spectral entropy and permutation entropy. The presence of periodic structure in the time series was further analyzed using the visibility network and horizontal visibility network models of the time series. The degree distributions of the two network models confirm such structures.
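Of the complexity measures listed above, permutation entropy is the most compact to illustrate. A minimal sketch following the standard Bandt-Pompe recipe; the order, delay, and test signals are typical defaults, not values from the paper:

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    # Count ordinal patterns of embedded windows, then normalize the
    # Shannon entropy by log2(order!), giving a value in [0, 1].
    x = np.asarray(x, float)
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], float) / n
    return float(-(p * np.log2(p)).sum() / np.log2(math.factorial(order)))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(2000)))        # ~1: white noise
print(permutation_entropy(np.sin(np.linspace(0, 60, 2000)))) # lower: periodic
```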
NASA Astrophysics Data System (ADS)
Wu, Qi
2010-03-01
Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment system. For demand series with small samples, seasonality, nonlinearity, randomness and fuzziness, existing support vector kernels do not approximate well the random curve of the sales time series in L2 space (the space of square-integrable functions). In this paper, we present a hybrid intelligent system combining a wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of an application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible; a comparison between the method proposed in this paper and other methods is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.
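A wavelet kernel of the kind used in such hybrid models can be plugged into an off-the-shelf SVM as a custom kernel. The sketch below uses the commonly cited Morlet-type wavelet kernel and toy lagged data; the PSO hyperparameter search from the paper is omitted, and all parameter values are assumptions:

```python
import numpy as np
from sklearn.svm import SVR

def morlet_wavelet_kernel(X, Y, a=1.0):
    # K(x, y) = prod_i cos(1.75*(x_i - y_i)/a) * exp(-(x_i - y_i)^2 / (2 a^2))
    D = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

# Toy seasonal demand series turned into lagged supervised pairs
t = np.arange(120, dtype=float)
y = 10 + 3 * np.sin(2 * np.pi * t / 12) \
    + np.random.default_rng(1).normal(0, 0.3, 120)
X = np.column_stack([y[i:i + 108] for i in range(12)])   # 12 lag features
target = y[12:]

svr = SVR(kernel=morlet_wavelet_kernel, C=10.0).fit(X, target)
print(svr.predict(X[-1:]))   # one-step-ahead forecast
```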
Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series
NASA Astrophysics Data System (ADS)
Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki
2015-03-01
Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.
Reconstructing the temporal ordering of biological samples using microarray data.
Magwene, Paul M; Lizardi, Paul; Kim, Junhyong
2003-05-01
Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
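The MST-based ordering idea can be sketched in a few lines with SciPy. This crude variant reads samples off a depth-first traversal from one end of the tree, which approximates the ordering when the tree is nearly path-like; it is not the authors' released Python code, and it omits their PQ-tree uncertainty representation:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order
from scipy.spatial.distance import pdist, squareform

def mst_ordering(data):
    # Build an MST from pairwise distances between samples, find a far
    # endpoint, then traverse depth-first from it to get a linear order.
    d = squareform(pdist(data))
    mst = minimum_spanning_tree(d)
    tree = (mst + mst.T).tocsr()                     # undirected tree
    first_pass, _ = depth_first_order(tree, 0, directed=False)
    endpoint = first_pass[-1]                        # far end of the tree
    order, _ = depth_first_order(tree, endpoint, directed=False)
    return order

# Toy example: a 1-D process embedded in 5-D, then shuffled
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1, 40))
X = np.outer(t, np.ones(5)) + rng.normal(0, 0.02, (40, 5))
perm = rng.permutation(40)
print(mst_ordering(X[perm]))   # should roughly invert the shuffle
```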
Classification and authentication of unknown water samples using machine learning algorithms.
Kundu, Palash K; Panchariya, P C; Kundu, Madhusree
2011-07-01
This paper proposes the development of real-life water sample classification and authentication based on machine learning algorithms. The proposed techniques use experimental measurements from a pulse voltametry method built on an electronic tongue (E-tongue) instrumentation system with silver and platinum electrodes. E-tongues include arrays of solid-state ion sensors, transducers (even of different types), data collectors and data analysis tools, all oriented to the classification of liquid samples and the authentication of unknown liquid samples. The time series signal and the corresponding raw data represent the measurement from a multi-sensor system. The E-tongue system, implemented in a laboratory environment for 6 different ISI (Bureau of Indian Standards) certified water samples (Aquafina, Bisleri, Kingfisher, Oasis, Dolphin, and McDowell), was the data source for developing two types of machine learning algorithms, classification and regression. A water data set consisting of 6 sample classes containing 4402 features was considered. A PCA (principal component analysis) based classification and authentication tool was developed in this study as the machine learning component of the E-tongue system. A proposed partial least squares (PLS) based classifier, dedicated to authenticating a specific category of water sample, evolved as an integral part of the E-tongue instrumentation system. The developed PCA- and PLS-based E-tongue system delivered encouraging overall authentication accuracy, with excellent performance for the aforesaid categories of water samples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
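The PCA- and PLS-based components can be sketched with scikit-learn. Data shapes and labels below are synthetic stand-ins for the 4402-feature voltametric measurements, not the study's data, and the specific classifiers are illustrative choices:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in data: 60 measurements x 4402 features, 6 water brands (0..5)
rng = np.random.default_rng(0)
y = np.repeat(np.arange(6), 10)
X = rng.normal(size=(60, 4402)) + y[:, None] * 0.05

# PCA-based classification: project to a few PCs, then a simple classifier
clf = make_pipeline(PCA(n_components=5), KNeighborsClassifier(3)).fit(X, y)
print(clf.score(X, y))

# PLS-based authentication of one target class: regress a one-vs-rest
# indicator and threshold the predicted score
target = (y == 2).astype(float)
pls = PLSRegression(n_components=5).fit(X, target)
print((pls.predict(X).ravel() > 0.5).astype(int)[:15])
```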
Goldstein, Steven J; Abdel-Fattah, Amr I; Murrell, Michael T; Dobson, Patrick F; Norman, Deborah E; Amato, Ronald S; Nunn, Andrew J
2010-03-01
Uranium-series data for groundwater samples from the Nopal I uranium ore deposit were obtained to place constraints on radionuclide transport and hydrologic processes for a nuclear waste repository located in fractured, unsaturated volcanic tuff. Decreasing uranium concentrations for wells drilled in 2003 are consistent with a simple physical mixing model that indicates that groundwater velocities are low (approximately 10 m/y). Uranium isotopic constraints, well productivities, and radon systematics also suggest limited groundwater mixing and slow flow in the saturated zone. Uranium isotopic systematics for seepage water collected in the mine adit show a spatial dependence which is consistent with longer water-rock interaction times and higher uranium dissolution inputs at the front adit where the deposit is located. Uranium-series disequilibria measurements for mostly unsaturated zone samples indicate that (230)Th/(238)U activity ratios range from 0.005 to 0.48 and (226)Ra/(238)U activity ratios range from 0.006 to 113. (239)Pu/(238)U mass ratios for the saturated zone are <2 x 10(-14), and Pu mobility in the saturated zone is >1000 times lower than the U mobility. Saturated zone mobility decreases in the order (238)U ≈ (226)Ra > (230)Th ≈ (239)Pu. Radium and thorium appear to have higher mobility in the unsaturated zone based on U-series data from fractures and seepage water near the deposit.
Resuspension of ash after the 2014 phreatic eruption at Ontake volcano, Japan
NASA Astrophysics Data System (ADS)
Miwa, Takahiro; Nagai, Masashi; Kawaguchi, Ryohei
2018-02-01
We determined the resuspension process of an ash deposit after the phreatic eruption of September 27th, 2014 at Ontake volcano, Japan, by analyzing the time series of particle concentrations obtained using an optical particle counter and the characteristics of an ash sample. The time series of particle concentration was obtained by an optical particle counter installed 11 km from the volcano from September 21st to October 19th, 2014. The time series contains counts of dust particles (ash and soil), pollen, and water drops, and was corrected to calculate the concentration of dust particles based on a polarization factor reflecting the optical anisotropy of the particles. The dust concentration was compared with the time series of wind velocity. The dust concentration was high and the correlation coefficient with wind velocity was positive from September 28th to October 2nd. Grain-size analysis of an ash sample confirmed that the ash deposit contains abundant very fine particles (< 30 μm). Simple theoretical calculations revealed that the daily peaks of moderate wind (a few m/s at 10 m above the ground surface) were comparable with the threshold wind velocity for resuspension of an unconsolidated deposit with a wide range of particle densities. These results demonstrate that moderate wind drove the resuspension of an ash deposit containing abundant fine particles produced by the phreatic eruption. [Figure in original: histogram of experimentally obtained polarization factors for each particle species; N is the number of analyzed particles.]
Sample entropy applied to the analysis of synthetic time series and tachograms
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.
2017-01-01
Entropy is a method of non-linear analysis that allows an estimate of the irregularity of a system. However, there are different types of computational entropy; several were considered and tested in order to find one that gives an index of signal complexity while taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given value of β was used to characterize the different entropy algorithms. We obtained a significant variation with series size for most of the algorithms, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence of series size. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is sleeping.
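Sample entropy itself is short enough to sketch. A minimal illustrative implementation; the tolerance default of 0.2 times the standard deviation is a common convention, not necessarily the authors' setting, and the template counting is slightly simplified:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D series; r defaults to 0.2 * std."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()

    def match_count(mm):
        # All embedding vectors of length mm; Chebyshev distances between them
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        n = len(emb)
        return (np.sum(d <= r) - n) / 2   # exclude self-matches

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
print(sample_entropy(rng.standard_normal(500)))       # higher: irregular
print(sample_entropy(np.sin(np.arange(500) * 0.1)))   # lower: regular
```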
Wilkinson-Tough, Megan; Bocci, Laura; Thorne, Kirsty; Herlihy, Jane
2010-01-01
Despite the efficacy of cognitive-behavioural interventions in improving the experience of obsessions and compulsions, some people do not benefit from this approach. The present research uses a case series design to establish whether mindfulness-based therapy could benefit those experiencing obsessive-intrusive thoughts by targeting thought-action fusion and thought suppression. Three participants received a relaxation control intervention followed by a six-session mindfulness-based intervention which emphasized daily practice. Following therapy all participants demonstrated reductions in Yale-Brown Obsessive-Compulsive Scale scores to below clinical levels, with two participants maintaining this at follow-up. Qualitative analysis of post-therapy feedback suggested that mindfulness skills such as observation, awareness and acceptance were seen as helpful in managing thought-action fusion and suppression. Despite being limited by small participant numbers, these results suggest that mindfulness may be beneficial to some people experiencing intrusive unwanted thoughts and that further research could establish the possible efficacy of this approach in larger samples. Copyright (c) 2009 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Li, Qingchen; Cao, Guangxi; Xu, Wei
2018-01-01
Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in detecting auto-correlation at different sample lengths, and simulates artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation into whether the fluctuations of financial markets caused by meteorological disasters derive from the normal evolution of the financial system itself. We also propose several reasonable recommendations.
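ARFIMA series of the kind used for such checks can be simulated by truncating the MA(∞) expansion of the fractional integration operator (1-B)^(-d). A minimal sketch for ARFIMA(0, d, 0); the truncation length, memory parameter, and seed are arbitrary:

```python
import numpy as np

def arfima_0d0(n, d, rng=None):
    # Coefficients of (1-B)^(-d): psi_0 = 1, psi_k = psi_{k-1}*(k-1+d)/k
    rng = rng or np.random.default_rng()
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(2 * n)
    # Filter the noise with the truncated MA weights; drop the burn-in
    return np.convolve(eps, psi)[n:2 * n]

series = arfima_0d0(4096, d=0.3, rng=np.random.default_rng(4))
print(series[:5])
```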
Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples.
Thorson, Megan K; Ung, Phuc; Leaver, Franklin M; Corbin, Teresa S; Tuck, Kellie L; Graham, Bim; Barrios, Amy M
2015-10-08
A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil. Copyright © 2015 Elsevier B.V. All rights reserved.
Thermal cracking of poly α-olefin aviation lubricating base oil
NASA Astrophysics Data System (ADS)
Fei, Yiwei; Wu, Nan; Ma, Jun; Hao, Jingtuan
2018-02-01
Thermal cracking of poly α-olefin (PAO) was conducted at temperatures from 190 °C to 300 °C. The reacted mixtures were sequentially analyzed by gas chromatography-mass spectrometry (GC/MS). A series of small-molecule normal alkanes, branched alkanes and olefins were identified. PAO's regular structure of aligned comb-like side chains was severely cracked at high temperatures. Changes in the kinematic viscosity and pour point of PAO samples reacted at high temperatures were also investigated. The appearance of small-molecule compounds weakened the thermal stability, viscosity-temperature performance and low-temperature fluidity of the PAO samples. The properties of the PAO samples deteriorated due to thermal cracking at high temperatures.
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Zhu, Jun; Tang, Bin
2017-12-01
Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first derive that the radio frequency (RF) sample clock function of the NYFR is periodic nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time series decomposed model to the subsampling case by using the spectrum characteristics of the NYFR; the subsampling case is common in broadband spectrum surveillance. Finally, we take the example of an LFM signal with large bandwidth to verify the proposed algorithm and compare the spectral reconstruction algorithm with the orthogonal matching pursuit (OMP) algorithm.
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased despite sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu
2016-12-07
The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD underground often keeps working for several days; the gyro data collected aboveground is not only very long but also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we advance the fast algorithm further (improved fast DAVAR) to extend it to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to respectively characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR; the results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with a FOG-based MWD was implemented to validate the good performance of the improved fast DAVAR. The experimental results testify that the improved fast DAVAR not only shortens computation time but can also analyze discontinuous time series.
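The underlying computation is an Allan variance evaluated on a sliding window. The sketch below shows the plain (slow) version of DAVAR for orientation, not the fast or improved-fast algorithms the paper develops; window and cluster sizes are arbitrary:

```python
import numpy as np

def allan_variance(y, tau_m, fs):
    """Overlapping Allan variance of rate data y for a cluster of
    tau_m samples (averaging time tau = tau_m / fs)."""
    theta = np.concatenate(([0.0], np.cumsum(y) / fs))   # integrated signal
    n = len(theta) - 2 * tau_m
    d = theta[2 * tau_m:] - 2 * theta[tau_m:-tau_m] + theta[:n]
    tau = tau_m / fs
    return np.sum(d**2) / (2 * tau**2 * n)

def davar(y, fs, win, step, taus_m):
    """Dynamic Allan variance: Allan variance on a sliding window."""
    out = []
    for s in range(0, len(y) - win + 1, step):
        seg = y[s:s + win]
        out.append([allan_variance(seg, m, fs) for m in taus_m])
    return np.array(out)   # rows: window start; columns: tau

rng = np.random.default_rng(5)
y = 0.01 * rng.standard_normal(20000)            # white-noise gyro rate
D = davar(y, fs=100.0, win=5000, step=2500, taus_m=[1, 10, 100])
print(D.shape)
```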
An Ultrasonic Sampler and Sensor Platform for In-Situ Astrobiological Exploration
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoaz E.; Bao, X.; Chang, Z.; Sherrit, S.
2003-01-01
The search for existing or past life in the Universe is one of the most important objectives of NASA's mission. In support of this objective, ultrasonic-based mechanisms are currently being developed at JPL to allow probing and sampling rocks, as well as to serve as a sensor platform for in-situ astrobiological analysis. The technology is based on the novel Ultrasonic/Sonic Driller/Corer (USDC), which requires low axial force, thereby overcoming one of the major limitations of planetary sampling in low gravity using conventional drills. The USDC was demonstrated to: 1) drill ice and various rocks including granite, diorite, basalt and limestone, 2) not require bit sharpening, and 3) operate at high and low temperatures. The capabilities being investigated include probing the ground to select sampling sites, collecting various forms of samples, and hosting sensors for measuring chemical/physical properties. A series of modifications of the basic USDC configuration were implemented, leading to an ultrasonic abrasion tool (URAT), an Ultrasonic Gopher for deep drilling, and the lab-on-a-drill.
Evaluating steady-state soil thickness by coupling uranium series and 10Be cosmogenic radionuclides
NASA Astrophysics Data System (ADS)
Vanacker, Veerle; Schoonejans, Jerome; Opfergelt, Sophie; Granet, Matthieu; Christl, Marcus; Chabaux, Francois
2017-04-01
Within the Critical Zone, the development of the regolith mantle is controlled by the downwards propagation of the weathering front into the bedrock and by denudation at the surface of the regolith through mass movements and water and wind erosion. When the removal of surface material is approximately balanced by soil production, the soil system is assumed to be in steady state. The steady-state soil thickness (SSST) can be considered a dynamic equilibrium of the system, in which the thickness of the soil mantle stays relatively constant over time. In this study, we present and compare analytical data from two independent isotopic techniques, in-situ produced cosmogenic nuclides and U-series disequilibria, to constrain soil development under semi-arid climatic conditions. The Spanish Betic Cordillera (Southeast Spain) was selected for this study, as it offers a unique opportunity to analyze steady-state soil thickness conditions for the thin soils of semiarid environments. Three soil profiles were sampled across the Betic Ranges, at the ridge crests of zero-order catchments with distinct topographic relief, hillslope gradient and 10Be-derived denudation rate. Soil production rates determined from U-series isotopes (238U, 234U, 230Th and 226Ra) are of the same order of magnitude as the 10Be-derived denudation rates, suggesting steady-state soil thickness at two of the three sampling sites. The results suggest that coupling U-series isotopes with in-situ produced radionuclides can provide new insights into the rates of soil development; they also illustrate the frontiers of applying U-series disequilibria to track soil production in rapidly eroding landscapes characterized by thin weathering depths.
Buttitta, Fiamma; Felicioni, Lara; Del Grammastro, Maela; Filice, Giampaolo; Di Lorito, Alessia; Malatesta, Sara; Viola, Patrizia; Centi, Irene; D'Antuono, Tommaso; Zappacosta, Roberta; Rosini, Sandra; Cuccurullo, Franco; Marchetti, Antonio
2013-02-01
The therapeutic choice for patients with lung adenocarcinoma depends on the presence of EGF receptor (EGFR) mutations. In many cases, only cytologic samples are available for molecular diagnosis. Bronchoalveolar lavage (BAL) and pleural fluid, which represent a considerable proportion of cytologic specimens, cannot always be used for molecular testing because of low rate of tumor cells. We tested the feasibility of EGFR mutation analysis on BAL and pleural fluid samples by next-generation sequencing (NGS), an innovative and extremely sensitive platform. The study was devised to extend the EGFR test to those patients who could not get it due to the paucity of biologic material. A series of 830 lung cytology specimens was used to select 48 samples (BAL and pleural fluid) from patients with EGFR mutations in resected tumors. These samples included 36 cases with 0.3% to 9% of neoplastic cells (series A) and 12 cases without evidence of tumor (series B). All samples were analyzed by Sanger sequencing and NGS on 454 Roche platform. A mean of 21,130 ± 2,370 sequences per sample were obtained by NGS. In series A, EGFR mutations were detected in 16% of cases by Sanger sequencing and in 81% of cases by NGS. Seventy-seven percent of cases found to be negative by Sanger sequencing showed mutations by NGS. In series B, all samples were negative for EGFR mutation by Sanger sequencing whereas 42% of them were positive by NGS. The very sensitive EGFR-NGS assay may open up to the possibility of specific treatments for patients otherwise doomed to re-biopsies or nontargeted therapies.
Chen, Hong; Lin, Hua; Liu, Yi; Wu, Xin-Tao; Wu, Li-Ming
2017-11-07
The chemistry of copper-based chalcogenides has received considerable attention due to their diverse structures and potential applications in the area of thermoelectric (TE) materials. In this communication, a series of spinel-type Cu4Mn2Te4-based samples have been successfully prepared, and their high TE performance is attributed to an enhanced power factor and low thermal conductivity arising from the synergistic effect of Te deficiency and Cl doping. Consequently, a maximum TE figure of merit (ZT) of ~0.4 was achieved for the Cu4Mn2Te3.93Cl0.03 sample at 700 K, about 100% higher than the undoped Cu4Mn2Te4 sample and one of the highest ZT values reported for p-type spinel tellurides.
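For reference, the figure of merit quoted above combines exactly the quantities the authors tune, the power factor S^2·σ and the thermal conductivity κ. This is the standard definition, not specific to this paper:

```latex
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
```

where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature.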
High mobility high efficiency organic films based on pure organic materials
Salzman, Rhonda F [Ann Arbor, MI; Forrest, Stephen R [Ann Arbor, MI
2009-01-27
A method of purifying small molecule organic material, performed as a series of operations beginning with a first sample of the organic small molecule material. The first step is to purify the organic small molecule material by thermal gradient sublimation. The second step is to test the purity of at least one sample from the purified organic small molecule material by spectroscopy. The third step is to repeat the first through third steps on the purified small molecule material if the spectroscopic testing reveals any peaks exceeding a threshold percentage of a magnitude of a characteristic peak of a target organic small molecule. The steps are performed at least twice. The threshold percentage is at most 10%. Preferably the threshold percentage is 5% and more preferably 2%. The threshold percentage may be selected based on the spectra of past samples that achieved target performance characteristics in finished devices.
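The claimed iterate-until-pure procedure is easy to summarize as control flow. A Python sketch only, where `sublime` and `measure_spectrum` are hypothetical placeholder callables, not APIs from the patent:

```python
def purify_until_clean(sample, sublime, measure_spectrum, threshold=0.02):
    """threshold: impurity-peak limit as a fraction of the characteristic
    peak (the patent cites at most 10%, preferably 5% or 2%)."""
    runs = 0
    while True:
        sample = sublime(sample)                # thermal-gradient sublimation
        runs += 1
        peaks = dict(measure_spectrum(sample))  # e.g. {"characteristic": 1.0, ...}
        ref = peaks.pop("characteristic")       # target molecule's peak height
        clean = all(m <= threshold * ref for m in peaks.values())
        if runs >= 2 and clean:                 # steps performed at least twice
            return sample
```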
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling along the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. GRACE satellites fly in a non-repeat orbit, which precludes alias error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which results in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
Radioactivity of Fertilizer and China (NORM) in Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michikuni, Shimo; Yuka, Matsuura; Noriko, Itoh
2008-08-07
Radioactivity of 6 fertilizer samples, 7 china clay samples and 5 china glaze samples, which are commonly used in Japan, was measured using a NaI(Tl) scintillation spectrometer. Potassium activity of the fertilizers was mostly 540-740 Bq/kg, and the highest activity was 9,100 Bq/kg. Fertilizer activity was 10 times higher for potassium than for the uranium series. Furthermore, these activities were 25 times (potassium) and 18 times (uranium series) those of natural soil. In china clay, activities of potassium, uranium-series nuclides and thorium-series nuclides were 543-823 Bq/kg, 74.6-94.3 Bq/kg, and 86.3-128 Bq/kg, respectively; these were 1.5-2.2, 2.1-2.6 and 2.3-3.5 times higher than the activity of common soil. Activity of the glaze was almost equal to that of the china clay.
Algorithms exploiting ultrasonic sensors for subject classification
NASA Astrophysics Data System (ADS)
Desai, Sachi; Quoraishee, Shafik
2009-09-01
Proposed here is a series of techniques exploiting micro-Doppler ultrasonic sensors, capable of characterizing various detected mammalian targets based on their physiological movements, captured as a series of robust features. A combination of unique and conventional digital signal processing techniques is employed, arranged in such a manner that they become capable of classifying a series of walkers. These feature extraction processes develop a robust feature space capable of discriminating movements generated by bipeds and quadrupeds, further subdivided into large or small. These movements can be exploited to provide specific information about a given signature, dividing it into a series of subset signatures, with wavelets used to generate start/stop times. After viewing a series of spectrograms of the signature we are able to see distinct differences, and utilizing kurtosis we generate an envelope detector capable of isolating each of the corresponding step cycles generated during a walk. The walk cycle is defined as one complete sequence of walking/running, from the foot pushing off the ground to its return to the ground. This timing information segments the events that are readily seen in the spectrogram but obscured in the temporal domain into individual walk sequences. Each walking sequence is then translated into a three-dimensional waterfall plot giving the expected energy value associated with the motion at a particular instant of time and frequency. This value is repeatable for each particular class and can be employed to discriminate the events. Highly reliable classification is realized by exploiting a classifier trained on a candidate sample space derived from the gyrations created by the motion of actors of interest. The classifier developed herein provides the capability to classify events as adult humans, children, horses, or dogs at potentially high rates based on the tested sample space. The algorithm developed and described here will provide utility to an underused sensor modality for human intrusion detection, given the currently high rate of false alarms. The active ultrasonic sensor, coupled in a multi-modal sensor suite with binary, less descriptive sensors such as seismic devices, realizes a greater accuracy rate for detecting persons of interest for homeland security purposes.
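The kurtosis-based envelope detection described above can be approximated with a sliding-window kurtosis scan. A rough sketch on synthetic data; the window length, threshold, and signal parameters are guesses, not the paper's values:

```python
import numpy as np
from scipy.stats import kurtosis

def step_segments(sig, fs, win_s=0.1, thresh=1.0):
    # Windows whose excess kurtosis exceeds `thresh` are flagged as
    # containing impulsive (step-like) energy.
    win = int(win_s * fs)
    n = len(sig) // win
    k = np.array([kurtosis(sig[i * win:(i + 1) * win]) for i in range(n)])
    return np.flatnonzero(k > thresh) * win / fs   # start times in seconds

fs = 8000.0
t = np.arange(0, 2, 1 / fs)
sig = 0.05 * np.random.default_rng(6).standard_normal(len(t))
for t0 in (0.3, 0.9, 1.5):                          # three synthetic "steps"
    idx = (t > t0) & (t < t0 + 0.05)
    sig[idx] += np.hanning(idx.sum()) * np.sin(2 * np.pi * 400 * t[idx])
print(step_segments(sig, fs))
```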
Marchetti, Antonio; Pace, Maria Vittoria; Di Lorito, Alessia; Canarecci, Sara; Felicioni, Lara; D'Antuono, Tommaso; Liberatore, Marcella; Filice, Giampaolo; Guetti, Luigi; Mucilli, Felice; Buttitta, Fiamma
2016-09-01
Anaplastic Lymphoma Kinase (ALK) gene rearrangements have been described in 3-5% of lung adenocarcinomas (ADC) and their identification is essential to select patients for treatment with ALK tyrosine kinase inhibitors. For several years, fluorescent in situ hybridization (FISH) has been considered as the only validated diagnostic assay. Currently, alternative methods are commercially available as diagnostic tests. A series of 217 ADC comprising 196 consecutive resected tumors and 21 ALK FISH-positive cases from an independent series of 702 ADC were investigated. All specimens were screened by IHC (ALK-D5F3-CDx-Ventana), FISH (Vysis ALK Break-Apart-Abbott) and RT-PCR (ALK RGQ RT-PCR-Qiagen). Results were compared and discordant cases subjected to Next Generation Sequencing. Thirty-nine of 217 samples were positive by the ALK RGQ RT-PCR assay, using a threshold cycle (Ct) cut-off ≤35.9, as recommended. Of these positive samples, 14 were negative by IHC and 12 by FISH. ALK RGQ RT-PCR/FISH discordant cases were analyzed by the NGS assay with results concordant with FISH data. In order to obtain the maximum level of agreement between FISH and ALK RGQ RT-PCR data, we introduced a new scoring algorithm based on the ΔCt value. A ΔCt cut-off level ≤3.5 was used in a pilot series. Then the algorithm was tested on a completely independent validation series. By using the new scoring algorithm and FISH as reference standard, the sensitivity and the specificity of the ALK RGQ RT-PCR(ΔCt) assay were 100% and 100%, respectively. Our results suggest that the ALK RGQ RT-PCR test could be useful in clinical practice as a complementary assay in multi-test diagnostic algorithms or even, if our data will be confirmed in independent studies, as a standalone or screening test for the selection of patients to be treated with ALK inhibitors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Updating stand-level forest inventories using airborne laser scanning and Landsat time series data
NASA Astrophysics Data System (ADS)
Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping
2018-04-01
Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. These findings suggest that in areas containing spatially-limited ALS data acquisitions, imputation models, and Landsat time series and terrain metrics can be effectively used to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
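The imputation setup, training on ALS-covered pixels and predicting elsewhere from Landsat time-series and terrain metrics, can be sketched with a generic nearest-neighbour regressor. Shapes and values below are synthetic stand-ins, and the authors' specific imputation estimator may differ:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
X_als = rng.normal(size=(500, 12))        # metrics where ALS coverage exists
y_als = 20 + 5 * X_als[:, 0] + rng.normal(0, 1, 500)   # e.g. top height (m)
X_wall2wall = rng.normal(size=(10000, 12))             # all remaining pixels

# Impute the attribute across the landscape from the ALS-trained model
knn = KNeighborsRegressor(n_neighbors=5).fit(X_als, y_als)
top_height_map = knn.predict(X_wall2wall)
print(top_height_map[:5])
```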
Dennis E. Ferguson; John C. Byrne
2016-01-01
The response of 28 shrub species to wildfire burn severity was assessed for 8 wildfires on 6 national forests in the northern Rocky Mountains, USA. Stratified random sampling was used to choose 224 stands based on burn severity, habitat type series, slope steepness, stand height, and stand density, which resulted in 896 plots measured at approximately 2-year intervals...
Todd A. Schroeder; Sean P. Healey; Gretchen G. Moisen; Tracey S. Frescino; Warren B. Cohen; Chengquan Huang; Robert E. Kennedy; Zhiqiang Yang
2014-01-01
With earth's surface temperature and human population both on the rise a new emphasis has been placed on monitoring changes to forested ecosystems the world over. In the United States the U.S. Forest Service Forest Inventory and Analysis (FIA) program monitors the forested land base with field data collected over a permanent network of sample plots. Although these...
Code of Federal Regulations, 2014 CFR
2014-07-01
... series of daily values represents the 98th percentile for that year. Creditable samples include daily... measured (or averaged from hourly measurements in AQS) from midnight to midnight (local standard time) from... design value (DV) or a 24-hour PM2.5 NAAQS DV to determine if those metrics, which are judged to be based...
Paris, Doris F.; Wolfe, N. Lee; Steen, William C.; Baughman, George L.
1983-01-01
Microbial transformation rate constants for a series of phenols were correlated with a property of the substituents, the van der Waals radius. Transformation products were the corresponding catechols, with the exception of p-hydroxybenzoic acid, the product of p-acetylphenol. A different product suggested a different pathway; p-acetylphenol, therefore, was deleted from the data base. PMID:16346236
ERIC Educational Resources Information Center
Neman, Ronald S.; And Others
The study represents an extension of previous research involving the development of scales for the five-card, orally administered, and tape-recorded version of the Thematic Apperception Test(TAT). Scale development is documented and national norms are presented based on a national probability sample of 1,398 youths administered the Cycle III test…
ERIC Educational Resources Information Center
Truckenmiller, James L.
The former HEW National Strategy for Youth Development Model was a community-based planning and procedural tool designed to enhance positive youth development and prevent delinquency through a process of youth needs assessment, development of targeted programs, and program impact evaluation. A series of 12 Impact Scales most directly reflect the…
Encapsulated Decon for Use on Medical Patients
1983-12-01
Development of effective decon microcapsules was based on a series of tasks performed in this study. The preliminary tasks included a literature search...culminated with evaluating selected microcapsules on pig skin samples with HD, GB, and GD. Results appear encouraging. The best capsule performance...term contact. In addition, a brief study showed magnetite can be incorporated into the capsule wall to provide magnetic microcapsules that can be
NASA Astrophysics Data System (ADS)
Feng, Yefeng; Zhang, Jianxiong; Hu, Jianbing; Peng, Cheng; He, Renqi
2018-01-01
Induced polarization at interfaces has been confirmed to have a significant impact on the dielectric properties of 2-2 series composites comprising a Si-based semiconductor sheet and a polymer layer. In such composites, the significantly elevated permittivity of the Si-based semiconductor sheet is responsible for the high permittivity obtained in the composite. In that case, the interface interaction includes two aspects: a strong electrostatic force from the high-polarity polymeric layer and a newborn high polarity induced in the Si-based ceramic sheet. In this work, this class of interface-induced polarization was successfully extended to another 2-2 series composite system made up of an ultra-high-polarity ceramic sheet and a high-polarity polymer layer. The greatly improved permittivity of the high-polarity polymer layer was confirmed to contribute strongly to the high permittivity achieved in the composites. In this case, the interface interaction consists of a rather large electrostatic force from the ultra-high-polarity ceramic sheet, with its ionic crystal structure, and an enhanced high polarity induced in the polymer layer, based on the large polarizability of the high-polarity covalent dipoles in the polymer. The dielectric and conductive properties of four designed 2-2 series composites and their components have been investigated in detail. Increasing the polymer's inherent polarity leads to a significant elevation of the polymer's overall polarity in the composite. Lowering the inherent polarities of the two components results in a mild improvement of the polymer's total polarity in the composite. Introducing a non-polar polymeric layer leaves the polymer's overall polarity in the composite almost unaltered. The best 2-2 composite possesses a permittivity of ~463 at 100 Hz, 25.7 times the original permittivity of the polymer in it. This work might offer a facile route toward promising composite dielectrics by constructing 2-2 series samples from two high-polarity components.
NASA Astrophysics Data System (ADS)
Sigro, J.; Brunet, M.; Aguilar, E.; Stoll, H.; Jimenez, M.
2009-04-01
The Spanish-funded research project Rapid Climate Changes in the Iberian Peninsula (IP) Based on Proxy Calibration, Long Term Instrumental Series and High Resolution Analyses of Terrestrial and Marine Records (CALIBRE: ref. CGL2006-13327-C04/CLI) has as its main objective to analyse climate dynamics during periods of rapid climate change by developing high-resolution paleoclimate proxy records from marine and terrestrial (lakes and caves) deposits over the IP and calibrating them against long-term, high-quality instrumental climate time series. Under CALIBRE, the coordinated project Developing and Enhancing a Climate Instrumental Dataset for Calibrating Climate Proxy Data and Analysing Low-Frequency Climate Variability over the Iberian Peninsula (CLICAL: CGL2006-13327-C04-03/CLI) is devoted to the development of homogenised climate records and sub-regional time series which can be confidently used in the calibration of the lacustrine, marine and speleothem time series generated under CALIBRE. Here we present the procedures followed to homogenise a dataset of monthly maximum and minimum temperature and precipitation data over the Spanish northern coast. The dataset is composed of thirty (twenty) long monthly precipitation (temperature) records. The data are quality controlled following the procedures recommended by Aguilar et al. (2003), then tested for homogeneity and adjusted following the approach adopted by Brunet et al. (2008). Sub-regional time series of precipitation and of maximum and minimum temperature for the period 1853-2007 have been generated by averaging monthly anomalies and then adding back the base-period mean, according to the method of Jones and Hulme (1996). Also, a method to adjust the variance bias present in regional time series associated with sample sizes that vary over time has been applied (Osborn et al., 1997). The results of this homogenisation exercise and the development of the associated sub-regional time series will be discussed in detail. Initial comparisons with rapidly growing speleothems in two different caves indicate that speleothem trace element ratios like Ba/Ca are recording the decrease in littoral precipitation over the last several decades. References: Aguilar, E., Auer, I., Brunet, M., Peterson, T. C. and Wieringa, J., 2003: Guidelines on Climate Metadata and Homogenization, WMO-TD no. 1186 / WCDMP no. 53, Geneva, 51 pp. Brunet, M., Saladié, O., Jones, P., Sigró, J., Aguilar, E., Moberg, A., Lister, D., Walther, A. and Almarza, C., 2008: A case-study/guidance on the development of long-term daily adjusted temperature datasets, WMO-TD-1425/WCDMP-66, Geneva, 43 pp. Jones, P. D. and Hulme, M., 1996: Calculating regional climatic time series for temperature and precipitation: methods and illustrations, Int. J. Climatol., 16, 361-377. Osborn, T. J., Briffa, K. R. and Jones, P. D., 1997: Adjusting variance for sample-size in tree-ring chronologies and other regional mean time series, Dendrochronologia, 15, 89-99.
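The Jones and Hulme (1996) regional-series step is compact enough to sketch: average station anomalies relative to a common base period, then add back the base-period mean. A minimal illustration on synthetic data; the Osborn et al. (1997) variance adjustment is omitted:

```python
import numpy as np

def regional_series(stations, base_slice):
    """stations: 2-D array (time x station), possibly with NaN gaps."""
    base = np.nanmean(stations[base_slice], axis=0)   # station climatology
    anom = stations - base                            # per-station anomalies
    regional_anom = np.nanmean(anom, axis=1)          # average across stations
    return regional_anom + np.nanmean(base)           # back to absolute units

rng = np.random.default_rng(8)
data = 14 + 0.01 * rng.normal(0, 1, (155, 20)).cumsum(0)  # 155 yr x 20 stations
print(regional_series(data, slice(50, 80))[:5])
```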
PERIODOGRAMS FOR MULTIBAND ASTRONOMICAL TIME SERIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanderPlas, Jacob T.; Ivezic, Željko
This paper introduces the multiband periodogram, a general extension of the well-known Lomb–Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb–Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
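The shared-base-model-plus-regularized-band-terms structure can be illustrated with ordinary ridge-penalized least squares. This is a simplified sketch of the idea, not the authors' released implementation; all sizes and the penalty value are arbitrary assumptions:

```python
import numpy as np

def multiband_fit(t, y, bands, period, n_base=5, n_band=1, lam=10.0):
    """Tikhonov-regularized fit: a shared truncated Fourier base model
    plus small per-band residual terms at a trial period."""
    omega = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_base + 1):                 # shared base model
        cols += [np.sin(k * omega * t), np.cos(k * omega * t)]
    for b in np.unique(bands):                     # per-band residual terms
        m = (bands == b).astype(float)
        cols.append(m)
        for k in range(1, n_band + 1):
            cols += [m * np.sin(k * omega * t), m * np.cos(k * omega * t)]
    X = np.column_stack(cols)
    # Penalize only the per-band columns so variability stays in the base
    pen = np.zeros(X.shape[1])
    pen[1 + 2 * n_base:] = lam
    theta = np.linalg.solve(X.T @ X + np.diag(pen), X.T @ y)
    return theta, X @ theta

rng = np.random.default_rng(10)
t = np.sort(rng.uniform(0, 100, 300))
bands = rng.integers(0, 3, 300)
y = np.sin(2 * np.pi * t / 7.3) + 0.1 * bands + rng.normal(0, 0.1, 300)
theta, yfit = multiband_fit(t, y, bands, period=7.3)
print(np.round(theta[:3], 2))
```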
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample depends on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
Cryo-Electron Tomography for Structural Characterization of Macromolecular Complexes
Cope, Julia; Heumann, John; Hoenger, Andreas
2011-01-01
Cryo-electron tomography (cryo-ET) is an emerging 3-D reconstruction technology that combines the principles of tomographic 3-D reconstruction with the unmatched structural preservation of biological material embedded in vitreous ice. Cryo-ET is particularly suited to investigating cell-biological samples and large macromolecular structures that are too polymorphic to be reconstructed by classical averaging-based 3-D reconstruction procedures. This unit aims to make cryo-ET accessible to newcomers and discusses the specialized equipment required, as well as the relevant advantages and hurdles associated with sample preparation by vitrification and cryo-ET. Protocols describe specimen preparation, data recording and 3-D data reconstruction for cryo-ET, with a special focus on macromolecular complexes. A step-by-step procedure for specimen vitrification by plunge freezing is provided, followed by the general practicalities of tilt-series acquisition for cryo-ET, including advice on how to select an area appropriate for acquiring a tilt series. A brief introduction to the underlying computational reconstruction principles applied in tomography is described, along with instructions for reconstructing a tomogram from cryo-tilt series data. Finally, a method is detailed for extracting small subvolumes containing identical macromolecular structures from tomograms for alignment and averaging as a means to increase the signal-to-noise ratio and eliminate missing wedge effects inherent in tomographic reconstructions. PMID:21842467
A Draft Test Protocol for Detecting Possible Biohazards in Martian Samples Returned to Earth
NASA Technical Reports Server (NTRS)
Rummel, John D.; Race, Margaret S.; DeVincenzi, Donald L.; Schad, P. Jackson; Stabekis, Pericles D.; Viso, Michel; Acevedo, Sara E.
2002-01-01
This document presents the first complete draft of a protocol for detecting possible biohazards in Mars samples returned to Earth; it is the final product of the Mars Sample Handling Protocol Workshop Series, convened in 2000-2001 by NASA's Planetary Protection Officer. The goal of the five-workshop Series was to develop a comprehensive protocol by which returned martian sample materials could be assessed for the presence of any biological hazard(s) while safeguarding the purity of the samples from possible terrestrial contamination. Reference numbers are provided for the proceedings of the five individual Workshops.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy: it achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series compared against the best performing baseline, and a 5.25% average improvement when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising direction for modeling clinical time series and improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Naseri, H; Homaeinezhad, M R; Pourkhajeh, H
2013-09-01
The major aim of this study is to describe a unified procedure for detecting noisy segments and spikes in transduced signals with a cyclic but non-stationary periodic nature. According to this procedure, the cycles of the signal (onset and offset locations) are detected. Then, the cycles are clustered into a finite number of groups based on appropriate geometrical and frequency-based time series. Next, the median template of each time series of each cluster is calculated. Afterwards, a correlation-based technique is devised to compare a test cycle feature with the associated time series of each cluster. Finally, by applying a suitably chosen threshold to the calculated correlation values, a segment is labelled as either clean or noisy. As a key merit of this research, the procedure can provide decision support for choosing between orthogonal-expansion-based filtering and removal of noisy segments. In this paper, the application of the proposed method is comprehensively described by applying it to phonocardiogram (PCG) signals to find noisy cycles. The database consists of 126 records from several patients of a domestic research station, acquired with a 3M Littmann® 3200 electronic stethoscope at a 4 kHz sampling frequency. By running the noisy-segment detection algorithm on this database, a sensitivity of Se = 91.41% and a positive predictive value of PPV = 92.86% were obtained, based on physicians' assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
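The core of the decision rule (median templates per cluster, correlation of a test cycle against them, and a clean/noisy threshold) can be sketched compactly. The snippet below is an illustrative reconstruction, not the authors' code; it assumes cycles have already been detected and resampled to a common length, and the threshold value is hypothetical.

```python
import numpy as np

def median_templates(cycles, labels):
    """Median template of each cluster; cycles: (n_cycles, n_samples)."""
    return {k: np.median(cycles[labels == k], axis=0)
            for k in np.unique(labels)}

def classify_cycle(cycle, templates, threshold=0.8):
    """Label a test cycle 'clean' if its best Pearson correlation
    against any cluster template exceeds the threshold."""
    best = max(np.corrcoef(cycle, tpl)[0, 1] for tpl in templates.values())
    return "clean" if best >= threshold else "noisy"
```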
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
NASA Astrophysics Data System (ADS)
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using STL, a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS). The data series—daily Poaceae pollen concentrations over the period 2006-2014—was decomposed into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each component of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and for this reason the procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
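The decomposition-plus-regression pipeline maps directly onto standard Python tooling. The sketch below, with entirely synthetic data, uses statsmodels' STL and scikit-learn's PLSRegression to mirror the workflow described above; the variable names and the two-predictor design are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from sklearn.cross_decomposition import PLSRegression

# Synthetic daily data standing in for pollen and meteorology (2006-2013).
idx = pd.date_range("2006-01-01", "2013-12-31", freq="D")
rng = np.random.default_rng(1)
doy = idx.dayofyear.to_numpy()
temp = 15 + 10 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2, len(idx))
rain = rng.exponential(1.5, len(idx))
pollen = pd.Series(
    np.clip(60 * np.sin(2 * np.pi * (doy - 120) / 365) ** 4
            + 2 * temp - 5 * rain + rng.normal(0, 5, len(idx)), 0, None),
    index=idx)

# Seasonal-trend decomposition with a one-year period (STL uses LOESS).
res = STL(pollen, period=365, robust=True).fit()

# Model the stochastic residual from meteorological predictors via PLSR.
X = np.column_stack([temp, rain])
pls = PLSRegression(n_components=2).fit(X, res.resid.to_numpy())

# Reconstructed prediction: seasonal component plus modeled residual.
predicted = res.seasonal.to_numpy() + pls.predict(X).ravel()
```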
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1993-01-01
The purpose of this work is to estimate sampling errors of area-time averaged rain rate due to temporal sampling by satellites. In particular, the sampling errors of the proposed low-inclination-orbit satellite of the Tropical Rainfall Measuring Mission (TRMM) (35 deg inclination and 350 km altitude), of one of the sun-synchronous polar orbiting satellites of the NOAA series (98.89 deg inclination and 833 km altitude), and of two simultaneous sun-synchronous polar orbiting satellites--assumed to carry a perfect passive microwave sensor for direct rainfall measurements--will be estimated. This is done by studying the satellite orbits and the autocovariance function of the area-averaged rain rate time series. A model based on an exponential fit of the autocovariance function is used for the actual calculations. Varying visiting intervals and the fraction of the averaging area covered on each satellite visit are taken into account in the model. The data are generated by a General Circulation Model (GCM) with a diurnal cycle and parameterized convective processes. A special run of the GCM was made at NASA/GSFC in which the rainfall and precipitable water fields were retained globally for every hour of the run for the whole year.
Invik, Jesse; Barkema, Herman W; Massolo, Alessandro; Neumann, Norman F; Checkley, Sylvia
2017-10-01
With increasing stress on our water resources and recent waterborne disease outbreaks, understanding the epidemiology of waterborne pathogens is crucial for building surveillance systems. The purpose of this study was to explore techniques for describing microbial water quality in rural drinking water wells, based on spatiotemporal analysis, time series analysis and relative risk mapping. Test results for Escherichia coli and coliforms from private and small public well water samples, collected between 2004 and 2012 in Alberta, Canada, were used for the analysis. Overall, 14.6 and 1.5% of the wells were total coliform- and E. coli-positive, respectively. Private well samples were more often total coliform- or E. coli-positive than untreated public well samples. Using relative risk mapping, we were able to identify previously unrecognized areas of higher risk for bacterial contamination of groundwater in the province. Incorporation of time series analysis demonstrated a peak in E. coli contamination in July and a later peak for total coliforms in September, suggesting a temporal dissociation between these indicators in terms of groundwater quality, and highlighting the potential need to increase monitoring during certain periods of the year.
Qin, Chuan; Zhao, Jianlin; Di, Jianglei; Wang, Le; Yu, Yiting; Yuan, Weizheng
2009-02-10
We employed digital holographic microscopy to visually test microoptoelectromechanical systems (MOEMS). The sample is a blazed-angle adjustable grating. Considering the periodic structure of the sample, a local area unwrapping method based on a binary template was adopted to demodulate the fringes obtained by referring to a reference hologram. A series of holograms at different deformation states due to different drive voltages were captured to analyze the dynamic character of the MOEMS, and the uniformity of different microcantilever beams was also inspected. The results show this testing method is effective for a periodic structure.
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.
Barba, Lida; Rodríguez, Nibaldo
2017-01-01
A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons injured in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the low- and high-frequency decomposition of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
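The basic building block, splitting a series into smooth and detail parts through the SVD of its Hankel (trajectory) matrix, can be illustrated in a few lines. The sketch below shows a single-level version of this idea (as in singular spectrum analysis); the paper's multilevel scheme reapplies the split recursively, and the window and rank choices here are arbitrary.

```python
import numpy as np
from scipy.linalg import hankel

def svd_split(x, window=52, rank=1):
    """Split a series into low- and high-frequency parts via SVD of its
    Hankel (trajectory) matrix; reconstruct the low-frequency part by
    averaging anti-diagonals (diagonal averaging), high = residual."""
    n = len(x)
    H = hankel(x[:window], x[window - 1:])          # window x (n-window+1)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # leading components only
    # H[i, j] = x[i + j]: average all entries of L with i + j = k
    low = np.array([L[::-1].diagonal(k - window + 1).mean() for k in range(n)])
    return low, x - low

# Example: weekly counts with trend plus noise (synthetic stand-in data).
x = np.linspace(0, 8, 780) ** 1.5 + np.random.default_rng(2).normal(0, 1, 780)
low, high = svd_split(x)
```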
NASA Astrophysics Data System (ADS)
Rehfeld, Kira; Goswami, Bedartha; Marwan, Norbert; Breitenbach, Sebastian; Kurths, Jürgen
2013-04-01
Statistical analysis of dependencies amongst paleoclimate data helps to infer the climatic processes they reflect. Three key challenges have to be addressed, however: the datasets are heterogeneous in (i) time and (ii) space, and furthermore time itself is a variable that needs to be reconstructed, which (iii) introduces additional uncertainties. To address these issues in a flexible way we developed the paleoclimate network framework, inspired by the increasing application of complex networks in climate research. Each node in the paleoclimate network represents a paleoclimate archive and an associated time series. Links between nodes are assigned if the time series are significantly similar. The base of the paleoclimate network is therefore formed by linear and nonlinear estimators of Pearson correlation, mutual information and event synchronization, which quantify similarity from irregularly sampled time series. Age uncertainties are propagated into the final network analysis using time series ensembles that reflect the uncertainty. We discuss how spatial heterogeneity influences the results obtained from network measures, and demonstrate the power of the approach by inferring teleconnection variability of the Asian summer monsoon for the past 1000 years.
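Similarity estimation from irregularly sampled series is the step that departs most from textbook practice. One established approach in this line of work (e.g. Rehfeld et al. 2011) weights all cross-pairs of observations by a Gaussian kernel of their time separation; the sketch below is a minimal version with an arbitrary default bandwidth, and it assumes both time axes are sorted.

```python
import numpy as np

def kernel_correlation(tx, x, ty, y, h=None):
    """Pearson-type correlation between two irregularly sampled series:
    every cross pair (x_i, y_j) contributes, weighted by a Gaussian
    kernel of the time difference tx[i] - ty[j]."""
    if h is None:                                  # bandwidth ~ mean spacing
        h = 0.25 * max(np.mean(np.diff(tx)), np.mean(np.diff(ty)))
    xa = (x - x.mean()) / x.std()                  # standardize both series
    ya = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]                 # all pairwise time lags
    w = np.exp(-0.5 * (dt / h) ** 2)               # Gaussian kernel weights
    return float(np.sum(w * xa[:, None] * ya[None, :]) / np.sum(w))
```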
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
Model-based monitoring of stormwater runoff quality.
Birch, Heidi; Vezzaro, Luca; Mikkelsen, Peter Steen
2013-01-01
Monitoring of micropollutants (MP) in stormwater is essential to evaluate the impacts of stormwater on the receiving aquatic environment. The aim of this study was to investigate how different strategies for monitoring of stormwater quality (combining a model with field sampling) affect the information obtained about MP discharged from the monitored system. A dynamic stormwater quality model was calibrated using MP data collected by automatic volume-proportional sampling and passive sampling in a storm drainage system on the outskirts of Copenhagen (Denmark) and a 10-year rain series was used to find annual average (AA) and maximum event mean concentrations. Use of this model reduced the uncertainty of predicted AA concentrations compared to a simple stochastic method based solely on data. The predicted AA concentration, obtained by using passive sampler measurements (1 month installation) for calibration of the model, resulted in the same predicted level but with narrower model prediction bounds than by using volume-proportional samples for calibration. This shows that passive sampling allows for a better exploitation of the resources allocated for stormwater quality monitoring.
Redesigning flow injection after 40 years of development: Flow programming.
Ruzicka, Jaromir Jarda
2018-01-01
Automation of reagent-based assays by means of Flow Injection (FI) is based on sample processing in which a sample flows continuously towards and through a detector for quantification of the target analyte. The Achilles heel of this methodology, the legacy of the AutoAnalyzer®, is continuous reagent consumption and continuous generation of chemical waste. However, flow programming, assisted by recent advances in precise pumping and combined with the lab-on-valve technique, allows the FI manifold to be designed around a single confluence point through which sample and reagents are sequentially directed by means of a series of flow reversals. This approach results in sample/reagent mixing analogous to traditional FI, reduces sample and reagent consumption, and uses the stop-flow technique to enhance the yield of chemical reactions. The feasibility of programmable Flow Injection (pFI) is documented by example of commonly used spectrophotometric assays of phosphate, nitrate, nitrite and glucose. Experimental details and additional information are available in the online tutorial http://www.flowinjectiontutorial.com/. Copyright © 2017 Elsevier B.V. All rights reserved.
Aphesteguy, Juan Carlos; Jacobo, Silvia E; Lezama, Luis; Kurlyandskaya, Galina V; Schegoleva, Nina N
2014-06-19
Pure and Zn-doped magnetite magnetic nanoparticles (NPs), Fe3O4 and ZnxFe3-xO4, were prepared in aqueous solution (Series A) or in a water-ethyl alcohol mixture (Series B) by the co-precipitation method. Only one ferromagnetic resonance line was observed in all cases under consideration, indicating that the materials are magnetically uniform. The shortfall in the resonance fields from the 3.27 kOe (for the frequency of 9.5 GHz) expected for spheres can be understood by taking into account dipolar forces, magnetoelasticity, or magnetocrystalline anisotropy. All samples show non-zero low-field absorption. For Series A samples the grain size decreases with an increase of the Zn content; in this case zero-field absorption does not correlate with the changes in grain size. For Series B samples the grain size and zero-field absorption correlate with each other. The highest zero-field absorption corresponded to a 0.2 zinc concentration in both the A and B series. The high zero-field absorption of Fe3O4 ferrite magnetic NPs can be interesting for biomedical applications.
Fractal Dimension Analysis of Transient Visual Evoked Potentials: Optimisation and Applications.
Boon, Mei Ying; Henry, Bruce Ian; Chu, Byoung Sun; Basahi, Nour; Suttle, Catherine May; Luu, Chi; Leung, Harry; Hing, Stephen
2016-01-01
The visual evoked potential (VEP) provides a time series signal response to an external visual stimulus at the location of the visual cortex. The major VEP signal components, peak latency and amplitude, may be affected by disease processes. Additionally, the VEP contains fine, detailed, non-periodic structure, of presently unclear relevance to normal function, which may be quantified using the fractal dimension. The purpose of this study is to provide a systematic investigation of the key parameters in the measurement of the fractal dimension of VEPs, and to develop an optimal analysis protocol for application. VEP time series were mathematically transformed using delay time, τ, and embedding dimension, m, parameters. The fractal dimension of the transformed data was obtained from a scaling analysis based on straight-line fits to the number of pairs of points with separation less than r versus log(r) in the transformed space. Optimal τ, m, and scaling analysis were obtained by comparing the consistency of results across different sampling frequencies. The optimised method was then piloted on samples of normal and abnormal VEPs. Consistent fractal dimension estimates were obtained using τ = 4 ms, taking the fractal dimension as D2 of the time series based on embedding dimension m = 7 (for 3606 Hz and 5000 Hz), m = 6 (for 1803 Hz) and m = 5 (for 1000 Hz), and estimating D2 for each embedding dimension as the steepest slope of the linear scaling region in the plot of log(C(r)) vs log(r), provided the scaling region occurred within the middle third of the plot. Piloting revealed that fractal dimensions were higher for the sampled abnormal than normal achromatic VEPs in adults (p = 0.02). Variances of the fractal dimension were higher for the abnormal than normal chromatic VEPs in children (p = 0.01). A useful analysis protocol to assess the fractal dimension of transformed VEPs has been developed.
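The protocol described above is essentially a Grassberger-Procaccia correlation-dimension estimate applied to delay-embedded VEPs. The sketch below is an illustrative reimplementation under simplifying assumptions: the delay is given in samples, and the scaling region is fixed to the middle third of the log(r) range rather than searched for the steepest linear segment.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, m=7, tau=4):
    """Estimate D2 via delay embedding and the Grassberger-Procaccia
    correlation sum C(r): D2 is the slope of log C(r) vs log r, fitted
    here over the middle third of the log(r) range.
    tau is in samples (tau = 4 samples ~ 4 ms at 1 kHz sampling)."""
    n = len(x) - (m - 1) * tau
    # delay embedding: each row is an m-dimensional state vector
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    d = pdist(emb)                                   # all pairwise distances
    r = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 30)
    C = np.array([np.mean(d < ri) for ri in r])      # correlation sum
    logr, logC = np.log(r), np.log(np.clip(C, 1e-12, None))
    mid = slice(len(r) // 3, 2 * len(r) // 3)        # middle-third fit
    return np.polyfit(logr[mid], logC[mid], 1)[0]
```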
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng
2017-02-01
To detect incipient failures of rolling bearings in a timely manner and locate faults accurately, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEn of coarse-grained time series, which represents the system dynamics at different scales. However, MFE values are affected by the data length, especially when the data are not long enough. By combining the information of multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, as the scale factor increases, CMFE yields much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical reasons why CMFE is suitable for rolling bearing fault diagnosis are also explored. Based on these, to achieve automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method is applied to experimental data, and the results indicate that it can effectively distinguish different fault categories and severities of rolling bearings.
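To make the coarse-graining idea concrete, the sketch below implements FuzzyEn and a composite multiscale variant that averages FuzzyEn over all offset coarse-grainings at a given scale. It is a schematic reading of the CMFE construction with commonly used default parameters (m = 2, r = 0.15 SD, exponential membership), not the authors' exact formulation.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.15, p=2):
    """FuzzyEn: -ln of the ratio of average exponential similarity
    degrees between embedded vectors of dimension m+1 and m.
    Vectors are mean-centered; distances are Chebyshev."""
    tol = r * np.std(x)
    def phi(dim):
        n = len(x) - dim
        v = np.array([x[i:i + dim] for i in range(n)])
        v -= v.mean(axis=1, keepdims=True)          # remove local baseline
        d = np.max(np.abs(v[:, None] - v[None, :]), axis=2)
        sim = np.exp(-(d ** p) / tol)               # fuzzy membership degree
        np.fill_diagonal(sim, 0.0)                  # exclude self-matches
        return sim.sum() / (n * (n - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

def cmfe(x, scale, m=2, r=0.15):
    """Composite MFE: average FuzzyEn over all 'scale' coarse-grained
    series obtained by starting the averaging at different offsets."""
    vals = []
    for k in range(scale):
        n = (len(x) - k) // scale
        cg = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(fuzzy_entropy(cg, m, r))
    return float(np.mean(vals))
```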
Identification of flood-rich and flood-poor periods in flood series
NASA Astrophysics Data System (ADS)
Mediero, Luis; Santillán, David; Garrote, Luis
2015-04-01
Recently, a general concern about the non-stationarity of flood series has arisen, as changes in catchment response can be driven by several factors, such as climatic and land-use changes. Several studies to detect trends in flood series at either national or trans-national scales have been conducted. Trends are usually detected by the Mann-Kendall test. However, the results of this test depend on the starting and ending year of the series, which can lead to different results depending on the period considered. In particular, the results can be conditioned by flood-poor and flood-rich periods located at the beginning or end of the series. A methodology to identify statistically significant flood-rich and flood-poor periods is developed, based on the comparison between the expected sampling variability of floods when stationarity is assumed and the observed variability of floods in a given series. The methodology is applied to a set of long series of annual maximum floods, peaks over threshold and counts of annual occurrences in peaks-over-threshold series observed in Spain in the period 1942-2009. Mediero et al. (2014) found a general decreasing trend in flood series in some parts of Spain that could be caused by a flood-rich period observed in 1950-1970, placed at the beginning of the flood series. The results of this study support the findings of Mediero et al. (2014), as a flood-rich period in 1950-1970 was identified at most of the selected sites. References: Mediero, L., Santillán, D., Garrote, L. and Granados, A., 2014: Detection and attribution of trends in magnitude, frequency and timing of floods in Spain, Journal of Hydrology, 517, 1072-1088.
State-space modeling of population sizes and trends in Nihoa Finch and Millerbird
Gorresen, P. Marcos; Brinck, Kevin W.; Camp, Richard J.; Farmer, Chris; Plentovich, Sheldon M.; Banko, Paul C.
2016-01-01
Both of the 2 passerines endemic to Nihoa Island, Hawai‘i, USA—the Nihoa Millerbird (Acrocephalus familiaris kingi) and Nihoa Finch (Telespiza ultima)—are listed as endangered by federal and state agencies. Their abundances have been estimated by irregularly implemented fixed-width strip-transect sampling from 1967 to 2012, from which area-based extrapolation of the raw counts produced highly variable abundance estimates for both species. To evaluate an alternative survey method and improve abundance estimates, we conducted variable-distance point-transect sampling between 2010 and 2014. We compared our results to those obtained from strip-transect samples. In addition, we applied state-space models to derive improved estimates of population size and trends from the legacy time series of strip-transect counts. Both species were fairly evenly distributed across Nihoa and occurred in all or nearly all available habitat. Population trends for Nihoa Millerbird were inconclusive because of high within-year variance. Trends for Nihoa Finch were positive, particularly since the early 1990s. Distance-based analysis of point-transect counts produced mean estimates of abundance similar to those from strip-transects but was generally more precise. However, both survey methods produced biologically unrealistic variability between years. State-space modeling of the long-term time series of abundances obtained from strip-transect counts effectively reduced uncertainty in both within- and between-year estimates of population size, and allowed short-term changes in abundance trajectories to be smoothed into a long-term trend.
Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions
Li, Zuojin; Li, Shengbo Eben; Li, Renjie; Cheng, Bo; Shi, Jinliang
2017-01-01
This paper presents an online drowsiness detection system for monitoring driver fatigue level under real driving conditions, based on steering wheel angle (SWA) data collected from sensors mounted on the steering lever. The proposed system first extracts approximate entropy (ApEn) features from fixed sliding windows on the real-time SWA time series. It then linearizes the ApEn feature series through adaptive piecewise linear fitting with a given deviation, and calculates the warping distance between the linear feature series of the sample data. Finally, the system uses the warping distance to determine the drowsiness state of the driver with a binary decision classifier. The experimental data were collected from 14.68 h of driving under real road conditions, covering two fatigue levels: “awake” and “drowsy”. The results show that the proposed system is capable of working online with an average accuracy of 78.01%, with 29.35% false detections of the “awake” state and 15.15% false detections of the “drowsy” state. The results also confirm that the proposed SWA-based method is valuable for applications in preventing traffic accidents caused by driver fatigue. PMID:28257094
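The feature extraction stage, ApEn over fixed sliding windows of the steering signal, is straightforward to sketch. Below is a minimal illustration; the window length, step and tolerance are hypothetical, and the piecewise linear fitting and warping-distance classifier stages are omitted.

```python
import numpy as np

def approx_entropy(u, m=2, r=None):
    """Approximate entropy (Pincus) of a 1-D window u."""
    if r is None:
        r = 0.2 * np.std(u)                          # common tolerance choice
    def phi(dim):
        n = len(u) - dim + 1
        v = np.array([u[i:i + dim] for i in range(n)])
        d = np.max(np.abs(v[:, None] - v[None, :]), axis=2)  # Chebyshev
        c = (d <= r).mean(axis=1)                    # self-match included
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

def apen_features(swa, win=200, step=50):
    """ApEn computed over fixed sliding windows of the SWA series."""
    return np.array([approx_entropy(swa[s:s + win])
                     for s in range(0, len(swa) - win + 1, step)])
```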
Optical Properties of Bismuth Tellurite Based Glass
Oo, Hooi Ming; Mohamed-Kamari, Halimah; Wan-Yusoff, Wan Mohd Daud
2012-01-01
A series of binary tellurite-based glasses (Bi2O3)x(TeO2)100−x was prepared by the melt quenching method. The density, molar volume and refractive index increase with Bi3+ content; this is due to the increased polarization of the Bi3+ ions and the enhanced formation of non-bridging oxygen (NBO). Fourier transform infrared spectroscopy (FTIR) results show the bonding of the glass samples, and the optical band gap Eopt decreases while the refractive index increases as the Bi3+ content increases. PMID:22605999
NASA Astrophysics Data System (ADS)
Clark, Tara R.; Zhao, Jian-xin; Feng, Yue-xing; Done, Terry J.; Jupiter, Stacy; Lough, Janice; Pandolfi, John M.
2012-02-01
The main limiting factor in obtaining precise and accurate uranium-series (U-series) ages of corals that lived during the last few hundred years is the ability to constrain and correct for initial thorium-230 (230Th_0), which is proportionally much higher in younger samples. This is becoming particularly important in palaeoecological research, where accurate chronologies based on the 230Th chronometer are required to pinpoint changes in coral community structure and the timing of mortality events in recent time (e.g. since European settlement of northern Australia in the 1850s). In this study, thermal ionisation mass spectrometry (TIMS) U-series dating of 43 samples of known ages collected from living Porites spp. from the far northern, central and southern inshore regions of the Great Barrier Reef (GBR) was performed to spatially constrain initial 230Th/232Th (230Th/232Th_0) variability. In these living Porites corals, the majority of 230Th/232Th_0 values fell within error of the conservative bulk Earth 230Th/232Th atomic value of 4.3 ± 4.3 × 10^-6 (2σ) generally assumed for 230Th_0 corrections where the primary source is terrestrially derived. However, the results of this study demonstrate that the accuracy of 230Th ages can be further improved by using locally determined 230Th/232Th_0 values for correction, supporting the conclusion made by Shen et al. (2008) for the Western Pacific. Despite samples being taken from regions adjacent to contrasting levels of land modification, no significant differences were found in 230Th/232Th_0 between regions exposed to varying levels of sediment during river runoff events. Overall, 39 of the total 43 230Th/232Th_0 atomic values measured in samples from inshore reefs across the entire region show a normal distribution ranging from 3.5 ± 1.1 to 8.1 ± 1.1 × 10^-6, with a weighted mean of 5.76 ± 0.34 × 10^-6 (2σ, MSWD = 8.1). Considering the scatter of the data, the weighted mean value with a more conservative assigned error of 25% (i.e. 5.8 ± 1.4 × 10^-6), which encompasses the full variation of the 39 230Th/232Th_0 measurements, is recommended as a more appropriate value for initial 230Th corrections for U-series dating of most Porites samples from inshore regions of the GBR. This will result in significant improvement in both the precision and accuracy of the corrected 230Th ages relative to those based on the assumed bulk Earth 230Th/232Th_0 value of 4.3 ± 4.3 × 10^-6. However, several anomalously high 230Th/232Th_0 values, reaching up to 28.0 ± 1.6 × 10^-6 and occasionally found in coral annual bands coinciding with El Niño years, imply high-230Th/232Th_0 sources and highlight the complexities of understanding 230Th/232Th_0 variability. For U-series dating of young coral samples from sites where such anomalous 230Th/232Th_0 values occur, we suggest replicate dating of multiple growth bands with known age difference to verify age accuracy.
Adventures in Modern Time Series Analysis: From the Sun to the Crab Nebula and Beyond
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey
2014-01-01
With the generation of long, precise, and finely sampled time series, the Age of Digital Astronomy is uncovering and elucidating energetic dynamical processes throughout the Universe. Fulfilling these opportunities requires effective data analysis techniques that rapidly and automatically implement advanced concepts. The Time Series Explorer, under development in collaboration with Tom Loredo, provides tools ranging from simple but optimal histograms to time- and frequency-domain analysis for arbitrary data modes with any time sampling. Much of this development owes its existence to Joe Bredekamp and the encouragement he provided over several decades. Sample results for solar chromospheric activity, gamma-ray activity in the Crab Nebula, active galactic nuclei and gamma-ray bursts will be displayed.
HOMPRA Europe - A gridded precipitation data set from European homogenized time series
NASA Astrophysics Data System (ADS)
Rustemeier, Elke; Kapala, Alice; Meyer-Christoffer, Anja; Finger, Peter; Schneider, Udo; Venema, Victor; Ziese, Markus; Simmer, Clemens; Becker, Andreas
2017-04-01
Reliable monitoring data are essential for robust analyses of climate variability and, in particular, long-term trends. In this regard, a gridded, homogenized data set of monthly precipitation totals - HOMPRA Europe (HOMogenized PRecipitation Analysis of European in-situ data) - is presented. The data base consists of 5373 homogenized monthly time series, a carefully selected subset held by the Global Precipitation Climatology Centre (GPCC). The chosen series cover the period 1951-2005 and contain less than 10% missing values. Due to the large number of series, an automatic algorithm had to be developed for their homogenization. In principle, the algorithm is based on three steps: * Selection of overlapping station networks in the same precipitation regime, based on rank correlation and Ward's method of minimal variance; since the underlying time series should be as homogeneous as possible, the station selection is carried out on deterministic first derivatives in order to reduce artificial influences. * Temporal removal of the natural variability and trends by means of highly correlated neighboring time series, in order to detect artificial break-points in the annual totals; this ensures that only artificial changes can be detected. The method is based on the algorithm of Caussinus and Mestre (2004). * In the last step, the detected breaks are corrected monthly by means of a multiple linear regression (Mestre, 2003). Due to the automation of the homogenization, validation of the algorithm is essential. The method was therefore tested on artificial data sets, and its sensitivity was tested by varying the neighborhood series. Where available in digitized form, the station history was also used to search for systematic errors in the jump detection. Finally, the actual HOMPRA Europe product is produced by interpolation of the homogenized series onto a 1° grid, using one of the interpolation schemes operational at GPCC (Becker et al., 2013; Schamm et al., 2014). References: Caussinus, H. and Mestre, O., 2004: Detection and correction of artificial shifts in climate series, Journal of the Royal Statistical Society, Series C (Applied Statistics), 53(3), 405-425. Mestre, O., 2003: Correcting climate series using ANOVA technique, Proceedings of the fourth seminar for homogenization and quality control in climatological databases. Willmott, C., Rowe, C. and Philpot, W., 1985: Small-scale climate maps: a sensitivity analysis of some common assumptions associated with grid-point interpolation and contouring, The American Cartographer, 12, 5-16. Becker, A., Finger, P., Meyer-Christoffer, A., Rudolf, B., Schamm, K., Schneider, U. and Ziese, M., 2013: A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901-present, Earth System Science Data, 5, 71-99. Schamm, K., Ziese, M., Becker, A., Finger, P., Meyer-Christoffer, A., Schneider, U., Schröder, M. and Stender, P., 2014: Global gridded precipitation over land: a description of the new GPCC First Guess Daily product, Earth System Science Data, 6, 49-60.
NASA Astrophysics Data System (ADS)
Kang, Wonmo; Chen, YungChia; Bagchi, Amit; O'Shaughnessy, Thomas J.
2017-12-01
The material response of biologically relevant soft materials, e.g., extracellular matrix or cell cytoplasm, at high rate loading conditions is becoming increasingly important for emerging medical implications including the potential of cavitation-induced brain injury or cavitation created by medical devices, whether intentional or not. However, accurately probing soft samples remains challenging due to their delicate nature, which often excludes the use of conventional techniques requiring direct contact with a sample-loading frame. We present a drop-tower-based method, integrated with a unique sample holder and a series of effective springs and dampers, for testing soft samples with an emphasis on high-rate loading conditions. Our theoretical studies on the transient dynamics of the system show that well-controlled impacts between a movable mass and sample holder can be used as a means to rapidly load soft samples. For demonstrating the integrated system, we experimentally quantify the critical acceleration that corresponds to the onset of cavitation nucleation for pure water and 7.5% gelatin samples. This study reveals that 7.5% gelatin has a significantly higher, approximately double, critical acceleration as compared to pure water. Finally, we have also demonstrated a non-optical method of detecting cavitation in soft materials by correlating cavitation collapse with structural resonance of the sample container.
Price-volume multifractal analysis and its application in Chinese stock markets
NASA Astrophysics Data System (ADS)
Yuan, Ying; Zhuang, Xin-tian; Liu, Zhi-ying
2012-06-01
An empirical study of the Chinese stock markets is conducted using statistical tools. First, the multifractality of the stock price return series r_i (r_i = ln(P_{i+1}) − ln(P_i)) and of the trading volume variation series v_i (v_i = ln(V_{i+1}) − ln(V_i)) is confirmed using multifractal detrended fluctuation analysis. Furthermore, a multifractal detrended cross-correlation analysis between stock price return and trading volume variation in the Chinese stock markets is conducted, showing that the cross relationship between them is also multifractal. Second, the cross-correlation between stock price P_i and trading volume V_i is studied empirically using the cross-correlation function and detrended cross-correlation analysis. Both the Shanghai and Shenzhen stock markets show pronounced long-range cross-correlations between stock price and trading volume. Third, a composite index R based on price and trading volume is introduced. Compared with the return series r_i and the volume variation series v_i, the R variation series not only retains the characteristics of the original series but also captures the relative correlation between stock price and trading volume. Finally, we analyze the multifractal characteristics of the R variation series before and after three financial events in China (namely, Price Limits, the Reform of Non-tradable Shares and the 2008 financial crisis) over the whole sample period, in order to study changes in stock market fluctuation and financial risk. The empirical results verify the validity of R.
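Multifractal detrended fluctuation analysis (MF-DFA), the workhorse used above, reduces to a few array operations. The sketch below computes generalized Hurst exponents h(q) under standard simplifying choices (non-overlapping segments taken from the start of the series only, polynomial detrending of fixed order); a q-dependent h(q) indicates multifractality.

```python
import numpy as np

def mfdfa_hq(x, scales, q_list, order=1):
    """Generalized Hurst exponents h(q) via MF-DFA: for each scale s,
    detrend the profile segment-wise with a polynomial of given order,
    form the q-th order fluctuation function Fq(s), and regress
    log Fq(s) on log s."""
    y = np.cumsum(x - np.mean(x))                  # profile
    F = np.zeros((len(q_list), len(scales)))
    for j, s in enumerate(scales):
        nseg = len(y) // s
        segs = y[:nseg * s].reshape(nseg, s)
        t = np.arange(s)
        # residual variance of each polynomially detrended segment
        f2 = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                       for seg in segs])
        for i, q in enumerate(q_list):
            if q == 0:                             # limit case q -> 0
                F[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                F[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return np.array([np.polyfit(np.log(scales), np.log(F[i]), 1)[0]
                     for i in range(len(q_list))])

# Example call with arbitrary scales and moments:
# h = mfdfa_hq(returns, scales=[16, 32, 64, 128], q_list=[-4, -2, 2, 4])
```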
U-Pb SHRIMP dating of uraniferous opals
Nemchin, A.A.; Neymark, L.A.; Simons, S.L.
2006-01-01
U-Pb and U-series analyses of four U-rich opal samples using sensitive high-resolution ion microprobe (SHRIMP) demonstrate the potential of this technique for the dating of opals with ages ranging from several tens of thousands of years to millions of years. The major advantages of the technique, compared to conventional thermal ionisation mass spectrometry (TIMS), are the high spatial resolution (~20 μm), the ability to analyse in situ all isotopes required to determine both U-Pb and U-series ages, and a relatively short analysis time, which allows the growth rate of an opal to be obtained in a single SHRIMP session. There are two major limitations to this method, determined by both the current level of development of ion probes and the understanding of ion sputtering processes. First, sufficient secondary ion beam intensities can only be obtained for opal samples with U concentrations in excess of ~20 μg/g. However, this restriction still permits dating of a large variety of opals. Second, U-Pb ratios in all analyses drifted with time and were only weakly correlated with changes in other ratios (such as U/UO). This drift, which is difficult to correct for, remains the main factor currently limiting the precision and accuracy of U-Pb SHRIMP opal ages. Nevertheless, an assumption of similar behaviour of standard and unknown opals under similar analytical conditions allowed successful determination of ages with precisions of ~10% for the samples investigated in this study. SHRIMP-based U-series and U-Pb ages are consistent with TIMS dating results of the same materials and with known geological timeframes. © 2005 Elsevier B.V. All rights reserved.
KRAS mutations in blood circulating cell-free DNA: a pancreatic cancer case-control
Le Calvez-Kelm, Florence; Foll, Matthieu; Wozniak, Magdalena B.; Delhomme, Tiffany M.; Durand, Geoffroy; Chopard, Priscilia; Pertesi, Maroulio; Fabianova, Eleonora; Adamcakova, Zora; Holcatova, Ivana; Foretova, Lenka; Janout, Vladimir; Vallee, Maxime P.; Rinaldi, Sabina; Brennan, Paul; McKay, James D.; Byrnes, Graham B.; Scelo, Ghislaine
2016-01-01
The utility of KRAS mutations in plasma circulating cell-free DNA (cfDNA) samples as non-invasive biomarkers for the detection of pancreatic cancer has never been evaluated in a large case-control series. We applied a KRAS amplicon-based deep sequencing strategy, combined with an analytical pipeline specifically designed for the detection of low-abundance mutations, to screen plasma samples of 437 pancreatic cancer cases, 141 chronic pancreatitis subjects, and 394 healthy controls. We detected mutations in 21.1% (N=92) of cases, of whom 82 (89.1%) carried at least one mutation at hotspot codons 12, 13 or 61, with mutant allelic fractions from 0.08% to 79%. Advanced stages were associated with an increased proportion of detection, with KRAS cfDNA mutations detected in 10.3%, 17.5% and 33.3% of cases with local, regional and systemic stages, respectively. We also detected KRAS cfDNA mutations in 3.7% (N=14) of healthy controls and in 4.3% (N=6) of subjects with chronic pancreatitis, but at significantly lower allelic fractions than in cases. Combining cfDNA KRAS mutations and CA19-9 plasma levels on a limited set of case-control samples did not improve the overall performance of the biomarkers compared to CA19-9 alone. Whether the limited sensitivity and specificity observed in our series for KRAS mutations in plasma cfDNA as biomarkers for pancreatic cancer detection are attributable to methodological limitations or to the biology of cfDNA should be further assessed in large case-control series. PMID:27705932
As part of a research team focused on aquatic toxicity testing using fathead minnows as a model species, this presentation is the second of a three-part series, giving an overview of the types of field and laboratory studies as well as sample processing our team conducts at the U...
The Inclusion of Life Skills in English Textbooks in Jordan
ERIC Educational Resources Information Center
Al Masri, Amaal; Smadi, Mona; Aqel, Amal; Hamed, Wafaa'
2016-01-01
This study aimed at analyzing Action Pack English textbooks' texts based on the availability of life skills for 5th, 6th and 7th grades, and to determine the frequencies and percentages of the life skills present in each text. The sample of the study was English language textbooks for Action Pack series for the 5th, 6th, and 7th grades. The life…
Response of six non-native invasive plant species to wildfires in the northern Rocky Mountains, USA
Dennis E. Ferguson; Christine L. Craig
2010-01-01
This paper presents early results on the response of six non-native invasive plant species to eight wildfires on six National Forests (NFs) in the northern Rocky Mountains, USA. Stratified random sampling was used to choose 224 stands based on burn severity, habitat type series, slope steepness, stand height, and stand density. Data for this report are from 219 stands...
ERIC Educational Resources Information Center
Board, Kathryn; Tinsley, Teresa
2014-01-01
The Language Trends survey 2013/4 is the 12th in a series of annual research exercises charting the health of language teaching and learning in English schools. The findings are based on an online survey completed by teachers in a large sample of secondary schools across the country from both the state and independent sectors. In 2012, and again…
ERIC Educational Resources Information Center
Barrows, Samuel; Cheng, Albert; Peterson, Paul E.; West, Martin R.
2017-01-01
This first report of charter school parents' perceptions based on nationally representative samples finds that charter parents, as compared with parents from district schools, are less likely to see serious problems at their children's school, report more extensive communications with the school, and are more satisfied with most aspects of the…
Reliability of a Measure of Institutional Discrimination against Minorities
1979-12-01
Two methods of dealing with the problem of the reliability of the measure in small samples are presented: the first is based upon classical statistical theory, and the second derives from a series of computer-generated Monte Carlo simulations. Properties of a statistical measure of the degree of institutional discrimination are discussed.
Development of a Portable Test Kit for Field-Screening Paints
1986-01-01
TT-P-002119, Paint, Latex Base, High Traffic Areas, Flat and Eggshell Finish (Low Lustre, For Interior Use). Eggshell or flat surfaces have more pigment than vehicle. The methods are designed for testing uniformity in different laboratories and are useful for determining the gloss of eggshell, semigloss, and glossy finishes. Samples (Table 12) were selected from the series to represent the range of gloss.
ERIC Educational Resources Information Center
Loucks, Susan F.; And Others
Based on a local site sample of 146 school districts, this volume (the second in a series of 10) describes school improvement efforts supported by 4 different federal strategies and representative programs: interpersonal linkage of validated practices (National Diffusion Network), commercial distribution (Bureau of Education for the Handicapped…
CITE 3 meteorological highlights
NASA Technical Reports Server (NTRS)
Shipham, Mark C.; Bachmeier, A. Scott; Anderson, Bruce E.
1993-01-01
Meteorological highlights from the third NASA Global Tropospheric Experiment Chemical Instrumentation Test and Evaluation (GTE/CITE 3) are presented. During August and September 1989, research flights were conducted from Wallops Island, Virginia, and Natal, Brazil, and included airborne sampling of air masses over adjacent regions of the Atlantic Ocean. Isentropic backward trajectory calculations, wind vector/streamline fields, rawinsonde data, and GOES and METEOSAT satellite imagery are utilized to examine the meteorological conditions for each flight and to determine the transport paths of the sampled air masses. Some aspects of the chemical signatures of the sampled air are also discussed. During the series of flights based at Wallops Island, Virginia, the flow into the experiment area was governed primarily by the position of the North Atlantic subtropical anticyclone. The large-scale tropospheric circulation switched from primarily a marine flow during flights 1-4, to a predominantly offshore mid-latitude continental flow during flights 5-10. During these later flights, the regional influences of large eastern U.S. cities along with vertical mixing by typical summertime convective activity strongly influenced the chemical characteristics of the sampled air. During the series of flights based at Natal, Brazil, the dominant synoptic feature was the South Atlantic subtropical anticyclone which generally transported air across the tropical Atlantic toward eastern Brazil. Pronounced subsidence and a well-defined trade wind inversion often characterized the lower and middle troposphere over the Natal region. Some high-altitude recirculation of air from South America was observed, as was cross-equatorial transport which had come from northern Africa. Biomass burning plumes were observed on segments of all of the flights, the source region being the central and southern savannah regions of Africa.
Statistical properties of Fourier-based time-lag estimates
NASA Astrophysics Data System (ADS)
Epitropakis, A.; Papadakis, I. E.
2016-06-01
Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e. close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e. based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e. whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e. binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, the time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈1/2 the Nyquist frequency to minimise the effects of discrete binning of the observed time series; b) smoothing of the cross-periodogram should be avoided, as this may introduce significant bias to the time-lag estimates, which can be taken into account by assuming a model cross-spectrum (and not just a model time-lag spectrum); c) time-lags should be estimated by dividing the observed time series into a number, say m, of shorter data segments and averaging the resulting cross-periodograms; d) if the data segments have a duration ≳20 ks, the time-lag bias is ≲15% of the intrinsic value for the model cross-spectra and power-spectra considered in this work; this bias should be estimated in practice (by considering possible intrinsic cross-spectra that may be applicable to the time-lag spectra at hand) to assess the reliability of any time-lag analysis; e) the effects of experimental noise can be minimised by only estimating time-lags in the frequency range where the sample coherence is larger than 1.2/(1 + 0.2m). In this range, the amplitude of noise variations caused by measurement errors is smaller than the amplitude of the signal's intrinsic variations. As long as m ≳ 20, time-lags estimated by averaging over individual data segments have analytical error estimates that are within 95% of the true scatter around their mean, and their distribution is similar, albeit not identical, to a Gaussian.
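Recommendation (c), averaging cross-periodograms over m data segments, is easy to make concrete. The sketch below is an illustrative implementation for evenly sampled light curves, returning the phase-derived lag and the sample coherence used in criteria (d) and (e); it is not the authors' code.

```python
import numpy as np

def timelag_spectrum(x, y, dt, m):
    """Time-lags from the segment-averaged cross-periodogram of two
    evenly sampled series: split into m segments, average the cross
    spectra (no frequency smoothing), and convert phase to lag."""
    seg = len(x) // m
    freqs = np.fft.rfftfreq(seg, dt)[1:]          # drop zero frequency
    Cxy = Pxx = Pyy = 0
    for k in range(m):
        xs, ys = x[k*seg:(k+1)*seg], y[k*seg:(k+1)*seg]
        X = np.fft.rfft(xs - xs.mean())[1:]
        Y = np.fft.rfft(ys - ys.mean())[1:]
        Cxy = Cxy + np.conj(X) * Y                # cross-periodogram
        Pxx = Pxx + np.abs(X) ** 2
        Pyy = Pyy + np.abs(Y) ** 2
    coherence = np.abs(Cxy) ** 2 / (Pxx * Pyy)    # sample coherence
    lag = np.angle(Cxy) / (2 * np.pi * freqs)     # phase -> time lag
    return freqs, lag, coherence
```

Following the paper's guidelines, one would then keep only frequencies below roughly half the Nyquist frequency and where the sample coherence exceeds 1.2/(1 + 0.2m).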
Microchip Module for Blood Sample Preparation and Nucleic Acid Amplification Reactions
Yuen, Po Ki; Kricka, Larry J.; Fortina, Paolo; Panaro, Nicholas J.; Sakazume, Taku; Wilding, Peter
2001-01-01
A computer numerical control-machined plexiglas-based microchip module was designed and constructed for the integration of blood sample preparation and nucleic acid amplification reactions. The microchip module comprises a custom-made heater-cooler for thermal cycling and a series of 254 μm × 254 μm microchannels for transporting human whole blood and reagents in and out of an 8–9 μL dual-purpose (cell isolation and PCR) glass-silicon microchip. White blood cells were first isolated from a small volume of human whole blood (<3 μL) in an integrated cell isolation–PCR microchip containing a series of 3.5-μm feature-sized “weir-type” filters, formed by an etched silicon dam spanning the flow chamber. A genomic target, a region of the human coagulation Factor V gene (226 bp), was subsequently amplified directly by microchip-based PCR on DNA released from the white blood cells isolated on the filter section of the microchip mounted on the microchip module. The microchip module provides a convenient means of simplifying nucleic acid analyses by integrating two key steps in genetic testing procedures, cell isolation and PCR, and promises to be adaptable for additional types of integrated assays. PMID:11230164
Occurrence analysis of daily rainfalls by using non-homogeneous Poissonian processes
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Ferrari, E.; de Luca, D. L.
2009-09-01
In recent years several temporally homogeneous stochastic models have been applied to describe the rainfall process. In particular, stochastic analysis of daily rainfall time series may help explain the statistical features of the temporal variability of the phenomenon. Due to the evident periodicity of the physical process, these models should be applied only over short temporal intervals in which the occurrences and intensities of rainfall can reliably be considered homogeneous. To this aim, occurrences of daily rainfall can be treated as a stationary stochastic process within monthly periods. In this context, point process models are widely used for at-site analysis of daily rainfall occurrence; they are continuous-time models, able to capture the intermittent nature of rainfall and to simulate interstorm periods. With a different approach, the periodic features of daily rainfall can be represented by a temporally non-homogeneous stochastic model whose parameters are expressed as continuous functions of time. In this case, great attention has to be paid to the parsimony of the model, as regards the number of parameters and the bias introduced into the generation of synthetic series, and to the influence of the threshold values used to extract the peak storm database from recorded daily rainfall heights. In this work, a stochastic model based on a non-homogeneous Poisson process, characterized by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects in daily rainfalls exceeding prefixed threshold values. In particular, the variation of the rainfall occurrence intensity λ(t) is modelled using Fourier series analysis, in which the non-homogeneous process is transformed into a homogeneous unit-rate one through a proper transformation of the time domain, and the minimum number of harmonics is chosen by applying available statistical tests. The procedure is applied to a dataset of rain gauges located in different geographical zones of the Mediterranean area. Time series were selected on the basis of the availability of at least 50 years of data in the period 1921-1985, chosen as the calibration period, and of all years of observation in the subsequent validation period 1986-2005, for which the variability of the daily rainfall occurrence process is under test. First, for each time series and for each fixed threshold value, parameter estimation for the non-homogeneous Poisson model is carried out over the calibration period. As a second step, in order to test the hypothesis that the daily rainfall occurrence process preserves the same behaviour in more recent time periods, the intensity distribution evaluated for the calibration period is also adopted for the validation period. Starting from this, and using a Monte Carlo approach, 1000 synthetic generations of daily rainfall occurrences, of length equal to the validation period, were carried out, and for each simulation the sample λ(t) was evaluated. This procedure is adopted because of the complexity of deriving analytical statistical confidence limits for the sample intensity λ(t).
Finally, the sample intensity, the theoretical function of the calibration period, and the 95% statistical band evaluated by the Monte Carlo approach are compared; in addition, for each threshold value, the mean square error (MSE) between the theoretical λ(t) and the sample one from the recorded data is considered, together with its corresponding 95% one-tailed statistical band, estimated from the MSE values between the sample λ(t) of each synthetic series and the theoretical one. The results obtained may be very useful in the context of the identification and calibration of stochastic rainfall models based on historical precipitation data. Further applications of the non-homogeneous Poisson model will concern joint analyses of the storm occurrence process with the rainfall height marks, interpreted by using a temporally homogeneous model in proper sub-year intervals.
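To make the occurrence model concrete, here is a minimal sketch that simulates a non-homogeneous Poisson process whose intensity λ(t) is a truncated Fourier series. The abstract describes a time-rescaling transformation; the sketch below instead uses the equivalent Lewis-Shedler thinning algorithm for simplicity, and all coefficient values and function names are illustrative placeholders.

```python
import numpy as np

def fourier_intensity(t, a0, a, b, period=365.0):
    # Occurrence intensity lambda(t) built from a truncated Fourier series;
    # a0, a[k], b[k] are illustrative placeholder coefficients.
    k = np.arange(1, len(a) + 1)
    w = 2 * np.pi * np.outer(np.atleast_1d(t), k) / period
    lam = a0 + np.cos(w) @ a + np.sin(w) @ b
    return np.maximum(lam, 0.0)   # an intensity must be non-negative

def simulate_nhpp(a0, a, b, t_max, rng=None):
    # Simulate event times on [0, t_max] by Lewis-Shedler thinning.
    rng = rng or np.random.default_rng()
    lam_max = fourier_intensity(np.linspace(0, t_max, 10000), a0, a, b).max()
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from homogeneous process
        if t > t_max:
            break
        if rng.random() < fourier_intensity(t, a0, a, b)[0] / lam_max:
            events.append(t)                   # accept with prob lambda(t)/lam_max
    return np.array(events)

# e.g. a mildly seasonal process averaging ~0.2 occurrences per day:
times = simulate_nhpp(0.2, np.array([0.1]), np.array([0.05]), t_max=365.0)
```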
Washburne, Alex D.; Burby, Joshua W.; Lacker, Daniel; ...
2016-09-30
Systems as diverse as the interacting species in a community, alleles at a genetic locus, and companies in a market are characterized by competition (over resources, space, capital, etc) and adaptation. Neutral theory, built around the hypothesis that individual performance is independent of group membership, has found utility across the disciplines of ecology, population genetics, and economics, both because of the success of the neutral hypothesis in predicting system properties and because deviations from these predictions provide information about the underlying dynamics. However, most tests of neutrality are weak, based on static system properties such as species-abundance distributions or the number of singletons in a sample. Time-series data provide a window onto a system’s dynamics, and should furnish tests of the neutral hypothesis that are more powerful to detect deviations from neutrality and more informative about the type of competitive asymmetry that drives the deviation. Here, we present a neutrality test for time-series data. We apply this test to several microbial time-series and financial time-series and find that most of these systems are not neutral. Our test isolates the covariance structure of neutral competition, thus facilitating further exploration of the nature of asymmetry in the covariance structure of competitive systems. Much like neutrality tests from population genetics that use relative abundance distributions have enabled researchers to scan entire genomes for genes under selection, we anticipate our time-series test will be useful for quick significance tests of neutrality across a range of ecological, economic, and sociological systems for which time-series data are available. Future work can use our test to categorize and compare the dynamic fingerprints of particular competitive asymmetries (frequency dependence, volatility smiles, etc) to improve forecasting and management of complex adaptive systems.
Chen, Hungyen; Chen, Ching-Yi; Shao, Kwang-Tsao
2018-05-08
Long-term time series datasets with consistent sampling methods are rather rare, especially those of non-target coastal fishes. Here we describe a long-term time series dataset of fish collected by trammel net sampling and observed by an underwater diving visual census near the thermal discharges at two nuclear power plants on the northern coast of Taiwan. Both experimental and control stations of these two investigations were monitored four times per year in the surrounding seas at both plants from 2000 to 2017. The underwater visual census mainly monitored reef fish assemblages, and the trammel net samples monitored pelagic or demersal fishes above the muddy/sandy bottom. In total, 508 samples containing 203,863 individuals from 347 taxa were recorded in both investigations at both plants. These data can be used by ecologists and fishery biologists interested in elucidating the temporal patterns of species abundance and composition.
Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G
2014-09-01
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
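For context, the classical amplitude-based nonlinear prediction error used as the baseline above can be sketched as follows: delay-embed the series, predict each point's future as the mean future of its nearest neighbours in embedding space, and report the RMS error. The rank-based predictability score studied in the paper operates on ranks rather than amplitudes; only the amplitude-based baseline is shown, and all parameter defaults are illustrative.

```python
import numpy as np

def nonlinear_prediction_error(x, dim=3, delay=1, horizon=1, k=5):
    # Delay-embed x into dim-dimensional vectors and predict each point's
    # value `horizon` steps ahead from its k nearest neighbours.
    n = len(x) - (dim - 1) * delay - horizon
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    future = x[(dim - 1) * delay + horizon : (dim - 1) * delay + horizon + n]
    errors = []
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        nn = np.argsort(d)[:k]             # k nearest neighbours
        errors.append(future[i] - future[nn].mean())
    return np.sqrt(np.mean(np.square(errors)))   # RMS prediction error
```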
Belostotsky, Inessa; Gridin, Vladimir V; Schechter, Israel; Yarnitzky, Chaim N
2003-02-01
An improved analytical method for airborne lead traces is reported. It is based on using a Venturi scrubber sampling device for simultaneous thin-film stripping and droplet entrapment of aerosol influxes. At least a threefold enhancement of the lead-trace pre-concentration is achieved. The sampled traces are analyzed by square-wave anodic stripping voltammetry. The method was tested in a series of pilot experiments performed using contaminant-controlled air intakes. Reproducible calibration plots were obtained, and the data were validated by traditional analysis using filter sampling. LODs are comparable with those of conventional techniques. The method was successfully applied to on-line and in situ environmental monitoring of lead.
Multiscale analysis of the intensity fluctuation in a time series of dynamic speckle patterns.
Federico, Alejandro; Kaufmann, Guillermo H
2007-04-10
We propose the application of a method based on the discrete wavelet transform to detect, identify, and measure scaling behavior in dynamic speckle. The multiscale phenomena presented by a sample and displayed by its speckle activity are analyzed by processing the time series of dynamic speckle patterns. The scaling analysis is applied to the temporal fluctuation of the speckle intensity and also to the two derived data sets generated by its magnitude and sign. The application of the method is illustrated by analyzing paint-drying processes and bruising in apples. The results are discussed taking into account the different time organizations obtained for the scaling behavior of the magnitude and the sign of the intensity fluctuation.
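A generic sketch of this kind of scaling analysis, under stated assumptions: decompose the intensity-fluctuation series, and the magnitude and sign series derived from it, with a discrete wavelet transform, then read a scaling exponent off the log-variance of the detail coefficients per level. The wavelet choice, level count and synthetic input are illustrative, not the authors' settings.

```python
import numpy as np
import pywt

def wavelet_scaling(signal, wavelet="db2", max_level=8):
    # Variance of DWT detail coefficients per level; a straight line in
    # log2(variance) vs level indicates scaling behaviour.
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    details = coeffs[1:][::-1]                  # finest to coarsest scale
    variances = [np.var(d) for d in details]
    levels = np.arange(1, len(variances) + 1)
    slope, _ = np.polyfit(levels, np.log2(variances), 1)
    return slope                                # related to the Hurst exponent

# Magnitude/sign decomposition of the intensity fluctuations, as above:
x = np.random.default_rng(0).standard_normal(4096)   # placeholder series
dx = np.diff(x)
h_mag = wavelet_scaling(np.abs(dx))    # magnitude: nonlinear correlations
h_sgn = wavelet_scaling(np.sign(dx))   # sign: linear correlations
```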
On the effects of signal processing on sample entropy for postural control.
Lubetzky, Anat V; Harel, Daphna; Lubetzky, Eyal
2018-01-01
Sample entropy, a measure of time series regularity, has become increasingly popular in postural control research. We are developing a virtual reality assessment of sensory integration for postural control in people with vestibular dysfunction and wished to apply sample entropy as an outcome measure. However, despite the common use of sample entropy to quantify postural sway, we found a lack of consistency in the literature regarding center-of-pressure signal manipulations prior to the computation of sample entropy. We therefore investigated the effect of parameter choice and signal processing on participants' sample entropy outcomes. For that purpose, we compared center-of-pressure sample entropy data between patients with vestibular dysfunction and age-matched controls. Within our assessment, participants observed virtual reality scenes while standing on the floor or on a compliant surface. We then analyzed the effects of: modifying the radius of similarity (r) and the embedding dimension (m); down-sampling; and filtering and differencing versus detrending. When analyzing the raw center-of-pressure data, we found a significant main effect of surface in the medio-lateral and anterior-posterior directions across r's and m's. We also found a significant group × surface interaction in the medio-lateral direction when r was 0.05 or 0.1, with a monotonic increase in p value with increasing r for both m's. These effects were maintained with down-sampling by 2, 3, and 4 and with detrending, but not with filtering and differencing. Based on these findings, we suggest that for sample entropy to be compared across postural control studies, there needs to be increased consistency, particularly in signal handling prior to the calculation of sample entropy. Procedures such as filtering, differencing, or detrending affect sample entropy values and could artificially alter the time series pattern. Therefore, if such procedures are performed they should be well justified.
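For reference, a minimal implementation of sample entropy with the tolerance r expressed as a fraction of the series' standard deviation, which is one common convention; the defaults m = 2 and r = 0.2 are illustrative. Note that any down-sampling, filtering, differencing or detrending applied to the input before this call will change the result, which is precisely the point made above.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): negative log of the conditional probability that two
    # sequences matching for m points also match for m + 1 points.
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def count_matches(length):
        # All overlapping templates of the given length, one per row.
        tpl = np.column_stack([x[i : len(x) - length + i + 1] for i in range(length)])
        count = 0
        for i in range(len(tpl) - 1):
            # Chebyshev distance from template i to all later templates
            # (upper triangle only, so self-matches are excluded).
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count
    b = count_matches(m)        # matches of length m
    a = count_matches(m + 1)    # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```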
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
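As an illustration of the benchmark approach the study favours, the sketch below fits a multiple linear regression of daily volumes on calendar dummies (day-of-week and month) and projects it forward. It assumes a pandas Series with a daily DatetimeIndex, and it omits the site-specific special-day effects and residual-autocorrelation terms the authors recommend adding; names and defaults are illustrative.

```python
import numpy as np
import pandas as pd

def calendar_regression_forecast(arrivals, horizon=7):
    # arrivals: pd.Series of daily ED visit counts with a DatetimeIndex.
    idx = arrivals.index
    def design(dates):
        df = pd.DataFrame({"dow": dates.dayofweek, "month": dates.month})
        return pd.get_dummies(df, columns=["dow", "month"],
                              drop_first=True).astype(float)
    X = design(idx)
    X.insert(0, "const", 1.0)
    beta, *_ = np.linalg.lstsq(X.values, arrivals.values.astype(float),
                               rcond=None)
    future = pd.date_range(idx[-1] + pd.Timedelta(days=1),
                           periods=horizon, freq="D")
    # Align the forecast design matrix with the training columns.
    Xf = design(future).reindex(columns=X.columns, fill_value=0.0)
    Xf["const"] = 1.0
    return pd.Series(Xf.values @ beta, index=future)
```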
Wang, Huiya; Feng, Jun; Wang, Hongyu
2017-07-20
Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm, and a series of fuzzy SVMs are then integrated for classification, with each group containing samples from MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
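A hedged sketch of the grouping idea, assuming scikit-learn: an EM-fitted Gaussian mixture partitions the sample space and one SVM is trained per group. Plain SVMs are used here (the paper's fuzzy membership weighting is omitted), class and parameter names are illustrative, and each group is assumed to contain examples of both classes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

class GroupedSVM:
    # Partition feature space via EM (Gaussian mixture), then classify
    # within each group with its own SVM; a simplified stand-in for G-FSVM.
    def __init__(self, n_groups=3):
        self.gmm = GaussianMixture(n_components=n_groups, random_state=0)
        self.svms = {}

    def fit(self, X, y):
        groups = self.gmm.fit_predict(X)
        for g in np.unique(groups):
            mask = groups == g
            # Assumes both classes occur in every group.
            self.svms[g] = SVC(kernel="rbf").fit(X[mask], y[mask])
        return self

    def predict(self, X):
        groups = self.gmm.predict(X)
        return np.array([self.svms[g].predict(x[None, :])[0]
                         for g, x in zip(groups, X)])
```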
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have increased significantly in the past decade due to the introduction of aberration correction. In parallel with the consequent increase in convergence angle, the depth of focus has decreased severely, and optical sectioning in STEM has become feasible. Here we apply STEM defocus series to derive the local thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high-angle annular dark-field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness, above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series using the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations.
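The core computation lends itself to a short sketch: take the standard deviation of each image as the sharpness metric, then measure the FWHM of that metric over defocus. Everything below (names, the linear interpolation at half maximum, the assumption of ascending defocus values) is illustrative, not the authors' code.

```python
import numpy as np

def thickness_from_defocus_series(images, defocus_values):
    # images: list of 2-D arrays; defocus_values: ascending, same length.
    std = np.array([img.std() for img in images])        # blur metric per image
    background = std.min()
    half = background + 0.5 * (std.max() - background)   # half-maximum level
    above = np.where(std >= half)[0]
    lo, hi = above[0], above[-1]
    def crossing(i0, i1):
        # Linear interpolation of the half-maximum crossing between i0, i1.
        f = (half - std[i0]) / (std[i1] - std[i0])
        return defocus_values[i0] + f * (defocus_values[i1] - defocus_values[i0])
    left = crossing(lo - 1, lo) if lo > 0 else defocus_values[lo]
    right = crossing(hi + 1, hi) if hi < len(std) - 1 else defocus_values[hi]
    return abs(right - left)     # FWHM, equal to the thickness above the minimum
```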
Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E
2011-12-02
The increasing popularity of systems-based approaches to plant research has resulted in a demand for high-throughput (HTP) methods. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. We have therefore established a high-throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96-well plate format using silica membrane-based methodology. Consistent and reproducible yields of high-quality RNA are isolated, averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96-well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally, we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Furthermore, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure, providing a flexible sampling and storage scheme to facilitate complex time series experiments.
NASA Astrophysics Data System (ADS)
Huijse, Pablo; Estévez, Pablo A.; Förster, Francisco; Daniel, Scott F.; Connolly, Andrew J.; Protopapas, Pavlos; Carrasco, Rodrigo; Príncipe, José C.
2018-05-01
The Large Synoptic Survey Telescope (LSST) will produce an unprecedented amount of light curves using six optical bands. Robust and efficient methods that can aggregate data from multidimensional sparsely sampled time-series are needed. In this paper we present a new method for light curve period estimation based on quadratic mutual information (QMI). The proposed method does not assume a particular model for the light curve nor its underlying probability density and it is robust to non-Gaussian noise and outliers. By combining the QMI from several bands the true period can be estimated even when no single-band QMI yields the period. Period recovery performance as a function of average magnitude and sample size is measured using 30,000 synthetic multiband light curves of RR Lyrae and Cepheid variables generated by the LSST Operations and Catalog simulators. The results show that aggregating information from several bands is highly beneficial in LSST sparsely sampled time-series, obtaining an absolute increase in period recovery rate up to 50%. We also show that the QMI is more robust to noise and light curve length (sample size) than the multiband generalizations of the Lomb–Scargle and AoV periodograms, recovering the true period in 10%–30% more cases than its competitors. A Python package containing efficient Cython implementations of the QMI and other methods is provided.
Uranium series dating of Allan Hills ice
NASA Technical Reports Server (NTRS)
Fireman, E. L.
1986-01-01
Uranium-238 decay series nuclides dissolved in Antarctic ice samples were measured in areas of both high and low concentrations of volcanic glass shards. Ice from the Allan Hills site (high shard content) had high Ra-226, Th-230 and U-234 activities but similarly low U-238 activities in comparison with Antarctic ice samples without shards. The Ra-226, Th-230 and U-234 excesses were found to be proportional to the shard content, while the U-238 decay series results were consistent with the assumption that alpha decay products recoiled into the ice from the shards. Through this method of uranium series dating, it was learned that the Allan Hills Cul de Sac ice is approximately 325,000 years old.
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide and conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms the traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and insusceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Ozone Time Series From GOMOS and SAGE II Measurements
NASA Astrophysics Data System (ADS)
Kyrola, E. T.; Laine, M.; Tukiainen, S.; Sofieva, V.; Zawodny, J. M.; Thomason, L. W.
2011-12-01
Satellite measurements are essential for monitoring changes in the global stratospheric ozone distribution. Both the natural variation and anthropogenic change are strongly dependent on altitude. Stratospheric ozone has been measured from space with good vertical resolution since 1985 by the SAGE II solar occultation instrument. The advantage of the occultation measurement principle is its self-calibration, which is essential to ensuring stable time series. SAGE II measurements in 1985-2005 have been a valuable data set in investigations of trends in the vertical distribution of ozone. This time series can now be extended by the GOMOS measurements started in 2002. GOMOS is a stellar occultation instrument and therefore offers a natural continuation of SAGE II measurements. In this paper we study how well GOMOS and SAGE II measurements agree with each other in the period 2002-2005, when both instruments were measuring. We detail how the different spatial and temporal sampling of these two instruments affects the conformity of the measurements. We also study how retrieval specifics such as absorption cross sections and the assumed aerosol model affect the results. Various combined time series are constructed using different estimators and latitude-time grids. We also show preliminary results from a novel time series analysis based on a Markov chain Monte Carlo approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.L.; Pool, K.H.; Evans, J.C.
1997-01-01
This report describes the analytical results of vapor samples taken from the headspace of waste storage tank 241-BY-108 (Tank BY-108) at the Hanford Site in Washington State. This report is the second in a series comparing vapor sampling of the tank headspace using the Vapor Sampling System (VSS) and the In Situ Vapor Sampling (ISVS) system without high-efficiency particulate air (HEPA) prefiltration. The results include air concentrations of water (H2O) and ammonia (NH3), permanent gases, total non-methane organic compounds (TO-12), and individual organic analytes collected in SUMMA™ canisters and on triple sorbent traps (TSTs). Samples were collected by Westinghouse Hanford Company (WHC) and analyzed by Pacific Northwest National Laboratory (PNNL). Analyses were performed by the Vapor Analytical Laboratory (VAL) at PNNL. Analyte concentrations were based on analytical results and, where appropriate, sample volume measurements provided by WHC.
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
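The resampling-simulation idea can be sketched in a few lines. Under stated assumptions (impacts recorded as start/end positions in metres along one censused trail; names illustrative), the code compares census values of lineal extent and frequency of occurrence with their systematic point-sampling estimates at a given interval.

```python
import numpy as np

def point_sampling_accuracy(segments, trail_length, interval):
    # segments: list of (start, end) positions of one impact type, in metres.
    segments = np.asarray(segments, dtype=float)
    census_extent = np.sum(segments[:, 1] - segments[:, 0])   # total metres
    census_freq = len(segments)                               # occurrences
    points = np.arange(0.0, trail_length, interval)           # sample points
    inside = (points[:, None] >= segments[None, :, 0]) & \
             (points[:, None] <= segments[None, :, 1])
    est_extent = inside.any(axis=1).mean() * trail_length     # lineal extent
    est_freq = int(inside.any(axis=0).sum())                  # segments hit
    return {"extent_error_%": 100 * (est_extent - census_extent) / census_extent,
            "freq_error_%": 100 * (est_freq - census_freq) / census_freq}

# e.g. accuracy at 100 m spacing on a 5 km trail with three impact segments:
# point_sampling_accuracy([(120, 180), (900, 905), (3200, 3350)], 5000, 100)
```

Running this over increasing intervals reproduces the qualitative finding above: extent estimates degrade slowly, while frequency of occurrence is increasingly underestimated as short segments fall between sample points.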
Sampling Of SAR Imagery For Wind Resource Assessment
NASA Astrophysics Data System (ADS)
Badger, Merete; Badger, Jake; Hasager, Charlotte; Nielsen, Morten
2010-04-01
Wind resources over the sea can be assessed from a series of wind fields retrieved from Envisat ASAR imagery or other SAR data. Previous wind resource maps have been produced through random sampling of 70 or more satellite scenes over a given area of interest, followed by fitting of a Weibull function to the data. Here we introduce a more advanced sampling strategy based on the wind class methodology that is normally applied in Risø DTU’s numerical modeling of wind resources. The aim is to obtain a more representative data set using fewer satellite SAR scenes. The new sampling strategy has been applied within a wind and solar resource assessment study for the United Arab Emirates (UAE) and also for wind resource mapping over a domain in the North Sea, as part of the EU-NORSEWInD project (2008-2012).
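The Weibull-fitting step mentioned above can be sketched in a few lines, assuming SciPy: fit a two-parameter Weibull (location fixed at zero, as is conventional for wind speeds) to the sampled wind speeds and derive the mean wind power density. The function name and the air-density default are illustrative assumptions.

```python
import numpy as np
from math import gamma
from scipy.stats import weibull_min

def weibull_wind_resource(wind_speeds, rho=1.225):
    # Fit shape k and scale A with location fixed at 0.
    k, _, A = weibull_min.fit(wind_speeds, floc=0)
    # Mean power density E = 0.5 * rho * <u^3>, with <u^3> = A^3 * Gamma(1 + 3/k).
    power_density = 0.5 * rho * A**3 * gamma(1.0 + 3.0 / k)
    return k, A, power_density   # power density in W/m^2 for rho in kg/m^3
```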
Dendritic growth and structure of undercooled nickel base alloys
NASA Technical Reports Server (NTRS)
Flemings, M. C.; Shiohara, Y.
1988-01-01
The principal objectives of this overall investigation are to study means for obtaining high undercooling in levitation-melted droplets, and to study the structures produced upon solidification of these undercooled specimens. Thermal measurements are made of the undercooling and of the rapid recalescence to develop an understanding of the solidification mechanism, and the results are compared with modeling studies. Characterization and metallographic work is done to gain an understanding of the relationship between rapid solidification variables and the structures so produced. In ground-based work to date, solidification of undercooled Ni-25 wt percent Sn alloy was observed by high-speed cinematography and the results compared with optical temperature measurements. Also in ground-based work, high-speed optical temperature measurements were made of the solidification behavior of levitated metal samples within a transparent glass medium. Two undercooled Ni-Sn alloys were examined, with measurements carried out on samples at undercoolings up to 330 K. Microstructures of samples produced in ground-based work were determined by optical metallography and by SEM, and microsegregation by electron microprobe measurements. A series of flight tests was planned to conduct experiments similar to the ground-based experiments. The Space Shuttle Columbia carried an alloy undercooling experiment on the STS 61-C mission in January 1986, in which a sample of Ni-32.5 wt percent Sn eutectic was melted and solidified under microgravity conditions.
Diagnostic criteria and follow-up in neuroendocrine cell hyperplasia of infancy: a case series
Gomes, Vivianne Calheiros Chaves; Silva, Mara Cristina Coelho; Maia, José Holanda; Daltro, Pedro; Ramos, Simone Gusmão; Brody, Alan S.; Marchiori, Edson
2013-01-01
OBJECTIVE: Neuroendocrine cell hyperplasia of infancy (NEHI) is a form of childhood interstitial lung disease characterized by tachypnea, retractions, crackles, and hypoxia. The aim of this study was to report and discuss the clinical, imaging, and histopathological findings in a series of NEHI cases at a tertiary pediatric hospital, with an emphasis on diagnostic criteria and clinical outcomes. METHODS: Between 2003 and 2011, 12 full-term infants were diagnosed with NEHI based on clinical and tomographic findings. Those infants were followed for 1-91 months. Four infants were biopsied, and the histopathological specimens were stained with bombesin antibody. RESULTS: In this case series, symptoms appeared at birth in 6 infants and by 3 months of age in the remaining 6. In all of the cases, NEHI was associated with acute respiratory infection. The most common initial chest HRCT findings were ground-glass opacities, seen in the middle lobe/lingula in 12 patients and in other medullary areas in 10. Air trapping was the second most common finding, being observed in 7 patients. Follow-up HRCT scans (performed in 10 patients) revealed normal results in 1 patient and improvement in 9. The biopsy findings were nonspecific, and staining was positive for bombesin in all samples. Confirmation of NEHI was primarily based on clinical and tomographic findings. Symptoms improved during the follow-up period (mean, 41 months), and a clinical cure was achieved in 4 patients. CONCLUSIONS: In this sample of patients, the diagnosis of NEHI was made on the basis of clinical and tomographic findings, independent of the lung biopsy results. Most of the patients showed clinical improvement and persistent tomographic changes during the follow-up period, regardless of the initial severity of the disease or the type of treatment.
NASA Astrophysics Data System (ADS)
Samberg, Andre; Babichenko, Sergei; Poryvkina, Larisa
2005-05-01
The delay between the time when a natural disaster (for example, an oil accident in coastal water) occurs and the time when environmental protection actions (for example, water and shoreline clean-up) start is of significant importance. Remote sensing techniques are generally considered (near) real-time and suitable for multiple tasks. These techniques, in combination with rapid environmental assessment methodologies, would form a multi-tier environmental assessment model, which allows creating (near) real-time datasets and optimizing sampling scenarios. This paper presents the idea of a three-tier environmental assessment model. All three tiers are briefly described to show the linkages between them, with a particular focus on the first tier. Furthermore, it is described how large-scale environmental assessment can be improved by using an airborne 3-D scanning FLS-AM series hyperspectral lidar. This new aircraft-based sensor is typically applied for mapping oil on the sea or ground surface and for extracting optical features of targets. In general, a sampling network based on the three-tier environmental assessment model can include ships and aircraft. The airborne 3-D scanning FLS-AM series hyperspectral lidar helps to speed up the whole process of assessing the area of a natural disaster significantly, because it is a real-time remote sensing means. For instance, it can deliver such information as the georeferenced oil spill position in WGS-84, the estimated size of the whole oil spill, and the estimated amount of oil in seawater or on the ground. All information is produced in digital form and can thus be directly transferred into a customer's GIS (Geographical Information System).
NASA Astrophysics Data System (ADS)
Guesmi, Latifa; Menif, Mourad
2015-09-01
Optical performance monitoring (OPM) has become an attractive topic in high-speed optical communication networks. In this paper, a novel OPM technique based on a newly elaborated computation approach to singular spectrum analysis (SSA) for time series prediction is presented. Various optical impairments, among them chromatic dispersion (CD), polarization mode dispersion (PMD) and amplified spontaneous emission (ASE) noise, are major factors limiting the quality of data transmission in systems with data rates larger than 40 Gbit/s. The proposed technique provides independent and simultaneous monitoring of multiple impairments, using SSA for time series analysis and forecasting. SSA, which is based on the singular value decomposition (SVD), has proven useful for the temporal analysis of short and noisy time series in several fields. Advanced optical modulation formats offering high spectral efficiencies (100 Gbit/s non-return-to-zero dual-polarization quadrature phase shift keying (NRZ-DP-QPSK) and 160 Gbit/s DP-16 quadrature amplitude modulation (DP-16QAM)) have been successfully employed by analyzing their asynchronously sampled amplitudes. The simulation results show that our method is effective for CD, first-order PMD, Q-factor and OSNR monitoring over large ranges: CD in the range of 170-1700 ps/nm.km and 170-1110 ps/nm.km for 100 Gbit/s NRZ-DP-QPSK and 160 Gbit/s DP-16QAM, respectively, and DGD up to 20 ps. The OSNR could be accurately monitored in the range of 10-40 dB, with a monitoring error of less than 1 dB in the presence of large accumulated CD.
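Since SSA is the core of the technique, here is a textbook sketch of its decomposition step, under stated assumptions: build the trajectory (Hankel) matrix, take its SVD, and reconstruct a denoised series from the leading components by anti-diagonal averaging. Names and defaults are illustrative, and the forecasting stage (e.g. a linear recurrence on the reconstructed components) is not shown.

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    # Trajectory matrix: lagged copies of x as columns (window x k Hankel).
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # Rank-reduced approximation from the leading singular triples.
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Anti-diagonal (Hankel) averaging back to a 1-D series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return recon / counts
```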
Schreiber, P W; Köhler, N; Cervera, R; Hasse, B; Sax, H; Keller, P M
2018-07-01
A growing number of Mycobacterium chimaera infections after cardiac surgery have been reported from several countries. These potentially fatal infections were traced back to contaminated heater-cooler devices (HCDs), which use water as a heat transfer medium. Aerosolization of water contaminated with M. chimaera from HCDs enables airborne transmission to patients undergoing open chest surgery. Infection control teams test HCD water samples for mycobacterial growth to guide preventive measures. The detection limit of M. chimaera in water samples, however, has not previously been investigated. To determine the detection limit of M. chimaera in water samples using laboratory-based serial dilution tests, an M. chimaera strain representative of the international cardiosurgery-associated M. chimaera outbreak was used to generate a logarithmic dilution series. Two different water volumes, 50 and 1000 mL, were inoculated and, after identical processing (centrifugation, decantation, and decontamination), seeded on mycobacteria growth indicator tube (MGIT) and Middlebrook 7H11 solid media. MGIT consistently showed a lower detection limit than 7H11 solid media, corresponding to a detection limit of ≥1.44 × 10^4 cfu/mL for 50 mL and ≥2.4 cfu/mL for 1000 mL water samples. Solid media failed to detect M. chimaera in 50 mL water samples. Depending on water volume and culture method, major differences exist in the detection limit of M. chimaera. In terms of sensitivity, 1000 mL water samples in MGIT media performed best. Our results have important implications for infection prevention and control strategies in mitigation of the M. chimaera outbreak and for healthcare water safety in general.
UDATE1: A computer program for the calculation of uranium-series isotopic ages
Rosenbauer, R.J.
1991-01-01
UDATE1 is a FORTRAN-77 program with an interface for the Apple Macintosh computer that calculates isotope activities from measured count rates to date geologic materials by uranium-series disequilibria. Dates on pure samples can be determined directly from the accumulation of 230Th from 234U and of 231Pa from 235U. Dates for samples contaminated by clays containing abundant natural thorium can be corrected by the program using various mixing models. Input to the program and file management are made simple and user-friendly by a series of Macintosh modal dialog boxes.
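For orientation, the simplest closed-system age relation behind such programs, assuming no initial 230Th and 234U/238U in secular equilibrium, is (230Th/234U) = 1 - exp(-lambda_230 * t), which the sketch below inverts. This is a hedged simplification: UDATE1 additionally handles 231Pa ages and detrital-thorium mixing corrections, and the half-life used here is the commonly cited value of roughly 75.6 kyr.

```python
import numpy as np

LAMBDA_230 = np.log(2.0) / 75584.0   # 230Th decay constant in 1/yr (t_half ~ 75.58 kyr)

def th230_u234_age(activity_ratio):
    # Invert (230Th/234U) = 1 - exp(-lambda * t); valid only in the idealised
    # closed-system case with no initial 230Th and 234U/238U in equilibrium.
    if not 0.0 <= activity_ratio < 1.0:
        raise ValueError("activity ratio must lie in [0, 1) for a finite age")
    return -np.log(1.0 - activity_ratio) / LAMBDA_230

# A ratio of ~0.95 gives an age of roughly 3.3e5 yr, of the same order as
# the ~325,000-year Allan Hills ice date quoted earlier.
```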
Mirzaian, M; Wisse, P; Ferraz, M J; Marques, A R A; Gaspar, P; Oussoren, S V; Kytidou, K; Codée, J D C; van der Marel, G; Overkleeft, H S; Aerts, J M
2017-03-01
Free sphingoid bases (lysosphingolipids) of primary storage sphingolipids are increased in tissues and plasma in several sphingolipidoses. As shown earlier by us, sphingoid bases can be accurately quantified using UPLC-ESI-MS/MS, particularly in combination with identical 13C-encoded internal standards. The feasibility of simultaneous quantitation of sphingoid bases in plasma specimens spiked with a mixture of such standards is described here. The sensitivity and linearity of detection are excellent for all examined sphingoid bases (sphingosine, sphinganine, hexosyl-sphingosine (glucosylsphingosine), hexosyl2-sphingosine (lactosylsphingosine), hexosyl3-sphingosine (globotriaosylsphingosine), and phosphorylcholine-sphingosine) in the relevant concentration range, and the measurements show very acceptable intra- and inter-assay variation (<10% on average). Plasma samples of a series of male and female Gaucher disease and Fabry disease patients were analyzed with the multiplex assay. The obtained data compare well to those earlier determined for plasma globotriaosylsphingosine and glucosylsphingosine in GD and FD patients. The same approach can also be applied to measure sphingolipids in the same sample: following extraction, sphingolipids can be converted to sphingoid bases by microwave exposure and subsequently quantified using 13C-encoded internal standards.
Bounds of memory strength for power-law series.
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
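The bounds can also be probed numerically. The sketch below is an illustrative experiment of our own construction, not the paper's analytical derivation: it draws a heavy-tailed sample with tail exponent alpha, then evaluates the lag-1 autocorrelation of two extreme arrangements, the sorted order (which approximately maximises it) and an alternating small/large "zigzag" order (which approximately minimises it).

```python
import numpy as np

def empirical_memory_bounds(alpha, n=20000, seed=0):
    # Requires alpha > 1; inverse-CDF sampling from a density ~ x^(-alpha), x >= 1.
    rng = np.random.default_rng(seed)
    x = (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))
    def lag1(s):
        return np.corrcoef(s[:-1], s[1:])[0, 1]
    x_sorted = np.sort(x)
    # Zigzag: small values on even slots, large values (reversed) on odd slots.
    half = (n + 1) // 2
    zigzag = np.empty(n)
    zigzag[::2] = x_sorted[:half]
    zigzag[1::2] = x_sorted[half:][::-1]
    return lag1(x_sorted), lag1(zigzag)   # ~upper and ~lower bound estimates

# e.g. for alpha = 2.5 the upper estimate stays well below +1 and the lower
# estimate stays near 0, in line with the constraints described above.
```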
Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.
Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N
2012-01-16
We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion-shifted fiber. We carry out performance and computational-effort comparisons with the well-known backward propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach for working at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility to the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.
Geoelectrical inference of mass transfer parameters using temporal moments
Day-Lewis, Frederick D.; Singha, Kamini
2008-01-01
We present an approach to infer mass transfer parameters based on (1) an analytical model that relates the temporal moments of mobile and bulk concentration and (2) a bicontinuum modification to Archie's law. Whereas conventional geochemical measurements preferentially sample from the mobile domain, electrical resistivity tomography (ERT) is sensitive to bulk electrical conductivity and, thus, electrolytic solute in both the mobile and immobile domains. We demonstrate the new approach, in which temporal moments of collocated mobile domain conductivity (i.e., conventional sampling) and ERT‐estimated bulk conductivity are used to calculate heterogeneous mass transfer rate and immobile porosity fractions in a series of numerical column experiments.
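As a hedged sketch of the moment computation underlying this approach (not the authors' inversion itself), the temporal moments of a breakthrough curve can be computed by simple numerical quadrature; in bicontinuum transport, the lag between the mean arrival times of the mobile and bulk concentration histories is what carries the mass-transfer information.

```python
import numpy as np
from scipy.integrate import trapezoid

def temporal_moments(t, c):
    # Zeroth moment (mass), first moment (mean arrival time) and second
    # central moment (temporal spread) of a concentration history c(t).
    m0 = trapezoid(c, t)
    m1 = trapezoid(t * c, t) / m0
    m2c = trapezoid((t - m1) ** 2 * c, t) / m0
    return m0, m1, m2c

# Comparing m1 of collocated mobile-domain (fluid sample) and ERT-derived
# bulk-conductivity histories indicates immobile-domain exchange: the bulk
# signal lags and is damped when mass transfer is significant.
```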
Experiments with a small behaviour controlled planetary rover
NASA Technical Reports Server (NTRS)
Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John
1993-01-01
A series of experiments performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly fewer computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight-bit processor and uses about ten kilobytes of memory.
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline, and a 5.25% average accuracy improvement when only short-term predictions were considered. Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.
Horswill, M S; Coster, M E
2001-02-01
The Internet has been exploited successfully in the past as a medium for behavioral research. This paper presents a series of studies designed to assess Internet-based measures of drivers' risk-taking behavior. First, we compared responses from an Internet sample with a traditional pencil-and-paper sample using established questionnaire measures of risk taking. No significant differences were found. Second, we assessed the validity of new Internet-based instruments, involving photographs and photographic animations, that measured speed, gap acceptance, and passing. Responses were found to reflect known demographic patterns of actual behavior to some degree. Also, a roadside survey of speeds was carried out at the locations depicted in the photographic measure of speeding and, with certain exceptions, differences between the two appeared to be constant. Third, a between-subject experimental manipulation involving the photographic animation measure of gap acceptance was used to demonstrate one application of these techniques.
Miller, Robert; Plessow, Franziska
2013-06-01
Endocrine time series often lack normality and homoscedasticity, most likely due to the non-linear dynamics of their natural determinants and the immanent characteristics of the biochemical analysis tools, respectively. As a consequence, data transformation (e.g., log-transformation) is frequently applied to enable general linear model-based analyses. However, to date, data transformation techniques vary substantially across studies, and the question of which is the optimum power transformation remains to be addressed. The present report aims to provide a common solution for the analysis of endocrine time series by systematically comparing different power transformations with regard to their impact on data normality and homoscedasticity. For this, a variety of power transformations of the Box-Cox family were applied to salivary cortisol data of 309 healthy participants sampled in temporal proximity to a psychosocial stressor (the Trier Social Stress Test). Our analyses show that untransformed as well as log-transformed data are inferior in terms of meeting normality and homoscedasticity, and they provide optimum transformations for both cross-sectional cortisol samples reflecting the distributional concentration equilibrium and longitudinal cortisol time series comprising systematically altered hormone distributions that result from simultaneously elicited pulsatile change and continuous elimination processes. Considering these dynamics of endocrine oscillations, data transformation prior to testing GLMs seems mandatory to minimize biased results.
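The comparison of Box-Cox transformations can be sketched as follows, assuming SciPy; the Shapiro-Wilk W statistic is used here as the normality criterion, which is an illustrative choice rather than necessarily the report's own, and the lambda grid is arbitrary.

```python
import numpy as np
from scipy import stats

def best_boxcox_lambda(cortisol, lambdas=np.linspace(-1.0, 1.0, 41)):
    # Apply fixed-lambda Box-Cox transforms and score each by the
    # Shapiro-Wilk W statistic (W close to 1 indicates near-normality).
    cortisol = np.asarray(cortisol, dtype=float)   # must be strictly positive
    scores = []
    for lam in lambdas:
        y = stats.boxcox(cortisol, lmbda=lam)      # transform at this lambda
        scores.append(stats.shapiro(y).statistic)
    return lambdas[int(np.argmax(scores))]

# For comparison, stats.boxcox(cortisol) with no lmbda argument would
# instead return the transformed data and the maximum-likelihood lambda.
```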
ERIC Educational Resources Information Center
Wannagat, Wienke; Waizenegger, Gesine; Hauf, Juliane; Nieding, Gerhild
2018-01-01
This study investigated the effects of auditory and audiovisual text presentation on the three levels of mental representations assumed in theories of discourse processing. A sample of 106 children aged 7, 9, and 11 years listened to 16 short narrative texts, 8 of which were accompanied by a series of pictures illustrating the content.…
ERIC Educational Resources Information Center
Nesbitt, John A., Ed.
Fifty-seven papers on new models of community recreation for the handicapped comprise the third report in the series (EC 114 401-409). Papers deal with the following topics (sample subtopics in parentheses): administration (management by objectives); advocacy; areas and equipment (outdoor playground equipment); attitudes; barriers (an analysis of…
Todd A. Schroeder; Sean P. Healey; Gretchen G. Moisen; Tracey S. Frescino; Warren B. Cohen; Chengquan Huang; Robert E. Kennedy; Zhiqiang Yang
2014-01-01
With Earth's surface temperature and human population both on the rise, a new emphasis has been placed on monitoring changes to forested ecosystems the world over. In the United States, the U.S. Forest Service Forest Inventory and Analysis (FIA) program monitors the forested land base with field data collected over a permanent network of sample plots. Although these...
Thermodynamic Mixing Behavior Of F-OH Apatite Crystalline Solutions
NASA Astrophysics Data System (ADS)
Hovis, G. L.
2011-12-01
It is important to establish a thermodynamic data base for accessory minerals and mineral series that are useful in determining fluid composition during petrologic processes. As a starting point for apatite-system thermodynamics, Hovis and Harlov (2010, American Mineralogist 95, 946-952) reported enthalpies of mixing for a F-Cl apatite series. Harlov synthesized all such crystalline solutions at the GFZ-Potsdam using a slow-cooled molten-flux method. In order to expand thermodynamic characterization of the F-Cl-OH apatite system, a new study has been initiated along the F-OH apatite binary. Synthesis of this new series made use of National Institute of Standards and Technology (NIST) 2910a hydroxylapatite, a standard reference material made at NIST "by solution reaction of calcium hydroxide with phosphoric acid." Synthesis efforts at Lafayette College have been successful in producing fluorapatite through ion exchange between hydroxylapatite 2910a and fluorite. In these experiments, a thin layer of hydroxylapatite powder was placed on a polished CaF2 disc (obtained from a supplier of high-purity crystals for spectroscopy), pressed firmly against the disc, then annealed at 750 °C (1 bar) for three days. Longer annealing times did not produce further change in unit-cell dimensions of the resulting fluorapatite, but it is uncertain at this time whether this procedure produces a pure-F end member (chemical analyses to be performed in the near future). It is clear from the unit-cell dimensions, however, that the newly synthesized apatite contains a high percentage of fluorine, probably greater than 90 mol % F. Intermediate compositions for a F-OH apatite series were made by combining 2910a hydroxylapatite powder with the newly synthesized fluorapatite in various proportions, then conducting chemical homogenization experiments at 750 °C on each mixture. X-ray powder diffraction data indicated that these experiments were successful in producing chemically homogeneous intermediate series members, as doubled peaks merged into single diffraction maxima, the latter changing position systematically with bulk composition. All of the resulting F-OH apatite series members have hexagonal symmetry. The "a" unit-cell dimension behaves linearly with composition, and "c" is nearly constant across the series. Unit-cell volume also is linear with F:OH ratio, thus behaving in a thermodynamically ideal manner. Solution calorimetric experiments have been conducted in 20.0 wt % HCl at 50 °C on all series members. Enthalpies of F-OH mixing are nonexistent at F-rich compositions but have small negative values toward the hydroxylapatite end member. There is no enthalpy barrier, therefore, to complete F-OH mixing across the series, indicated as well by the ease of chemical homogenization for intermediate F:OH series members. In addition to the synthetic specimens described above, natural samples of hydroxylapatite, fluorapatite, and chlorapatite have been obtained for study from the U.S. National Museum of Natural History, as well as the American Museum of Natural History (our sincere appreciation to both museums for providing samples). Solution calorimetric results for these samples will be compared with data for the synthetic OH, F, and Cl apatite analogs noted above.
Johnson, Michaela R.; Graham, Garth E.; Hubbard, Bernard E.; Benzel, William M.
2015-07-16
This Data Series summarizes results from July 2013 sampling in the western Alaska Range near Mount Estelle, Alaska. The fieldwork combined in situ and camp-based spectral measurements of talus/soil and rock samples. Five rock and 48 soil samples were submitted for quantitative geochemical analysis (for 55 major and trace elements), and the 48 soil samples were also analyzed by x-ray diffraction to establish mineralogy. The results and sample photographs are presented in a geodatabase that accompanies this report. The spectral, mineralogical, and geochemical characterization of these samples and the sites that they represent can be used to validate existing remote-sensing datasets (for example, ASTER) and future hyperspectral studies. Empirical evidence of jarosite (as identified by x-ray diffraction and spectral analysis) corresponding with gold concentrations in excess of 50 parts per billion in soil samples suggests that surficial mapping of jarosite in regional surveys may be useful for targeting areas of prospective gold occurrences in this sampling area.
Evaluating the Effectiveness of the 1999-2000 NASA CONNECT Program
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Frank, Kari Lou
2002-01-01
NASA CONNECT is a standards-based, integrated mathematics, science, and technology series of 30-minute instructional distance learning (satellite and television) programs for students in grades 6-8. Each of the five programs in the 1999-2000 NASA CONNECT series included a lesson, an educator guide, a student activity or experiment, and a web-based component. In March 2000, a mail (self-reported) survey (booklet) was sent to a randomly selected sample of 1,000 NASA CONNECT registrants. A total of 336 surveys (269 usable) were received by the established cut-off date. Most survey questions employed a 5-point Likert-type response scale. Survey topics included (1) instructional technology and teaching, (2) instructional programming and technology in the classroom, (3) the NASA CONNECT program, (4) classroom use of computer technology, and (5) demographics. About 73% of the respondents were female, about 92% identified "classroom teacher" as their present professional duty, about 90% worked in a public school, and about 62% held a master's degree or master's equivalency. Regarding NASA CONNECT, respondents reported that (1) they used the five programs in the 1999-2000 NASA CONNECT series; (2) the stated objectives for each program were met (4.54); (3) the programs were aligned with the national mathematics, science, and technology standards (4.57); (4) program content was developmentally appropriate for grade level (4.17); and (5) the programs in the 1999-2000 NASA CONNECT series enhanced/enriched the teaching of mathematics, science, and technology (4.51).
Evaluating the Effectiveness of the 1998-1999 NASA CONNECT Program
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Frank, Kari Lou; House, Patricia L.
2000-01-01
NASA CONNECT is a standards-based, integrated mathematics, science, and technology series of 30-minute instructional distance learning (satellite and television) programs for students in grades 5-8. Each of the five programs in the 1998-1999 NASA CONNECT series included a lesson, an educator guide, a student activity or experiment, and a web-based component. In March 1999, a mail (self-reported) survey (booklet) was sent to a randomly selected sample of 1,000 NASA CONNECT registrants. A total of 401 surveys (351 usable) were received by the established cut-off date. Most survey questions employed a 5-point Likert-type response scale. Survey topics included: (1) instructional technology and teaching, (2) instructional programming and technology in the classroom, (3) the NASA CONNECT program, (4) classroom use of computer technology, and (5) demographics. About 68% of the respondents were female, about 88% identified "classroom teacher" as their present professional duty, about 75% worked in a public school, and about 67% held a master's degree or master's equivalency. Regarding NASA CONNECT, respondents reported that: (1) they used the five programs in the 1998-1999 NASA CONNECT series; (2) the stated objectives for each program were met (4.49); (3) the programs were aligned with the national mathematics, science, and technology standards (4.61); (4) program content was developmentally appropriate for grade level (4.25); and (5) the programs in the 1998-1999 NASA CONNECT series enhanced/enriched the teaching of mathematics, science, and technology (4.45).
Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M
2018-05-16
Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed cases only, and known confirmation rate may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of the confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
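The multiple-imputation idea can be sketched in a few lines. The toy below imputes "true" case counts from a confirmation rate drawn from a Beta posterior over an internal validation sample; the counts, windows, and the simple rate-ratio formula are illustrative stand-ins for the conditional Poisson regression and Rubin's-rules pooling a real SCCS analysis would use:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed (possibly misclassified) event counts and person-time in the
# risk and control windows (illustrative numbers).
risk_events, control_events = 40, 120
risk_time, control_time = 1.0, 3.0

# Internal validation sample: 30 of 50 chart-reviewed events confirmed.
reviewed, confirmed = 50, 30

log_rr = []
for _ in range(200):                       # number of imputations
    # Propagate uncertainty in the confirmation rate with a Beta
    # posterior (uniform prior), then impute "true" counts binomially.
    p = rng.beta(confirmed + 1, reviewed - confirmed + 1)
    r = rng.binomial(risk_events, p)
    c = rng.binomial(control_events, p)
    if r > 0 and c > 0:
        log_rr.append(np.log((r / risk_time) / (c / control_time)))

# A full analysis would fit a conditional Poisson model per imputation
# and pool with Rubin's rules; here we just average the log rate ratios.
print("pooled rate ratio:", np.exp(np.mean(log_rr)))
```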
X-ray and dielectric characterization of Co doped tetragonal BaTiO3 ceramics
NASA Astrophysics Data System (ADS)
Bujakiewicz-Koronska, R.; Vasylechko, L.; Markiewicz, E.; Nalecz, D. M.; Kalvane, A.
2017-01-01
The crystal structure modifications of BaTiO3 induced by cobalt doping were studied. The polycrystalline (1 - x)BaTiO3 + xCo2O3 samples, with x ≤ 10 wt.%, were prepared by high temperature sintering conventional method. According to X-ray phase and structural characterization, performed by full-profile Rietveld refinement technique, all synthesized samples showed tetragonal symmetry perovskite structure with minor amount of parasitic phases. Pure single-phase composition has been detected only in the low level of doping BaTiO3. It was indicated that substitution of Co for the Ti sites in the (1 - x)BaTiO3 + xCo2O3 series led to decrease of tetragonality (c/a) of the BaTiO3 perovskite structure. This effect almost vanished in the (1 - x)BaTiO3 + xCo2O3 samples with nominal Co content higher than ∼1 wt.%, in which precipitation of parasitic Co-containing phases CoO and Co2TiO4 has been observed. Based on the results, the solubility limit of Co in Ti sub-lattice in the (1 - x)BaTiO3 + xCo2O3 series is estimated as x = 0.75 wt.%.
NASA Astrophysics Data System (ADS)
Zeng, Yayun; Wang, Jun; Xu, Kaixuan
2017-04-01
A new financial agent-based time series model is developed and investigated using a multiscale-continuum percolation system, which can be viewed as an extended version of the continuum percolation system. In this financial model, for different parameters of proportion and density, two Poisson point processes (where the radii of points represent the ability to receive or transmit information among investors) are applied to model a random stock price process, in an attempt to investigate the fluctuation dynamics of the financial market. To validate its effectiveness and rationality, we compare the statistical behaviors and the multifractal behaviors of the simulated data derived from the proposed model with those of real stock markets. Further, multiscale sample entropy analysis is employed to study the complexity of the returns, and cross-sample entropy analysis is applied to measure the degree of asynchrony of return autocorrelation time series. The empirical results indicate that the proposed financial model can simulate and reproduce some significant characteristics of real stock markets to a certain extent.
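Sample entropy, on which both the multiscale and cross-sample variants build, can be computed compactly. The sketch below is a plain single-scale implementation (template length m, tolerance r times the standard deviation); the multiscale version applies the same statistic to coarse-grained copies of the series:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Single-scale sample entropy: -log of the probability that template
    pairs matching for m points (within r * std) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(length):
        t = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(t) - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(2)
print(sample_entropy(rng.normal(size=1000)))          # high for white noise
print(sample_entropy(np.sin(np.arange(1000) / 10)))   # low for a regular signal
```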
Estimating clinical chemistry reference values based on an existing data set of unselected animals.
Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe
2008-11-01
In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, collecting and analyzing such a large number of samples is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method was based on the detection and removal of outliers to obtain, from the existing data set, a large sample of animals likely to be healthy. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. The method may also be useful for determining reference intervals for different species, ages and sexes.
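As a rough illustration of the a posteriori idea (not the authors' exact algorithm), one can iteratively trim outliers from an unselected data set and then take nonparametric percentiles of the remainder:

```python
import numpy as np

def reference_interval(values, k=1.5, max_iter=10):
    """Estimate a 95% reference interval from an unselected data set by
    iteratively trimming Tukey-fence outliers, then taking the central
    95% of what remains (a simple stand-in for the a posteriori method)."""
    v = np.sort(np.asarray(values, dtype=float))
    for _ in range(max_iter):
        q1, q3 = np.percentile(v, [25, 75])
        iqr = q3 - q1
        keep = (v >= q1 - k * iqr) & (v <= q3 + k * iqr)
        if keep.all():
            break
        v = v[keep]
    return np.percentile(v, [2.5, 97.5])

rng = np.random.default_rng(3)
# Mostly healthy animals plus a contaminating sick subgroup.
healthy = rng.normal(60, 8, size=900)    # e.g., some serum enzyme, U/L
sick = rng.normal(110, 25, size=100)
print(reference_interval(np.concatenate([healthy, sick])))
```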
Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality
Hirsch, Robert M.
1988-01-01
This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
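A minimal sketch of the Hodges-Lehmann step-trend estimator: the median of all pairwise differences between the post-change and pre-change observations, with a simple per-season variant (the paper's seasonal estimator may differ in detail):

```python
import numpy as np

def hodges_lehmann_step(before, after):
    """Hodges-Lehmann estimate of a step change: the median of all
    pairwise differences after[j] - before[i]."""
    diffs = np.subtract.outer(np.asarray(after, float),
                              np.asarray(before, float))
    return np.median(diffs)

def seasonal_hodges_lehmann(before, after, season_b, season_a):
    """Median of per-season Hodges-Lehmann estimates (one common seasonal
    variant; the paper's definition may differ in detail)."""
    before, after = np.asarray(before, float), np.asarray(after, float)
    season_b, season_a = np.asarray(season_b), np.asarray(season_a)
    ests = [hodges_lehmann_step(before[season_b == s], after[season_a == s])
            for s in np.unique(season_b) if np.any(season_a == s)]
    return np.median(ests)

rng = np.random.default_rng(4)
b = rng.lognormal(0.0, 0.5, size=60)   # concentrations before the change
a = rng.lognormal(0.3, 0.5, size=60)   # after an upward step
print(hodges_lehmann_step(b, a))
```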
NASA Astrophysics Data System (ADS)
Ganzha, V.; Ivshin, K.; Kammel, P.; Kravchenko, P.; Kravtsov, P.; Petitjean, C.; Trofimov, V.; Vasilyev, A.; Vorobyov, A.; Vznuzdaev, M.; Wauters, F.
2018-02-01
A series of muon experiments at the Paul Scherrer Institute in Switzerland deploys ultra-pure hydrogen active targets. A new gas impurity analysis technique was developed, based on conventional gas chromatography, with the capability to measure part-per-billion (ppb) traces of nitrogen and oxygen in hydrogen and deuterium. Key ingredients are cryogenic admixture accumulation, a directly connected sampling system and a dedicated calibration setup. The dependence of the measured concentration on the sample volume was investigated, confirming that all impurities from the sample gas are collected in the accumulation column and measured with the gas chromatograph. The system was calibrated utilizing dynamic dilution of admixtures into the gas flow down to sub-ppb level concentrations. The total amount of impurities accumulated in the purification system during a three-month-long experimental run was measured and agreed well with the amount calculated from the measured concentrations in the flow.
Usage of CT data in biomechanical research
NASA Astrophysics Data System (ADS)
Safonov, Roman A.; Golyadkina, Anastasiya A.; Kirillova, Irina V.; Kossovich, Leonid Y.
2017-02-01
Object of study: the investigation is focused on the development of personalized medicine, specifically the determination of mechanical properties of bone tissues based on in vivo data. Methods: CT, MRI, natural experiments on the versatile test machine Instron 5944, and numerical experiments using Python programs. Results: medical diagnostic methods that allow determination of mechanical properties of bone tissues based on in vivo data, and a series of experiments to define the values of mechanical parameters of bone tissues. For one and the same sample, computed tomography (CT), magnetic resonance imaging (MRI), ultrasonic investigations and mechanical experiments on the single-column test machine Instron 5944 were carried out. A computer program for comparison of CT and MRI images was created, and the grayscale values at the same points of the samples were determined on both the CT and MRI images. The Hounsfield grayscale values were used to determine the rigidity (Young's modulus) and tensile strength of the samples. The obtained data were compared to the natural experiment results for verification.
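A common way to realize the HU-to-stiffness step is a density power law; the sketch below uses placeholder calibration constants (the a, b, c, p values are illustrative, not those of the study):

```python
import numpy as np

def young_modulus_from_hu(hu, a=0.0009, b=0.0, c=6850.0, p=1.49):
    """Map CT Hounsfield units to an apparent Young's modulus in MPa via
    a density power law E = c * rho**p. The calibration constants (a, b
    for HU -> density in g/cm^3; c, p for density -> E) are illustrative
    placeholders, not values from the study."""
    rho = a * np.asarray(hu, dtype=float) + b    # apparent density
    return c * np.clip(rho, 0.0, None) ** p

# Trabecular to cortical bone, roughly increasing stiffness.
print(young_modulus_from_hu([200, 800, 1400]))
```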
Testing the weak-form efficiency of the WTI crude oil futures market
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Xie, Wen-Jie; Zhou, Wei-Xing
2014-07-01
The weak-form efficiency of energy futures markets has long been studied, and the empirical evidence remains conflicting. In this work, nonparametric methods are adopted to estimate the Hurst indexes of the WTI crude oil futures prices (1983-2012) and a strict statistical test in the spirit of bootstrapping is put forward to verify the weak-form market efficiency hypothesis. The results show that the crude oil futures market is efficient when the whole period is considered. When the whole series is divided into three sub-series separated by the outbreaks of the Gulf War and the Iraq War, it is found that the Gulf War reduced the efficiency of the market. If the sample is split into two sub-series based on the signing date of the North American Free Trade Agreement, the market is found to be inefficient in the sub-periods during which the Gulf War broke out. The same analysis on short-time series in moving windows shows that the market is inefficient only when some turbulent events occur, such as the oil price crash in 1985, the Gulf War, and the oil price crash in 2008.
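The paper relies on nonparametric Hurst estimators with a bootstrap test; as a minimal stand-in, the classical rescaled-range (R/S) estimate below returns a value near 0.5 for uncorrelated returns, the weak-form-efficient benchmark:

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Classical rescaled-range (R/S) Hurst estimate: the slope of
    log(R/S) versus log(window size) over dyadic window sizes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_window
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            s = chunk.std()
            if s > 0:
                vals.append((dev.max() - dev.min()) / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(5)
print(hurst_rs(rng.normal(size=4096)))  # close to 0.5 for uncorrelated returns
```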
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
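Step (i), the construction of the complex series via the Hilbert transform, is easy to reproduce; the toy below builds the analytic signal for a single synthetic series (a complex ICA would then be applied across many such series):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(6)
t = np.arange(0, 20, 0.05)
# Toy observation: an amplitude-modulated oscillation plus noise.
x = (1 + 0.5 * np.sin(0.2 * t)) * np.sin(2 * np.pi * 0.5 * t)
x += 0.1 * rng.normal(size=t.size)

# Step (i): the analytic signal holds the centered observations in its
# real part and their Hilbert transform (rate of variability) in its
# imaginary part.
z = hilbert(x - x.mean())
amplitude = np.abs(z)              # instantaneous amplitude
phase = np.unwrap(np.angle(z))     # instantaneous phase

print(amplitude[:3])
print(phase[:3])
# A cumulant-based complex ICA would then be applied to a matrix of such
# series, one per grid point or station.
```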
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests’ features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
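The variable-importance-driven band selection can be imitated with scikit-learn on synthetic data (the feature counts and the keep-half rule here are illustrative; the study works with 10-day MODIS composites):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 180 synthetic "spectral-temporal" variables, few of them informative,
# standing in for stacked multi-band MODIS composites.
X, y = make_classification(n_samples=2000, n_features=180,
                           n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Rank variables by impurity-based importance and keep the top half.
subset = np.argsort(rf.feature_importances_)[::-1][:90]
rf_half = RandomForestClassifier(n_estimators=300, random_state=0)
rf_half.fit(X_tr[:, subset], y_tr)

print("full set accuracy:", rf.score(X_te, y_te))
print("half set accuracy:", rf_half.score(X_te[:, subset], y_te))
```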
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bud'ko, Sergey L.; Chung, Duck Young; Bugaris, Daniel
2014-01-16
We present the evolution of the initial (up to ~10 kbar) hydrostatic pressure dependencies of T_c and of the ambient-pressure jump in the heat capacity associated with the superconducting transition as a function of Na doping in the Ba1-xNaxFe2As2 family of iron-based superconductors. For Na concentrations 0.15 ≤ x ≤ 0.9, the jump in specific heat at T_c, ΔC_p|T_c, follows ΔC_p ∝ T_c^3 (the so-called BNC scaling) found for most BaFe2As2-based superconductors. This finding suggests that, unlike the related Ba1-xKxFe2As2 series, there is no significant modification of the superconducting state (e.g., change in superconducting gap symmetry) in the Ba1-xNaxFe2As2 series over the whole studied Na concentration range. Pressure dependencies are nonmonotonic for x = 0.2 and 0.24. For other Na concentrations, T_c decreases under pressure in an almost linear fashion. The anomalous behavior of the x = 0.2 and 0.24 samples under pressure is possibly due to the crossing of the phase boundaries of the narrow antiferromagnetic tetragonal phase, unique to the Ba1-xNaxFe2As2 series, with the application of pressure. The negative sign of the pressure derivatives of T_c across the whole superconducting dome (except for x = 0.2) is a clear indication of the nonequivalence of substitution and pressure for the Ba1-xNaxFe2As2 series.
Dexter, Franklin; Epstein, Richard H; Ledolter, Johannes; Wanderer, Jonathan P
2018-05-16
Recent studies have made longitudinal assessments of case counts using State (e.g., United States) and Provincial (e.g., Canada) databases. Such databases rarely include either operating room (OR) or anesthesia times and, even when duration data are available, there are major statistical limitations to their use. We evaluated how to forecast short-term changes in OR caseload and workload (hours) and how to decide whether changes are outliers (e.g., significant, abrupt decline in anesthetics). Observational cohort study. Large teaching hospital. 35 years of annual anesthesia caseload data. Annual data were used without regard to where or when in the year each case was performed, thereby matching public use files. Changes in caseload or hours among four-week periods were examined within individual year-long periods using 159 consecutive four-week periods from the same hospital. Series of 12 four-week periods of the hours of cases performed on workdays lacked trend or correlation among periods for 49 of 50 series and followed normal distributions for 50 of 50 series. These criteria also were satisfied for 50 of 50 series based on counts of cases. The Pearson r = 0.999 between hours of anesthetics and cases. For purposes of time series analysis of total workload at a hospital within 1-year, hours of cases and counts of cases are interchangeable. Simple control chart methods of detecting sudden changes in workload or caseload, based simply on the sample mean and standard deviation from the preceding year, are appropriate. Copyright © 2018 Elsevier Inc. All rights reserved.
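The control chart the authors recommend is simple enough to state directly: flag a four-week period that falls outside mean ± z·SD of the preceding year's periods. A sketch with made-up numbers:

```python
import numpy as np

def caseload_outlier(history, current, z=3.0):
    """Shewhart-style check: flag a four-week period whose total falls
    more than z standard deviations from the mean of the preceding
    year's four-week periods."""
    mu, sd = np.mean(history), np.std(history, ddof=1)
    limits = (mu - z * sd, mu + z * sd)
    return not (limits[0] <= current <= limits[1]), limits

# Thirteen four-week periods of OR hours from the preceding year, then a
# new period with an abrupt decline (all numbers are made up).
history = [4120, 4050, 4210, 3980, 4150, 4075, 4190,
           4010, 4100, 4060, 4230, 3995, 4140]
print(caseload_outlier(history, 3500))
```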
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) of 8 (3) W/m2 for monthly (annual) time series on a 1° grid. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
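Estimating a decorrelation length of this kind amounts to fitting an exponential decay to R2 as a function of station separation; the sketch below does so with illustrative numbers loosely matching those quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an exponential decay to squared correlations between station pairs
# as a function of separation; the R2 values below are illustrative, not
# the study's data.
def model(d, L):
    return np.exp(-d / L)

distance_km = np.array([25.0, 100.0, 200.0, 300.0, 400.0, 500.0])
r_squared = np.array([0.85, 0.70, 0.52, 0.40, 0.32, 0.25])

(L_fit,), _ = curve_fit(model, distance_km, r_squared, p0=[300.0])
print(f"decorrelation length ~ {L_fit:.0f} km")
```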
Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun
2017-12-01
Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult NP-hard two dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression pattern in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
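Step (i) of the approach, clustering genes on their time-condition concatenated vectors after dimension reduction, can be sketched with standard tools (the details below are illustrative, not TimesVector's own pipeline):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
genes, timepoints, conditions = 500, 10, 3

# Expression tensor gene x time x condition with two planted patterns.
expr = rng.normal(size=(genes, timepoints, conditions))
expr[:100] += np.linspace(0, 3, timepoints)[None, :, None]  # rising everywhere
expr[100:200, :, 0] += 3.0                  # shifted in one condition only

X = expr.reshape(genes, timepoints * conditions)  # concatenate conditions
Z = PCA(n_components=5, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))                  # cluster sizes
```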
Portfolio management under sudden changes in volatility and heterogeneous investment horizons
NASA Astrophysics Data System (ADS)
Fernandez, Viviana; Lucey, Brian M.
2007-03-01
We analyze the implications for portfolio management of accounting for conditional heteroskedasticity and sudden changes in volatility, based on a sample of weekly data of the Dow Jones Country Titans, the CBT-municipal bond, spot and futures prices of commodities for the period 1992-2005. To that end, we first proceed to utilize the ICSS algorithm to detect long-term volatility shifts, and incorporate that information into PGARCH models fitted to the returns series. At the next stage, we simulate returns series and compute a wavelet-based value at risk, which takes into consideration the investor's time horizon. We repeat the same procedure for artificial data generated from semi-parametric estimates of the distribution functions of returns, which account for fat tails. Our estimation results show that neglecting GARCH effects and volatility shifts may lead to an overestimation of financial risk at different time horizons. In addition, we conclude that investors benefit from holding commodities as their low or even negative correlation with stock and bond indices contribute to portfolio diversification.
Erickson, R.L.; Marsh, S.P.
1972-01-01
This series of maps shows the distribution and abundance of mercury, arsenic, antimony, tungsten, gold, copper, lead, and silver related to a geologic and aeromagnetic base in the Golconda and Iron Point 7½-minute quadrangles. All samples are rock samples; most are from shear or fault zones, fractures, jasperoid, breccia reefs, and altered rocks. All the samples were prepared and analyzed in truck-mounted laboratories at Winnemucca, Nevada. Arsenic, tungsten, copper, lead, and silver were determined by semiquantitative spectrographic methods by D.F. Siems and E.F. Cooley. Mercury and gold were determined by atomic absorption methods and antimony was determined by wet chemical methods by R.M. O'Leary, M.S. Erickson, and others.
Gray, John R.; Fisk, Gregory G.
1992-01-01
From July 1988 through September 1991, radionuclide and suspended-sediment transport were monitored in ephemeral streams in the semiarid Little Colorado River basin of Arizona and New Mexico, USA, where in-stream gross-alpha plus gross-beta activities have exceeded Arizona's Maximum Allowable Limit through releases from natural weathering processes and from uranium-mining operations in the Church Rock Mining District, Grants Mineral Belt, New Mexico. Water samples were collected at a network of nine continuous-record streamgauges equipped with microprocessor-based satellite telemetry and automatic water-sampling systems, and six partial-record streamgauges equipped with passive water samplers. Analytical results from these samples were used to calculate transport of selected suspended and dissolved radionuclides in the uranium-238 and thorium-232 decay series.
Arbitrary-order corrections for finite-time drift and diffusion coefficients
NASA Astrophysics Data System (ADS)
Anteneodo, C.; Riera, R.
2009-09-01
We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and it is therefore crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow one to reconstruct the true hidden coefficients from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
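The practical point can be demonstrated numerically: estimate the drift of an Ornstein-Uhlenbeck process from a coarsely sampled path and compare the naive estimate with the exact finite-time value (a sketch under assumed parameters, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(8)
theta, sigma = 1.0, 0.5            # dX = -theta*X dt + sigma dW
dt, n = 1e-3, 500_000
w = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):              # Euler-Maruyama integration
    x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * w[i]

tau = 0.1                          # coarse sampling interval
xs = x[::int(round(tau / dt))]
dx = np.diff(xs)

# Naive finite-time drift estimate: regress increments on the state and
# divide by tau, as if tau were infinitesimal.
slope = np.sum(xs[:-1] * dx) / np.sum(xs[:-1] ** 2) / tau
print("naive drift slope:", slope)
print("exact finite-time:", (np.exp(-theta * tau) - 1) / tau)
print("true drift slope: ", -theta)
# The naive estimate recovers (exp(-theta*tau) - 1)/tau, not -theta;
# finite-time corrections of the kind derived above undo this bias.
```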
Southern Salish Sea Habitat Map Series data catalog
Cochrane, Guy R.
2015-01-01
This data catalog contains much of the data used to prepare the SIMs in the Southern Salish Sea Habitat Map Series. Other data that were used to prepare the maps were compiled from previously published sources (for example, sediment samples and seismic reflection profiles) and are not included in this data series.
Gass, Katherine; Balachandran, Sivaraman; Chang, Howard H.; Russell, Armistead G.; Strickland, Matthew J.
2015-01-01
Epidemiologic studies utilizing source apportionment (SA) of fine particulate matter have shown that particles from certain sources might be more detrimental to health than others; however, it is difficult to quantify the uncertainty associated with a given SA approach. In the present study, we examined associations between source contributions of fine particulate matter and emergency department visits for pediatric asthma in Atlanta, Georgia (2002–2010) using a novel ensemble-based SA technique. Six daily source contributions from 4 SA approaches were combined into an ensemble source contribution. To better account for exposure uncertainty, 10 source profiles were sampled from their posterior distributions, resulting in 10 time series with daily SA concentrations. For each of these time series, Poisson generalized linear models with varying lag structures were used to estimate the health associations for the 6 sources. The rate ratios for the source-specific health associations from the 10 imputed source contribution time series were combined, resulting in health associations with inflated confidence intervals to better account for exposure uncertainty. Adverse associations with pediatric asthma were observed for 8-day exposure to particles generated from diesel-fueled vehicles (rate ratio = 1.06, 95% confidence interval: 1.01, 1.10) and gasoline-fueled vehicles (rate ratio = 1.10, 95% confidence interval: 1.04, 1.17). PMID:25776011
Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis
NASA Astrophysics Data System (ADS)
Rzepecka, Zofia; Kalita, Jakub
2016-04-01
It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of this research is to apply stochastic modeling and prediction of ZTW using time series analysis tools. Application of time series analysis enables a closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the studies were obtained from the GGOS service hosted by the Vienna University of Technology. The resolution of the data is six hours. ZTW values for the years 2010-2013 were adopted for the study. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located in mid-latitudes, were selected for the investigation. Initially the seasonal part was separated and modeled using periodic signals and frequency analysis. The prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes. For both stations optimal ARMA processes were obtained based on several criteria. On this basis, ZTW values were predicted one day ahead, leaving white-noise residuals.
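The modeling chain described above (harmonic removal, then ARMA fitting and a one-day-ahead forecast) can be sketched with statsmodels on synthetic 6-hourly data standing in for the GGOS ZTW series:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
n = 4 * 4 * 365                        # four years of 6-hourly samples
t = np.arange(n)
per_year = 4 * 365.25                  # samples per year

# Synthetic ZTW-like series: annual + semi-annual cycle over an AR(1)
# residual (centimetres; all parameters are illustrative).
seasonal = (8 * np.sin(2 * np.pi * t / per_year)
            + 3 * np.cos(4 * np.pi * t / per_year))
resid = np.zeros(n)
for i in range(1, n):
    resid[i] = 0.9 * resid[i - 1] + rng.normal(scale=1.0)
ztw = 15 + seasonal + resid

# Remove annual and semi-annual harmonics by least squares.
H = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * t / per_year),
                     np.cos(2 * np.pi * t / per_year),
                     np.sin(4 * np.pi * t / per_year),
                     np.cos(4 * np.pi * t / per_year)])
coef, *_ = np.linalg.lstsq(H, ztw, rcond=None)
deseasonalized = ztw - H @ coef

# Fit an ARMA model to the residual series and forecast one day ahead.
fit = ARIMA(deseasonalized, order=(2, 0, 1)).fit()
print(fit.forecast(steps=4))           # four 6-hour steps = one day
```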
Causal discovery and inference: concepts and recent methodological advances.
Spirtes, Peter; Zhang, Kun
This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and causal discovery in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
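The two-variable identifiability result has a compact numerical illustration: with a linear mechanism and non-Gaussian noise, the regression residual is independent of the regressor only in the correct causal direction. The dependence score below (correlation of squares) is a crude proxy for a proper kernel independence test:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 20000
x = rng.uniform(-1, 1, n)             # non-Gaussian cause
y = 2.0 * x + rng.uniform(-1, 1, n)   # linear mechanism, non-Gaussian noise

def residual(a, b):
    """Residual of b after linear regression on a."""
    return b - (np.cov(a, b)[0, 1] / np.var(a)) * a

def dependence(u, v):
    """Crude nonlinear-dependence score: correlation between the squares
    of the standardized variables (zero if u and v are independent)."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    return abs(np.corrcoef(u**2, v**2)[0, 1])

print("x -> y score:", dependence(x, residual(x, y)))   # near 0: plausible
print("y -> x score:", dependence(y, residual(y, x)))   # clearly positive
```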
Shielding gas effect on the diffusion activities of magnesium and copper in aluminum clad
NASA Astrophysics Data System (ADS)
Manurung, Charles SP; Napitupulu, Richard AM
2017-09-01
Aluminum is the second most widely used metal in many applications, in part because of its corrosion resistance. Aluminum will nevertheless degrade over time if it is not kept in good condition, so it is important to protect its surface. Cladding is one such surface-protection method, especially for metals. Aluminum-clad copper (Al/Cu) and copper-clad aluminum (Cu/Al) composite metals have been widely used for many years. These mature, well-tested clad metal systems are used industrially in a variety of applications. The inherent properties and behavior of both copper and aluminum combine to provide unique performance advantages. In this paper, 2024-series aluminum is clad with 1100-series aluminum by hot rolling. Observations focus on the diffusion activities of Mg and Cu, which are not present in 1100-series aluminum. The clad samples differ in the use of shielding gas during heating before the hot rolling process. The metallurgical characteristics were examined using optical microscopy. A transition zone at the interface could not be observed optically, but energy-dispersive spectrometry showed that Mg and Cu diffuse from the base metal (Al 2024) into the clad metal (Al 1100). Hardness testing confirmed that hardness decreases from the base metal toward the interface.
Zhong, Lieshuang; Zhu, Hai; Wu, Yang; Guo, Zhiguang
2018-09-01
The Namib Desert beetle Stenocara adapts to its arid environment through its fog-harvesting ability. A series of samples with different topographies and wettabilities that mimicked the elytra of the beetle were fabricated to study the effect of these factors on fog harvesting. The superhydrophobic bulgy sample harvested 1.5 times as much water as the sample with a combinational pattern of hydrophilic bulges/superhydrophobic surroundings and 2.83 times as much as the superhydrophobic surface without bulges. These bulges focused the droplets around them, which endowed the droplets with higher velocity and induced the highest dynamic pressure atop them. Superhydrophobicity was beneficial for the departure of harvested water from the surface of the sample. The bulgy topography, together with surface wettability, dominated the processes of water supply and water removal. Copyright © 2018 Elsevier Inc. All rights reserved.
Thomas, Elaine
2005-01-01
This article is the second in a series of three that will give health care professionals (HCPs) a sound introduction to medical statistics (Thomas, 2004). The objective of research is to find out about the population at large. However, it is generally not possible to study the whole of the population and research questions are addressed in an appropriate study sample. The next crucial step is then to use the information from the sample of individuals to make statements about the wider population of like individuals. This procedure of drawing conclusions about the population, based on study data, is known as inferential statistics. The findings from the study give us the best estimate of what is true for the relevant population, given the sample is representative of the population. It is important to consider how accurate this best estimate is, based on a single sample, when compared to the unknown population figure. Any difference between the observed sample result and the population characteristic is termed the sampling error. This article will cover the two main forms of statistical inference (hypothesis tests and estimation) along with issues that need to be addressed when considering the implications of the study results. Copyright (c) 2005 Whurr Publishers Ltd.
MicroRNA signatures in B-cell lymphomas
Di Lisio, L; Sánchez-Beato, M; Gómez-López, G; Rodríguez, M E; Montes-Moreno, S; Mollejo, M; Menárguez, J; Martínez, M A; Alves, F J; Pisano, D G; Piris, M A; Martínez, N
2012-01-01
Accurate lymphoma diagnosis, prognosis and therapy still require additional markers. We explore the potential relevance of microRNA (miRNA) expression in a large series that included all major B-cell non-Hodgkin lymphoma (NHL) types. The data generated were also used to identify miRNAs differentially expressed in Burkitt lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) samples. A series of 147 NHL samples and 15 controls were hybridized on a human miRNA one-color platform containing probes for 470 human miRNAs. Each lymphoma type was compared against the entire set of NHLs. BL was also directly compared with DLBCL, and 43 preselected miRNAs were analyzed in a new series of routinely processed samples of 28 BLs and 43 DLBCLs using quantitative reverse transcription-polymerase chain reaction. A signature of 128 miRNAs enabled the characterization of lymphoma neoplasms, reflecting the lymphoma type, cell of origin and/or discrete oncogene alterations. Comparative analysis of BL and DLBCL yielded 19 differentially expressed miRNAs, which were confirmed in a second confirmation series of 71 paraffin-embedded samples. The set of differentially expressed miRNAs found here expands the range of potential diagnostic markers for lymphoma diagnosis, especially when differential diagnosis of BL and DLBCL is required. PMID:22829247
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Bychkov, P. S.; Chentsov, A. V.; Kozintsev, V. M.; Popov, A. L.
2018-04-01
A combined computational and experimental technique is developed for identification of the shrinkage stresses generated in objects after their additive manufacturing by layer-by-layer photopolymerization. The technique is based on the analysis of shrinkage deformations at bending occurring in a series of samples in the form of strip-shaped plates of identical size but different polymerization times, predetermined during their production on the 3D printer.
Spectral and correlation analysis with applications to middle-atmosphere radars
NASA Technical Reports Server (NTRS)
Rastogi, Prabhat K.
1989-01-01
The correlation and spectral analysis methods for uniformly sampled stationary random signals, estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods based on the maximum entropy principle and time series models have been used in analyzing data, but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system comprising paraffin wax, low-density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be ranked as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches in cost, efficiency, and simplicity.
NASA Astrophysics Data System (ADS)
Gholizadeh, A.; Reyhani, A.; Parvin, P.; Mortazavi, S. Z.
2017-05-01
ZnO nanostructures (including nano-plates and nano-rods (NRs)) are grown at various temperatures and Ar/O2 flow rates using thermal chemical vapor deposition, which affect the structure, nano-plate/NR population, and quality of the ZnO nanostructures. X-ray diffraction (XRD) attests that the peak intensity of the crystallographic plane (1 0 0) is correlated with nano-plate abundance. Moreover, optical measurements show that the population of nano-plates in the samples strongly affects the band gap, the binding energy of the exciton, and the UV-visible (UV-vis) absorption and spectral luminescence emissions. In fact, the exciton binding energy decreases from ~100 to 80 meV as the population of nano-plates in the samples increases. Photovoltaic characteristics based on drop-casting on Si solar cells reveal three dominant factors, namely, the equivalent series resistance, decreasing reflectance, and down-shifting, which together scale up the absolute efficiency by 3%. As a consequence, the oxygen vacancies in ZnO nanostructures give rise to down-shifting and an increase of free carriers, leading to a reduction in the equivalent series resistance and an enlargement of the fill factor. To obtain a larger short-circuit current Isc, reduction of spectral reflectance is essential; however, the down-shifting process is shown to be dominant by lessening the surface electron-hole recombination rate over the UV-blue spectral range.
Further contributions to the understanding of nitrogen removal in waste stabilization ponds.
Bastos, R K X; Rios, E N; Sánchez, I A
2018-06-01
A set of experiments was conducted in Brazil in a pilot-scale waste stabilization pond (WSP) system (a four-maturation-pond series) treating an upflow anaerobic sludge blanket (UASB) reactor effluent. Over a year and a half the pond series was monitored under two flow rate conditions, and hence also under different hydraulic retention times and surface loading rates. On-site and laboratory trials were carried out to assess: (i) ammonia losses by volatilization, using acrylic capture chambers placed at the surface of the ponds; (ii) organic nitrogen sedimentation rates, using metal buckets placed at the bottom of the ponds for collecting settled particulate matter; (iii) nitrogen removal by algal uptake, based on the nitrogen content of the suspended particulate matter in samples from the ponds' water column. In addition, nitrification and denitrification rates were measured in laboratory-based experiments using pond water and sediment samples. The pond system achieved high nitrogen removal (69% total nitrogen and 92% ammonia removal). The average total nitrogen removal rates varied from 10,098 to 3,849 g N/ha·d in the first and the last ponds, respectively, with the following fractions associated with the various removal pathways: (i) 23.5-45.6% sedimentation of organic nitrogen; (ii) 13.1-27.8% algal uptake; (iii) 1.2-3.1% ammonia volatilization; and (iv) 0.15-0.34% nitrification-denitrification.
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes, and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which at the same time perform the encapsulation of DLD-sorted particles and the balancing of output pressures. The optimized pressures to apply to DLD modules and to T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490
Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes
2015-09-01
Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs.
A comparison of moment-based methods of estimation for the log Pearson type 3 distribution
NASA Astrophysics Data System (ADS)
Koutrouvelis, I. A.; Canavos, G. C.
2000-06-01
The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
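For readers who want to experiment with the simplest of the estimators compared above, the sketch below implements the classical indirect method of moments (not the paper's adaptive mixed-moments method): Pearson type 3 moments are matched in log space and quantiles are back-transformed. The flow series is synthetic and all names are illustrative.

```python
# Minimal sketch: indirect method of moments for the log Pearson type 3
# distribution, assuming the classical log-space moment matching.
import numpy as np
from scipy import stats

def lp3_quantile(flows, return_periods):
    """Estimate flood quantiles for the given return periods (years)."""
    y = np.log(flows)                        # work in log space
    m, s = y.mean(), y.std(ddof=1)           # sample mean and std of log flows
    g = stats.skew(y, bias=False)            # sample skewness of log flows
    p = 1.0 - 1.0 / np.asarray(return_periods, dtype=float)
    # Pearson III quantiles in log space, then back-transform
    return np.exp(stats.pearson3.ppf(p, g, loc=m, scale=s))

# Example with a synthetic annual maximum series
rng = np.random.default_rng(42)
flows = np.exp(rng.gamma(shape=4.0, scale=0.5, size=60))
print(lp3_quantile(flows, [2, 10, 100]))     # e.g. Q2, Q10, Q100
```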
Overland Flow Analysis Using Time Series of sUAS-Derived Elevation Models
NASA Astrophysics Data System (ADS)
Jeziorska, J.; Mitasova, H.; Petrasova, A.; Petras, V.; Divakaran, D.; Zajkowski, T.
2016-06-01
With the advent of innovative techniques for generating high temporal and spatial resolution terrain models from Unmanned Aerial Systems (UAS) imagery, it has become possible to precisely map overland flow patterns. Furthermore, the process has become more affordable and efficient through the coupling of small UAS (sUAS), which are easily deployed, with Structure from Motion (SfM) algorithms that can efficiently derive 3D data from RGB imagery captured with consumer-grade cameras. We propose applying a robust overland flow algorithm based on the path sampling technique for mapping flow paths in arable land on a small test site in Raleigh, North Carolina. By comparing a time series of five flights in 2015 with the results of a simulation based on the most recent lidar-derived DEM (2013), we show that the sUAS-based data are suitable for overland flow predictions and have several advantages over the lidar data. The sUAS-based data capture preferential flow along tillage and represent gullies more accurately. Furthermore, the simulated water flow patterns over the sUAS-based terrain models are consistent throughout the year. When terrain models are reconstructed only from sUAS-captured RGB imagery, however, water flow modeling is only appropriate in areas with sparse or no vegetation cover.
NASA Astrophysics Data System (ADS)
Chabaux, F.; Blaes, E.; Stille, P.; di Chiara Roupert, R.; Pelt, E.; Dosseto, A.; Ma, L.; Buss, H. L.; Brantley, S. L.
2013-01-01
A 2 m-thick spheroidal weathering profile, developed on a quartz diorite in the Rio Icacos watershed (Luquillo Mountains, eastern Puerto Rico), was analyzed for major and trace element concentrations, Sr and Nd isotopic ratios and U-series nuclides (238U-234U-230Th-226Ra). In this profile a 40 cm thick soil horizon overlies a 150 cm thick saprolite, which is separated from the basal corestone by a ~40 cm thick rindlet zone. The Sr and Nd isotopic variations along the whole profile imply that, in addition to geochemical fractionations associated with water-rock interactions, the geochemical budget of the profile is influenced by a significant accretion of atmospheric dusts. The mineralogical and geochemical variations along the profile also confirm that the weathering front does not progress continuously from the top to the base of the profile. The upper part of the profile is probably associated with a different weathering system (lateral weathering of upper corestones) than the lower part, which consists of the basal corestone, the associated rindlet system and the saprolite in contact with these rindlets. Consequently, the determination of weathering rates from 238U-234U-230Th-226Ra disequilibrium in a series of samples collected along a vertical depth profile can only be attempted for samples collected in the lower part of the profile, i.e. the rindlet zone and the lower saprolite. Similar propagation rates were derived for the rindlet system and the saprolite by using classical models, involving loss and gain processes for all nuclides, to interpret the variation of U-series nuclides in the rindlet-saprolite subsystem. The consistency of these weathering rates with average weathering and erosion rates derived via other methods for the whole watershed provides a new and independent argument that, in the Rio Icacos watershed, the weathering system has reached a geomorphologic steady state. Our study also indicates that even in environments with differential weathering, such as observed at the Puerto Rico site, the radioactive disequilibrium between the nuclides of a single radioactive series (here 238U-234U-230Th-226Ra) can still be interpreted in terms of a simplified scenario of congruent weathering. Incidentally, the U-Th-Ra disequilibrium in the corestone samples confirms that the outermost part of the corestone is already weathered.
Two approaches to timescale modeling for proxy series with chronological errors.
NASA Astrophysics Data System (ADS)
Divine, Dmitry; Godtliebsen, Fred
2010-05-01
A substantial part of the proxy series used in paleoclimate research has chronological uncertainties. Any constructed timescale is therefore only an estimate of the true, but unknown, timescale. An accurate assessment of the timing of events in paleoproxy series and networks, as well as the use of proxy-based paleoclimate reconstructions in GCM model scoring experiments, requires the effect of these errors to be properly taken into account. We consider two types of timescale error models corresponding to the two basic approaches to construction of the (depth-) age scale in a proxy series. Typically, the chronological control of a proxy series stemming from marine and terrestrial sedimentary archives is based on the use of 14C dates, reference horizons or their combination. Depending on the prevalent origin of the available fix points (age markers), the following approaches to timescale modeling are proposed. 1) 14C dates. The algorithm uses a Markov chain Monte Carlo sampling technique to generate an ordered set of perturbed age markers. Proceeding sequentially from the youngest to the oldest fix point, the sampler draws random numbers from the age distribution of each individual 14C date. Every following perturbed age marker is generated such that the condition of no age reversal is fulfilled. The relevant regression model is then applied to construct a simulated timescale. 2) Reference horizons (e.g. volcanic or dust layers, the tritium bomb peak) generally provide absolutely dated fix points. Due to natural variability in sedimentation (accumulation) rate, however, the dating uncertainty in the interpolated timescale tends to grow with the distance to the nearest fix point. The (accumulation, sedimentation) process associated with formation of a proxy series is modelled using a stochastic Lévy process. The respective increments for the process are drawn from a log-normal distribution with the mean/variance ratio prescribed as a site (proxy)-dependent external parameter. The number of generated annual increments corresponds to the time interval between the considered reference horizons. The simulated series is then rescaled to match the length of the actual core section being modelled. Within each method a multitude of timescales is generated, creating a number of possible realisations of a proxy series or a proxy-based reconstruction in the time domain. This allows consideration of a proxy record in a probabilistic framework. The effect of accounting for uncertainties in chronology on a reconstructed environmental variable is illustrated with two case studies of marine sediment records.
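A minimal sketch of approach 1) follows, under two simplifying assumptions: each 14C age distribution is treated as Gaussian, and simple linear interpolation stands in for "the relevant regression model". The depths, ages and errors are hypothetical.

```python
# Sketch: sequential sampling of perturbed 14C age markers with the
# no-age-reversal constraint, then an ensemble of interpolated timescales.
import numpy as np

rng = np.random.default_rng(0)

depths = np.array([10.0, 55.0, 120.0, 200.0])        # cm, dated levels (hypothetical)
ages   = np.array([850.0, 2100.0, 4300.0, 7900.0])   # cal yr BP, central ages
sigma  = np.array([40.0, 60.0, 55.0, 90.0])          # 1-sigma dating errors

def one_realisation():
    """Draw one monotone set of perturbed age markers (youngest first)."""
    sampled, prev = [], -np.inf
    for mu, sd in zip(ages, sigma):
        a = rng.normal(mu, sd)
        while a <= prev:        # reject reversals: ages must increase with depth
            a = rng.normal(mu, sd)
        sampled.append(a)
        prev = a
    return np.array(sampled)

# Ensemble of timescales evaluated on a regular depth grid
grid = np.linspace(depths[0], depths[-1], 96)
ensemble = np.array([np.interp(grid, depths, one_realisation())
                     for _ in range(1000)])
print(ensemble.mean(axis=0)[:4])   # mean age-depth relation
print(ensemble.std(axis=0)[:4])    # chronological uncertainty at each depth
```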
Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R.; ...
2015-06-18
Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections using a fully automated droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system for spatially resolved sampling, HPLC separation, and mass spectral detection. Excellent correlation was found between the protein distribution data obtained with this droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system and those data obtained with matrix assisted laser desorption ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis) and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared to non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as anticipated. This work demonstrates that a fully automated droplet-based liquid microjunction surface sampling system coupled to HPLC-ESI-MS/MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically relevant peptide and protein hormones, such as AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity and specificity of the current methodology support the potential of this basic technology, with further advancement, for assisting surgical decision-making.
Nonlinear analysis and dynamic structure in the energy market
NASA Astrophysics Data System (ADS)
Aghababa, Hajar
This research assesses the dynamic structure of the energy sector of the aggregate economy in the context of nonlinear mechanisms. Earlier studies have focused mainly on the prices of energy products when detecting nonlinearities in time series data of the energy market, and there is little mention of the production side of the market. Moreover, there is a lack of exploration of the implications of high dimensionality and time aggregation when analyzing the market's fundamentals. This research addresses these gaps by including the quantity side of the market in addition to the price and by systematically incorporating various sampling frequencies in three essays. The goal of this research is to provide an inclusive and exhaustive examination of the dynamics in the energy markets. The first essay begins with the application of statistical techniques, and it incorporates the most well-known univariate tests for nonlinearity, which have distinct power functions over alternatives and test different null hypotheses. It utilizes daily spot price observations on five major products in the energy market. The results suggest that the time series of daily spot prices of the energy products are highly nonlinear in nature. They demonstrate apparent evidence of general nonlinear serial dependence in each individual series, as well as nonlinearity in the first, second, and third moments of the series. The second essay examines the underlying mechanism of crude oil production and identifies the nonlinear structure of the production market by utilizing various monthly time series observations of crude oil production: the U.S. field, the Organization of the Petroleum Exporting Countries (OPEC), non-OPEC, and world production of crude oil. The findings imply that the time series data of U.S. field, OPEC, and world production of crude oil exhibit deep nonlinearity in their structure and are generated by nonlinear mechanisms. However, the dynamics of the non-OPEC production time series do not reveal signs of nonlinearity. The third essay explores nonlinear structure in the case of high dimensionality of the observations, different sampling frequencies, and division of the samples into sub-samples. It systematically examines the robustness of the inference methods at various levels of time aggregation by employing daily spot prices on crude oil for 26 years as well as a monthly spot price index on crude oil for 41 years. The daily and monthly samples are divided into sub-samples as well. All the tests detect strong evidence of nonlinear structure in the daily spot price of crude oil, whereas in the monthly observations the evidence of nonlinear dependence is less dramatic, indicating that nonlinear serial dependence becomes less intense as the level of time aggregation in the time series observations increases.
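The abstract does not name the specific test battery, so the sketch below illustrates the general surrogate-data logic behind such nonlinearity tests: a nonlinear statistic computed on the series is compared against phase-randomised surrogates that preserve the linear correlation structure. The threshold-autoregressive "price" series and all parameters are synthetic.

```python
# Sketch: surrogate-data test for nonlinear serial dependence.
import numpy as np

rng = np.random.default_rng(1)

def phase_surrogate(x):
    """Phase-randomised surrogate: same power spectrum, linearised dynamics."""
    n = len(x)
    f = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(f))
    phases[0] = 0.0                      # keep the mean
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n)

def rev_asym(x, lag=1):
    """Time-reversal asymmetry, a simple nonlinearity statistic."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**3) / np.mean(d**2) ** 1.5

# Toy series with nonlinear (threshold-autoregressive) dynamics
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = (0.9 if x[t - 1] > 0 else -0.4) * x[t - 1] + rng.normal()

stat = rev_asym(x)
null = np.array([rev_asym(phase_surrogate(x)) for _ in range(200)])
p = np.mean(np.abs(null) >= np.abs(stat))   # two-sided Monte Carlo p-value
print(f"statistic={stat:.3f}, surrogate p-value={p:.3f}")
```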
Properties of 91 Southern Soil Series
Basil D. Doss; W. M. Broadfoot
1956-01-01
From June 1954 to July 1955 the Vicksburg Infiltration Project collected and analyzed samples of 91 soil series in 7 southern states. The purpose was to supply the U. S. Army with information needed for specialized research on military trafficability, but the basic data on soil properties should be of interest to soil scientists generally. The 91 series may be...
OSMEAN - OSCULATING/MEAN CLASSICAL ORBIT ELEMENTS CONVERSION (HP9000/7XX VERSION)
NASA Technical Reports Server (NTRS)
Guinn, J. R.
1994-01-01
OSMEAN is a sophisticated FORTRAN algorithm that converts between osculating and mean classical orbit elements. Mean orbit elements are advantageous for trajectory design and maneuver planning since they can be propagated very quickly; however, mean elements cannot describe the exact orbit at any given time. Osculating elements enable the engineer to give an exact description of an orbit; however, computation costs are significantly higher due to the numerical integration required for propagation. By calculating accurate conversions between osculating and mean orbit elements, OSMEAN allows the engineer to exploit the advantages of each approach for the design of orbital trajectories and the planning of maneuvers. OSMEAN is capable of converting mean elements to osculating elements or vice versa. The conversion is based on modelling of all first-order aspherical and lunar-solar gravitation perturbations as well as a second-order aspherical term based on the second-degree central body zonal perturbation. OSMEAN is written in FORTRAN 77 for HP 9000 series computers running HP-UX (NPO-18796) and DEC VAX series computers running VMS (NPO-18741). The HP version requires 388K of RAM for execution and the DEC VAX version requires 254K of RAM. Sample input and output are listed in the documentation. Sample input is also provided on the distribution medium. The standard distribution medium for the HP 9000 series version is a .25 inch streaming magnetic IOTAMAT tape cartridge in UNIX tar format. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the DEC VAX version is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. OSMEAN was developed on a VAX 6410 in 1989 and was ported to the HP 9000 series platform in 1991. It is a copyrighted work with all copyright vested in NASA.
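OSMEAN itself models all first-order aspherical and lunisolar perturbations; the sketch below shows only the dominant ingredient of mean-element propagation, the secular drift rates induced by the second-degree zonal harmonic J2. These are standard textbook formulas, not OSMEAN's code.

```python
# Sketch: J2 secular rates for mean classical orbit elements.
import numpy as np

MU = 398600.4418    # km^3/s^2, Earth gravitational parameter
RE = 6378.137       # km, Earth equatorial radius
J2 = 1.08262668e-3  # second-degree zonal harmonic

def j2_secular_rates(a, e, i):
    """Secular drift rates (rad/s) of RAAN, argument of perigee and mean
    anomaly for mean elements under the J2 zonal perturbation."""
    n = np.sqrt(MU / a**3)            # mean motion
    p = a * (1 - e**2)                # semi-latus rectum
    k = 1.5 * J2 * (RE / p)**2 * n
    raan_dot = -k * np.cos(i)
    argp_dot = k * (2.0 - 2.5 * np.sin(i)**2)
    m_dot = n + k * np.sqrt(1 - e**2) * (1.0 - 1.5 * np.sin(i)**2)
    return raan_dot, argp_dot, m_dot

# Example: a sun-synchronous-like orbit; RAAN should drift ~ +1 deg/day
a, e, i = 7078.0, 0.001, np.radians(98.6)
print([np.degrees(r) * 86400 for r in j2_secular_rates(a, e, i)])  # deg/day
```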
Sinha, Shriprakash
2017-12-04
Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila Information Service, 1973, 50, p. 134], research on the Wnt signaling pathway has taken significant strides in wet lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information-rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, few computational studies exist which incorporate the simultaneous study of these issues. This manuscript (i) explores the strength of contributing factors in the signaling pathway, (ii) analyzes the existing causal relations among the inter/extracellular factors affecting the pathway based on prior biological knowledge and (iii) investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles, using the density-based (Hilbert-Schmidt Information Criterion) and variance-based (Sobol) sensitivity indices. The results show the advantage of using density-based indices over variance-based indices, mainly due to the former's employment of distance measures and the kernel trick via a Reproducing Kernel Hilbert Space (RKHS), which capture nonlinear relations among various intra/extracellular factors of the pathway in a higher-dimensional space. In time series data, using these indices it is now possible to observe where in time which factors get influenced and contribute to the pathway, as changes in the concentration of the other factors are made. This synergy of prior biological knowledge, sensitivity analysis and representations in higher-dimensional spaces can facilitate time-based administration of targeted therapeutic drugs and reveal hidden biological information within colorectal cancer samples.
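As an illustration of the density-based index mentioned above, the sketch below computes a biased empirical Hilbert-Schmidt Independence Criterion with median-heuristic RBF kernels. It is a generic estimator on synthetic data, not the manuscript's exact implementation; note how it detects the nonlinear (quadratic) dependence that linear correlation would miss.

```python
# Sketch: biased empirical HSIC with RBF kernels.
import numpy as np

def rbf_gram(x, width):
    """RBF (Gaussian) Gram matrix for a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * width**2))

def hsic(x, y):
    """Biased empirical HSIC with median-heuristic kernel widths."""
    n = len(x)
    wx = np.median(np.abs(x[:, None] - x[None, :])) + 1e-12
    wy = np.median(np.abs(y[:, None] - y[None, :])) + 1e-12
    K, L = rbf_gram(x, wx), rbf_gram(y, wy)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(2)
x = rng.normal(size=300)
print(hsic(x, x**2))                   # nonlinear dependence: large HSIC
print(hsic(x, rng.normal(size=300)))   # independence: HSIC near zero
```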
Mars Sample Handling Protocol Workshop Series: Workshop 2
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Acevedo, Sara E. (Editor); Kovacs, Gregory T. A. (Editor); Race, Margaret S. (Editor); DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
Numerous NASA reports and studies have identified Planetary Protection (PP) as an important part of any Mars sample return mission. The mission architecture, hardware, on-board experiments, and related activities must be designed in ways that prevent both forward- and back-contamination and also ensure maximal return of scientific information. A key element of any PP effort for sample return missions is the development of guidelines for containment and analysis of returned sample(s). As part of that effort, NASA and the Space Studies Board (SSB) of the National Research Council (NRC) have each assembled experts from a wide range of scientific fields to identify and discuss issues pertinent to sample return. In 1997, the SSB released its report on recommendations for handling and testing of returned Mars samples. In particular, the NRC recommended that: a) samples returned from Mars by spacecraft should be contained and treated as potentially hazardous until proven otherwise, and b) rigorous physical, chemical, and biological analyses [should] confirm that there is no indication of the presence of any exogenous biological entity. Also in 1997, a Mars Sample Quarantine Protocol workshop was convened at NASA Ames Research Center to deal with three specific aspects of the initial handling of a returned Mars sample: 1) biocontainment, to prevent 'uncontrolled release' of sample material into the terrestrial environment; 2) life detection, to examine the sample for evidence of organisms; and 3) biohazard testing, to determine if the sample poses any threat to terrestrial life forms and the Earth's biosphere. In 1999, a study by NASA's Mars Sample Handling and Requirements Panel (MSHARP) addressed three other specific areas in anticipation of returning samples from Mars: 1) sample collection and transport back to Earth; 2) certification of the samples as non-hazardous; and 3) sample receiving, curation, and distribution. To further refine the requirements for sample hazard testing and the criteria for subsequent release of sample materials from quarantine, the NASA Planetary Protection Officer convened an additional series of workshops beginning in March 2000. The overall objective of these workshops was to develop comprehensive protocols to assess whether the returned materials contain any biological hazards, and to safeguard the purity of the samples from possible terrestrial contamination. This document is the report of the second Workshop in the Series. The information herein will ultimately be integrated into a final document reporting the proceedings of the entire Workshop Series along with additional information and recommendations.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations.
Evaluation of line transect sampling based on remotely sensed data from underwater video
Bergstedt, R.A.; Anderson, D.R.
1990-01-01
We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
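A minimal sketch of a Fourier series line transect estimator in the spirit of this study follows. The number of cosine terms m is fixed here for simplicity (in practice it is chosen by a stopping rule), and the detection distances are simulated rather than taken from the brick survey.

```python
# Sketch: Fourier series estimator of f(0) and line transect density.
import numpy as np

def fourier_f0(x, w, m=3):
    """Fourier series estimate of the detection density at zero distance,
    f(0), from perpendicular distances x truncated at half-width w."""
    n = len(x)
    a = [(2.0 / (n * w)) * np.sum(np.cos(k * np.pi * x / w))
         for k in range(1, m + 1)]
    return 1.0 / w + sum(a)

def transect_density(x, w, L):
    """Line transect density estimate: D = n * f(0) / (2 L)."""
    return len(x) * fourier_f0(x, w) / (2.0 * L)

# Example with hypothetical numbers: 120 detections along 5 km of transect,
# perpendicular distances (m) truncated at w = 10 m
rng = np.random.default_rng(3)
x = np.abs(rng.normal(0, 4, 120)).clip(max=10.0)
d = transect_density(x, w=10.0, L=5000.0)   # objects per m^2
print(d * 1e4)                              # objects per hectare
```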
Rapid determination of Faraday rotation in optical glasses by means of secondary Faraday modulator.
Sofronie, M; Elisa, M; Sava, B A; Boroica, L; Valeanu, M; Kuncser, V
2015-05-01
A rapid, highly sensitive method for determining the Faraday rotation of optical glasses is proposed. Starting from an experimental setup based on a Faraday rod coupled to a lock-in amplifier in the detection chain, two methodologies were developed to provide reliable results on samples presenting small and large Faraday rotations. The proposed methodologies are critically discussed and compared via results obtained in transmission geometry on a new series of aluminophosphate glasses with or without rare-earth doping ions. An example of how the method can be used for a rapid examination of the optical homogeneity of a sample with respect to magneto-optical effects is also provided.
NASA Astrophysics Data System (ADS)
Butler, P. G.; Scourse, J. D.; Richardson, C. A.; Wanamaker, A. D., Jr.
2009-04-01
Determinations of the local correction (ΔR) to the globally averaged marine radiocarbon reservoir age are often isolated in space and time, derived from heterogeneous sources and constrained by significant uncertainties. Although time series of ΔR at single sites can be obtained from sediment cores, these are subject to multiple uncertainties related to sedimentation rates, bioturbation and interspecific variations in the source of radiocarbon in the analysed samples. Coral records provide better resolution, but these are available only for tropical locations. It is shown here that it is possible to use the shell of the long-lived bivalve mollusc Arctica islandica as a source of high-resolution time series of absolutely dated marine radiocarbon determinations for the shelf seas surrounding the North Atlantic Ocean. Annual growth increments in the shell can be crossdated and chronologies can be constructed in precise analogy with the use of tree rings. Because the calendar dates of the samples are known, ΔR can be determined with high precision and accuracy, and because all the samples are from the same species, the time series of ΔR values possesses a high degree of internal consistency. Presented here is a multi-centennial (AD 1593 - AD 1933) time series of 31 ΔR values for a site in the Irish Sea close to the Isle of Man. The mean value of ΔR (-62 14C yrs) does not change significantly during this period, but increased variability is apparent before AD 1750.
NASA Astrophysics Data System (ADS)
Lark, R. M.; Rawlins, B. G.; Lark, T. A.
2014-05-01
The LUCAS Topsoil survey is a pan-European Union initiative in which soil data were collected according to standard protocols from 19,967 sites. Any inference about soil variables is subject to uncertainty due to different sources of variability in the data. In this study we examine the likely magnitude of uncertainty due to the field-sampling protocol. The published sampling protocol (LUCAS, 2009) describes a procedure to form a composite soil sample from aliquots collected to a depth of approximately 15-20 cm. A v-shaped hole to the target depth is cut with a spade, and a slice is then cut from one of the exposed surfaces. This methodology gives rather less control of the sampling depth than protocols used in other soil and geochemical surveys; this may be a substantial source of variation in uncultivated soils with strong contrasts between an organic-rich A-horizon and an underlying B-horizon. We extracted all representative profile descriptions from soil series recorded in the memoir of the 1:250 000-scale map of Northern England (Soil Survey of England and Wales, 1984) where the base of the A-horizon is less than 20 cm below the surface. The Soil Associations in which these 14 series are significant members cover approximately 17% of the area of Northern England, and are expected to be the mineral soils with the largest organic content. Soil organic carbon (SOC) content, bulk density (recorded, or predicted by a pedotransfer function) and horizon thickness were extracted for the A- and B-horizons. For any proposed angle of the v-shaped hole, the proportions of A- and B-horizon in the resulting sample may be computed by trigonometry. From the bulk density and SOC concentration of the horizons, the SOC concentration of the sample can be computed. For each soil series we drew 1000 random samples from a trapezoidal distribution of angles, with uniform density over the range corresponding to depths of 15-20 cm and zero density for angles corresponding to depths greater than 21 cm or less than 14 cm. We then computed the corresponding variance of sample SOC contents. We found that the variance in SOC determinations attributable to variation in sample depth for these uncultivated soils was of the same order of magnitude as the estimate of the subsampling-plus-analytical variance component (both on a log scale) that we previously computed for soils in the UK (Rawlins et al., 2009). It seems unnecessary to accept this source of uncertainty, given the effort undertaken to reduce the analytical variation, which is no larger (and often smaller) than this variation due to the field protocol. If pan-European soil monitoring is to be based on the LUCAS Topsoil survey, as suggested by an initial report, uncertainty could be reduced if the sampling depth were specified as a single depth rather than the current depth range. References: LUCAS. 2009. Instructions for Surveyors. Technical reference document C-1: General implementation, Land Cover and Use, Water management, Soil, Transect, Photos. European Commission, Eurostat. Rawlins, B.G., Scheib, A.J., Lark, R.M. & Lister, T.R. 2009. Sampling and analytical plus subsampling variance components for five soil indicators observed at regional scale. European Journal of Soil Science, 60, 740-747.
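The trigonometric and Monte Carlo computation described above can be sketched as follows. Hypothetical horizon properties stand in for the memoir values, and realised depths are sampled directly (rather than spade angles) from a trapezoidal density mirroring the one in the study: flat on 15-20 cm with linear tails to 14 and 21 cm.

```python
# Sketch: depth-induced variance of composite-sample SOC for a v-shaped cut.
import numpy as np

rng = np.random.default_rng(4)

za = 12.0                 # depth of A/B boundary, cm (hypothetical)
soc_a, bd_a = 8.0, 0.9    # A-horizon SOC (%) and bulk density (g/cm^3)
soc_b, bd_b = 2.0, 1.3    # B-horizon SOC (%) and bulk density (g/cm^3)

def sample_soc(depth):
    """SOC of a composite sample cut to `depth` cm with a v-shaped hole:
    mass contributions scale with the triangular cross section, so the
    area above the A/B boundary is proportional to d^2 - (d - za)^2."""
    frac_a = 1.0 - ((depth - za) / depth) ** 2 if depth > za else 1.0
    mass_a = frac_a * bd_a
    mass_b = (1.0 - frac_a) * bd_b
    return (mass_a * soc_a + mass_b * soc_b) / (mass_a + mass_b)

def trapezoid_depth(n):
    """Depths with flat density on 15-20 cm and linear tails to 14/21 cm
    (tail mass 1/12 each, plateau mass 5/6)."""
    u = rng.uniform(size=n)
    d = np.empty(n)
    lo, hi = u < 1 / 12, u >= 11 / 12
    mid = ~lo & ~hi
    d[lo] = 14 + rng.triangular(0, 1, 1, lo.sum())   # rising tail
    d[mid] = rng.uniform(15, 20, mid.sum())          # plateau
    d[hi] = 20 + rng.triangular(0, 0, 1, hi.sum())   # falling tail
    return d

soc = np.array([sample_soc(d) for d in trapezoid_depth(1000)])
print(f"mean SOC {soc.mean():.2f}%, depth-induced variance {soc.var():.4f}")
```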
Advancing Methods for U.S. Transgender Health Research
Reisner, Sari L.; Deutsch, Madeline B.; Bhasin, Shalender; Bockting, Walter; Brown, George R.; Feldman, Jamie; Garofalo, Rob; Kreukels, Baudewijntje; Radix, Asa; Safer, Joshua D.; Tangpricha, Vin; T’Sjoen, Guy; Goodman, Michael
2016-01-01
Purpose of Review: To describe methodological challenges, gaps, and opportunities in U.S. transgender health research. Recent Findings: Lack of large prospective observational studies and intervention trials, limited data on risks and benefits of gender affirmation (e.g., hormones and surgical interventions), and inconsistent use of definitions across studies hinder evidence-based care for transgender people. Systematic high-quality observational and intervention-testing studies may be carried out using several approaches, including general population-based, health systems-based, clinic-based, venue-based, and hybrid designs. Each of these approaches has its strengths and limitations; however, harmonization of research efforts is needed. Ongoing development of evidence-based clinical recommendations will benefit from a series of observational and intervention studies aimed at identification, recruitment, and follow-up of transgender people of different ages, from different racial, ethnic, and socioeconomic backgrounds and with diverse gender identities. Summary: Transgender health research faces challenges that include standardization of lexicon, agreed-upon population definitions, study design, sampling, measurement, outcome ascertainment, and sample size. Application of existing and new methods is needed to fill existing gaps, increase the scientific rigor and reach of transgender health research, and inform evidence-based prevention and care for this underserved population. PMID:26845331
NASA Technical Reports Server (NTRS)
Hazen-Bosveld, April; Lipert, Robert J.; Nordling, John; Shih, Chien-Ju; Siperko, Lorraine; Porter, Marc D.; Gazda, Daniel B.; Rutz, Jeff A.; Straub, John E.; Schultz, John R.;
2007-01-01
Colorimetric-solid phase extraction (C-SPE) is being developed as a method for in-flight monitoring of spacecraft water quality. C-SPE is based on measuring the change in the diffuse reflectance spectrum of indicator disks following exposure to a water sample. Previous microgravity testing has shown that air bubbles suspended in water samples can cause uncertainty in the volume of liquid passed through the disks, leading to errors in the determination of water quality parameter concentrations. We report here the results of a recent series of C-9 microgravity experiments designed to evaluate manual manipulation as a means to collect bubble-free water samples of specified volumes from water sample bags containing up to 47% air. The effectiveness of manual manipulation was verified by comparing the results from C-SPE analyses of silver(I) and iodine performed in-flight using samples collected and debubbled in microgravity to those performed on-ground using bubble-free samples. The ground and flight results showed excellent agreement, demonstrating that manual manipulation is an effective means for collecting bubble-free water samples in microgravity.
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increase, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
NASA Astrophysics Data System (ADS)
Arp, Hans Peter H.; Goss, Kai-Uwe
Due to the apparent environmental omnipresence of perfluorocarboxylic acids (PFAs), an increasing number of researchers are investigating their ambient particle- and gas-phase concentrations. Typically this is done using a high-volume air sampler equipped with Quartz Fiber Filters (QFFs) or Glass Fiber Filters (GFFs) to sample the particle-bound PFAs and downstream sorbents to sample the gas-phase PFAs. This study reports that at trace, ambient concentrations gas-phase PFAs sorb to QFFs and GFFs irreversibly and hardly pass through these filters to the downstream sorbents. As a consequence, it is not possible to distinguish between particle- and gas-phase concentrations, or to distinguish concentrations on different particle size fractions, unless precautions are taken. Failure to take such precautions could have already caused reported data to be misinterpreted. Here it is also reported that deactivating QFFs and GFFs with a silylating agent renders them suitable for sampling PFAs. Based on the presented study, a series of recommendations for air-sampling PFAs are provided.
Strategies to address participant misrepresentation for eligibility in Web-based research.
Kramer, Jessica; Rubin, Amy; Coster, Wendy; Helmuth, Eric; Hermos, John; Rosenbloom, David; Moed, Rich; Dooley, Meghan; Kao, Ying-Chia; Liljenquist, Kendra; Brief, Deborah; Enggasser, Justin; Keane, Terence; Roy, Monica; Lachowicz, Mark
2014-03-01
Emerging methodological research suggests that the World Wide Web ("Web") is an appropriate venue for survey data collection, and a promising area for delivering behavioral intervention. However, the use of the Web for research raises concerns regarding sample validity, particularly when the Web is used for recruitment and enrollment. The purpose of this paper is to describe the challenges experienced in two different Web-based studies in which participant misrepresentation threatened sample validity: a survey study and an online intervention study. The lessons learned from these experiences generated three types of strategies researchers can use to reduce the likelihood of participant misrepresentation for eligibility in Web-based research. Examples of procedural/design strategies, technical/software strategies and data analytic strategies are provided along with the methodological strengths and limitations of specific strategies. The discussion includes a series of considerations to guide researchers in the selection of strategies that may be most appropriate given the aims, resources and target population of their studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engle, V.D.; Summers, J.K.; Macauley, J.M.
1994-12-31
The Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) in the Gulf of Mexico supplements its base sampling effort each year with localized, intensive spatial sampling in selected large estuarine systems. By selecting random locations within 70 km2 hexagonal areas, individual estuaries were sampled using EMAP methods but at four times the density of base sampling. In 1992, 19 sites were sampled in Lake Pontchartrain, Louisiana. In 1993, 18 sites were sampled in Sabine Lake, Texas and 12 sites were sampled in Choctawhatchee Bay, Florida. At all sites, sediment grabs were taken and analyzed for benthic species composition and abundance, for toxicity to Ampelisca, and for organic and inorganic sediment contaminants. An indicator of biotic integrity, the benthic index, was calculated to represent the status of benthic communities. A series of statistical techniques, such as stepwise regression analysis, were employed to determine whether the variation in the benthic index could be associated with variation in sediment contaminants, sediment toxicity, or levels of dissolved oxygen. Spatial distributions of these parameters were examined to determine the geographical co-occurrence of degraded benthic communities and environmental stressors. In Lake Pontchartrain, for example, 85% of the variation in the benthic index was associated with decreased levels of dissolved oxygen and increased concentrations of PCBs, alkanes, copper, tin, and zinc in the sediments.
Barr, Margo L; Ferguson, Raymond A; Steel, David G
2014-08-12
Since 1997, the NSW Population Health Survey (NSWPHS) had selected its sample using random digit dialing of landline telephone numbers. When the survey began, coverage of the population by landline phone frames was high (96%). As landline coverage in Australia has declined and continues to do so, in 2012 a sample of mobile telephone numbers was added to the survey using an overlapping dual-frame design. Details of the methodology are published elsewhere. This paper discusses the impacts of the sampling frame change on the time series and provides possible approaches to handling these impacts. Prevalence estimates were calculated for type of phone use and for a range of health indicators. Prevalence ratios (PR) for each of the health indicators were also calculated by type of phone use, using Poisson regression analysis with robust variance estimation. Health estimates for 2012 were compared to 2011. The full time series was examined for selected health indicators. It was estimated from the 2012 NSWPHS that 20.0% of the NSW population were mobile-only phone users. Considering the full time series for overweight or obese and for current smoking: had the NSWPHS continued to be undertaken using only a landline frame, overweight or obese would have been shown to continue to increase and current smoking would have been shown to continue to decrease. However, with the introduction of the overlapping dual-frame design in 2012, overweight or obese increased until 2011 and then decreased in 2012, and current smoking decreased until 2011 and then increased in 2012. Our examination of these time series showed that the changes were a consequence of the sampling frame change and were not real changes. Both the backcasting method and the minimal coverage method could adequately adjust for the design change and allow for the continuation of the time series. The inclusion of mobile telephone numbers through an overlapping dual-frame design did impact the time series for some of the health indicators collected through the NSWPHS, but only in that it corrected the estimates that were being calculated from a sample frame that was progressively covering less of the population.
Comparing Stream DOC Fluxes from Sensor- and Sample-Based Approaches
NASA Astrophysics Data System (ADS)
Shanley, J. B.; Saraceno, J.; Aulenbach, B. T.; Mast, A.; Clow, D. W.; Hood, K.; Walker, J. F.; Murphy, S. F.; Torres-Sanchez, A.; Aiken, G.; McDowell, W. H.
2015-12-01
DOC transport by streamwater is a significant flux that does not consistently show up in ecosystem carbon budgets. In an effort to quantify stream DOC flux, we analyzed three to four years of high-frequency in situ fluorescing dissolved organic matter (FDOM) concentrations and turbidity measured by optical sensors at the five diverse forested and/or alpine headwater sites of the U.S. Geological Survey (USGS) Water, Energy, and Biogeochemical Budgets (WEBB) program. FDOM serves as a proxy for DOC. We also took discrete samples over a range of hydrologic conditions, using both manual weekly and automated event-based sampling. After compensating FDOM for temperature effects and turbidity interference (successful even at the high-turbidity Luquillo, PR site), we evaluated the DOC-FDOM relation based on discrete sample DOC analyses matched to corrected FDOM at the time of sampling. FDOM was a moderately robust predictor of DOC, with r2 from 0.60 to more than 0.95 among sites. We then formed continuous DOC time series by two independent approaches: (1) DOC predicted from FDOM; and (2) the composite method, based on modeled DOC from regression on stream discharge, season, air temperature, and time, forcing the model to observations and adjusting modeled concentrations between observations by linearly interpolated model residuals. DOC flux from each approach was then computed directly as concentration times discharge. DOC fluxes based on the sensor approach were consistently greater than those from the sample-based approach. At Loch Vale, CO (2.5 years) and Panola Mountain, GA (1 year), the difference was 5-17%. At Sleepers River, VT (3 years), preliminary differences were greater than 20%. The difference is driven by the highest events, but we are investigating these results further. We will also present comparisons from Luquillo, PR, and Allequash Creek, WI. The higher sensor-based DOC fluxes could result from their accuracy during hysteresis, which is difficult to model. In at least one case the higher sensor-based DOC flux was linked to an unsampled event outside the range of the concentration model. Sensors require upkeep and vigilance with the data, but have the potential to yield more accurate fluxes than sample-based approaches.
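A schematic of the sensor-based flux computation (step 1: calibrate a DOC-FDOM rating on discrete samples; step 2: predict continuous DOC and integrate concentration times discharge) is sketched below with synthetic data. The simple linear rating is an assumption, as is every numeric value.

```python
# Sketch: DOC flux from an FDOM proxy time series.
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 15-min series for one year (all values are illustrative)
t_days = np.arange(0, 365, 1 / 96.0)                  # time in days
q = 0.5 + np.abs(rng.normal(0.0, 0.3, len(t_days)))   # discharge, m^3/s
fdom = 20 + 30 * q + rng.normal(0, 1, len(t_days))    # corrected FDOM (QSU)

# Discrete samples: matched lab DOC (mg/L) and sensor FDOM at sampling times
idx = rng.choice(len(t_days), size=60, replace=False)
doc_lab = 0.08 * fdom[idx] + rng.normal(0, 0.2, 60)

# Step 1: linear DOC-FDOM rating fitted to the discrete samples
slope, intercept = np.polyfit(fdom[idx], doc_lab, 1)

# Step 2: continuous DOC (mg/L = g/m^3), instantaneous flux C*Q (g/s),
# integrated over time with the trapezoidal rule
doc = slope * fdom + intercept
inst = doc * q                                        # g/s
dt = np.diff(t_days) * 86400.0                        # s
flux_g = np.sum(0.5 * (inst[1:] + inst[:-1]) * dt)
print(f"annual DOC export ~ {flux_g / 1e6:.1f} t")    # grams -> metric tons
```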
NASA Astrophysics Data System (ADS)
Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.
2013-08-01
The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements that meet certain selection criteria, yielding the site-specific time series of LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
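The first step of the scheme can be caricatured as below, under the strong simplifying assumption that the sampling error acts as a single fractional-sample shift of the whole interferogram; the shift is grid-searched to minimise the residual signal in a fully absorbed spectral window. The interferogram and window indices are placeholders, not TCCON data.

```python
# Sketch: grid-search a sampling shift that minimises absorbed-band signal.
import numpy as np

def shift_igram(igram, eps):
    """Resample an interferogram by a fractional-sample shift eps
    (sinc interpolation via the Fourier shift theorem)."""
    f = np.fft.fft(igram)
    k = np.fft.fftfreq(len(igram))
    return np.real(np.fft.ifft(f * np.exp(-2j * np.pi * k * eps)))

def absorbed_band_signal(igram, band):
    """Mean spectral magnitude inside a fully absorbed window `band`;
    for correctly sampled data this residual signal is minimal."""
    return np.abs(np.fft.rfft(igram))[band].mean()

def estimate_lse(igram, band, grid=np.linspace(-0.05, 0.05, 201)):
    """Return the candidate shift with the smallest absorbed-band signal."""
    scores = [absorbed_band_signal(shift_igram(igram, e), band) for e in grid]
    return grid[int(np.argmin(scores))]

# Usage with a synthetic interferogram; `band` marks a saturated region
igram = np.random.default_rng(7).normal(size=4096)
band = slice(1200, 1300)     # placeholder spectral indices
print(estimate_lse(igram, band))
```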
NASA Astrophysics Data System (ADS)
Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.
2017-12-01
The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements and new applications that are not currently possible or likely even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to those for continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics into equivalent continuous daily discharge statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.
Reid, Brian J; Papanikolaou, Niki D; Wilcox, Ronah K
2005-02-01
The catabolic activity with respect to the systemic herbicide isoproturon was determined in soil samples by 14C-radiorespirometry. The first experiment assessed levels of intrinsic catabolic activity in soil samples that represented three dissimilar soil series under arable cultivation. Results showed average extents of isoproturon mineralisation (after 240 h assay time) in the three soil series to be low. A second experiment assessed the impact of addition of isoproturon (0.05 µg kg-1) to these soils on the levels of catabolic activity following 28 days of incubation. Increased catabolic activity was observed in all three soils. A third experiment assessed levels of intrinsic catabolic activity in soil samples representing a single soil series managed under either conventional agricultural practice (including the use of isoproturon) or organic farming practice (with no use of isoproturon). Results showed higher (and more consistent) levels of isoproturon mineralisation in the soil samples collected from the conventionally farmed land. The final experiment assessed the impact of isoproturon addition on the levels of inducible catabolic activity in these soils. The results showed no significant difference in the case of the conventional farm soil samples, while the induction of catabolic activity in the organic farm soil samples was significant.
NASA Astrophysics Data System (ADS)
Singh, A. K.; Toshniwal, D.
2017-12-01
The MODIS Joint Atmosphere products MODATML2 and MYDATML2 (L2/L3), provided by the LAADS DAAC (Level-1 and Atmosphere Archive & Distribution System Distributed Active Archive Center) and re-sampled from medium-resolution MODIS Terra/Aqua satellite data at 5 km scale, contain cloud reflectance, cloud top temperature, water vapor, aerosol optical depth/thickness and humidity data. These re-sampled data, when used for deriving climatic effects of aerosols (particularly in the case of the cooling effect), still expose limitations in the presence of uncertainty in atmospheric artifacts such as aerosol, cloud and cirrus cloud. The uncertainty in these artifacts poses an important challenge for the estimation of aerosol effects, ultimately affecting precise regional weather modeling and prediction: forecasting and recommendation applications depend largely on these short-term local conditions (e.g. city/locality-based recommendations to citizens and farmers based on local weather models). Our approach applies artificial intelligence techniques to represent heterogeneous data (satellite data along with air quality data from local weather stations, i.e. in situ data) in order to learn, correct and predict aerosol effects in the presence of cloud and other atmospheric artifacts, using spatio-temporal correlations and regressions. The Big Data processing pipeline, consisting of correlation and regression techniques developed on the Apache Spark platform, can easily scale to large data sets spanning many tiles (scenes) and a wide time range. Keywords: Climatic Effects of Aerosols, Situation-Aware, Big Data, Apache Spark, MODIS Terra/Aqua, Time Series
NASA Astrophysics Data System (ADS)
Kim, Hwankyo; Kim, Dae-Hyun; Seong, Tae-Yeon
2017-11-01
We investigated the electrical performance of near-ultraviolet (NUV) (390 nm) light-emitting diodes (LEDs) fabricated with various semi-transparent Cr/ITO n-type contacts. It was shown that after annealing at 400 °C, the Cr/ITO (10 nm/40 nm) contact was ohmic with a specific contact resistance of 9.8 × 10-4 Ωcm2. NUV AlGaN-based LEDs fabricated with different Cr/ITO (6-12 nm/40 nm) electrodes exhibited forward-bias voltages of 3.27-3.30 V at an injection current of 20 mA, similar to that of the reference LED with a Cr/Ni/Au (20 nm/25 nm/200 nm) electrode (3.29 V). The LEDs with the Cr/ITO electrodes gave series resistances of 10.69-11.98 Ω, while the series resistance was 10.84 Ω for the reference LED. The transmittance of the Cr/ITO samples improved significantly when annealed at 400 °C. The transmittance (25.8-45.2% at 390 nm) of the annealed samples decreased with increasing Cr layer thickness. The LEDs with the Cr/ITO electrodes exhibited higher light output power than the reference LED (with the Cr/Ni/Au electrode). In particular, the LED with the Cr/ITO (12 nm/40 nm) electrode showed 9.3% higher light output power at 100 mA than the reference LED. Based on the X-ray photoemission spectroscopy (XPS) and electrical results, the ohmic formation mechanism is described and discussed.
Apes are intuitive statisticians.
Rakoczy, Hannes; Clüver, Annette; Saucke, Liane; Stoffregen, Nicole; Gräbener, Alice; Migura, Judith; Call, Josep
2014-04-01
Inductive learning and reasoning, as we use it both in everyday life and in science, is characterized by flexible inferences based on statistical information: inferences from populations to samples and vice versa. Many forms of such statistical reasoning have been found to develop late in human ontogeny, depending on formal education and language, and to be fragile even in adults. Recent research, however, suggests that even preverbal human infants make use of intuitive statistics. Here, we conducted the first investigation of such intuitive statistical reasoning with non-human primates. In a series of 7 experiments, bonobos, chimpanzees, gorillas and orangutans drew flexible statistical inferences from populations to samples. These inferences, furthermore, were truly based on statistical information regarding the relative frequency distributions in a population, and not on absolute frequencies. Intuitive statistics in its most basic form is thus an evolutionarily more ancient rather than a uniquely human capacity. Copyright © 2014 Elsevier B.V. All rights reserved.
Optofluidic refractive index sensor based on partial reflection
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhang, Zhang; Wang, Yichuan; Ye, Meiying; Fang, Wei; Tong, Limin
2017-06-01
We demonstrate a novel optofluidic refractive index (RI) sensor with high sensitivity and wide dynamic range based on partial reflection. Benefiting from the divergent incident light and output fibers with different tilting angles, we have achieved highly sensitive RI sensing over a wide range from 1.33 to 1.37. To investigate the effectiveness of this sensor, we performed a measurement of RI with a resolution of ca. 5.0×10⁻⁵ refractive index units (RIU) for ethylene glycol solutions. We also measured a series of liquid solutions using different output fibers, achieving a resolution of ca. 0.52 mg/mL for cane sugar. The optofluidic RI sensor combines high sensitivity, wide dynamic range, small footprint and low sample consumption with efficient fluidic sample delivery, making it useful for applications in the food industry.
Wolff, Kevin T; Baglivio, Michael T; Piquero, Alex R
2017-08-01
Adverse childhood experiences (ACEs) have been identified as a key risk factor for a range of negative life outcomes, including delinquency. Much less is known about how exposure to negative experiences relates to continued offending among juvenile offenders. In this study, we examine the effect of ACEs on recidivism in a large sample of previously referred youth from the State of Florida who were followed for 1 year after participation in community-based treatment. Results from a series of Cox hazard models suggest that ACEs increase the risk of subsequent arrest, with a higher prevalence of ACEs leading to a shorter time to recidivism. The relationship between ACEs and recidivism held quite well in demographic-specific analyses. Implications for empirical research on the long-term effects of traumatic childhood events and juvenile justice policy are discussed.
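A minimal sketch of the kind of Cox proportional hazards model the study describes, here with the lifelines Python package; all column names and values below are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a Cox proportional hazards recidivism model with lifelines.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_rearrest": [120, 365, 90, 365, 200, 365, 150, 300],  # censored at 1 year
    "rearrested":       [1,   0,   1,  0,   1,   0,   1,   1],    # event indicator
    "ace_score":        [6,   1,   8,  2,   5,   4,   3,   2],    # adverse experiences
    "age_at_release":   [16,  17,  15, 16,  17,  16,  15,  17],
})

cph = CoxPHFitter(penalizer=0.1)   # small ridge penalty for stability on toy data
cph.fit(df, duration_col="days_to_rearrest", event_col="rearrested")
cph.print_summary()                # hazard ratio for ace_score > 1 implies higher risk
```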
Electric Motors Maintenance Planning From Its Operating Variables
NASA Astrophysics Data System (ADS)
Rodrigues, Francisco; Fonseca, Inácio; Farinha, José Torres; Ferreira, Luís; Galar, Diego
2017-09-01
Maintenance planning is an approach that seeks to maximize the availability of equipment and, consequently, to increase the competitiveness of companies by increasing production times. This paper presents a maintenance planning approach based on operating variables (number of hours worked, duty cycles, number of revolutions) to maximize the operational availability of electric motors. The operating variables are read and sampled at predetermined sampling cycles, and the data are subsequently analysed with time series algorithms, with the aim of launching work orders before the variables reach their limit values. This approach is supported by tools and technologies such as logical applications that provide a graphical user interface for access to relevant information about the physical asset through an HMI (Human Machine Interface), including control and supervision via data acquisition through SCADA (Supervisory Control And Data Acquisition), as well as the communication protocols among the different logical applications.
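As a toy illustration of the planning rule described above (launch a work order before an operating variable reaches its limit), the sketch below extrapolates a linear usage trend; the limit, lead time and samples are invented.

```python
# Minimal sketch: extrapolate a sampled operating variable (e.g. accumulated
# hours worked) and raise a work order before it reaches its limit value.
import numpy as np

hours = np.array([120, 260, 395, 540, 670])  # samples at fixed cycles (made up)
t = np.arange(len(hours))                    # sampling-cycle index
limit = 1000.0                               # assumed maintenance limit for this motor

rate, intercept = np.polyfit(t, hours, 1)    # simple linear trend of usage
cycles_to_limit = (limit - hours[-1]) / rate

if cycles_to_limit < 3:                      # lead time chosen for illustration
    print("launch work order now")
else:
    print(f"~{cycles_to_limit:.1f} sampling cycles until limit")
```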
The influence of crystal structure on ion-irradiation tolerance in the Sm(x)Yb(2-x)TiO5 series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aughterson, R. D.; Lumpkin, G. R.; de los Reyes, M.
2016-04-01
This ion-irradiation study covers the four major crystal structure types in the Ln(2)TiO(5) series (Ln = lanthanide), namely orthorhombic Pnma, hexagonal P63/mmc, cubic (pyrochlore-like) Fd-3m and cubic (fluorite-like) Fm-3m. This is the first systematic examination of the complete Ln(2)TiO(5) crystal system and the first reported examination of the hexagonal structure. A series of samples based on the stoichiometry Sm(x)Yb(2-x)TiO5 (where x = 2, 1.4, 1, 0.6, and 0) have been irradiated using 1 MeV Kr2+ ions and characterised in situ using a transmission electron microscope. Two quantities are used to define ion-irradiation tolerance: the critical dose of amorphisation (Dc), which is the irradiating ion dose required for a crystalline-to-amorphous transition, and the critical temperature (Tc), above which the sample cannot be rendered amorphous by ion irradiation. The structure type plus elements of bonding are correlated to ion-irradiation tolerance. The cubic phases, Yb2TiO5 and Sm0.6Yb1.4TiO5, were found to be the most radiation tolerant, with Tc values of 479 and 697 K, respectively. The improved radiation tolerance with a change in symmetry to cubic is consistent with previous studies of similar compounds.
Evaluation of Streptococcus pneumoniae in bile samples: A case series review.
Itoh, Naoya; Kawamura, Ichiro; Tsukahara, Mika; Mori, Keita; Kurai, Hanako
2016-06-01
Although Streptococcus pneumoniae is an important pathogen of humans, pneumococcal cholangitis is rare because of the rapid autolysis of S. pneumoniae. The aim of this case series was to review patients with bile cultures positive for S. pneumoniae. This study was a single center retrospective case series review of patients with S. pneumoniae in their bile at a tertiary-care cancer center between September 2002 and August 2015. Subjects consisted of all patients in whom S. pneumoniae was isolated in their bile during the study period. Bile specimens for culture were obtained from biliary drainage procedures such as endoscopic retrograde biliary drainage, endoscopic nasobiliary drainage, and percutaneous transhepatic biliary drainage. There were 20 patients with bile cultures positive for S. pneumoniae during the study period. All patients presented with extrahepatic obstructive jaundice due to hepatopancreatobiliary tumors. Nineteen of 20 patients underwent the placement of plastic intrabiliary tubes. The mean time between the first-time drainage and the positive culture was 26 days (range 0-313 days). Although 12 of 20 patients met our definition of cholangitis, 5 were clinically treated with antibiotics based on a physician's assessment of whether there was a true infection. The present study is the largest case series of patients with S. pneumoniae in their bile. Based on our findings, the isolation of S. pneumoniae from bile may be attributed to the placement of biliary drainage devices. Copyright © 2016 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
Stochastic modeling of hourly rainfall times series in Campania (Italy)
NASA Astrophysics Data System (ADS)
Giorgio, M.; Greco, R.
2009-04-01
The occurrence of flowslides and floods in small catchments is hard to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlations between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows larger lead times to be gained. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX and ARMAX (e.g. Salas [1992]). Such models give the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool for implementing an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil protection agency meteorological warning network. ACKNOWLEDGEMENTS The research was co-financed by the Italian Ministry of University, by means of the PRIN 2006 program, within the research project entitled 'Definition of critical rainfall thresholds for destructive landslides for civil protection purposes'. REFERENCES Cowpertwait, P.S.P., Kilsby, C.G. and O'Connell, P.E., 2002. A space-time Neyman-Scott model of rainfall: Empirical analysis of extremes, Water Resources Research, 38(8):1-14. Salas, J.D., 1992. Analysis and modeling of hydrological time series, in D.R. Maidment, ed., Handbook of Hydrology, McGraw-Hill, New York. Heneker, T.M., Lambert, M.F. and Kuczera, G., 2001. A point rainfall model for risk-based design, Journal of Hydrology, 247(1-2):54-71.
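A minimal sketch of an alternating renewal process in the spirit of the DRIP approach, with wet/dry durations and storm depths drawn from illustrative distributions (not the calibrated Campania parameters):

```python
# Sketch of an alternating renewal process for point rainfall: dry spells and
# storms alternate, each storm carrying a rectangular intensity pulse.
# All distribution choices and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def synthetic_series(n_events=50):
    t, series = 0.0, []
    for _ in range(n_events):
        dry = rng.exponential(scale=30.0)        # dry spell duration (h)
        wet = rng.gamma(shape=1.2, scale=4.0)    # storm duration (h)
        depth = rng.gamma(shape=1.5, scale=5.0)  # storm depth (mm)
        t += dry
        series.append((t, wet, depth / wet))     # start time, duration, intensity
        t += wet
    return series

for start, dur, intensity in synthetic_series(3):
    print(f"storm at t={start:.1f} h, {dur:.1f} h long, {intensity:.2f} mm/h")
```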
Kleiman, Susan C; Glenny, Elaine M; Bulik-Sullivan, Emily C; Huh, Eun Young; Tsilimigras, Matthew C B; Fodor, Anthony A; Bulik, Cynthia M; Carroll, Ian M
2017-09-01
Anorexia nervosa, a severe psychiatric illness, is associated with an intestinal microbial dysbiosis. Individual microbial signatures dominate in healthy samples, even over time and under controlled conditions, but whether microbial markers of the disorder overcome inter-individual variation during the acute stage of illness or renourishment is unknown. We characterized daily changes in the intestinal microbiota in three acutely ill patients with anorexia nervosa over the entire course of hospital-based renourishment and found significant, patient-specific changes in microbial composition and diversity. This preliminary case series suggests that even in a state of pathology, individual microbial signatures persist in accounting for the majority of intestinal microbial variation. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2014-07-01
Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computer, called a time-delay reservoir, constructed from samples of the solution of a time-delay differential equation, and show its good performance in forecasting the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in predicting factual daily market realized volatilities computed with intraday quotes, using daily log-return series of moderate size as training input. We tackle some problems associated with the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
Field Performance of ISFET based Deep Ocean pH Sensors
NASA Astrophysics Data System (ADS)
Branham, C. W.; Murphy, D. J.
2017-12-01
Historically, ocean pH time series data were acquired from infrequent shipboard grab samples and measured using labor-intensive spectrophotometry methods. However, with the introduction of robust and stable ISFET pH sensors for ocean applications, a paradigm shift has occurred in the methods used to acquire long-term pH time series data. Sea-Bird Scientific played a critical role in the adoption of this new technology by commercializing the SeaFET pH sensor and the float pH sensor developed by the MBARI chemical sensor group. Sea-Bird Scientific continues to advance this technology through a concerted effort to improve pH sensor accuracy and reliability by characterizing performance in the laboratory and in the field. This presentation will focus on calibration of the ISFET pH sensor, evaluate its analytical performance, and validate performance using recent field data.
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
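The full scalogram is implemented in the authors' WAVEPAL package; as a simpler, hedged illustration of the underlying idea of spectral analysis on irregularly sampled data, the following computes an ordinary Lomb-Scargle periodogram with astropy:

```python
# Lomb-Scargle periodogram of an irregularly sampled series (the scalogram in
# the paper extends this idea to the time-frequency plane). Data are synthetic.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 120))            # irregular sampling times
y = np.sin(2 * np.pi * t / 8.0) + 0.5 * rng.normal(size=t.size)

frequency, power = LombScargle(t, y).autopower()
print(f"peak near {frequency[np.argmax(power)]:.3f} cycles per unit time")
```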
Soejarto, D.D.; Gyllenhaal, C.; Kadushin, M.R.; Southavong, B.; Sydara, K.; Bouamanivong, S.; Xaiveu, M.; Zhang, H.-J.; Franzblau, S.G.; Tan, Ghee T.; Pezzuto, J.M.; Riley, M.C.; Elkington, B.G.; Waller, D.P.
2012-01-01
Context: An ethnobotany-based approach to the selection of raw plant materials for study was implemented. Objective: To acquire raw plant materials, using ethnobotanical field interviews as a starting point, in order to discover new bioactive compounds from medicinal plants of the Lao People's Democratic Republic. Methods: Using semi-structured field interviews with healers in the Lao PDR, plant samples were collected, extracted, and bio-assayed to detect bioactivity against cancer, HIV/AIDS, TB, and malaria. Plant species demonstrating activity were recollected and the extracts subjected to a bioassay-guided isolation protocol to isolate and identify the active compounds. Results: Field interviews with 118 healers in 15 of the 17 provinces of Lao PDR yielded 753 collections (573 species) with 955 plant samples. Of these 955, 50 extracts demonstrated activity in the anticancer assay, 10 in the anti-HIV, 30 in the anti-TB, and 52 in the antimalarial assay. Recollection of actives followed by bioassay-guided isolation yielded a series of new and known in vitro-active anticancer and antimalarial compounds from 5 species. Discussion: Laos has a rich biodiversity, harboring an estimated 8000-11,000 species of plants. In a country highly dependent on traditional medicine for its primary health care, this rich plant diversity serves as a major source of medication. Conclusions: The ethnobotanical survey has demonstrated the richness, both taxonomic and therapeutic, of the plant-based traditional medicine of Lao PDR. Biological assays of extracts of half of the 955 samples, followed by in-depth studies of a number of actives, have yielded a series of new bioactive compounds against cancer and malaria. PMID:22136442
From brain to earth and climate systems: small-world interaction networks or not?
Bialonski, Stephan; Horstmann, Marie-Therese; Lehnertz, Klaus
2010-03-01
We consider recent reports on small-world topologies of interaction networks derived from the dynamics of spatially extended systems that are investigated in diverse scientific fields such as neurosciences, geophysics, or meteorology. With numerical simulations that mimic typical experimental situations, we have identified an important constraint when characterizing such networks: indications of a small-world topology can be expected solely due to the spatial sampling of the system along with the commonly used time series analysis based approaches to network characterization.
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
This preliminary report is the fourth in a series describing the progress of a 6-year longitudinal study by the Educational Testing Service (ETS). The present report specifically describes initial differences between children who go on to Head Start, and those who do not, based on results of 16 of the 33 measures administered in Year 1 (1969) in…
Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks
Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav
2017-01-01
Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
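A hedged sketch of the selection criterion: if two nodes' series are co-integrated, one can stand in for the other. Here an Engle-Granger test from statsmodels is applied to synthetic stand-in temperature series (the threshold and data are illustrative):

```python
# Engle-Granger cointegration test between two node series; a co-integrated
# pair suggests one node is redundant and can be removed from the deployment.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(size=500))              # shared environmental trend
node_a = common + rng.normal(scale=0.3, size=500)
node_b = 0.9 * common + 2.0 + rng.normal(scale=0.3, size=500)

t_stat, p_value, _ = coint(node_a, node_b)
if p_value < 0.05:                                    # illustrative significance level
    print("co-integrated: node_b is a candidate for removal")
```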
NASA Astrophysics Data System (ADS)
Ahmad, Imam Safawi; Setiawan, Suhartono, Masun, Nunun Hilyatul
2015-12-01
Currency plays an important role in the economic transactions of Indonesian society. In order to guarantee the availability of currency, Bank Indonesia needs to develop demand and supply planning for currency. The purpose of this study is to model and predict the inflow and outflow of currency in KPW BI Region IV (East Java) with the ARIMA method, time series regression and ARIMAX. Monthly currency inflow and outflow data from KPW BI Surabaya, Malang, Kediri and Jember are used, with the observation period running from January 2003 to December 2014. Based on the smallest values of out-of-sample RMSE and SMAPE, ARIMA is the best model for predicting the outflow of currency in KPW BI Surabaya, and ARIMAX for KPW BI Malang, Kediri and Jember. The best forecasting models for the inflow of currency in KPW BI Surabaya, Malang, Kediri and Jember are, respectively, a calendar variation model, a transfer function model, ARIMA, and time series regression. These results indicate that more complex models do not necessarily produce more accurate forecasts, consistent with the results of the M3-Competition.
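For reference, the two selection metrics can be computed as below; the SMAPE shown is one common variant, and the study's exact definition may differ:

```python
# Out-of-sample forecast accuracy metrics: RMSE and one common SMAPE variant.
import numpy as np

def rmse(actual, forecast):
    return np.sqrt(np.mean((np.asarray(actual, float) - np.asarray(forecast, float)) ** 2))

def smape(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

print(rmse([10, 12, 9], [11, 11, 10]), smape([10, 12, 9], [11, 11, 10]))
```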
Forecasting currency circulation data of Bank Indonesia by using hybrid ARIMAX-ANN model
NASA Astrophysics Data System (ADS)
Prayoga, I. Gede Surya Adi; Suhartono, Rahayu, Santi Puteri
2017-05-01
The purpose of this study is to forecast the currency inflow and outflow data of Bank Indonesia. Currency circulation in Indonesia is highly influenced by the presence of Eid al-Fitr. One way to forecast data with an Eid al-Fitr effect is to use an autoregressive integrated moving average with exogenous input (ARIMAX) model. However, ARIMAX is a linear model, which cannot handle nonlinear correlation structures in the data. In the field of forecasting, inaccurate predictions can be attributed to the existence of nonlinear components that are not captured by the model. In this paper, we propose a hybrid model of ARIMAX and artificial neural networks (ANN) that can handle both linear and nonlinear correlation. This method was applied to 46 series of currency inflow and 46 series of currency outflow. The results showed that, based on out-of-sample root mean squared error (RMSE), the hybrid models are up to 10.26 and 10.65 percent better than ARIMAX for the inflow and outflow series, respectively. This means that the ANN performs well in modeling the nonlinear correlation in the data and can increase the accuracy of the linear model.
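A hedged sketch of the hybrid idea (not the authors' exact architecture): fit a linear ARIMA-type model, then train a small neural network on its residuals, and sum the two forecasts. Orders, lags and network size are illustrative choices:

```python
# Hybrid linear + nonlinear forecast: ARIMA for the linear part, an MLP trained
# on lagged ARIMA residuals for the nonlinear part; final forecast is the sum.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

y = np.sin(np.arange(120) / 6.0) + 0.1 * np.random.default_rng(2).normal(size=120)

linear = ARIMA(y, order=(1, 0, 1)).fit()
resid = linear.resid

# lagged residuals as ANN inputs (lag order of 3 chosen for illustration)
X = np.column_stack([resid[2:-1], resid[1:-2], resid[:-3]])
target = resid[3:]
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, target)

hybrid_next = linear.forecast(1)[0] + ann.predict(resid[-3:][::-1].reshape(1, -1))[0]
print(hybrid_next)
```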
Glushakova, Lyudmyla G; Alto, Barry W; Kim, Myong Sang; Bradley, Andrea; Yaren, Ozlem; Benner, Steven A
2017-08-01
Chikungunya virus (CHIKV) represents a growing and global concern for public health that requires inexpensive and convenient methods to collect mosquitoes as potential carriers so that they can be preserved, stored and transported for later and/or remote analysis. Reported here is a cellulose-based paper, derivatized with quaternary ammonium groups ("Q-paper"), that meets these needs. In a series of tests, infected mosquito bodies were squashed directly on Q-paper. Aqueous ammonia was then added to the mosquito bodies to release viral RNA, which adsorbed on the cationic surface via electrostatic interactions. The samples were then stored (frozen) or transported. For analysis, the CHIKV nucleic acids were eluted from the Q-paper and PCR amplified in a previously developed workflow that also exploited two nucleic acid innovations ("artificially expanded genetic information systems", AEGIS, and "self-avoiding molecular recognition systems", SAMRS). The amplicons were then analyzed by a Luminex hybridization assay. This procedure detected CHIKV RNA, if present, in each infected mosquito sample, but not in non-infected counterparts or ddH2O sample washes, whether tested one week or ten months after sample collection. Copyright © 2017 Elsevier B.V. All rights reserved.
Mars Sample Handling Protocol Workshop Series
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Race, Margaret S. (Editor); Acevedo, Sara (Technical Monitor)
2000-01-01
This document is the report resulting from the first workshop of the series on development of the criteria for a Mars sample handling protocol. Workshop 1 was held in Bethesda, Maryland on March 20-22, 2000. This report serves to document the proceedings of Workshop 1; it summarizes relevant background information, provides an overview of the deliberations to date, and helps frame issues that will need further attention or resolution in upcoming workshops. Specific recommendations are not part of this report.
NASA Astrophysics Data System (ADS)
Nagy, B. K.; Mohssen, M.; Hughey, K. F. D.
2017-04-01
This study addresses technical questions concerning the use of the partial duration series (PDS) within the domain of flood frequency analysis. The recurring questions that often prevent the standardised use of the PDS are peak independence and threshold selection. This paper explores standardised approaches to peak and threshold selection to produce PDS samples with differing average annual exceedances, using six theoretical probability distributions. The availability of historical annual maximum series (AMS) data (1930-1966) in addition to systematic AMS data (1967-2015) enables a unique comparison between the performance of the PDS sample and the systematic AMS sample. A recently derived formula for the translation of the PDS into the annual domain, simplifying the use of the PDS, is utilised in an applied case study for the first time. Overall, the study shows that PDS sampling returns flood magnitudes similar to those produced by the AMS utilising historical data, and thus the use of the PDS should be preferred in cases where historical flood data are unavailable.
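A minimal sketch of peaks-over-threshold (PDS) sampling with a simple independence rule, a minimum separation between retained peaks; the threshold, separation and synthetic flows are illustrative, not the paper's criteria:

```python
# Partial-duration-series sampling: keep local flow maxima above a threshold,
# enforcing independence with a minimum separation between retained peaks.
import numpy as np

def pds_peaks(flow, threshold, min_sep):
    """Return indices of independent peaks above `threshold`."""
    idx = np.where(flow > threshold)[0]
    peaks = []
    for i in idx:
        if flow[i] == flow[max(0, i - 1):i + 2].max():   # local maximum
            if not peaks or i - peaks[-1] >= min_sep:
                peaks.append(i)
            elif flow[i] > flow[peaks[-1]]:              # keep the larger of a close pair
                peaks[-1] = i
    return peaks

rng = np.random.default_rng(3)
q = rng.gamma(2.0, 10.0, size=365)      # one year of synthetic daily flows
print(pds_peaks(q, threshold=60.0, min_sep=7))
```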
Semiparametric modeling: Correcting low-dimensional model error in parametric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013
2016-03-01
In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time series in order to construct the auxiliary model for the time-evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
NASA Astrophysics Data System (ADS)
Buta, Ronald J.
2017-11-01
Rings are important and characteristic features of disc-shaped galaxies. This paper is the first in a series that revisits galactic rings, with the goals of further understanding the nature of these features and examining their role in the secular evolution of galaxy structure. The series begins with a new sample of 3962 galaxies drawn from the Galaxy Zoo 2 citizen science database, selected because zoo volunteers recognized a ring-shaped pattern in the morphology as seen in Sloan Digital Sky Survey colour images. The galaxies are classified within the framework of the Comprehensive de Vaucouleurs revised Hubble-Sandage system. It is found that zoo volunteers cued on the same kinds of ring-like features that were recognized in the 1995 Catalogue of Southern Ringed Galaxies. This paper presents the full catalogue of morphological classifications, comparisons with other sources of classifications, and some histograms designed mainly to highlight the content of the catalogue. The advantages of the sample are its large size and the generally good quality of the images; the main disadvantage is the low physical resolution, which limits the detectability of linearly small rings such as nuclear rings. The catalogue includes mainly inner and outer disc rings and lenses. Cataclysmic ('encounter-driven') rings (such as ring and polar ring galaxies) are recognized in less than 1 per cent of the sample.
Mapping mountain pine beetle mortality through growth trend analysis of time-series landsat data
Liang, Lu; Chen, Yanlei; Hawbaker, Todd J.; Zhu, Zhi-Liang; Gong, Peng
2014-01-01
Disturbances are key processes in the carbon cycle of forests and other ecosystems. In recent decades, mountain pine beetle (MPB; Dendroctonus ponderosae) outbreaks have become more frequent and extensive in western North America. Remote sensing has the ability to fill the data gaps of long-term infestation monitoring, but the elimination of observational noise and the quantitative attribution of changes are two main challenges to its effective application. Here, we present a forest growth trend analysis method that integrates Landsat temporal trajectories and decision tree techniques to derive annual forest disturbance maps over an 11-year period. The temporal trajectory component successfully captures disturbance events as represented by spectral segments, whereas decision tree modeling efficiently recognizes and attributes events based upon the characteristics of the segments. Validated against a point set sampled across a gradient of MPB mortality, the method achieved 86.74% to 94.00% overall accuracy, with small variability in accuracy among years. In contrast, the overall accuracies of single-date classifications ranged from 37.20% to 75.20% and only became comparable with our approach when the training sample size was increased at least four-fold. This demonstrates that the advantage of this time series workflow lies in its small training sample size requirement. The easily understandable, interpretable and modifiable characteristics of our approach suggest that it could be applicable to other ecoregions.
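A hedged sketch of the attribution step: a decision tree classifying trajectory segments from features such as slope, change magnitude and duration. The feature set and training rows are invented for illustration:

```python
# Decision tree labelling of Landsat trajectory segments; each row describes one
# spectral segment. Features and labels are illustrative stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# rows: [segment_slope, change_magnitude, duration_years]
X = np.array([[-0.08, 0.30, 2], [0.01, 0.02, 8], [-0.15, 0.45, 1], [0.02, 0.05, 9]])
y = np.array(["mortality", "stable", "mortality", "stable"])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[-0.10, 0.35, 2]]))   # a steep, short segment -> "mortality"
```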
A television format for national health promotion: Finland's "Keys to Health".
Puska, P; McAlister, A; Niemensivu, H; Piha, T; Wiio, J; Koskela, K
1987-01-01
A series of televised risk reduction and health promotion programs have been broadcast in Finland since 1978. The five series of programs were the product of a cooperative effort by Finland's television channel 2 and the North Karelia Project. The series has featured a group of volunteers who are at high risk of diseases because of their unhealthful habits and two health educators who counsel the studio group and the viewers to make changes in health behaviors. The "Keys to Health 84-85" was the fifth of the series and consisted of 15 parts, 35 minutes viewing time each. Results of the evaluation surveys, which are presented briefly, indicate that viewing rates were high. Of the countrywide sample, 27 percent of men and 35 percent of women reported that they had viewed at least three parts of the series. Reported changes in behaviors were substantial among the viewers who had seen several parts of the series and were meaningful, overall, for the entire population. Of the countrywide sample, 7.1 percent of smoking viewers reported an attempt to stop smoking--this number was 3.6 percent of all smokers. The percentages of weight loss among viewers and the total population sample were 3.9 for men and 2.1 for women. The reported reductions in fat consumption were 27.2 percent for men and 15.0 percent for women. The reported effects in the demonstration area of North Karelia were even higher, mainly because of higher viewing rates. PMID:3108941
NASA Astrophysics Data System (ADS)
Pratiwi, W. N.; Rochintaniawati, D.; Agustin, R. R.
2018-05-01
This research focused on investigating the effect of multiple intelligence-based learning as a learning approach on students' concept mastery and interest in learning matter. A one-group pre-test - post-test design was used with a sample selected to suit the research situation, n = 13 students of the 7th grade in a private school in Bandar Seri Begawan. The students' concept mastery was measured using an achievement test given at the pre-test and post-test, while the students' interest level was measured using a Likert scale for interest. Based on the analysis of the data, the normalized gain was .61, which is considered a medium improvement. In other words, students' concept mastery of matter increased after being taught using multiple intelligence-based learning. The Likert scale of interest shows that most students had a high interest in learning matter after being taught with multiple intelligence-based learning. Therefore, it is concluded that multiple intelligence-based learning helped improve students' concept mastery and raised students' interest in learning matter.
Lakewide monitoring of suspended solids using satellite data. [Lake Superior water reclamation
NASA Technical Reports Server (NTRS)
Sydor, M. (Principal Investigator)
1981-01-01
In anticipation of using LANDSAT and Nimbus 7 coastal zone color scanner data to observe the decrease in suspended solids in Lake Superior following cessation of the dumping of taconite tailings, a series of lakewide sampling cruises was conducted to make radiometric measurements at lake level. A means for identifying particulates and measuring their concentration from LANDSAT data was developed. The initial distribution of chemical parameters in the extreme western arm of the lake, where the concentration gradients are high, is to be based on the LANDSAT data. Subsequent lakewide dispersal and distribution are to be based on the coastal zone color scanner data.
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and at last resampling the mesh watermarked to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
TEAM Webinar Series | EGRP/DCCPS/NCI/NIH
View archived webinars from the Transforming Epidemiology through Advanced Methods (TEAM) Webinar Series, hosted by NCI's Epidemiology and Genomics Research Program. Topics include participant engagement, data coordination, mHealth tools, sample selection, and instruments for diet & physical activity assessment.
A Hybrid Algorithm for Clustering of Time Series Data Based on Affinity Search Technique
Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A.; Shaygan, Mohammad Amin; Jalali, Alireza
2014-01-01
Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using syntactic and real-world time series datasets. PMID:24982966
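A hedged sketch of the two-stage idea, grouping first by similarity in time and then merging by similarity in shape; here scipy hierarchical clustering stands in for the paper's k-Medoids step, and the data are synthetic:

```python
# Two-stage time series clustering sketch: Euclidean (time) pre-grouping, then
# merging with a correlation-based (shape) distance.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
base = np.sin(np.linspace(0, 4 * np.pi, 50))
series = np.vstack([base + rng.normal(0, 0.1, 50) for _ in range(5)]
                   + [base[::-1] + rng.normal(0, 0.1, 50) for _ in range(5)])

subclusters = fcluster(linkage(pdist(series), "average"), t=2, criterion="maxclust")
shape_dist = pdist(series, metric="correlation")        # 1 - Pearson r, shape similarity
merged = fcluster(linkage(shape_dist, "average"), t=2, criterion="maxclust")
print(subclusters, merged)
```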
Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R; Changelian, Armen; Laws, Edward R; Santagata, Sandro; Agar, Nathalie Y R; Van Berkel, Gary J
2015-08-01
Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections, using a fully automated droplet-based liquid-microjunction surface-sampling HPLC-ESI-MS-MS system for spatially resolved sampling, HPLC separation, and mass spectrometric detection. Excellent correlation was found between the protein distribution data obtained with this method and data obtained with matrix-assisted laser desorption/ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis), and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared with non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as expected. This work reveals that a fully automated droplet-based liquid-microjunction surface-sampling system coupled to HPLC-ESI-MS-MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically relevant peptide and protein hormones, including AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity, and specificity of this method support the potential of this basic technology, with further advancement, for assisting surgical decision-making. Graphical Abstract: Mass spectrometry-based profiling of hormones in human pituitary gland and tumor thin tissue sections.
Characteristics of surface modified Ti-6Al-4V alloy by a series of YAG laser irradiation
NASA Astrophysics Data System (ADS)
Zeng, Xian; Wang, Wenqin; Yamaguchi, Tomiko; Nishio, Kazumasa
2018-01-01
In this study, a double-layer Ti(C, N) film was successfully prepared on Ti-6Al-4V alloy by a series of YAG laser irradiations in a nitrogen atmosphere, with the aim of improving wear resistance. The effects of the number of laser irradiation passes on surface chemical composition, microstructure and hardness were investigated. The results showed that the surface chemistry was independent of the number of irradiation passes: the upper layer of the film was a mixture of TiN and TiC0.3N0.7, and the lower layer was nitrogen-rich α-Ti. Both the surface roughness and hardness increased with the number of irradiation passes. However, surface deformation and cracks occurred above 3 passes. The wear resistance of the sample laser-modified with 3 passes was improved by approximately 37 times compared to the as-received substrate. Moreover, the cytotoxic V ion release from the laser-modified sample in SBF was lower than that of the as-received Ti-6Al-4V alloy, suggesting the potential of this new approach for modifying the sliding parts of Ti-based hard-tissue implants in future biomedical applications.
Processes of Fatigue Destruction in Nanopolymer-Hydrophobised Ceramic Bricks
Fic, Stanisław; Szewczak, Andrzej; Barnat-Hunek, Danuta; Łagód, Grzegorz
2017-01-01
The article presents a proposal of a model of fatigue destruction of hydrophobised ceramic brick, i.e., a basic masonry material. The brick surface was hydrophobised with two inorganic polymers: a nanopolymer preparation based on dialkyl siloxanes (series 1–5) and an aqueous silicon solution (series 6–10). Nanosilica was added to the polymers to enhance the stability of the film formed on the brick surface. To achieve an appropriate blend of the polymer liquid phase and the nanosilica solid phase, the mixture was disintegrated by sonication. The effect of the addition of nanosilica and sonication on changes in the rheological parameters, i.e., viscosity and surface tension, was determined. Material fatigue was induced by cyclic immersion of the samples in water and drying at a temperature of 100 °C, which caused rapid and relatively dynamic movement of water. The moisture and temperature effect was determined by measurement of changes in surface hardness performed with the Vickers method and assessment of sample absorbability. The results provided an approximate picture of the fatigue destruction of brick and hydrophobic coatings in relation to changes in their temporal stability. Additionally, SEM images of the hydrophobic coatings are shown. PMID:28772404
New U-series dates at the Caune de l'Arago, France
Falgueres, Christophe; Yokoyama, Y.; Shen, G.; Bischoff, J.L.; Ku, T.-L.; de Lumley, Henry
2004-01-01
In the beginning of the 1980s, the Caune de l'Arago was the focus of an interdisciplinary effort to establish the chronology of the Homo heidelbergensis (Preneandertals) fossils using a variety of techniques on bones and on speleothems. The result was a very large spread of dates particularly on bone samples. Amid the large spread of results, some radiometric data on speleothems showed a convergence in agreement with inferences from faunal studies. We present new U-series results on the stalagmitic formation located at the bottom of Unit IV (at the base of the Upper Stratigraphic Complex). Samples and splits were collaboratively analyzed in the four different laboratories with excellent interlaboratory agreement. Results show the complex sequence of this stalagmitic formation. The most ancient part is systematically at internal isotopic equilibrium (>350 ka) suggesting growth during or before isotopic stage 9, representing a minimum age for the human remains found in Unit III of the Middle Stratigraphical Complex which is stratigraphically under the basis of the studied stalagmitic formation. Overlaying parts of the speleothem date to the beginning of marine isotope stages 7 and 5. ?? 2003 Elsevier Science Ltd. All rights reserved.
Characterization of linear viscoelastic anti-vibration rubber mounts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lodhia, B.B.; Esat, I.I.
1996-11-01
The aim of this paper is to identify the dynamic characteristics that are evident in linear viscoelastic rubber mountings. The characteristics under consideration included the static and dynamic stiffnesses with variation of the amplitude and frequency of the sinusoidal excitation. Test samples of various rubber mixes were tested and compared to reflect the magnitude of the dependency on composition. In the light of the results, the validity and effectiveness of a mathematical model was investigated, and a suitable technique based on the Tschoegl and Emri algorithm was utilized to fit the model to the experimental data. The chosen model was an extension of the basic Maxwell model, based on linear spring and dashpot elements in series and parallel, called the Wiechert model. It was found that the extent to which filler and vulcanisate were present in the rubber sample had a great effect on the static stiffness characteristics and on the storage and loss moduli. The Tschoegl and Emri algorithm was successfully utilized in modelling the frequency response of the samples.
NASA Astrophysics Data System (ADS)
Kester, Do; Bontekoe, Romke
2011-03-01
We present a way to generate heuristic mathematical models based on the Darwinian principles of variation and selection in a pool of individuals over many generations. Each individual has a genotype (the hereditary properties) and a phenotype (the expression of these properties in the environment). Variation is achieved by cross-over and mutation operations on the genotype, which in the present case consists of a single chromosome. The genotypes 'live' in the environment of the data. Nested Sampling is used to optimize the free parameters of the models given the data, thus giving rise to the phenotypes. Selection is based on the phenotypes. The evidences which naturally follow from the Nested Sampling algorithm are used in a second level of Nested Sampling to find increasingly better models. The data in this paper originate from the Leiden Cytology and Pathology Laboratory (LCPL), which screens pap smears for cervical cancer. We have data for 1750 women who on average underwent 5 tests each. The data on individual women are treated as a small time series. We will try to estimate the next value of the prime cancer indicator from previous tests of the same woman.
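A minimal sketch of the variation-selection loop described above; the stand-in fitness function below replaces the Nested Sampling evidence that the actual method computes for each model:

```python
# Genetic-algorithm skeleton: crossover and mutation on chromosomes, selection
# by fitness. In the paper, fitness would be the Nested Sampling evidence.
import random

random.seed(0)

def fitness(chromosome):                 # stand-in for the model evidence
    return -sum((g - 0.5) ** 2 for g in chromosome)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in c]

pool = [[random.random() for _ in range(6)] for _ in range(20)]
for generation in range(50):
    pool.sort(key=fitness, reverse=True)
    parents = pool[:10]                  # selection acts on the phenotypes
    pool = parents + [mutate(crossover(*random.sample(parents, 2)))
                      for _ in range(10)]
print(max(fitness(c) for c in pool))
```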
Modular design and implementation of field-programmable-gate-array-based Gaussian noise generator
NASA Astrophysics Data System (ADS)
Li, Yuan-Ping; Lee, Ta-Sung; Hwang, Jeng-Kuang
2016-05-01
The modular design of a Gaussian noise generator (GNG) based on field-programmable gate array (FPGA) technology was studied. A new range reduction architecture was included in a series of elementary function evaluation modules and was integrated into the GNG system. The approximation and quantisation errors for the square root module with a first-order polynomial approximation were high; therefore, we used the central limit theorem (CLT) to improve the noise quality, which resulted in an output rate of one sample per clock cycle. We subsequently applied Newton's method for the square root module, eliminating the need for the CLT and yielding an output rate of two samples per clock cycle (>200 million samples per second). Two statistical tests confirmed that our GNG is of high quality. Furthermore, the range reduction, which is used to overcome the limited interval of the function approximation algorithms of the System Generator platform using Xilinx FPGAs, appeared to have higher numerical accuracy, operated at >350 MHz, and can be suitably applied to any function evaluation.
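The CLT step can be illustrated in a few lines: summing 12 independent uniform variates gives a variate with variance 1, and subtracting 6 centres it, approximating a standard Gaussian sample:

```python
# Software illustration of the CLT trick used to smooth residual approximation
# error: each sum of 12 uniforms has mean 6 and variance 12 * (1/12) = 1.
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, size=(100_000, 12))
z = u.sum(axis=1) - 6.0                 # approximately N(0, 1)

print(z.mean(), z.std())                # ~0 and ~1
```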
A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic
Qi, Jin-Peng; Qi, Jie; Zhang, Qing
2016-01-01
Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
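For contrast with BSTKS, a plain sliding-window two-sample KS detector, the kind of slower baseline the paper improves on, can be written as follows (window size and data are illustrative):

```python
# Sliding-window two-sample Kolmogorov-Smirnov change detection baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 1, 500)])  # CP at 500

w = 100
stats = [ks_2samp(x[i - w:i], x[i:i + w]).statistic
         for i in range(w, x.size - w)]
print("estimated change point:", w + int(np.argmax(stats)))
```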
On the Inference of Functional Circadian Networks Using Granger Causality
Pourzanjani, Arya; Herzog, Erik D.; Petzold, Linda R.
2015-01-01
Being able to infer one-way direct connections in an oscillatory network, such as the suprachiasmatic nucleus (SCN) of the mammalian brain, using time series data is difficult but crucial to understanding network dynamics. Although techniques have been developed for inferring networks from time series data, there have been no attempts to adapt these techniques to infer directional connections in oscillatory time series while accurately distinguishing between direct and indirect connections. In this paper an adaptation of Granger Causality is proposed that allows for inference of circadian networks and oscillatory networks in general, called Adaptive Frequency Granger Causality (AFGC). Additionally, an extension of this method is proposed to infer networks with large numbers of cells, called LASSO AFGC. The method was validated using simulated data from several different networks. For the smaller networks the method was able to identify all one-way direct connections without identifying connections that were not present. For larger networks of up to twenty cells the method shows excellent performance in identifying true and false connections; this is quantified by an area under the curve (AUC) of 96.88%. We note that this method, like other Granger Causality-based methods, is based on the detection of high-frequency signals propagating between cell traces. Thus it requires a relatively high sampling rate and a network that can propagate high-frequency signals. PMID:26413748
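Standard Granger causality, the starting point that AFGC adapts for oscillatory data, is available in statsmodels; the sketch below tests whether one synthetic trace forecasts another:

```python
# Pairwise Granger causality test: does x help predict y beyond y's own past?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
x = rng.normal(size=300)
y = np.roll(x, 1) + 0.5 * rng.normal(size=300)    # y lags x by one step

data = np.column_stack([y, x])                    # tests whether x "causes" y
results = grangercausalitytests(data, maxlag=2, verbose=False)
print(results[1][0]["ssr_ftest"])                 # F-statistic and p-value at lag 1
```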
Canon, Abbey J; Lauterbach, Nicholas; Bates, Jessica; Skoland, Kristin; Thomas, Paul; Ellingson, Josh; Ruston, Chelsea; Breuer, Mary; Gerardy, Kimberlee; Hershberger, Nicole; Hayman, Kristen; Buckley, Alexis; Holtkamp, Derald; Karriker, Locke
2017-06-15
OBJECTIVE: To develop and evaluate a pyramid training method for teaching techniques for collection of diagnostic samples from swine. DESIGN: Experimental trial. SAMPLE: 45 veterinary students. PROCEDURES: Participants went through a preinstruction assessment to determine their familiarity with the equipment needed and techniques used to collect samples of blood, nasal secretions, feces, and oral fluid from pigs. Participants were then shown a series of videos illustrating the correct equipment and techniques for collecting samples and were provided hands-on pyramid-based instruction wherein a single swine veterinarian trained 2 or 3 participants on each of the techniques and each of those participants, in turn, trained additional participants. Additional assessments were performed after the instruction was completed. RESULTS: Following the instruction phase, percentages of participants able to collect adequate samples of blood, nasal secretions, feces, and oral fluid increased, as did scores on a written quiz assessing participants' ability to identify the correct equipment, positioning, and procedures for collection of samples. CONCLUSIONS AND CLINICAL RELEVANCE: Results suggested that the pyramid training method may be a feasible way to rapidly increase diagnostic sampling capacity during an emergency veterinary response to a swine disease outbreak.
Stochastic rainfall synthesis for urban applications using different regionalization methods
NASA Astrophysics Data System (ADS)
Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.
2017-12-01
The proper design and efficient operation of urban drainage systems require long and continuous rainfall series at a high temporal resolution. Unfortunately, such time series are usually available at only a few locations, and it is therefore desirable to develop a stochastic precipitation model to generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site specific. Different regionalization methods based on site descriptors are presented, which are used for estimating the distributions at locations without observations. Regional frequency analysis, multiple linear regression and a vine-copula method are applied for this purpose. An area located in the north-west of Germany is used to compare the different methods; it comprises a total of 81 stations with 5 min rainfall records. The site descriptors include information available for the whole region: position, topography and hydrometeorological characteristics estimated from long-term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed on ensembles of many long synthetic time series, which are compared with observed ones. The performance is also evaluated indirectly by setting up a fictional urban hydrological system to test the capability of the different methods regarding flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three methods. Advantages and disadvantages of the different methods are presented and discussed.
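As a hedged illustration of the regression-based regionalization option named above, the sketch below fits a multiple linear regression from site descriptors to one distribution parameter and predicts it at an ungauged site. The descriptors (elevation, mean annual rainfall) and the target parameter (mean wet-spell duration) are illustrative stand-ins, not the study's actual variables.

```python
# A minimal sketch of regionalization by multiple linear regression:
# site descriptors -> one rainfall-model parameter; all names are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n_stations = 81
elevation = rng.uniform(0, 300, n_stations)          # m a.s.l.
annual_rain = rng.uniform(500, 900, n_stations)      # mm/yr
# Synthetic "true" relation plus noise, standing in for fitted parameters.
mean_wet_duration = (2.0 + 0.004 * elevation + 0.001 * annual_rain
                     + 0.1 * rng.normal(size=n_stations))

X = np.column_stack([elevation, annual_rain])
reg = LinearRegression().fit(X, mean_wet_duration)
print(reg.predict([[150.0, 700.0]]))   # parameter estimate at an ungauged site
```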
Zia, Khalid Mahmood; Anjum, Sohail; Zuber, Mohammad; Mujahid, Muhammad; Jamil, Tahir
2014-05-01
The present research work was performed to synthesize a new series of chitosan-based polyurethane elastomers (PUEs) using poly(ɛ-caprolactone) (PCL). The chitosan-based PUEs were prepared by a step-growth polymerization technique using PCL and 2,4-toluene diisocyanate (TDI). In the second step, the PU prepolymer was extended with different mole ratios of chitosan and 1,4-butanediol (BDO). Molecular engineering was carried out during the synthesis. Conventional spectroscopic characterization of the synthesized samples using FT-IR confirms the proposed structure of the chitosan-based PUEs. The internal morphology of the prepared PUEs was studied using SEM analysis. The SEM images confirmed the incorporation of chitosan molecules into the PU backbone. Copyright © 2014 Elsevier B.V. All rights reserved.
Featureless classification of light curves
NASA Astrophysics Data System (ADS)
Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.
2015-08-01
In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features, which can be derived directly from the density, e.g. as moments. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
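The density-model idea can be illustrated compactly: fit a kernel density estimate to each light curve's (time, magnitude) samples and compare the fitted models pairwise. The distance below, a symmetrized cross log-likelihood, is an assumption for illustration and not necessarily the paper's similarity measure.

```python
# A minimal sketch of the density-representation idea: model each light curve
# by a Gaussian KDE over (time, magnitude) points and compare models pairwise.
import numpy as np
from scipy.stats import gaussian_kde

def density_distance(curve_a, curve_b):
    """curve_* : 2 x N arrays of (time, magnitude) samples."""
    kde_a, kde_b = gaussian_kde(curve_a), gaussian_kde(curve_b)
    # Cross log-likelihoods: how well each model explains the other's data.
    ll_ab = np.mean(np.log(kde_a(curve_b) + 1e-300))
    ll_ba = np.mean(np.log(kde_b(curve_a) + 1e-300))
    return -(ll_ab + ll_ba)   # smaller = more similar

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 80)
a = np.vstack([t, np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=80)])
b = np.vstack([t, np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=80)])
print(density_distance(a, b))
```

The resulting pairwise distance matrix can then be fed to any distance-based classifier, e.g. a nearest-neighbour rule.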
Historical floods reconstruction using NOAA 20CR global climate reanalysis over the last 150 years
NASA Astrophysics Data System (ADS)
Mathevet, T.; Brigode, P.; Jégonday, S.; Hingray, B.; Gailhard, J.; Wilhelm, B.
2017-12-01
For several years, climatologists have been producing long reanalyses for studying the variability of the global climate over the last 150 years. For hydrologists, these datasets offer interesting opportunities for reconstructing historical flood events and thus increasing the sample size used for flood frequency analysis. In this study, a streamflow reconstruction method, based on the analogy of atmospheric situations (using the NOAA 20CR reanalysis) for the reconstruction of climatic series and on a rainfall-runoff model for the streamflow reconstruction, was applied over different French catchments at the daily timestep. The studied catchments were selected for the availability of long observed streamflow series (used to quantify the performance of the flood reconstructions) and for their different hydro-climatological regimes. Different methodologies were tested for the reconstruction of daily climatic series over the 1851-2014 period, using geopotential heights and additional variables available within the 20CR reanalysis (relative humidity, precipitable water, etc.). Long observed climatic series were also used, when available, as a reference for the climatic reconstructions. The different reconstruction methods were finally ranked in terms of their historical flood reconstruction performance, quantified by flood type (autumn or winter floods) and atmospheric genesis (using a weather pattern classification). The results indicate that adding 20CR variables beyond the geopotential heights only slightly improves the flood reconstructions, while using observed climatic series significantly improves the flood reconstruction over the different catchments.
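A minimal sketch of the analog principle underlying such reconstructions is given below: for each day in the reanalysis era, find the k most similar atmospheric states among observed days and average their observed precipitation. The field shapes, the link between fields and precipitation, and the value of k are synthetic assumptions.

```python
# A minimal sketch of analog-based reconstruction: nearest atmospheric states
# (here, flattened geopotential-height fields) vote on daily precipitation.
import numpy as np

rng = np.random.default_rng(9)
n_obs, n_hist, grid = 2000, 5, 12 * 10
library = rng.normal(size=(n_obs, grid))        # fields from the observed era
precip = np.maximum(0, library[:, 0] * 2 + rng.normal(size=n_obs))  # toy link
historical = rng.normal(size=(n_hist, grid))    # fields from the reanalysis era

k = 20
for day in historical:
    d = np.linalg.norm(library - day, axis=1)   # similarity of situations
    analogs = np.argsort(d)[:k]
    print(round(precip[analogs].mean(), 2))     # reconstructed daily precipitation
```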
Ranney, Megan L; Meisel, Zachary F; Choo, Esther K; Garro, Aris C; Sasson, Comilla; Morrow Guthrie, Kate
2015-09-01
Qualitative methods are increasingly being used in emergency care research. Rigorous qualitative methods can play a critical role in advancing the emergency care research agenda by allowing investigators to generate hypotheses, gain an in-depth understanding of health problems or specific populations, create expert consensus, and develop new intervention and dissemination strategies. In Part I of this two-article series, we provided an introduction to general principles of applied qualitative health research and examples of its common use in emergency care research, describing study designs and data collection methods most relevant to our field (observation, individual interviews, and focus groups). Here in Part II of this series, we outline the specific steps necessary to conduct a valid and reliable qualitative research project, with a focus on interview-based studies. These elements include building the research team, preparing data collection guides, defining and obtaining an adequate sample, collecting and organizing qualitative data, and coding and analyzing the data. We also discuss potential ethical considerations unique to qualitative research as it relates to emergency care research. © 2015 by the Society for Academic Emergency Medicine.
Graph reconstruction using covariance-based methods.
Sulaimanov, Nurgazy; Koeppl, Heinz
2016-12-01
Methods based on correlation and partial correlation are widely employed today in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related, using Neumann series and transitive closure and through discussing concrete small examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are directly estimated from the data.
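As a hedged illustration of the two routes compared above, the sketch below reconstructs a graph once from thresholded correlations (covariance route) and once from the sparse concentration matrix estimated by the graphical lasso; the penalty and thresholds are illustrative choices, not the study's settings.

```python
# A minimal sketch contrasting correlation- and partial-correlation-based
# graph reconstruction on chain-structured data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
n_samples, n_vars = 200, 6
# Chain-structured data: each variable depends only on its predecessor.
X = np.zeros((n_samples, n_vars))
X[:, 0] = rng.normal(size=n_samples)
for j in range(1, n_vars):
    X[:, j] = 0.8 * X[:, j - 1] + rng.normal(size=n_samples)

# Correlation graph: thresholded absolute correlations (picks up indirect links).
corr_graph = np.abs(np.corrcoef(X, rowvar=False)) > 0.5

# Partial-correlation graph: nonzeros of the sparse concentration matrix
# (tends to retain only the direct chain edges).
model = GraphicalLasso(alpha=0.1).fit(X)
partial_graph = np.abs(model.precision_) > 1e-6
print(corr_graph.sum(), partial_graph.sum())   # correlation graph is denser
```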
Takemura, Hiroyuki; Ai, Tomohiko; Kimura, Konobu; Nagasaka, Kaori; Takahashi, Toshihiro; Tsuchiya, Koji; Yang, Haeun; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Tabe, Yoko; Ohsaka, Akimichi
2018-01-01
The XN-series automated hematology analyzer has been equipped with a body fluid (BF) mode to count and differentiate leukocytes in BF samples, including cerebrospinal fluid (CSF). However, its diagnostic accuracy is not reliable for CSF samples with low cell concentrations at the border between normal and pathologic levels. To overcome this limitation, a new flow cytometry-based technology, termed the "high sensitive analysis (hsA) mode," has been developed. In addition, the XN-series analyzer has been equipped with the automated digital cell imaging analyzer DI-60 to classify cell morphology, including normal leukocyte differentials and the detection of abnormal malignant cells. Using various BF samples, we evaluated the performance of the XN-hsA mode and DI-60 compared with manual microscopic examination. The reproducibility of the XN-hsA mode was good for samples with low cell densities (coefficient of variation, %CV: 7.8% at 6 cells/μL). The linearity of the XN-hsA mode was established up to 938 cells/μL. Cell numbers obtained using the XN-hsA mode correlated highly with the corresponding microscopic examinations. Good correlation was also observed between the DI-60 analyses and manual microscopic classification for all leukocyte types except monocytes. In conclusion, the combined use of cell counting with the XN-hsA mode and automated morphological analysis using the DI-60 mode is potentially useful for the automated analysis of BF cells.
Novel characterization of the aerosol and gas-phase composition of aerosolized jet fuel.
Tremblay, Raphael T; Martin, Sheppard A; Fisher, Jeffrey W
2010-04-01
Few robust methods are available to characterize the composition of aerosolized complex hydrocarbon mixtures. Separating the droplets from their surrounding vapors while preserving their content is challenging, all the more so with fuels, which contain hydrocarbons ranging from very low to very high volatility. Presented here is a novel method that uses commercially available absorbent tubes to measure a series of hydrocarbons in the vapor and droplets from aerosolized jet fuels. Aerosol composition and concentrations were calculated from the differential between measured total (aerosol and gas-phase) and measured gas-phase concentrations. Total samples were collected directly, whereas gas-phase-only samples were collected behind a glass fiber filter to remove droplets. All samples were collected for 1 min at 400 ml min⁻¹ and quantified using thermal desorption-gas chromatography-mass spectrometry. This method was validated for the quantification of the vapor and droplet content from 4-h aerosolized jet fuel exposures to JP-8 and S-8 at total concentrations ranging from 200 to 1000 mg/m³. Paired samples (gas-phase only and total) were collected approximately every 40 min. Calibrations were performed with neat fuel to calculate total concentrations and also with a series of authentic standards to calculate specific compound concentrations. Accuracy was good when compared with an online GC-FID (gas chromatography-flame ionization detection) technique. Variability was 15% or less for total concentrations, for the sum of all gas-phase compounds, and for most specific compound concentrations in both phases. Although validated for jet fuels, this method can be adapted to other hydrocarbon-based mixtures.
The ASAS-SN bright supernova catalogue - III. 2016
NASA Astrophysics Data System (ADS)
Holoien, T. W.-S.; Brown, J. S.; Stanek, K. Z.; Kochanek, C. S.; Shappee, B. J.; Prieto, J. L.; Dong, Subo; Brimacombe, J.; Bishop, D. W.; Bose, S.; Beacom, J. F.; Bersier, D.; Chen, Ping; Chomiuk, L.; Falco, E.; Godoy-Rivera, D.; Morrell, N.; Pojmanski, G.; Shields, J. V.; Strader, J.; Stritzinger, M. D.; Thompson, Todd A.; Woźniak, P. R.; Bock, G.; Cacella, P.; Conseil, E.; Cruz, I.; Fernandez, J. M.; Kiyota, S.; Koff, R. A.; Krannich, G.; Marples, P.; Masi, G.; Monard, L. A. G.; Nicholls, B.; Nicolas, J.; Post, R. S.; Stone, G.; Wiethoff, W. S.
2017-11-01
This catalogue summarizes information for all supernovae discovered by the All-Sky Automated Survey for SuperNovae (ASAS-SN) and all other bright (mpeak ≤ 17), spectroscopically confirmed supernovae discovered in 2016. We then gather the near-infrared through ultraviolet magnitudes of all host galaxies and the offsets of the supernovae from the centres of their hosts from public data bases. We illustrate the results using a sample that now totals 668 supernovae discovered since 2014 May 1, including the supernovae from our previous catalogues, with type distributions closely matching those of the ideal magnitude-limited sample from Li et al. This is the third of a series of yearly papers on bright supernovae and their hosts from the ASAS-SN team.
NASA Astrophysics Data System (ADS)
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means of obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, it often does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time-since-disturbance (TSD) and disturbance intensity information for each pixel, and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young, and mature and young combined) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest stratum in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
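A minimal sketch of the regression setup is given below: Landsat-derived predictors plus time since disturbance regressed on LiDAR canopy height with a Random Forest. All predictor names and the synthetic response are illustrative assumptions, not the study's actual feature set.

```python
# A minimal sketch of RF regression from Landsat-style metrics to LiDAR
# canopy height; the feature/response relations are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 1000
tsd = rng.uniform(0, 60, n)                                  # years since disturbance
ndvi = 0.9 - 0.5 * np.exp(-tsd / 15) + 0.05 * rng.normal(size=n)
swir = 0.2 + 0.1 * np.exp(-tsd / 20) + 0.02 * rng.normal(size=n)
height = 30 * (1 - np.exp(-tsd / 25)) + rng.normal(0, 2, n)  # "LiDAR" response

X = np.column_stack([tsd, ndvi, swir])
X_tr, X_te, y_tr, y_te = train_test_split(X, height, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2:", round(rf.score(X_te, y_te), 3))
```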
Visibility Graph Based Time Series Analysis.
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono- or multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
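The natural visibility criterion that maps a series segment to a graph admits a compact statement: two samples are connected if every intermediate sample lies strictly below the straight line joining them. Below is a minimal O(n²) sketch of this construction; real series segments would replace the toy input.

```python
# A minimal sketch of the natural visibility graph: nodes are samples, and
# two samples "see" each other if no intermediate sample blocks the line
# connecting them.
import numpy as np

def visibility_graph(y):
    """Return the edge list of the natural visibility graph of series y."""
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            # Sample c blocks (a, b) if it rises to or above the connecting line.
            visible = all(
                y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

print(visibility_graph(np.array([1.0, 0.5, 2.0, 0.3, 1.5])))
```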
NASA Astrophysics Data System (ADS)
Hudson, Samuel M.; Johnson, Cari L.; Efendiyeva, Malakhat A.; Rowe, Harold D.; Feyzullayev, Akper A.; Aliyev, Chingiz S.
2008-04-01
The Oligocene-Miocene Maikop Series is a world-class source rock responsible for much of the oil and gas found in the South Caspian Basin. It is composed of up to 3 km of marine mudstone, and contains a nearly continuous record of deposition during progressive tectonic closure of the basin as the Arabian Peninsula converged northward into Eurasia. Historically, the stratigraphy of this interval has been difficult to define due to the homogenous nature of the fine-grained, clay-dominated strata. Outcrop exposures in eastern Azerbaijan allow direct observation and detailed sampling of the interval, yielding a more comprehensive stratigraphic context and a more advanced understanding of syndepositional conditions in the eastern Paratethys Sea. Specifically, the present investigation reveals that coupling field-based stratigraphic characterization with geochemical analyses (e.g., bulk elemental geochemistry, Rock-Eval pyrolysis, bulk stable isotope geochemistry) yields a more robust understanding of internal variations within the Maikop Series. Samples from seven sections located within the Shemakha-Gobustan oil province reveal consistent stratigraphic and spatial geochemical trends. It is proposed that the Maikop Series be divided into three members based on these data along with the lithostratigraphic and biostratigraphic data reported herein. When comparing Rupelian (Early Oligocene) and Chattian (Late Oligocene) strata, the Rupelian-age strata commonly possess higher TOC values, more negative δ15Ntot values, more positive δ13Corg values, and higher radioactivity relative to Chattian-age rocks. The trace metals Mo and V (normalized to Al) are positively correlated with TOC, with maximum values occurring at the Rupelian-Chattian boundary and overall higher average values in the Rupelian. Across the Oligocene-Miocene boundary, a slight drop in the V/Al and Mo/Al ratios is observed, along with drops in %S and TOC. These results indicate that the geochemical signatures of the Maikop Series are regional in nature, and furthermore that analogous fine-grained sections may be better characterized and subdivided using similar techniques. In general, geochemical indicators suggest that the basin was in limited communication with the Tethys Sea throughout the Oligocene-Early Miocene, with suboxic to anoxic conditions present during the Oligocene and to a lesser extent in the Miocene. This increased isolation was likely due to tectonic uplift both south and north of the study area, and was greatly enhanced by global sea-level fluctuations. These data serve as the basis for a more detailed understanding of the tectonic evolution of the region, and support a standardized chemostratigraphic division of this important petroleum source interval.
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
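The core DFI step can be sketched simply: synthesize intermediate frames as weighted blends of two adjacent B-mode frames. The plain linear blend below is one simple instance of the interpolation orders mentioned above; it ignores the interpolation of tracked frame poses that a full implementation would also need.

```python
# A minimal sketch of direct frame interpolation: intermediate B-mode frames
# as linear blends of two adjacent frames (pixel intensities only).
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Linearly interpolate n_intermediate frames between two 2-D images."""
    weights = np.linspace(0.0, 1.0, n_intermediate + 2)[1:-1]
    return [(1 - w) * frame_a + w * frame_b for w in weights]

a = np.zeros((4, 4))
b = np.full((4, 4), 100.0)
mid = interpolate_frames(a, b, 3)
print([f.mean() for f in mid])   # 25.0, 50.0, 75.0
```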
El Mamoney, M H; Khater, Ashraf E M
2004-01-01
The Red Sea is a deep, semi-enclosed and narrow basin connected to the Indian Ocean by a narrow sill in the south and to the Suez Canal in the north. Oil industries in the Gulf of Suez, phosphate ore mining activities in the Safaga-Quseir region and intensified navigation activities are non-nuclear pollution sources that could have serious radiological impacts on the marine environment and the coastal ecosystems of the Red Sea. It is essential to establish the radiological baseline data, which do not yet exist, and to investigate the present radio-ecological impact of the non-nuclear industries, in order to preserve and protect the coastal environment of the Red Sea. Some natural and man-made radionuclides have been measured in shore sediment samples collected from the Egyptian coast of the Red Sea. The specific activities of the 226Ra and 210Pb (238U) series, the 232Th series, 40K and 137Cs (Bq/kg dry weight) were measured using gamma-ray spectrometers based on hyper-pure germanium detectors. The specific activities of 210Po (210Pb) and the uranium isotopes (238U, 235U and 234U) (Bq/kg dry weight) were measured using alpha spectrometers based on surface barrier (PIPS) detectors after radiochemical separation. The absorbed radiation dose rates in air (nGy/h) due to natural radionuclides in shore sediment and the radium equivalent activity index (Bq/kg) were calculated. The specific activity ratios 228Ra/226Ra, 210Pb/226Ra, 226Ra/238U and 234U/238U were calculated for evaluation of the geochemical behaviour of these radionuclides. The average specific activities of the 226Ra (238U) series, the 232Th series, 40K and 210Pb were 24.7, 31.4, 427.5 and 25.6 Bq/kg, respectively. The concentration of 137Cs in the sediment samples was less than the lower limit of detection. The Red Sea coast is an arid region with very low rainfall, and the sediment is mainly composed of sand. The specific activities of 238U, 235U and 234U were 25.3, 2.9 and 25.0 Bq/kg, respectively. The average specific activity ratios of 226Ra/228Ra, 210Pb/226Ra and 234U/238U were 1.67, 1.22 and 1.0, respectively. The relationship between the 226Ra/228Ra activity ratio and sample location along the coastal shoreline indicates an increase of this ratio toward Shuqeir in the north and Safaga in the south, where the oil exploration and phosphate mining activities are located. These activities may contribute a high flux of 226Ra. The concentration and distribution pattern of 226Ra in sediment can thus be used to trace the radiological impact of the non-nuclear industries on the Red Sea coast.
Advancing microwave technology for dehydration processing of biologics.
Cellemme, Stephanie L; Van Vorst, Matthew; Paramore, Elisha; Elliott, Gloria D
2013-10-01
Our prior work has shown that microwave processing can be effective as a method for dehydrating cell-based suspensions in preparation for anhydrous storage, yielding homogeneous samples with predictable and reproducible drying times. In the current work, an optimized microwave-based drying process was developed that expands upon this previous proof-of-concept. Utilization of a commercial microwave (CEM SAM 255, Matthews, NC) enabled continuous drying at variable low power settings. A new turntable was manufactured from Ultra High Molecular Weight Polyethylene (UHMW-PE; Grainger, Lake Forest, IL) to provide for drying of up to 12 samples at a time. The new process enabled rapid and simultaneous drying of multiple samples in containment devices suitable for long-term storage and aseptic rehydration of the sample. To determine sample repeatability and consistency of drying within the microwave cavity, a concentration series of aqueous trehalose solutions was dried for specific intervals, and water content was assessed using Karl Fischer titration at the end of each processing period. Samples were dried on Whatman S-14 conjugate release filters (Whatman, Maidstone, UK), a glass fiber membrane currently used in clinical laboratories. The filters were cut to size for use in a 13 mm Swinnex(®) syringe filter holder (Millipore(™), Billerica, MA). Samples of 40 μL volume could be dehydrated to the equilibrium moisture content by continuous processing at 20% power, with excellent sample-to-sample repeatability. The microwave-assisted procedure enabled high-throughput, repeatable drying of multiple samples, in a manner easily adaptable for drying a wide array of biological samples. Depending on the tolerance for sample heating, the drying time can be altered by changing the power level of the microwave unit.
NASA Astrophysics Data System (ADS)
Gun'ko, V. M.; Blitz, J. P.; Bandaranayake, B.; Pakhlov, E. M.; Zarko, V. I.; Sulym, I. Ya.; Kulyk, K. S.; Galaburda, M. V.; Bogatyrev, V. M.; Oranska, O. I.; Borysenko, M. V.; Leboda, R.; Skubiszewska-Zięba, J.; Janush, W.
2012-06-01
A series of photocatalysts based on (nanoparticulate) silica-supported titania, ceria, and ceria/zirconia were synthesized and characterized by a variety of techniques, including surface area measurements, X-ray diffraction, Fourier transform infrared spectroscopy, zeta potential, surface charge density, and photocatalytic behavior toward methylene blue decomposition. Thermal treatment at 600 °C increases the anatase content of the titania-based catalysts, as detected by XRD. Changes in the infrared spectra before and after thermal treatment indicate that at low temperature there are more ≡Si-O-Ti≡ bonds than at high temperature. As these bonds break upon heating, the SiO2 and TiO2 separate, allowing the TiO2 anatase phase to form. This results in an increased catalytic activity for the thermally treated samples. Nearly all titania-based samples exhibit a negative surface charge density at pH 7 (the initial pH of the photocatalytic studies), which aids adsorption of methylene blue. The crystallinity of the ceria and ceria/zirconia-based catalysts is in some cases limited, and in others non-existent. Even though the energy band gap (Eg) can be lower for these catalysts than for the titania-based catalysts, their photocatalytic properties are inferior.
Paces, James B.; Nichols, Paul J.; Neymark, Leonid A.; Rajaram, Harihar
2013-01-01
Groundwater flow through fractured felsic tuffs and lavas at the Nevada National Security Site represents the most likely mechanism for transport of radionuclides away from underground nuclear tests at Pahute Mesa. To help evaluate fracture flow and matrix–water exchange, we have determined U-series isotopic compositions of more than 40 drill core samples from 5 boreholes that represent discrete fracture surfaces, breccia zones, and interiors of unfractured core. The U-series approach relies on the disruption of radioactive secular equilibrium between isotopes in the uranium-series decay chain due to preferential mobilization of 234U relative to 238U, and of U relative to Th. Samples from discrete fractures were obtained by milling fracture surfaces containing thin secondary mineral coatings of clays, silica, Fe–Mn oxyhydroxides, and zeolite. Intact core interiors and breccia fragments were sampled in bulk. In addition, profiles of rock matrix extending 15 to 44 mm away from several fractures that show evidence of recent flow were analyzed to investigate the extent of fracture/matrix water exchange. Samples of rock matrix have 234U/238U and 230Th/238U activity ratios (AR) closest to radioactive secular equilibrium, indicating that only small amounts of groundwater penetrated the unfractured matrix. Greater U mobility was observed in welded-tuff matrix with elevated porosity and in zeolitized bedded tuff. Samples of brecciated core were also in secular equilibrium, implying a lack of long-range hydraulic connectivity in these cases. Samples of discrete fracture surfaces were typically, but not always, in radioactive disequilibrium. Many fractures had isotopic compositions plotting near the 230Th-234U 1:1 line, indicating a steady-state balance between U input and removal along with radioactive decay. Numerical simulations of U-series isotope evolution indicate that 0.5 to 1 million years are required to reach steady-state compositions. Once attained, disequilibrium 234U/238U and 230Th/238U AR values can be maintained indefinitely as long as hydrological and geochemical processes remain stable. Therefore, many Pahute Mesa fractures represent stable hydrologic pathways over million-year timescales. A smaller number of samples have non-steady-state compositions indicating transient conditions in the last several hundred thousand years. In these cases, U mobility is dominated by overall gains rather than losses of U.
Case Series Investigations in Cognitive Neuropsychology
Schwartz, Myrna F.; Dell, Gary S.
2011-01-01
Case series methodology involves the systematic assessment of a sample of related patients, with the goal of understanding how and why they differ from one another. This method has become increasingly important in cognitive neuropsychology, which has long been identified with single-subject research. We review case series studies dealing with impaired semantic memory, reading, and language production, and draw attention to the affinity of this methodology for testing theories that are expressed as computational models and for addressing questions about neuroanatomy. It is concluded that case series methods usefully complement single-subject techniques. PMID:21714756
Vázquez-Martínez, Guadalupe; Rodriguez, Mario H; Hernández-Hernández, Fidel; Ibarra, Jorge E
2004-04-01
An efficient strategy, based on a combination of procedures, was developed to obtain axenic cultures from field-collected samples of the cyanobacterium Phormidium animalis. Samples were initially cultured in solid ASN-10 medium, and a crude separation of major contaminants from P. animalis filaments was achieved by washing in a series of centrifugations and resuspensions in liquid medium. Then, manageable filament fragments were obtained by probe sonication. Fragmentation was followed by forceful washing, using vacuum-driven filtration through an 8-μm pore size membrane and an excess of water. Washed fragments were cultured and treated with sequential exposure to four different antibiotics. Finally, axenic cultures were obtained from serial dilutions of treated fragments. Monitoring by microscopic examination and by inoculation on Luria-Bertani (LB) agar plates indicated either axenicity or the degree of contamination throughout the strategy.
NASA Astrophysics Data System (ADS)
Bălău, Oana; Bica, Doina; Koneracka, Martina; Kopčansky, Peter; Susan-Resiga, Daniela; Vékás, Ladislau
The rheological and magnetorheological behaviour of monolayer and double-layer sterically stabilized magnetic fluids, with transformer oil (UTR), dioctylsebacate (DOS), heptanol (Hept), pentanol (Pent) and water (W) as carrier liquids, was investigated. The data for the volume-concentration dependence of the dynamic viscosity of the high-colloidal-stability UTR, DOS, Hept and Pent samples are particularly well fitted by the formulas given by Vand (1948) and Chow (1994). The Chow-type dependence proved its universal character, as the viscosity data for dilution series of various magnetic fluids are well fitted by the same curve, regardless of the nonpolar or polar character of the sample. The magnetorheological effect measured for low- and medium-concentration water-based magnetic fluids is much higher, due to the agglomerate formation process, than the corresponding values obtained for the well-stabilized UTR, DOS, Hept and Pent samples, even at very high volume fractions of magnetic nanoparticles.
Hierarchical LiFePO4 with a controllable growth of the (010) facet for lithium-ion batteries.
Guo, Binbin; Ruan, Hongcheng; Zheng, Cheng; Fei, Hailong; Wei, Mingdeng
2013-09-27
Hierarchically structured LiFePO4 was successfully synthesized by an ionic liquid solvothermal method. The hierarchically structured LiFePO4 samples were constructed from nanostructured platelets with their (010) facets mainly exposed. To the best of our knowledge, facet control of a hierarchical LiFePO4 crystal has not been reported before. Based on a series of experimental results, a tentative mechanism for the formation of these hierarchical structures is proposed. After the hierarchically structured LiFePO4 samples were coated with a thin carbon layer and used as cathode materials for lithium-ion batteries, they exhibited excellent high-rate discharge capability and cycling stability. For instance, 95% of the capacity was maintained for the LiFePO4 sample at a rate as high as 20 C, even after 1000 cycles.
Surveillance of wild birds for avian influenza virus.
Hoye, Bethany J; Munster, Vincent J; Nishiura, Hiroshi; Klaassen, Marcel; Fouchier, Ron A M
2010-12-01
Recent demand for increased understanding of avian influenza virus in its natural hosts, together with the development of high-throughput diagnostics, has heralded a new era in wildlife disease surveillance. However, survey design, sampling, and interpretation in the context of host populations still present major challenges. We critically reviewed current surveillance to distill a series of considerations pertinent to avian influenza virus surveillance in wild birds, including consideration of what, when, where, and how many to sample in the context of survey objectives. Recognizing that wildlife disease surveillance is logistically and financially constrained, we discuss pragmatic alternatives for achieving probability-based sampling schemes that capture this host-pathogen system. We recommend hypothesis-driven surveillance through standardized, local surveys that are, in turn, strategically compiled over broad geographic areas. Rethinking the use of existing surveillance infrastructure can thereby greatly enhance our global understanding of avian influenza and other zoonotic diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pern, F. J.; Glick, S. H.
We have conducted a series of accelerated exposure test (AET) studies for various crystalline-Si (c-Si) and amorphous-Si (a-Si) cell samples that were encapsulated with different superstrates, pottants, and substrates. Nonuniform browning patterns of ethylene vinyl acetate (EVA) pottants were observed for glass/EVA/glass-encapsulated c-Si cell samples under solar simulator exposures at elevated temperatures. The polymer/polymer-configured laminates with Tedlar or Tefzel did not discolor because of photobleaching reactions, but yellowed with polyester or nylon top films. Delamination was observed for the polyester/EVA layers on a-Si minimodules and for a polyolefin-based thermoplastic pottant at high temperatures. For all tested c-Si cell samples, irregular changes in the current-voltage parameters were observed that could not be accounted for simply by the transmittance changes of the superstrate/pottant layers. Silicone-type adhesives used under UV-transmitting polymer top films were observed to cause greater cell current/efficiency loss than EVA or polyethylene pottants.
Selection of 3013 Containers for Field Surveillance. Fiscal Year 2016 Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Elizabeth J.; Berg, John M.; Cheadle, Jesse
2016-04-19
This update is the eighth in a series of reports that document the binning and sample selection of 3013 containers for the Field Surveillance program as part of the Integrated Surveillance Program. This report documents changes made to both the container binning assignments and the sample selection approach. Binning changes documented in this update are a result of changes to the prompt gamma calibration curves and the reassignment of a small number of Hanford items from the Pressure bin to the Pressure and Corrosion (P&C) bin. Field Surveillance sample selection changes are primarily a result of focusing future destructive examinations (DEs) on the potential for stress corrosion cracking in higher-moisture containers in the P&C bin. The decision to focus the Field Surveillance program on higher-moisture items is based on findings from both the Shelf-life testing program and DEs.
Yang, Jonghee; Park, Taehee; Lee, Jongtaek; Lee, Junyoung; Shin, Hokyeong; Yi, Whikun
2016-03-01
We fabricated a series of linker-assisted quantum-dot-sensitized solar cells based on the ex situ self-assembly of CdSe quantum dots (QDs) onto a TiO2 electrode, using sulfide/polysulfide (S2-/Sn2-) as the electrolyte and an Au cathode. Our cells were combined with single-walled carbon nanotubes (SWNTs) by two techniques: one was mixing SWNTs with the TiO2 electrode, and the other was spraying SWNTs onto the Au electrode. Absorption spectra were used to confirm the adsorption of QDs onto the TiO2 electrode. Cell performance was measured on samples containing and not containing SWNTs. Samples mixing SWNTs with TiO2 showed higher cell efficiency, while the sample with SWNTs sprayed onto the Au electrode showed lower efficiency compared with the pristine sample (not containing SWNTs). Electrochemical impedance spectroscopy analysis suggested that SWNTs can act as either barriers or excellent carrier transporters according to their position and mixing method.
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
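As a hedged illustration of why such methods are needed, the sketch below computes the naive direct estimate of a scaled cumulant generating function, λ(s) = (1/t) ln E[exp(-s A_t)], for a biased Brownian walker with A_t the time-integrated position. Its variance grows rapidly with observation time, which is exactly what trajectory importance sampling addresses; all parameters are illustrative.

```python
# A minimal sketch of the brute-force estimator that importance sampling
# improves upon: direct Monte Carlo over independent trajectories.
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps, n_traj, s = 0.01, 1000, 20000, 0.5
drift = 1.0

x = np.zeros(n_traj)
A = np.zeros(n_traj)                      # time-integrated observable per trajectory
for _ in range(n_steps):
    x += drift * dt + np.sqrt(dt) * rng.normal(size=n_traj)
    A += x * dt

t = n_steps * dt
# Direct average; dominated by exponentially rare trajectories as t grows,
# which is the convergence problem guiding functions are meant to fix.
lam = np.log(np.mean(np.exp(-s * A))) / t
print(round(lam, 4))
```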
Genovo: De Novo Assembly for Metagenomes
NASA Astrophysics Data System (ADS)
Laserson, Jonathan; Jojic, Vladimir; Koller, Daphne
Next-generation sequencing technologies produce a large number of noisy reads from the DNA in a sample. Metagenomics and population sequencing aim to recover the genomic sequences of the species in the sample, which could be of high diversity. Methods geared towards single sequence reconstruction are not sensitive enough when applied in this setting. We introduce a generative probabilistic model of read generation from environmental samples and present Genovo, a novel de novo sequence assembler that discovers likely sequence reconstructions under the model. A Chinese restaurant process prior accounts for the unknown number of genomes in the sample. Inference is made by applying a series of hill-climbing steps iteratively until convergence. We compare the performance of Genovo to three other short read assembly programs across one synthetic dataset and eight metagenomic datasets created using the 454 platform, the largest of which has 311k reads. Genovo's reconstructions cover more bases and recover more genes than the other methods, and yield a higher assembly score.
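The Chinese restaurant process prior that keeps the number of genomes open-ended is easy to state: each read joins an existing genome with probability proportional to that genome's current size, or opens a new genome with probability proportional to a concentration parameter. A minimal sketch of sampling assignments from this prior follows; the parameter value is an illustrative assumption.

```python
# A minimal sketch of drawing read-to-genome assignments from a Chinese
# restaurant process prior with concentration parameter alpha.
import numpy as np

def crp_assignments(n_reads, alpha, rng):
    counts = []                          # reads currently assigned per genome
    labels = []
    for _ in range(n_reads):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):
            counts.append(1)             # open a new genome
        else:
            counts[k] += 1
        labels.append(k)
    return labels

rng = np.random.default_rng(6)
print(crp_assignments(20, alpha=1.5, rng=rng))
```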
Determination of uranium in tap water by ICP-MS.
El Himri, M; Pastor, A; de la Guardia, M
2000-05-01
A fast and accurate procedure has been developed for the determination of uranium at the μg L⁻¹ level in tap and mineral water. The method is based on the direct introduction of samples, without any chemical pre-treatment, into an inductively coupled plasma mass spectrometer (ICP-MS). Uranium was determined at mass number 238 using Rh as an internal standard. The method provides a limit of detection of 2 ng L⁻¹ and good repeatability, with relative standard deviation (RSD) values of about 3% for five independent analyses of samples containing 73 μg L⁻¹ of uranium. Recoveries for the determination of uranium in spiked natural samples varied between 91% and 106%. The results are comparable with those found by radiochemical methods for natural samples and are of the same order as the certified content of a reference material, indicating the accuracy of the ICP-MS procedure without the need for isotope dilution. A series of mineral and tap waters from different parts of Spain and Morocco were analysed.
NASA Astrophysics Data System (ADS)
Sanchez, M.; Probst, L.; Blazevic, E.; Nakao, B.; Northrup, M. A.
2011-11-01
We describe a fully automated and autonomous airborne biothreat detection system for biosurveillance applications. The system, including the nucleic-acid-based detection assay, was designed, built, and shipped by Microfluidic Systems Inc (MFSI), a new subsidiary of PositiveID Corporation (PSID). Our findings demonstrate that the system and assay unequivocally identify pathogenic strains of Bacillus anthracis, Yersinia pestis, Francisella tularensis, Burkholderia mallei, and Burkholderia pseudomallei. To assess the assay's ability to detect unknown samples, our team also challenged it against a series of blind samples provided by the Department of Homeland Security (DHS). These samples included naturally occurring isolated strains, near-neighbor isolates, and environmental samples. Our results indicate that the multiplex assay was specific and produced no false positives when challenged with in-house gDNA collections and DHS-provided panels. Here we present another analytical tool for the rapid identification of nine Centers for Disease Control and Prevention category A and B biothreat organisms.
Critical review of the United Kingdom's "gold standard" survey of public attitudes to science.
Smith, Benjamin K; Jensen, Eric A
2016-02-01
Since 2000, the UK government has funded surveys aimed at understanding the UK public's attitudes toward science, scientists, and science policy. Known as the Public Attitudes to Science series, these surveys and their predecessors have long been used in UK science communication policy, practice, and scholarship as a source of authoritative knowledge about science-related attitudes and behaviors. Given their importance and the significant public funding investment they represent, detailed academic scrutiny of the studies is needed. In this essay, we critically review the most recently published Public Attitudes to Science survey (2014), assessing the robustness of its methods and claims. The review casts doubt on the quality of key elements of the Public Attitudes to Science 2014 survey data and analysis while highlighting the importance of robust quantitative social research methodology. Our analysis comparing the main sample and booster sample for young people demonstrates that quota sampling cannot be assumed equivalent to probability-based sampling techniques. © The Author(s) 2016.
Nielsen, H Bjørn; Almeida, Mathieu; Juncker, Agnieszka Sierakowska; Rasmussen, Simon; Li, Junhua; Sunagawa, Shinichi; Plichta, Damian R; Gautier, Laurent; Pedersen, Anders G; Le Chatelier, Emmanuelle; Pelletier, Eric; Bonde, Ida; Nielsen, Trine; Manichanh, Chaysavanh; Arumugam, Manimozhiyan; Batto, Jean-Michel; Quintanilha Dos Santos, Marcelo B; Blom, Nikolaj; Borruel, Natalia; Burgdorf, Kristoffer S; Boumezbeur, Fouad; Casellas, Francesc; Doré, Joël; Dworzynski, Piotr; Guarner, Francisco; Hansen, Torben; Hildebrand, Falk; Kaas, Rolf S; Kennedy, Sean; Kristiansen, Karsten; Kultima, Jens Roat; Léonard, Pierre; Levenez, Florence; Lund, Ole; Moumen, Bouziane; Le Paslier, Denis; Pons, Nicolas; Pedersen, Oluf; Prifti, Edi; Qin, Junjie; Raes, Jeroen; Sørensen, Søren; Tap, Julien; Tims, Sebastian; Ussery, David W; Yamada, Takuji; Renault, Pierre; Sicheritz-Ponten, Thomas; Bork, Peer; Wang, Jun; Brunak, Søren; Ehrlich, S Dusko
2014-08-01
Most current approaches for analyzing metagenomic data rely on comparisons to reference genomes, but the microbial diversity of many environments extends far beyond what is covered by reference databases. De novo segregation of complex metagenomic data into specific biological entities, such as particular bacterial strains or viruses, remains a largely unsolved problem. Here we present a method, based on binning co-abundant genes across a series of metagenomic samples, that enables comprehensive discovery of new microbial organisms, viruses and co-inherited genetic entities and aids assembly of microbial genomes without the need for reference sequences. We demonstrate the method on data from 396 human gut microbiome samples and identify 7,381 co-abundance gene groups (CAGs), including 741 metagenomic species (MGS). We use these to assemble 238 high-quality microbial genomes and identify affiliations between MGS and hundreds of viruses or genetic entities. Our method provides the means for comprehensive profiling of the diversity within complex metagenomic samples.
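The co-abundance principle is straightforward to illustrate: genes from the same organism rise and fall together across samples, so clustering gene-abundance profiles by correlation recovers candidate gene groups. The sketch below uses hierarchical clustering on a correlation distance; the cutoff and synthetic abundances are illustrative assumptions, not the paper's actual binning algorithm.

```python
# A minimal sketch of co-abundance binning: cluster gene-abundance profiles
# that covary across samples into candidate gene groups.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
n_samples = 50
organism_a = rng.gamma(2.0, 1.0, n_samples)   # latent organism abundances
organism_b = rng.gamma(2.0, 1.0, n_samples)
genes = np.vstack(
    [organism_a * rng.uniform(0.5, 2.0) for _ in range(10)]
    + [organism_b * rng.uniform(0.5, 2.0) for _ in range(10)]
) + 0.05 * rng.normal(size=(20, n_samples))   # per-gene copy factors plus noise

dist = 1.0 - np.corrcoef(genes)               # correlation distance across samples
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
print(fcluster(Z, t=0.3, criterion="distance"))  # two clean gene groups expected
```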
Beda, Alessandro; Simpson, David M; Faes, Luca
2017-01-01
The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and the indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested on a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of the confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessment of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings. PMID:28968394
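A hedged sketch of the residual-bootstrap route is given below: fit an AR model to a series, resample its residuals to regenerate surrogate series, refit, and take percentile limits of the recomputed index. The index used here (AR spectral power in a fixed band) is an illustrative HRV-style quantity, not necessarily one of the indexes analysed in the study.

```python
# A minimal sketch of residual-bootstrap confidence limits for an AR-based
# spectral index; data, lags, and the band are illustrative assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_band_power(series, lags=8, band=(0.04, 0.15)):
    """Spectral power of a fitted AR model inside a frequency band."""
    fit = AutoReg(series, lags=lags).fit()
    a = fit.params[1:]                     # AR coefficients (index 0 is the constant)
    freqs = np.linspace(band[0], band[1], 200)
    k = np.arange(1, lags + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return np.sum(fit.sigma2 / denom) * (freqs[1] - freqs[0])

rng = np.random.default_rng(8)
x = rng.normal(size=600)
for t in range(2, 600):
    x[t] += 0.5 * x[t - 1] - 0.3 * x[t - 2]

fit = AutoReg(x, lags=8).fit()
boot = []
for _ in range(200):
    e = rng.choice(fit.resid, size=len(x))       # resample fitted residuals
    xb = x.copy()
    for t in range(8, len(x)):                    # regenerate a surrogate series
        xb[t] = fit.params[0] + fit.params[1:] @ xb[t - 8:t][::-1] + e[t]
    boot.append(ar_band_power(xb))
print(np.percentile(boot, [2.5, 97.5]))           # 95% confidence limits
```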
NASA Astrophysics Data System (ADS)
Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao
2012-11-01
Energy management (EM) is a core technique of hybrid electric buses (HEBs) for optimizing fuel economy, and it is unique to the corresponding powertrain configuration. Existing control strategy algorithms seldom take battery power management into account together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel model to constitute the objective function, which minimizes the fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and the battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as the controller for hardware-in-the-loop testing integrated with a bench test. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also markedly improves the fuel economy of the series-parallel hybrid electric bus (SPHEB): up to 30.73% relative to the prototype bus, and a fuel consumption reduction of up to 12.38% for the PBIO strategy relative to a rule-based strategy. The proposed research confirms that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits the complicated configuration well.
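As a hedged illustration of the instantaneous-optimization step, the sketch below searches candidate engine/battery power splits at one sample time and picks the split minimizing engine fuel plus equivalent battery fuel. The fuel map, the equivalence factor, and the SOC penalty are toy stand-ins, not the paper's calibrated models.

```python
# A minimal sketch of an ECMS-style instantaneous power split: at each sample
# time, minimize engine fuel rate plus an equivalent fuel cost for battery
# discharge, subject to the power balance. All maps/units are illustrative.
import numpy as np

def engine_fuel_rate(p_eng):
    """Toy convex fuel map (g/s) versus engine power (kW)."""
    return 0.0004 * p_eng**2 + 0.05 * p_eng + 0.3

def split_power(p_demand, soc, s_equiv=2.5, p_eng_max=150.0, p_batt_max=60.0):
    best = None
    for p_eng in np.linspace(0.0, p_eng_max, 301):
        p_batt = p_demand - p_eng            # power balance at this sample time
        if abs(p_batt) > p_batt_max:
            continue
        # Equivalent fuel of battery discharge; penalize draining at low SOC.
        soc_weight = 1.0 + 4.0 * (0.6 - soc)
        cost = engine_fuel_rate(p_eng) + s_equiv * soc_weight * max(p_batt, 0.0) / 42.5
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_batt)
    return best

print(split_power(p_demand=80.0, soc=0.55))  # (cost, engine kW, battery kW)
```

Repeating this minimization at every sampled time, with the switching rules deciding between series and parallel modes, is the essence of the instantaneous-optimization layer described above.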