Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
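The abstract does not detail the improved SA algorithm, but the general idea of simulated-annealing subset selection can be sketched as follows: retain m of n candidate points, score a subset by how well the retained points predict the dropped ones (here with simple inverse-distance weighting as a stand-in for the geostatistical predictor a real study would use), and accept swaps under the Metropolis rule. All names, data, and parameters below are illustrative, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def idw_predict(xy_s, z_s, xy_t, p=2.0):
    # Inverse-distance-weighted prediction from retained points to targets
    # (a stand-in for the kriging predictor a real study would use).
    d = np.linalg.norm(xy_t[:, None, :] - xy_s[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** p
    return (w @ z_s) / w.sum(axis=1)

def cost(keep, xy, z):
    # Score a subset by how badly it predicts the dropped points.
    drop = np.setdiff1d(np.arange(len(z)), keep)
    return np.mean((idw_predict(xy[keep], z[keep], xy[drop]) - z[drop]) ** 2)

def anneal(xy, z, m, iters=5000, t0=1.0, alpha=0.999):
    keep = rng.choice(len(z), m, replace=False)
    best, best_c = keep.copy(), cost(keep, xy, z)
    c, temp = best_c, t0
    for _ in range(iters):
        cand = keep.copy()
        pool = np.setdiff1d(np.arange(len(z)), keep)
        cand[rng.integers(m)] = rng.choice(pool)       # swap one retained point
        cc = cost(cand, xy, z)
        if cc < c or rng.random() < np.exp((c - cc) / temp):  # Metropolis rule
            keep, c = cand, cc
            if cc < best_c:
                best, best_c = cand.copy(), cc
        temp *= alpha                                  # geometric cooling
    return best, best_c

xy = rng.uniform(0, 10, (200, 2))                      # synthetic sample locations
z = np.sin(xy[:, 0]) + 0.1 * rng.standard_normal(200)  # synthetic soil property
subset, err = anneal(xy, z, m=60)
print(len(subset), err)
```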
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the problem of mosaicking locally unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds sequentially through random sampling with an additional shape constraint, data normalization of the point clouds, absolute orientation, data denormalization, inlier counting, etc. After N random sampling trials the largest consensus set is selected, and finally the model is re-estimated using all the points in the selected subset. The minimal subset is composed of three non-collinear points forming a triangle; the shape of the triangle is considered during random sample selection to make the selection reasonable. A new coordinate-system transformation algorithm presented in this paper is used to avoid singularity: the whole rotation between the two coordinate systems is solved by two successive rotations expressed as Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to demonstrate the correctness and validity of the method. The method has good noise immunity owing to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization is added to the absolute orientation. It is applicable to high-precision measurement of three-dimensional surfaces as well as 3-D terrain mosaicking.
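A minimal sketch of the RANSAC scheme described above, assuming a standard SVD-based (Kabsch) absolute orientation in place of the paper's Euler-angle formulation; the triangle shape constraint rejects near-collinear minimal samples, and the model is re-estimated on the largest consensus set. The threshold, noise levels, and synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rigid_fit(A, B):
    # Least-squares rotation/translation mapping A onto B (Kabsch/SVD).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def good_triangle(P, min_angle=np.deg2rad(20)):
    # Shape constraint: reject thin, near-collinear minimal samples.
    a, b, c = P
    def ang(u, v):
        cosv = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosv, -1.0, 1.0))
    return min(ang(b - a, c - a), ang(a - b, c - b), ang(a - c, b - c)) > min_angle

def ransac_align(src, dst, trials=500, tol=0.05):
    best_in = np.array([], dtype=int)
    for _ in range(trials):
        idx = rng.choice(len(src), 3, replace=False)
        if not good_triangle(src[idx]):
            continue
        R, t = rigid_fit(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = np.flatnonzero(resid < tol)
        if len(inliers) > len(best_in):
            best_in = inliers
    R, t = rigid_fit(src[best_in], dst[best_in])   # re-estimate on consensus set
    return R, t, best_in

src = rng.uniform(0, 1, (100, 3))
th = 0.3
Rt = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
dst = src @ Rt.T + np.array([0.5, -0.2, 0.1]) + 0.005 * rng.standard_normal((100, 3))
dst[:20] += rng.uniform(-1, 1, (20, 3))            # simulated mismatched points
R, t, inliers = ransac_align(src, dst)
print(len(inliers))
```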
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non-selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
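TPS solves the combinatorial selection with its own optimization; the sketch below illustrates the underlying idea with a simplified greedy variant: keep the time points whose interpolation best reconstructs densely sampled pilot profiles. The synthetic profiles, the piecewise-linear interpolation rule, and k are assumptions for illustration, not the TPS algorithm itself.

```python
import numpy as np
from scipy.interpolate import interp1d

def reconstruction_error(t, X, sel):
    # Error of reconstructing the full profiles from the selected time
    # points by piecewise-linear interpolation.
    f = interp1d(t[sel], X[:, sel], axis=1)
    return np.mean((f(t) - X) ** 2)

def greedy_select(t, X, k):
    sel = [0, len(t) - 1]     # keep endpoints so interpolation covers the range
    while len(sel) < k:
        rest = [i for i in range(len(t)) if i not in sel]
        errs = [reconstruction_error(t, X, sorted(sel + [i])) for i in rest]
        sel = sorted(sel + [rest[int(np.argmin(errs))]])
    return sel

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 21)                       # densely sampled pilot time grid
X = np.sin(rng.uniform(0.5, 1.5, (50, 1)) * t)   # 50 synthetic gene profiles
print(greedy_select(t, X, k=8))
```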
40 CFR 761.283 - Determination of the number of samples to collect and sample collection locations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sampling points after the recleaning, but select three new pairs of sampling coordinates. (i) Beginning in the southwest corner (lower left when facing magnetic north) of the area to be sampled, measure in... new pair of sampling coordinates. Continue to select pairs of sampling coordinates until three are...
[Demonstration plan used in the study of human reproduction in the district of Sao Paulo. 1967].
Silva, Eunice Pinho de Castro
2006-10-01
This work presents the sampling procedure used to obtain the sample for a "Human Reproduction Study in the District of São Paulo" (Brazil), carried out by the Department of Applied Statistics of the "Faculdade de Higiene e Saúde Pública da Universidade de São Paulo". The procedure was designed to cope with limitations in cost and time and with the lack of a frame from which a probability sample could be drawn within the fixed schedule and budget. It consisted of two-stage sampling with dwelling units as primary units and women as secondary units. At the first stage, stratified sampling was used, with sub-districts taken as strata. To select primary units, points ("starting points") were selected on the maps of sub-districts by a procedure similar to the so-called "square grid" method, though differing from it in several respects. Fixed rules established a correspondence between each selected "starting point" and a set of three dwelling units in which at least one woman of the target population lived. In selected dwelling units where more than one woman of the target population lived, sub-sampling was used to select one of them, each woman in the dwelling unit having an equal probability of selection. Several "no-answer" cases and the corresponding instructions to be followed by the interviewers are also presented.
A generalized approach to computer synthesis of digital holograms
NASA Technical Reports Server (NTRS)
Hopper, W. A.
1973-01-01
A hologram is constructed by taking a number of digitized sample points and blending them together to form a ''continuous'' picture. The new system selects a better set of sample points, resulting in an improved hologram from the same amount of information.
Estimating the carbon in coarse woody debris with perpendicular distance sampling. Chapter 6
Harry T. Valentine; Jeffrey H. Gove; Mark J. Ducey; Timothy G. Gregoire; Michael S. Williams
2008-01-01
Perpendicular distance sampling (PDS) is a design for sampling the population of pieces of coarse woody debris (logs) in a forested tract. In application, logs are selected at sample points with probability proportional to volume. Consequently, aggregate log volume per unit land area can be estimated from tallies of logs at sample points. In this chapter we provide...
40 CFR 60.74 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... select the sampling site, and the sampling point shall be the centroid of the stack or duct or at a point... the production rate (P) of 100 percent nitric acid for each run. Material balance over the production...
Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)
NASA Astrophysics Data System (ADS)
Aksoy, A.; Yenilmez, F.; Duzgun, S.
2013-12-01
Several factors such as wind action, bathymetry and shape of a lake/reservoir, inflows, outflows, and point and diffuse pollution sources result in spatial and temporal variations in the water quality of lakes and reservoirs. The guides by the United Nations Environment Programme and the World Health Organization for designing and implementing water quality monitoring programs suggest that even a single monitoring station near the center or at the deepest part of a lake will be sufficient to observe long-term trends if there is good horizontal mixing; in stratified water bodies, several samples can be required. According to the guide for sampling and analysis under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be employed to characterize the water quality in a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) requires selecting a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures when designing a monitoring program. Although existing regulations and guidelines include frameworks for determining sampling locations in surface waters, most do not specify a procedure for establishing representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine the representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen and specific conductivity were measured at 81 points, sixteen of which were used for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the distributions of the measured parameters, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost below 30 sampling points, as a result of varying water quality in the reservoir due to inflows, point and diffuse inputs, and reservoir hydromorphology. Moreover, hot spots were determined based on kriging and standard error maps, and the locations of the minimum number of sampling points that represent the actual spatial structure of the DO distribution in the reservoir were identified.
CRUMP 2003 Selected Water Sample Results
Point locations and water sampling results performed in 2003 by the Church Rock Uranium Monitoring Project (CRUMP), a consortium of organizations (Navajo Nation Environmental Protection Agency, US Environmental Protection Agency, New Mexico Scientific Laboratory Division, Navajo Tribal Utility Authority and NM Water Quality Control Commission). Samples include a general description of the wells sampled, general chemistry, heavy metals and aesthetic parameters, and selected radionuclides. Here only six sampling results are presented in this point shapefile, including: Gross Alpha (U-Nat Ref.) (pCi/L), Gross Beta (Sr/Y-90 Ref.) (pCi/L), Radium-226 (pCi/L), Radium-228 (pCi/L), Total Uranium (pCi/L), and Uranium mass (ug/L). The CRUMP samples were collected in the area of Churchrock, NM in the Eastern AUM Region of the Navajo Nation.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which incurs expensive computational costs. Variable-fidelity approximation-based design optimization approaches can achieve efficient exploration and optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called a nested design, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested design, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested design approach.
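The paper's successive local enumeration and modified harmony search are not reproduced here; the sketch below conveys the nested-design idea under simpler assumptions: a plain Latin hypercube for the low-fidelity level and a greedy maximin (farthest-point) subset for the nested high-fidelity level, so the high-fidelity points are a space-filling subset of the low-fidelity points.

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n, d):
    # One point per equal-probability slice in each dimension, shuffled
    # independently per column.
    pts = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n
    for j in range(d):
        pts[:, j] = pts[rng.permutation(n), j]
    return pts

def maximin_subset(X, m):
    # Greedy farthest-point selection: approximately maximin within X.
    sel = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[sel[0]], axis=1)
    for _ in range(m - 1):
        i = int(np.argmax(d))
        sel.append(i)
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    return X[sel]

X_lo = latin_hypercube(80, d=2)     # low-fidelity design
X_hi = maximin_subset(X_lo, 20)     # nested high-fidelity design (subset of X_lo)
print(X_lo.shape, X_hi.shape)
```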
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
NASA Astrophysics Data System (ADS)
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations was done using ESRI software (ArcGIS) extended by Hawth's Tools and, later on, its replacement, the Geospatial Modelling Environment (GME). 88% of all desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets, as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached explained variance levels of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of considering readily available continuous information on soil forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
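A compact sketch of the stratification logic described above: quantile classes of two land-surface parameters crossed with geological units define strata, from each of which a location is drawn at random. The synthetic attribute table and class counts are assumptions; the study's polygon-level selection and exclusion buffers are omitted.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 5000                                           # candidate raster cells

df = pd.DataFrame({
    "twi": rng.gamma(2.0, 2.0, n),                 # topographic wetness index
    "radiation": rng.normal(1200, 150, n),         # potential solar radiation
    "geology": rng.integers(0, 4, n),              # four geological units
})

# Quantile classes of the two land-surface parameters, crossed with geology;
# here 3 x 3 x 4 = 36 strata (the study arrives at 30 classes).
df["twi_q"] = pd.qcut(df["twi"], 3, labels=False)
df["rad_q"] = pd.qcut(df["radiation"], 3, labels=False)

samples = df.groupby(["twi_q", "rad_q", "geology"]).sample(n=1, random_state=0)
print(len(samples), "sampling locations")
```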
Experimental Design in Clinical 'Omics Biomarker Discovery.
Forshed, Jenny
2017-11-03
This tutorial highlights some issues in the experimental design of clinical 'omics biomarker discovery: how to avoid bias and obtain quantities as true as possible from biochemical analyses, and how to select samples to improve the chance of answering the clinical question at issue. This includes the importance of defining the clinical aim and end point, knowing the variability in the results, randomization of samples, sample size, statistical power, and how to avoid confounding factors by including clinical data in the sample selection; that is, how to avoid unpleasant surprises at the point of statistical analysis. The aim of this tutorial is to help translational clinical and preclinical biomarker candidate research and to improve the validity and potential of future biomarker candidate findings.
Instance-based learning: integrating sampling and repeated decisions from experience.
Gonzalez, Cleotilde; Dutt, Varun
2011-10-01
In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordienko, A V; Mavritskii, O B; Egorov, A N
2014-12-31
The statistics of the ionisation response amplitude measured at selected points and their surroundings within sensitive regions of integrated circuits (ICs) under focused femtosecond laser irradiation is obtained for samples chosen from large batches of two types of ICs. A correlation between these data and the results of full-chip scanning is found for each type. The criteria for express validation of IC single-event effect (SEE) hardness based on ionisation response measurements at selected points are discussed. (laser applications and other topics in quantum electronics)
Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.
2014-01-01
Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973
Westfall, Arthur O.
1976-01-01
A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel time and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.
Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs
NASA Astrophysics Data System (ADS)
Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.
2016-07-01
Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
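A toy Monte Carlo version of the trade-off studied above: with the total scoring effort fixed, the precision of the percent-cover estimate is compared between a "many images, few points" and a "few images, many points" design. The patchiness model is an assumption standing in for between-image variance in cover; it is not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def survey(cover, n_images, n_points, patchiness=0.1):
    # Each image has a local cover drawn around the true value (patchy
    # biota); each image is scored with n_points random points.
    local = np.clip(rng.normal(cover, patchiness, n_images), 0, 1)
    hits = rng.binomial(n_points, local)
    return np.mean(hits / n_points)

true_cover = 0.2
for n_img, n_pts in [(20, 100), (100, 20)]:        # same total scoring effort
    est = [survey(true_cover, n_img, n_pts) for _ in range(2000)]
    print(n_img, "images x", n_pts, "points -> sd =", round(float(np.std(est)), 4))
```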
USDA-ARS?s Scientific Manuscript database
The concentration of mercury, cadmium, lead, and arsenic along with glyphosate and an extensive array of pesticides in the U.S. peanut crop was assessed for crop years 2013-2015. Samples were randomly selected from various buying points during the grading process. Samples were selected from the thre...
Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.
ERIC Educational Resources Information Center
Kratochvil, Byron
1980-01-01
Presents an undergraduate experiment demonstrating sampling error. Selected as the sampling system is a mixture of potassium hydrogen phthalate and sucrose; using a self-zeroing, automatically refillable buret to minimize titration time of multiple samples and employing a dilute back-titrant to obtain high end-point precision. (CS)
Davenport, M.S.
1993-01-01
Water and bottom-sediment samples were collected at 26 sites in the 65-square-mile High Point Lake watershed area of Guilford County, North Carolina, from December 1988 through December 1989. Sampling locations included 10 stream sites, 8 lake sites, and 8 ground-water sites. Generally, six steady-flow samples were collected at each stream site and three storm samples were collected at five sites. Four lake samples and eight ground-water samples also were collected. Chemical analyses of stream and lake sediments and particle-size analyses of lake sediments were performed once during the study. Most stream and lake samples were analyzed for field characteristics, nutrients, major ions, trace elements, total organic carbon, and chemical-oxygen demand. Analyses were performed to detect concentrations of 149 selected organic compounds, including acid and base/neutral extractable and volatile constituents and carbamate, chlorophenoxy acid, triazine, organochlorine, and organophosphorus pesticides and herbicides. Selected lake samples were analyzed for all constituents listed in the Safe Drinking Water Act of 1986, including Giardia, Legionella, radiochemicals, asbestos, and viruses. Various chromatograms from organic analyses were submitted to computerized library searches. The results of these and all other analyses presented in this report are in tabular form.
The Effect of Curriculum Sample Selection for Medical School
ERIC Educational Resources Information Center
de Visser, Marieke; Fluit, Cornelia; Fransen, Jaap; Latijnhouwers, Mieke; Cohen-Schotanus, Janke; Laan, Roland
2017-01-01
In the Netherlands, students are admitted to medical school through (1) selection, (2) direct access by high pre-university Grade Point Average (pu-GPA), (3) lottery after being rejected in the selection procedure, or (4) lottery. At Radboud University Medical Center, 2010 was the first year we selected applicants. We designed a procedure based on…
A robust method of thin plate spline and its application to DEM construction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan
2012-11-01
In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots in terms of back-substitution. For interpolating large sets of sampling points, we developed a local TPS-M, where some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M is comparable with that of smoothing TPS. Under the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on noisy lidar-derived data indicate that TPS-M has an obvious smoothing effect, which is on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
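TPS-M itself (OLS-based knot selection) is not sketched here, but SciPy's RBFInterpolator offers thin-plate-spline interpolation with an analogous local variant (the neighbors argument) and a smoothing parameter, which is enough to illustrate the DEM-construction setting on synthetic scattered elevations; all values are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)
xy = rng.uniform(0, 100, (2000, 2))     # scattered elevation samples
z = np.sin(xy[:, 0] / 15) * np.cos(xy[:, 1] / 20) + 0.05 * rng.standard_normal(2000)

# Thin-plate-spline radial basis; `neighbors` makes the fit local and
# `smoothing` plays the role of the smoothing parameter in smoothing TPS.
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", neighbors=50, smoothing=1.0)

gx, gy = np.meshgrid(np.linspace(0, 100, 64), np.linspace(0, 100, 64))
dem = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(64, 64)
print(dem.shape)
```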
Point-Sampling and Line-Sampling Probability Theory, Geometric Implications, Synthesis
L.R. Grosenbaugh
1958-01-01
Foresters concerned with measuring tree populations on definite areas have long employed two well-known methods of representative sampling. In list or enumerative sampling the entire tree population is tallied with a known proportion being randomly selected and measured for volume or other variables. In area sampling all trees on randomly located plots or strips...
On estimation in k-tree sampling
Christoph Kleinn; Frantisek Vilcko
2007-01-01
The plot design known as k-tree sampling involves taking the k nearest trees from a selected sample point as sample trees. While this plot design is very practical and easily applied in the field for moderate values of k, unbiased estimation remains a problem. In this article, we give a brief introduction to the...
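For context, a minimal sketch of k-tree density estimation on a simulated stem map, ignoring edge effects: the naive estimator k/(π r_k²) from the distance r_k to the k-th nearest tree is biased, and one common correction (Prodan-style) counts the k-th tree as half a tree. This illustrates the estimation problem the article discusses; it is not the article's proposed estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
trees = rng.uniform(0, 100, (500, 2))    # stem map: 500 trees on 1 ha (100 m x 100 m)
points = rng.uniform(0, 100, (100, 2))   # random sample points
k = 6

d = np.linalg.norm(points[:, None, :] - trees[None, :, :], axis=2)
r_k = np.sort(d, axis=1)[:, k - 1]       # distance to the k-th nearest tree

naive = np.mean(k / (np.pi * r_k**2))           # biased upward
prodan = np.mean((k - 0.5) / (np.pi * r_k**2))  # k-th tree counted as half
print(naive * 1e4, prodan * 1e4, "trees/ha (true: 500)")
```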
NASA Astrophysics Data System (ADS)
Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi
2012-04-01
Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using a Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. Group 3 with 5000 randomly selected points on the landslide polygons, and 5000 randomly selected points along stable slopes gave the best results with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples, and the negative training samples of 3147 randomly selected points in regions of stable slope (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
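A minimal scikit-learn sketch of the experimental comparison above: the same training data fit with the four kernel types and scored on held-out accuracy. The synthetic features stand in for the six controlling parameters; in the study, positive and negative samples come from landslide polygons and stable slopes rather than this toy labeling rule.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)

# Stand-ins for the six controlling parameters (elevation, slope angle,
# aspect, distances to faults/drainages, lithology); label 1 = landslide.
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * rng.standard_normal(2000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for kernel in ["linear", "poly", "rbf", "sigmoid"]:   # the four kernels compared
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(X_tr, y_tr)
    print(kernel, round(clf.score(X_te, y_te), 3))
```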
Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis
ERIC Educational Resources Information Center
Logan, Jessica
2017-01-01
The goal of this study was to examine how selected pressure points, or areas of vulnerability, are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are the most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid-body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
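Two ingredients of the approach can be sketched compactly: hypothesis ranking by least median of squares, which removes the inlier threshold, and a point-to-surface residual computed against a local plane fitted to nearest neighbours. Hypothesis generation (BaySAC's conditional sampling) is outside this sketch, and all data are illustrative.

```python
import numpy as np

def lmeds_cost(residuals):
    # Least median of squares: no inlier threshold required.
    return np.median(residuals ** 2)

def pick_best(hypotheses, src, dst):
    # Rank (R, t) hypotheses by the LMedS cost of their residuals.
    best, best_c = None, np.inf
    for R, t in hypotheses:
        c = lmeds_cost(np.linalg.norm(src @ R.T + t - dst, axis=1))
        if c < best_c:
            best, best_c = (R, t), c
    return best, best_c

def point_to_surface_residual(p, cloud, k=10):
    # Distance from p to a plane fitted through its k nearest neighbours,
    # which suppresses random measurement error in the residual.
    d = np.linalg.norm(cloud - p, axis=1)
    nb = cloud[np.argsort(d)[:k]]
    c = nb.mean(axis=0)
    normal = np.linalg.svd(nb - c)[2][-1]     # smallest-variance direction
    return abs((p - c) @ normal)

rng = np.random.default_rng(9)
src = rng.uniform(size=(200, 3))
dst = src.copy()
dst[:40] += 0.5                               # 20% gross mismatches
best, c = pick_best([(np.eye(3), np.zeros(3))], src, dst)
print(c)                                      # the median cost ignores the outliers
```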
Higher order correlations of IRAS galaxies
NASA Technical Reports Server (NTRS)
Meiksin, Avery; Szapudi, Istvan; Szalay, Alexander
1992-01-01
The higher order irreducible angular correlation functions are derived up to the eight-point function for a sample of 4654 IRAS galaxies, flux-limited at 1.2 Jy in the 60 micron band. The correlations are generally found to be somewhat weaker than those for the optically selected galaxies, consistent with the visual impression of looser clusters in the IRAS sample. It is found that the N-point correlation functions can be expressed as the symmetric sum of products of N - 1 two-point functions, although the correlations above the four-point function are consistent with zero. The coefficients are consistent with the hierarchical clustering scenario as modeled by Hamilton and by Schaeffer.
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe, and ecosystems are experiencing rapid changes such as vegetation shifts and species extinction. In this context, the Species Distribution Model (SDM) is one of the popular methods for projecting the impact of climate change on ecosystems. An SDM is fundamentally based on the niche of a given species, which means that presence point data are essential to run it. To run an SDM for plants, there are certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to produce vegetation data over large areas; in other words, the exact presence points carry high uncertainty, because presence data are selected from polygon and raster datasets. Thus, sampling methods for selecting vegetation presence data should be chosen carefully. In this study, we used three different sampling methods for selecting vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from the modeling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed clear differences in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. As a result of this study, the uncertainties arising from presence data sampling methods and SDMs can be quantified.
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node ( t= 1/ J) of the sin(π Jt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
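A worked example of the Cramér-Rao reasoning above, assuming the simplified anti-phase modulation s(t) = sin(πJt) with i.i.d. noise and neglecting relaxation: clustering the samples around the first node t = 1/J, where the sensitivity ds/dJ is largest, lowers the bound on var(J) relative to uniform sampling of the same size. All parameter values are illustrative.

```python
import numpy as np

def crlb_J(times, J=10.0, sigma=0.1):
    # Simplified model s(t) = sin(pi J t), relaxation neglected; the Fisher
    # information for J is the summed squared sensitivity (ds/dJ)^2.
    dsdJ = np.pi * times * np.cos(np.pi * J * times)
    return sigma**2 / np.sum(dsdJ**2)

J, n = 10.0, 16
uniform = np.linspace(0.0, 2.0 / J, n)          # conventional linear sampling
node = 1.0 / J + 0.01 * np.linspace(-1, 1, n)   # clustered at the first node
print(f"uniform: {crlb_J(uniform, J):.2e}  node-clustered: {crlb_J(node, J):.2e}")
```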
Compendium of selected methods for sampling and analysis at geothermal facilities
NASA Astrophysics Data System (ADS)
Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.
1984-06-01
An independent study of the field has resulted in a compilation of the best methods for sampling, preservation and analysis of potential pollutants from geothermally fueled electric power plants. These methods are selected as the most usable over the range of application commonly experienced in the various geothermal plant sample locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, surface and subsurface waters are described. Emphasis is placed on the use of sampling proves to extract samples from heterogeneous flows. Certain sampling points, constituents and phases of plant operation are more amenable to quality assurance improvement in the emission measurements than others and are so identified.
Improved selection criteria for H II regions, based on IRAS sources
NASA Astrophysics Data System (ADS)
Yan, Qing-Zeng; Xu, Ye; Walsh, A. J.; Macquart, J. P.; MacLeod, G. C.; Zhang, Bo; Hancock, P. J.; Chen, Xi; Tang, Zheng-Hong
2018-05-01
We present new criteria for selecting H II regions from the Infrared Astronomical Satellite (IRAS) Point Source Catalogue (PSC), based on an H II region catalogue derived manually from the all-sky Wide-field Infrared Survey Explorer (WISE). The criteria are used to augment the number of H II region candidates in the Milky Way. The criteria are defined by the linear decision boundary of two samples: IRAS point sources associated with known H II regions, which serve as the H II region sample, and IRAS point sources at high Galactic latitudes, which serve as the non-H II region sample. A machine learning classifier, specifically a support vector machine, is used to determine the decision boundary. We investigate all combinations of four IRAS bands and suggest that the optimal criterion is log(F60/F12) ≥ −0.19 × log(F100/F25) + 1.52, with detections at 60 and 100 μm. This selects 3041 H II region candidates from the IRAS PSC. We find that IRAS H II region candidates show evidence of evolution on the two-colour diagram. Merging the WISE H II catalogue with IRAS H II region candidates, we estimate a lower limit of approximately 10 200 for the number of H II regions in the Milky Way.
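The published criterion is directly usable as a filter over IRAS PSC fluxes; a sketch follows, with illustrative flux values that do not correspond to a real source.

```python
import numpy as np

def hii_candidate(f12, f25, f60, f100, det60=True, det100=True):
    # Colour cut from the paper: log(F60/F12) >= -0.19 * log(F100/F25) + 1.52,
    # requiring detections at 60 and 100 micron.
    return (det60 and det100 and
            np.log10(f60 / f12) >= -0.19 * np.log10(f100 / f25) + 1.52)

# Fluxes in Jy (illustrative values, not a real PSC source):
print(hii_candidate(f12=1.2, f25=6.0, f60=80.0, f100=160.0))
```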
Generality of the Matching Law as a Descriptor of Shot Selection in Basketball
ERIC Educational Resources Information Center
Alferink, Larry A.; Critchfield, Thomas S.; Hitt, Jennifer L.; Higgins, William J.
2009-01-01
Based on a small sample of highly successful teams, past studies suggested that shot selection (two- vs. three-point field goals) in basketball corresponds to predictions of the generalized matching law. We examined the generality of this finding by evaluating shot selection of college (Study 1) and professional (Study 3) players. The matching law…
Partially Identified Treatment Effects for Generalizability
ERIC Educational Resources Information Center
Chan, Wendy
2017-01-01
Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
The Genealogy of Samples in Models with Selection
Neuhauser, C.; Krone, S. M.
1997-01-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
Instance-Based Learning: Integrating Sampling and Repeated Decisions from Experience
ERIC Educational Resources Information Center
Gonzalez, Cleotilde; Dutt, Varun
2011-01-01
In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or…
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
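The recursive-binary-partition idea can be sketched under simplifying assumptions (median splits along alternating axes and proportional allocation stand in for the paper's specific partitioning rule): each cell of the partition contributes samples in proportion to its share of the data, so the subsample tracks the original distribution.

```python
import numpy as np

rng = np.random.default_rng(13)

def stratified_sample(X, n, depth=0):
    # Recursive binary partition: split at the median along alternating axes
    # and allocate the sample to each half in proportion to its size, so the
    # selected points follow the distribution of the original data.
    if n <= 0 or len(X) == 0:
        return X[:0]
    if n == 1 or len(X) == 1 or depth > 12:
        return X[rng.choice(len(X), min(n, len(X)), replace=False)]
    axis = depth % X.shape[1]
    cut = np.median(X[:, axis])
    left, right = X[X[:, axis] <= cut], X[X[:, axis] > cut]
    n_left = round(n * len(left) / len(X))
    return np.vstack([stratified_sample(left, n_left, depth + 1),
                      stratified_sample(right, n - n_left, depth + 1)])

X = rng.normal(size=(10000, 2))     # e.g. a pool from which to draw a minibatch
S = stratified_sample(X, 200)
print(S.shape, X.mean(axis=0).round(3), S.mean(axis=0).round(3))
```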
1988-10-01
sample these ducts. This judgement was based on the following factors: 1. The ducts were open to the atmosphere. 2. RMA records of building area samples... selected based on several factors including piping arrangements, volume to be sampled, sampling equipment flow rates, and the flow rate necessary for effective sampling. Therefore, each sampling point strategy and procedure was customized based on these factors. The individual specific sampling
Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J
2014-04-01
The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
Arrayed Micro-Ring Spectrometer System and Method of Use
NASA Technical Reports Server (NTRS)
Choi, Sang H. (Inventor); Park, Yeonjoon (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor)
2012-01-01
A spectrometer system includes an array of micro-zone plates (MZP) each having coaxially-aligned ring gratings, a sample plate for supporting and illuminating a sample, and an array of photon detectors for measuring a spectral characteristic of the predetermined wavelength. The sample plate emits an evanescent wave in response to incident light, which excites molecules of the sample to thereby cause an emission of secondary photons. A method of detecting the intensity of a selected wavelength of incident light includes directing the incident light onto an array of MZP, diffracting a selected wavelength of the incident light onto a target focal point using the array of MZP, and detecting the intensity of the selected portion using an array of photon detectors. An electro-optic layer positioned adjacent to the array of MZP may be excited via an applied voltage to select the wavelength of the incident light.
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
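A minimal sketch of the point-to-point matching step, with illustrative array shapes and an assumed eluent-dominated spectral window: each reference (blank-gradient) spectrum is compared with the sample spectrum over the window, and the closest one is subtracted as background.

```python
import numpy as np

def pick_reference(sample_spectrum, reference_set, window):
    # Point-to-point comparison over the selected spectral window; the
    # closest reference spectrum is returned for background subtraction.
    diffs = np.abs(reference_set[:, window] - sample_spectrum[window])
    return reference_set[np.argmin(diffs.sum(axis=1))]

rng = np.random.default_rng(10)
wavenumbers = np.linspace(900, 3100, 1100)
refs = rng.normal(size=(300, 1100))              # stand-in reference data set
spec = refs[123] + 0.01 * rng.standard_normal(1100)
window = (wavenumbers > 2800) & (wavenumbers < 3000)   # assumed eluent region
corrected = spec - pick_reference(spec, refs, window)
print(np.abs(corrected).max())
```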
GENERALITY OF THE MATCHING LAW AS A DESCRIPTOR OF SHOT SELECTION IN BASKETBALL
Alferink, Larry A; Critchfield, Thomas S; Hitt, Jennifer L; Higgins, William J
2009-01-01
Based on a small sample of highly successful teams, past studies suggested that shot selection (two- vs. three-point field goals) in basketball corresponds to predictions of the generalized matching law. We examined the generality of this finding by evaluating shot selection of college (Study 1) and professional (Study 3) players. The matching law accounted for the majority of variance in shot selection, with undermatching and a bias for taking three-point shots. Shot-selection matching varied systematically for players who (a) were members of successful versus unsuccessful teams, (b) competed at different levels of collegiate play, and (c) served as regulars versus substitutes (Study 2). These findings suggest that the matching law is a robust descriptor of basketball shot selection, although the mechanism that produces matching is unknown. PMID:20190921
Precipitation in a lead calcium tin anode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Gonzalez, Francisco A., E-mail: fco.aurelio@inbox.com; Centro de Innovacion, Investigacion y Desarrollo en Ingenieria y Tecnologia, Universidad Autonoma de Nuevo Leon; Camurri, Carlos G., E-mail: ccamurri@udec.cl
Samples from a hot rolled sheet of a tin and calcium bearing lead alloy were solution heat treated at 300 °C and cooled down to room temperature at different rates; these samples were left at room temperature to study natural precipitation of CaSn₃ particles. The samples were aged for 45 days before analysing their microstructure, which was carried out in a scanning electron microscope using secondary and backscattered electron detectors. Selected X-ray spectra analyses were conducted to verify the nature of the precipitates. Images were taken at different magnifications in both modes of observation to locate the precipitates and record their position within the images and calculate the distance between them. Differential scanning calorimeter analyses were conducted on selected samples. It was found that the mechanical properties of the material correlate with the minimum average distance between precipitates, which is related to the average cooling rate from solution heat treatment. - Highlights: • The distance between precipitates in a lead alloy is recorded. • The relationship between the distance and the cooling rate is established. • It is found that the strengthening of the alloy depends on the distance between precipitates.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments
Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia
2016-01-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
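For context, the core estimation step in such bulk competitions is often a log-linear fit of mutant-to-wild-type ratios across the sampled time points, whose slope estimates the selection coefficient s. This simplified sketch (Poisson sequencing noise, two competitors, assumed depth and time grid) illustrates why the number and placement of time points drive precision; it is not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(11)

def estimate_s(counts_mut, counts_wt, times):
    # log(N_mut / N_wt)(t) ~ const + s * t, so the regression slope
    # estimates the selection coefficient s.
    y = np.log(counts_mut / counts_wt)
    slope, _ = np.polyfit(times, y, 1)
    return slope

times = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # sampled generations
true_s = 0.05
ratio = np.exp(true_s * times)                # mutant:wild-type ratio over time
depth = 100_000                               # sequencing depth per time point
counts_mut = rng.poisson(depth * ratio / (1 + ratio))
counts_wt = rng.poisson(depth / (1 + ratio))
print(estimate_s(counts_mut, counts_wt, times))
```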
Dense mesh sampling for video-based facial animation
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
The paper describes an approach for the selection of feature points on a three-dimensional triangle mesh obtained using various techniques from several video footages. This approach has a dual purpose. First, it makes it possible to minimize the data stored for the purpose of facial animation, so that instead of storing the position of each vertex in each frame, one can store only a small subset of vertices for each frame and calculate the positions of the others from that subset. The second purpose is to select feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that achievable with marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footage using stereophotogrammetry.
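A rough sketch of the compression idea follows; the per-vertex linear weights over feature points, fit on a few training frames, are an illustrative assumption rather than the authors' exact encoding.

    import numpy as np

    def fit_weights(frames, feat_idx):
        # frames: (F, V, 3) vertex positions over F training frames.
        # For each vertex, fit linear weights over the feature points,
        # shared across x/y/z (needs roughly 3*F >= len(feat_idx)).
        F, V, _ = frames.shape
        A = frames[:, feat_idx, :].transpose(0, 2, 1).reshape(F * 3, -1)
        W = np.empty((V, len(feat_idx)))
        for v in range(V):
            b = frames[:, v, :].reshape(F * 3)
            W[v], *_ = np.linalg.lstsq(A, b, rcond=None)
        return W

    def reconstruct(feat_positions, W):
        # feat_positions: (K, 3) stored feature points of one new frame.
        return W @ feat_positions              # (V, 3) full mesh estimate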
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions against refinement in interesting areas. We define an efficient and effective evaluation metric based on a Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Natural Science Foundation of China, grants No. 41030746 and 41172206.
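A toy Python sketch of the adaptive loop is given below; the scoring rule (distance to existing samples times a local-variation probe) is a stand-in for the paper's Taylor-expansion metric, and the single-point error check is a simplification of its stopping criterion.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])   # "original model"

    X = rng.uniform(0, 1, (8, 2)); y = f(X)                   # small initial design
    for _ in range(30):
        surr = RBFInterpolator(X, y)                          # current surrogate
        cand = rng.uniform(0, 1, (200, 2))                    # candidate pool
        d = np.linalg.norm(cand[:, None] - X[None], axis=2).min(axis=1)
        probe = np.abs(surr(cand + 0.01) - surr(cand))        # local-variation proxy
        xnew = cand[np.argmax(d * (1e-6 + probe))]            # explore x refine
        err = abs(surr(xnew[None])[0] - f(xnew[None])[0])     # accuracy at new point
        X = np.vstack([X, xnew]); y = np.append(y, f(xnew[None]))
        if err < 1e-2:                                        # simplified stopping rule
            break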
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from becoming stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
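To make the objective concrete, the following minimal sketch selects time points for a one-parameter decay model by minimizing the Cramer-Rao variance bound; the exhaustive search stands in for the paper's quantum-inspired evolutionary algorithm, and the model and noise level are illustrative assumptions.

    import numpy as np
    from itertools import combinations

    k_true, sigma = 0.8, 0.05                  # assumed kinetics and noise level
    grid = np.linspace(0.2, 10.0, 25)          # candidate sampling times

    def param_variance(times, k=k_true):
        # Cramer-Rao variance of the estimate of k for the model y(t) = exp(-k t)
        s = -times * np.exp(-k * times)        # sensitivity dy/dk at each time
        return sigma ** 2 / np.sum(s ** 2)     # inverse Fisher information

    best = min(combinations(grid, 4), key=lambda ts: param_variance(np.array(ts)))
    print("best 4 time points:", np.round(best, 2))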
NASA Astrophysics Data System (ADS)
Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid
2012-12-01
This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
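A simplified sketch of the trajectory-ranking step is shown below; k-means stands in for fuzzy c-means and smoothed first-order transition counts stand in for full HMM training, so this illustrates the idea rather than reproducing the authors' pipeline.

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def rank_trajectories(feats, n_states=4, seed=0):
        # feats: (L, T, P) statistical features of L k-space lines over T odd
        # time points. Lines whose state dynamics differ most from an identity
        # transition matrix are ranked first (most informative to acquire).
        L, T, P = feats.shape
        _, labels = kmeans2(feats.reshape(L * T, P), n_states, minit='++', seed=seed)
        labels = labels.reshape(L, T)
        scores = np.empty(L)
        for i in range(L):
            C = np.full((n_states, n_states), 1e-3)       # smoothed counts
            for a, b in zip(labels[i, :-1], labels[i, 1:]):
                C[a, b] += 1
            P_trans = C / C.sum(axis=1, keepdims=True)    # row-stochastic matrix
            scores[i] = np.linalg.norm(P_trans - np.eye(n_states))
        return np.argsort(scores)[::-1]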
Holliday, Trenton W; Hilton, Charles E
2010-06-01
Given the well-documented fact that human body proportions covary with climate (presumably due to the action of selection), one would expect that the Ipiutak and Tigara Inuit samples from Point Hope, Alaska, would be characterized by an extremely cold-adapted body shape. Comparison of the Point Hope Inuit samples to a large (n > 900) sample of European and European-derived, African and African-derived, and Native American skeletons (including Koniag Inuit from Kodiak Island, Alaska) confirms that the Point Hope Inuit evince a cold-adapted body form, but analyses also reveal some unexpected results. For example, one might suspect that the Point Hope samples would show a more cold-adapted body form than the Koniag, given their more extreme environment, but this is not the case. Additionally, univariate analyses seldom show the Inuit samples to be more cold-adapted in body shape than Europeans, and multivariate cluster analyses that include a myriad of body shape variables such as femoral head diameter, bi-iliac breadth, and limb segment lengths fail to effectively separate the Inuit samples from Europeans. In fact, in terms of body shape, the European and the Inuit samples tend to be cold-adapted and tend to be separated in multivariate space from the more tropically adapted Africans, especially those groups from south of the Sahara. Copyright 2009 Wiley-Liss, Inc.
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method consists of three main parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained using principal components analysis. Then a feature descriptor is constructed for each key point, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of point clouds are determined according to the descriptor similarity of key points in the source and target point clouds. Correspondences are optimized using a random sample consensus (RANSAC) algorithm and clustering techniques. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
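The final step is the standard SVD solution for a rigid transform (Kabsch/Umeyama). Below is a minimal sketch for point sets already placed in one-to-one correspondence.

    import numpy as np

    def rigid_transform(src, dst):
        # Least-squares R, t with dst ~ (R @ src.T).T + t for corresponding rows.
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)                     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, cd - R @ cs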
Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations
NASA Astrophysics Data System (ADS)
Lonsdale, Carol J.; Hacking, Perry B.
1989-04-01
Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.
The Impact of School Bullying on Students' Academic Achievement from Teachers Point of View
ERIC Educational Resources Information Center
Al-Raqqad, Hana Khaled; Al-Bourini, Eman Saeed; Al Talahin, Fatima Mohammad; Aranki, Raghda Michael Elias
2017-01-01
The study aimed to investigate the impact of school bullying on students' academic achievement from the teachers' perspective in Jordanian schools. The study used a descriptive analytical methodology. The research population consisted of all school teachers in the Amman West Area (in Jordan); the sample comprised 200 teachers selected from different…
Methods for measuring populations of small, diurnal forest birds.
D.A. Manuwal; A.B. Carey
1991-01-01
Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...
Code of Federal Regulations, 2010 CFR
2010-07-01
... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...
Unidentified point sources in the IRAS minisurvey
NASA Technical Reports Server (NTRS)
Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.
1984-01-01
Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.
Analysis of selected volatile organic compounds at background level in South Africa.
NASA Astrophysics Data System (ADS)
Ntsasa, Napo; Tshilongo, James; Lekoto, Goitsemang
2017-04-01
Volatile organic compounds (VOCs) are measured globally in urban air pollution monitoring and at background level at specific locations such as the Cape Point station. Urban pollution monitoring is legislated at government level; the background levels, however, are scientific outputs of the World Meteorological Organisation Global Atmosphere Watch programme (WMO/GAW). Cape Point is a key station in the Southern Hemisphere which monitors greenhouse gases and halocarbons, with data reported for over the past decade. The Cape Point station does not currently have the capability to measure VOCs. A joint research project between the Cape Point station and the National Metrology Institute of South Africa (NMISA) aims to perform qualitative and quantitative analysis of the volatile organic compounds listed in the GAW programme. NMISA is responsible for developing, maintaining and disseminating primary reference gas mixtures which are directly traceable to the International System of Units (SI). The results for some volatile organic compounds sampled in high-pressure gas cylinders will be presented. The analysis of the samples was performed on a gas chromatograph with flame ionisation detector and mass selective detector (GC-FID/MSD) with a dedicated cryogenic pre-concentrator system. Keywords: volatile organic compounds, gas chromatography, pre-concentrator
Boka, V; Arapostathis, K; Karagiannis, V; Kotsanos, N; van Loveren, C; Veerkamp, J
2017-03-01
To present: the normative data on dental fear and caries status; the dental fear cut-off points of young children in the city of Thessaloniki, Greece. Study Design: This is a cross-sectional study with two independent study groups. A first, representative sample consisted of 1484 children from 15 primary public schools of Thessaloniki. A second sample consisted of 195 randomly selected age-matched children, all patients of the Postgraduate Paediatric Dental Clinic of Aristotle University of Thessaloniki. First sample: In order to collect data on dental fear and caries, dental examination took place in the classroom with disposable mirrors and a penlight. All the children completed the Dental Subscale of the Children's Fear Survey Schedule (CFSS-DS). Second sample: In order to define the cut-off points of the CFSS-DS, dental treatment of the 195 children was performed at the University Clinic. Children's dental fear was assessed using the CFSS-DS, and their behaviour during dental treatment was observed by one calibrated examiner using the Venham scale. Statistical analysis of the data was performed with IBM SPSS Statistics 20 at a statistical significance level of <0.05. First sample: The mean CFSS-DS score was 27.1±10.8. Age was significantly (p<0.05) related to dental fear. Mean differences between boys and girls were not significant. Caries was not correlated with dental fear. Second sample: CFSS-DS < 33 was defined as 'no dental fear', scores 33-37 as 'borderline' and scores > 37 as 'dental fear'. In the first sample, 84.6% of the children did not suffer from dental fear (CFSS-DS < 33). Dental fear was correlated with age and not with caries or gender. The dental fear cut-off point for the CFSS-DS was estimated at 37 for 6-12 year old children (33-37 being borderline).
Davis, Rosemary H; Valadez, Joseph J
2014-12-01
Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
Measurement of Moisture Sorption Isotherm by DVS Hydrosorb
NASA Astrophysics Data System (ADS)
Kurniawan, Y. R.; Purwanto, Y. A.; Purwanti, N.; Budijanto, S.
2018-05-01
Artificial rice made from corn flour, sago, glycerol monostearate, vegetable oil, water and jelly powder was developed by an extrusion method through the process stages of material mixing, extrusion, drying, packaging and storage. Sorption isotherm information on food ingredients is used to design and optimize the drying, packaging and storage processes. The water sorption isotherm of the artificial rice was measured using a humidity-generating method with a Dynamic Vapor Sorption device, whose equilibration time is about 10 to 100 times faster than the saturated salt slurry method. The relative humidity is controlled automatically by adjusting the proportion of a mixture of dry air and water-saturated air. This paper aims to develop a moisture sorption isotherm using the Hydrosorb 1000 Water Vapor Sorption Analyzer. Sample preparation was conducted by degassing the sample in a heating mantle at 65°C. The analysis parameters to be specified were the determination of Po, the sample data, the selection of water activity points, and the equilibrium conditions. The selected analytical temperatures were 30°C and 45°C. The analysis lasted 45 hours, and adsorption and desorption curves were obtained. The selected bottom water activity point of 0.05 at 30°C and 45°C yielded adsorbed masses of 0.1466 mg/g and 0.3455 mg/g, respectively, whereas the selected top water activity point of 0.95 at 30°C and 45°C yielded adsorbed masses of 190.8734 mg/g and 242.4161 mg/g, respectively. Moisture sorption isotherm measurements of the artificial rice made from corn flour at temperatures of 30°C and 45°C using the Hydrosorb showed that the moisture sorption curve approximates a sigmoid-shaped type II curve commonly found in corn-based (high-carbohydrate) foodstuffs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenger, Drake C., E-mail: drake.stenger@ars.usda.
Population structure of Homalodisca coagulata Virus-1 (HoCV-1) among and within field-collected insects sampled from a single point in space and time was examined. Polymorphism in complete consensus sequences among single-insect isolates was dominated by synonymous substitutions. The mutant spectrum of the C2 helicase region within each single-insect isolate was unique and dominated by nonsynonymous singletons. Bootstrapping was used to correct the within-isolate nonsynonymous:synonymous arithmetic ratio (N:S) for RT-PCR error, yielding an N:S value ~one log-unit greater than that of consensus sequences. Probability of all possible single-base substitutions for the C2 region predicted N:S values within 95% confidence limits of the corrected within-isolate N:S when the only constraint imposed was viral polymerase error bias for transitions over transversions. These results indicate that bottlenecks coupled with strong negative/purifying selection drive consensus sequences toward neutral sequence space, and that most polymorphism within single-insect isolates is composed of newly-minted mutations sampled prior to selection. -- Highlights: •Sampling protocol minimized differential selection/history among isolates. •Polymorphism among consensus sequences dominated by negative/purifying selection. •Within-isolate N:S ratio corrected for RT-PCR error by bootstrapping. •Within-isolate mutant spectrum dominated by new mutations yet to undergo selection.
Facial expression reconstruction on the basis of selected vertices of triangle mesh
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed from video recordings, this kind of high-quality data is often not transferred to another model because of the lack of an intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, it is possible to use the obtained feature points to encode the shape of the surrounding vertices in a way that can be easily transferred to another mesh with corresponding feature points. In this paper we present a method for obtaining information for the purpose of reconstructing changes in the facial surface on the basis of selected feature points.
IRAS variables as galactic structure tracers - Classification of the bright variables
NASA Technical Reports Server (NTRS)
Allen, L. E.; Kleinmann, S. G.; Weinberg, M. D.
1993-01-01
The characteristics of the 'bright infrared variables' (BIRVs), a sample consisting of the 300 brightest stars in the IRAS Point Source Catalog with an IRAS variability index (VAR) of 98 or greater, are investigated with the purpose of establishing which of the IRAS variables are AGB stars (e.g., oxygen-rich Miras and carbon stars, as was assumed by Weinberg (1992)). Analysis of optical, infrared, and microwave spectroscopy of these stars indicates that, of the 88 stars in the BIRV sample identified with cataloged variables, 86 can be classified as Miras. A similar analysis performed for a color-selected sample of stars, using the color limits employed by Habing (1988) to select AGB stars, showed that, of the 52 percent of stars that could be classified, 38 percent are non-AGB objects, including H II regions, planetary nebulae, supergiants, and young stellar objects, indicating that studies using color-selected samples are subject to misinterpretation.
Investigation of water quality parameters at selected points on the Tennessee River
NASA Technical Reports Server (NTRS)
1972-01-01
The thermal and water quality parameters in the vicinity of the Widows Creek Steam Generation Plant were investigated. The water quality analysis and temperature profiles are presented for 24 sampling sites.
FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)
NASA Astrophysics Data System (ADS)
Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.
2017-02-01
This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system subject to the satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example, failure rate, interruption duration, and interruption duration per year at load points. A component improvement potential measure is used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method in which one component is selected for FOR allocation, and in the next iteration another component is selected based on the magnitude of the CIP. The developed algorithm is implemented on a sample radial distribution system.
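An illustrative sketch of the monovariable loop follows; the section data, the fixed 10% improvement step, and the single aggregate constraint are invented for illustration and do not reproduce the paper's CIP formula.

    import numpy as np

    failure_rate = np.array([0.30, 0.45, 0.25, 0.60])   # per-section rates (invented)
    threshold = 1.2                                     # aggregate load-point limit

    while failure_rate.sum() > threshold:
        cip = 0.10 * failure_rate            # improvement from a fixed 10% reduction
        worst = int(np.argmax(cip))          # component with greatest CIP
        failure_rate[worst] *= 0.90          # allocate the improvement to it

    print("allocated section failure rates:", failure_rate.round(3))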
Challenges in early clinical development of adjuvanted vaccines.
Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David
2015-06-08
A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront, the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well-characterized assay should be selected as the primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis-generating plans, including systems biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24 h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.
Watts, Sarah E; Weems, Carl F
2006-12-01
The purpose of this study was to examine the linkages among selective attention, memory bias, cognitive errors, and anxiety problems by testing a model of the interrelations among these cognitive variables and childhood anxiety disorder symptoms. A community sample of 81 youth (38 females and 43 males) aged 9-17 years and their parents completed measures of the child's anxiety disorder symptoms. Youth completed assessments measuring selective attention, memory bias, and cognitive errors. Results indicated that selective attention, memory bias, and cognitive errors were each correlated with childhood anxiety problems and provide support for a cognitive model of anxiety which posits that these three biases are associated with childhood anxiety problems. Only limited support for significant interrelations among selective attention, memory bias, and cognitive errors was found. Finally, results point towards an effective strategy for moving the assessment of selective attention to younger and community samples of youth.
JoAnn M. Hanowski; Gerald J. Niemi
1995-01-01
We established bird monitoring programs in two regions of Minnesota: the Chippewa National Forest and the Superior National Forest. The experimental design defined forest cover types as strata in which samples of forest stands were randomly selected. Subsamples (3 point counts) were placed in each stand to maximize field effort and to assess within-stand and between-...
NASA Astrophysics Data System (ADS)
Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel
2017-06-01
Although the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for the prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including in the calibration sets the whole variability of diesel samples from diverse production origins remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal components analysis (PCA) and the Kennard-Stone algorithm. Results show that using this methodology the models can keep their robustness over time. PLS calculations were done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models were validated for the on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl ester (FAME) content, cloud point, boiling point at 95% recovery, flash point and sulphur.
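The Kennard-Stone step itself can be sketched directly (classic maximin selection); the PCA scores it would operate on in this application are not reproduced here, so random stand-in data are used.

    import numpy as np

    def kennard_stone(X, n_select):
        # Pick n_select rows of X spread over the data space (maximin rule).
        D = np.linalg.norm(X[:, None] - X[None], axis=2)        # pairwise distances
        picked = list(np.unravel_index(D.argmax(), D.shape))    # two farthest points
        while len(picked) < n_select:
            rest = [i for i in range(len(X)) if i not in picked]
            d_min = D[np.ix_(rest, picked)].min(axis=1)  # distance to nearest pick
            picked.append(rest[int(d_min.argmax())])     # farthest-from-set point
        return np.array(picked)

    scores = np.random.default_rng(1).normal(size=(120, 3))   # stand-in PCA scores
    cal_idx = kennard_stone(scores, 30)                       # calibration subset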
Magnetic phase composition of strontium titanate implanted with iron ions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dulov, E.N., E-mail: evgeny.dulov@ksu.ru; Ivoilov, N.G.; Strebkov, O.A.
2011-12-15
Highlights: • The origin of RT-ferromagnetism in iron-implanted strontium titanate. • Metallic iron nanoclusters form during implantation and define the magnetic behaviour. • Iron-substituted strontium titanate, paramagnetic at room temperature, is identified. -- Abstract: Thin magnetic films were synthesized by means of implantation of iron ions into single-crystalline (1 0 0) substrates of strontium titanate. Depth-selective conversion electron Moessbauer spectroscopy (DCEMS) indicates that the origin of the samples' magnetism is α-Fe nanoparticles. Iron-substituted strontium titanate was also identified, but with paramagnetic behaviour at room temperature. Surface magneto-optical Kerr effect (SMOKE) measurements confirm that the films are superparamagnetic (the low-fluence sample) or ferromagnetic (the high-fluence sample), and demonstrate the absence of magnetic in-plane anisotropy. These findings highlight iron-implanted strontium titanate as a promising candidate for a composite multiferroic material and also for gas sensing applications.
Study of coherent reflectometer for imaging internal structures of highly scattering media
NASA Astrophysics Data System (ADS)
Poupardin, Mathieu; Dolfi, Agnes
1996-01-01
Optical reflectometers are potentially useful tools for imaging the internal structures of turbid media, particularly biological media. To obtain a point-by-point image, an active imaging system has to distinguish light scattered from a sample volume from light scattered at other locations in the medium. With coherence-based reflectometers this discrimination can be realized in two ways: by geometric selection or by temporal selection. In this paper we present both methods, showing in each case the influence of the different parameters on the size of the sample volume under the assumption of single scattering. We also study the influence on the detection efficiency of the coherence loss of the incident light resulting from multiple scattering. We adapt a model, first developed for atmospheric lidar in a turbulent atmosphere, to obtain an analytical expression for this detection efficiency as a function of the optical coefficients of the medium.
Filtering device. [removing electromagnetic noise from voice communication signals
NASA Technical Reports Server (NTRS)
Edwards, T. R.; Zeanah, H. W. (Inventor)
1976-01-01
An electrical filter for removing noise from a voice communications signal is reported; seven sample values of the signal are obtained continuously, updated, and subjected to filtering. Filtering is accomplished by adding pairs of sampled values spaced symmetrically about a mid-point sample and multiplying each pair by a selected filter constant. The signal products thus obtained are summed to provide a filtered version of the original signal.
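A minimal sketch of the described arithmetic, with illustrative filter constants rather than those of the patent:

    import numpy as np

    def filter_midpoint(window, c=(0.10, 0.15, 0.20, 0.30)):
        # Seven-sample window: add pairs balanced about the mid-point sample,
        # weight each pair (and the mid-point) by a constant, and sum.
        w = np.asarray(window, dtype=float)
        pairs = (w[0] + w[6], w[1] + w[5], w[2] + w[4])
        return c[0] * pairs[0] + c[1] * pairs[1] + c[2] * pairs[2] + c[3] * w[3]

    # Continuous operation: slide the 7-sample window along the signal
    rng = np.random.default_rng(2)
    signal = np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.normal(size=100)
    filtered = [filter_midpoint(signal[i:i + 7]) for i in range(len(signal) - 6)]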
Groenewold, Matthew R
2006-01-01
Local health departments are among the first agencies to respond to disasters or other mass emergencies. However, they often lack the ability to handle large-scale events. Plans including locally developed and deployed tools may enhance local response. Simplified cluster sampling methods can be useful in assessing community needs after a sudden-onset, short duration event. Using an adaptation of the methodology used by the World Health Organization Expanded Programme on Immunization (EPI), a Microsoft Access-based application for two-stage cluster sampling of residential addresses in Louisville/Jefferson County Metro, Kentucky was developed. The sampling frame was derived from geographically referenced data on residential addresses and political districts available through the Louisville/Jefferson County Information Consortium (LOJIC). The program randomly selected 30 clusters, defined as election precincts, from within the area of interest, and then, randomly selected 10 residential addresses from each cluster. The program, called the Rapid Assessment Tools Package (RATP), was tested in terms of accuracy and precision using data on a dichotomous characteristic of residential addresses available from the local tax assessor database. A series of 30 samples were produced and analyzed with respect to their precision and accuracy in estimating the prevalence of the study attribute. Point estimates with 95% confidence intervals were calculated by determining the proportion of the study attribute values in each of the samples and compared with the population proportion. To estimate the design effect, corresponding simple random samples of 300 addresses were taken after each of the 30 cluster samples. The sample proportion fell within +/-10 absolute percentage points of the true proportion in 80% of the samples. In 93.3% of the samples, the point estimate fell within +/-12.5%, and 96.7% fell within +/-15%. All of the point estimates fell within +/-20% of the true proportion. Estimates of the design effect ranged from 0.926 to 1.436 (mean = 1.157, median = 1.170) for the 30 samples. Although prospective evaluation of its performance in field trials or a real emergency is required to confirm its utility, this study suggests that the RATP, a locally designed and deployed tool, may provide population-based estimates of community needs or the extent of event-related consequences that are precise enough to serve as the basis for the initial post-event decisions regarding relief efforts.
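The two-stage draw itself is simple to sketch; the precinct and address identifiers are hypothetical stand-ins for the LOJIC data, and, unlike the classical EPI design, clusters are drawn here with equal probability rather than proportional to size.

    import random

    def two_stage_sample(addresses_by_precinct, n_clusters=30, n_addresses=10, seed=42):
        # addresses_by_precinct: dict mapping precinct id -> list of addresses
        # (hypothetical stand-in for the LOJIC-derived sampling frame).
        rng = random.Random(seed)
        precincts = rng.sample(sorted(addresses_by_precinct), n_clusters)
        return {p: rng.sample(addresses_by_precinct[p], n_addresses)
                for p in precincts}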
Efficient robust doubly adaptive regularized regression with applications.
Karunamuni, Rohana J; Kong, Linglong; Tu, Wei
2018-01-01
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
Meldrum, R J; Garside, J; Mannion, P; Charles, D; Ellis, P
2012-12-01
The Welsh Food Microbiological Forum "shopping basket" survey is a long-running, structured surveillance program examining ready-to-eat food randomly sampled at the point of sale or service in Wales, United Kingdom. The annual unsatisfactory rates for selected indicators and pathogens for 1998 through 2008 were examined. All the annual unsatisfactory rates for the selected pathogens were <0.5%, and no pattern in the annual rate was observed. There was also no discernible trend in the annual rates of Listeria spp. (not monocytogenes), with all rates <0.5%. However, a trend was observed for Escherichia coli, with a decrease in the rate between 1998 and 2003, rapid in the first few years, and then a gradual increase in the rate up to 2008. It was concluded that there was no discernible pattern in the annual unsatisfactory rates for Listeria spp. (not monocytogenes), L. monocytogenes, Staphylococcus aureus, and Bacillus cereus, but that a definite trend had been observed for E. coli.
Time as a dimension of the sample design in national-scale forest inventories
Francis Roesch; Paul Van Deusen
2013-01-01
Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design was with selection probabilities based on land area observed at a discrete point in time. Time was not...
Wet atmospheric generation apparatus
NASA Technical Reports Server (NTRS)
Hamner, Richard M. (Inventor); Allen, Janice K. (Inventor)
1990-01-01
The invention described relates to an apparatus for providing a selectively humidified gas to a camera canister containing cameras and film used in space. A source of pressurized gas (leak test gas or motive gas) is selected by a valve, regulated to a desired pressure by a regulator, and routed through an ejector (venturi device). A regulated source of water vapor, in the form of steam from a heated reservoir, is coupled to a low-pressure region of the ejector, where it mixes with the high-velocity gas flow through the ejector. This mixture is sampled by a dew point sensor to obtain its dew point (the ratio of water vapor to gas), and the apparatus is adjusted by varying the gas pressure or water vapor to provide a mixture of selected humidity content at a connector.
Search for point sources of high energy neutrinos with final data from AMANDA-II
NASA Astrophysics Data System (ADS)
Abbasi, R.; Ackermann, M.; Adams, J.; Ahlers, M.; Ahrens, J.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Baret, B.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Braun, J.; Breder, D.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Davour, A.; Day, C. T.; Depaepe, O.; de Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, R.; Hasegawa, Y.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hickford, S.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hughey, B.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hundertmark, S.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kawai, H.; Kelley, J. L.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Leier, D.; Lewis, C.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Meli, A.; Merck, M.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, W. J.; Rodriguez, J.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schultz, O.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Song, C.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; Viscomi, V.; Vogt, C.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, C.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.
2009-03-01
We present a search for point sources of high energy neutrinos using 3.8 yr of data recorded by AMANDA-II during 2000-2006. After reconstructing muon tracks and applying selection criteria designed to optimally retain neutrino-induced events originating in the northern sky, we arrive at a sample of 6595 candidate events, predominantly from atmospheric neutrinos with primary energy 100 GeV to 8 TeV. Our search of this sample reveals no indications of a neutrino point source. We place the most stringent limits to date on E^-2 neutrino fluxes from points in the northern sky, with an average upper limit of E^2 Φ(ν_μ + ν_τ) ≤ 5.2 × 10^-11 TeV cm^-2 s^-1 on the sum of ν_μ and ν_τ fluxes, assumed equal, over the energy range from 1.9 TeV to 2.5 PeV.
Robust model selection and the statistical classification of languages
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. L.
2012-10-01
In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the family of fixed order Markov chain models. We define the asymptotic breakdown point (ABDP) for a model selection procedure and derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q, and we conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure for dealing with this problem has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology, estimating a model which represents the main law for each language. Our findings agree with the linguistic conjecture relating to the rhythm of the languages included in our dataset.
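A rough sketch of the sample-grouping idea follows, using smoothed first-order transition matrices and a plug-in symmetrized divergence as a crude proxy for the paper's relative-entropy estimates over variable length Markov chains.

    import numpy as np

    def trans_matrix(seq, k):
        # Smoothed first-order transition matrix over alphabet 0..k-1.
        C = np.full((k, k), 0.5)
        for a, b in zip(seq[:-1], seq[1:]):
            C[a, b] += 1
        return C / C.sum(axis=1, keepdims=True)

    def sym_div(P, Q):
        # Crude symmetrized divergence between transition matrices (proxy only;
        # a faithful estimate would weight rows by stationary probabilities).
        return np.sum(P * np.log(P / Q)) + np.sum(Q * np.log(Q / P))

    def majority_group(samples, k, thresh):
        # Keep the largest set of samples whose pairwise divergences stay small.
        Ps = [trans_matrix(s, k) for s in samples]
        m = len(Ps)
        D = np.array([[sym_div(Ps[i], Ps[j]) for j in range(m)] for i in range(m)])
        close = D < thresh
        anchor = close.sum(axis=1).argmax()   # sample with most close neighbours
        return np.flatnonzero(close[anchor])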
Automated liver sampling using a gradient dual-echo Dixon-based technique.
Bashir, Mustafa R; Dale, Brian M; Merkle, Elmar M; Boll, Daniel T
2012-05-01
Magnetic resonance spectroscopy of the liver requires input from a physicist or physician at the time of acquisition to ensure proper voxel selection, while in multiecho chemical shift imaging, numerous regions of interest must be manually selected in order to ensure analysis of a representative portion of the liver parenchyma. A fully automated technique could improve workflow by selecting representative portions of the liver prior to human analysis. Complete volumes from three-dimensional gradient dual-echo acquisitions with two-point Dixon reconstruction acquired at 1.5 and 3 T were analyzed in 100 subjects, using an automated liver sampling algorithm based on ratio pairs calculated voxel-by-voxel from signal intensity image data as fat-only/water-only and log(in-phase/opposed-phase). Using different gridding variations of the algorithm, the average correctly selected liver volume ranged from 527 to 733 mL. The average percentage of the sample located within the liver ranged from 95.4 to 97.1%, whereas the average incorrectly selected volume was 16.5-35.4 mL (2.9-4.6%). The average run time was 19.7-79.0 s. The algorithm consistently selected large samples of the hepatic parenchyma with small amounts of erroneous extrahepatic sampling, and run times were feasible for execution on an MRI system console during exam acquisition. Copyright © 2011 Wiley Periodicals, Inc.
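The two ratio images are straightforward to compute voxel-wise; the epsilon guard and the gating thresholds below are illustrative assumptions, not the published algorithm's values.

    import numpy as np

    def dixon_ratios(fat, water, in_phase, opposed, eps=1e-6):
        # Voxel-wise ratio pair from a two-point Dixon reconstruction.
        r1 = fat / (water + eps)                          # fat-only / water-only
        r2 = np.log((in_phase + eps) / (opposed + eps))   # log(IP / OP)
        return r1, r2

    # Liver-like voxels could then be gated on both ratios, e.g.
    # mask = (r1 < 1.0) & (np.abs(r2) < 0.7)   # illustrative thresholds only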
A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.
Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo
2018-04-01
Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition, in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
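For low-dimensional data, the sample-reduction idea can be sketched with off-the-shelf tools; scipy's exact hull is used here in place of the paper's fast projection-based algorithm.

    import numpy as np
    from scipy.spatial import ConvexHull

    def hull_vertices_per_class(X, y):
        # Reduce a training set to the convex hull vertices of each class
        # (exact hull; practical only in low dimensions).
        keep = []
        for c in np.unique(y):
            idx = np.flatnonzero(y == c)
            keep.extend(idx[ConvexHull(X[idx]).vertices])
        return np.sort(np.array(keep))

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
    sel = hull_vertices_per_class(X, y)     # typically far fewer than 400 samples
    # an online classifier would then be trained on X[sel], y[sel]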
Treatment of atomic and molecular line blanketing by opacity sampling
NASA Technical Reports Server (NTRS)
Johnson, H. R.; Krupp, B. M.
1976-01-01
A sampling technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is subjected to several tests. In this opacity sampling (OS) technique, the global opacity is sampled at only a selected set of frequencies, and at each of these frequencies the total monochromatic opacity is obtained by summing the contribution of every relevant atomic and molecular line. In accord with previous results, we find that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 100 frequency points are adequate for many purposes. The effects of atomic and molecular lines are separately studied. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters, but computed with the giant line (opacity distribution function) method.
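The OS idea reduces to evaluating the summed line opacity only at a sparse set of frequencies; the Lorentzian profiles and the toy line list below are illustrative stand-ins for a real opacity database.

    import numpy as np

    rng = np.random.default_rng(4)
    centers = rng.uniform(0, 1000, 50_000)     # toy line list (arbitrary units)
    strengths = rng.lognormal(0.0, 1.0, 50_000)
    gamma = 0.05                               # common damping width

    def opacity(nu):
        # Total monochromatic opacity: sum of every line's Lorentzian profile.
        return np.sum(strengths / np.pi * gamma / ((nu - centers) ** 2 + gamma ** 2))

    nu_sample = np.sort(rng.uniform(0, 1000, 1000))   # ~1000 sampled frequencies
    kappa = np.array([opacity(nu) for nu in nu_sample])
    # model-atmosphere integrations then use kappa at these frequencies only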
Long-term purity assessment in succinonitrile
NASA Astrophysics Data System (ADS)
Rubinstein, E. R.; Tirmizi, S. H.; Glicksman, M. E.
1990-11-01
Container materials for crystal growth chambers must be carefully selected in order to prevent sample contamination. To address the issue of contamination, high purity SCN was exposed to a variety of potential chamber construction materials, e.g., metal alloys, soldering materials, and sealants, at a temperature approximately 25 K above the melting point of SCN (58°C), over periods of up to one year. Acceptability, or lack thereof, of candidate chamber materials was determined by performing periodic melting point checks of the exposed samples. Those materials which did not measurably affect the melting point of SCN over a one-year period were considered to be chemically compatible and therefore eligible for use in constructing the flight chamber. A growth chamber constructed from compatible materials (304 SS and borosilicate glass) was filled with pure SCN. A thermistor probe placed within the chamber permitted in situ measurement of the melting point and, indirectly, of the purity of the SCN. Melting point plateaus were then determined, to assess the actual chamber performance.
NASA Astrophysics Data System (ADS)
Elhelou, O.; Richter, C.
2015-12-01
Atmospheric deposition of pollutants is a major health and environmental concern. In a 2010 study, the CATF attributed over 13,000 deaths each year to fly ash and other fine particles emitted by U.S. coal-burning power plants. The magnetic properties of fly ash allow an area suspected of particulate-matter pollution to be mapped faster and more efficiently than by chemical analysis. The objective of this study is to detect the presence of magnetic particles related to the migration of fly ash from a nearby coal power plant over parts of Pointe Coupee Parish, LA. This is based on the idea that the fly ash released into the atmosphere during coal burning contains heavy metals and magnetic particles in the form of ferrospheres, which can be used to trace back to the source. Maps of the top and sub soil were generated to differentiate the magnetic susceptibility values of the heavy metals potentially attributed to the migration and settling of fly ash onto the surface from any pre-existing or naturally occurring heavy metals in the sub soil. A 60 km2 area in Pointe Coupee Parish was investigated in approximately 0.5 km2 subsets. The area was selected because land use is predominantly rural, with the Big Cajun II power plant as the main contributor of airborne contaminants. Samples of fly ash obtained directly from the source below one of the power plant's precipitators were also analyzed to verify the field and laboratory analysis. Contour maps representing the spatial distribution of fly ash over Pointe Coupee, LA, along with histograms of magnetic susceptibility values and chemical analyses, all indicate a correlation between the proximity to the power plant and the predominant wind direction. Acquisition curves of the isothermal remanent magnetization demonstrate the presence of predominantly low-coercivity minerals (magnetite) with a small amount of a high-coercivity phase. The microstructure of the magnetic fractions of the fly ash, along with select top and sub soil samples, was observed using a reflected light microscope to identify and confirm the presence of ferrospheres associated with fly ash. Chemical analyses of select samples revealed their heavy metal composition and the correlation with their SIRM and low-field mass susceptibility values.
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over an explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables: first the random sampling probability Y, and then the k-1 node depths, which are iid conditional on Y=y.
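A minimal numerical illustration of the Bernoulli-sampling step (a sketch, not from the paper): node depths are drawn iid from an exponential law conditioned to be below T, an illustrative choice of depth distribution, and the reduced tree of a Bernoulli(y) tip sample is computed by taking the maximum of the intervening node depths between consecutive retained tips.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 3.0  # stopping time of the branching process (illustrative)

    def cpp_depths(n_tips, rate=1.0):
        # Node depths between consecutive tips of a CPP with n_tips tips:
        # iid exponential draws conditioned to be smaller than T.
        out = []
        while len(out) < n_tips - 1:
            d = rng.exponential(1.0 / rate)
            if d < T:
                out.append(d)
        return np.array(out)

    def bernoulli_sampled_depths(depths, y):
        # Reduced tree of a Bernoulli(y) tip sample: the coalescence time of
        # two consecutive retained tips is the max of the intervening depths.
        n = len(depths) + 1
        keep = np.flatnonzero(rng.random(n) < y)
        return np.array([depths[keep[j - 1]:keep[j]].max()
                         for j in range(1, len(keep))])

    h = cpp_depths(50)
    print(bernoulli_sampled_depths(h, y=0.3))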
Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk
2016-01-01
A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for efficient pre-concentration of As(V) in the selected samples. The method is based on the selective and sensitive ion-pairing of As(V) with acridine red (ARH(+)) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range 0.8-280 µg l(-1) for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l(-1), respectively. The method was successfully applied to the determination of trace As in samples pre-treated and digested under microwave and ultrasonic power. As(V) and total As levels in the samples were determined spectrophotometrically at 494 nm after pre-concentration with VA-CPE, before and after oxidation with acidic KMnO4. The As(III) levels were calculated from the difference between the As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs), where the measured values for As were statistically within the 95% confidence limit of the certified values.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the Administrator formaldehyde concentration must be corrected to 15 percent O2, dry basis. Results of... 100 percent load. b. select the sampling port location and the number of traverse points AND Method 1... concentration at the sampling port location AND Method 3A or 3B of 40 CFR part 60, appendix A measurements to...
Code of Federal Regulations, 2011 CFR
2011-07-01
... appendix A to 40 CFR part 60: (i) Method 1 to select sampling port locations and the number of traverse points. Sampling ports must be located at the outlet of the control device and prior to any releases to... = Concentration of chlorine or hydrochloric acid in the gas stream, milligrams per dry standard cubic meter (mg...
Code of Federal Regulations, 2010 CFR
2010-07-01
... appendix A to 40 CFR part 60: (i) Method 1 to select sampling port locations and the number of traverse points. Sampling ports must be located at the outlet of the control device and prior to any releases to... = Concentration of chlorine or hydrochloric acid in the gas stream, milligrams per dry standard cubic meter (mg...
Development of spatial scaling technique of forest health sample point information
NASA Astrophysics Data System (ADS)
Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.
2017-12-01
Most forest health assessments are limited to monitoring at sampling sites. Forest health monitoring in Great Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the data managed in an Oracle database. The Forest Health Assessment of Great Bay in the United States characterized the ecosystem populations of each area based on forest health evaluated by tree species, diameter at breast height, water pipe and density in the summer and fall of 200. In Korea, the first forest health vitality report placed 1,000 sample points in forests using a systematic 4 km × 4 km grid and surveyed 29 items in four categories: tree health, vegetation, soil, and atmosphere. Existing research has thus relied on monitoring of survey sample points, making it difficult to collect information that supports policies customized to regional survey sites. Special forests, such as urban forests and other major forests, need policy and management appropriate to their characteristics, so the survey network must be expanded for customized diagnosis and evaluation of forest health. For this reason, we constructed a spatial scaling method based on spatial interpolation according to the characteristics of each of the 29 indices in the four categories of the first forest health vitality report; PCA and correlation analysis are conducted to construct significant indicators, weights are then selected for each index, and forest health is evaluated through statistical grading.
Muinde, R K; Kiinyukia, C; Rombo, G O; Muoki, M A
2012-12-01
To determine the microbial load in food, examine safety measures and assess the possibility of implementing a Hazard Analysis Critical Control Points (HACCP) system. The target population for this study consisted of restaurant owners in Thika Municipality (n = 30). Simple random samples of restaurants were selected by a systematic sampling method for microbial analysis of cooked, non-cooked and raw food, and of water sanitation, in the selected restaurants. Two hundred and ninety-eight restaurants within Thika Municipality were identified; of these, 30 were sampled for microbiological testing. From the study, 221 (74%) of the restaurants were ready-to-eat establishments where food was prepared early enough to hold, and in only 77 (26%) of the restaurants did customers order the food they wanted. 118 (63%) of the restaurant operators/staff had knowledge of quality control in food safety measures, 24 (8%) of the restaurants applied this knowledge, and 256 (86%) of the restaurant staff indicated that food contains ingredients that are hazardous if poorly handled. 238 (80%) of the restaurants used weighing and sorting of food materials, 45 (15%) used preservation methods, and the rest used dry foods as critical control points in food safety measures. The study showed the need to implement a Hazard Analysis Critical Control Points (HACCP) system to enhance food safety. Knowledge of HACCP was very low, with 89 (30%) of the restaurants applying some quality measures to the food production process. There was contamination with Coliforms, Escherichia coli and Staphylococcus aureus, though at very low levels. The mean counts of Coliforms, Escherichia coli and Staphylococcus aureus in sampled food were 9.7 x 10(3) CFU/g, 8.2 x 10(3) CFU/g and 5.4 x 10(3) CFU/g respectively, with Coliforms having the highest mean.
Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül
2015-01-01
In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in the measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identifying the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10, in increments of 4 or 5 points at a time, based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that smaller numbers of sampling points resulted in loss of information about the spatial correlation structure of DO. The minimum representative number of sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design, in which the spatial correlation structure of DO was sustained for kriging.
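The density-guided thinning idea can be sketched compactly. The snippet below is a sketch, not the authors' exact procedure: it uses scipy's Gaussian KDE and drops, one station at a time, the point sitting in the densest part of the current network; the 65-to-35 reduction mirrors the study's numbers, while the coordinates are synthetic.

    import numpy as np
    from scipy.stats import gaussian_kde

    def thin_network(xy, n_remove):
        # Iteratively drop the sampling location in the densest part of the
        # network, so the remaining points stay spread out and the spatial
        # structure is better preserved.
        pts = np.asarray(xy, dtype=float)
        for _ in range(n_remove):
            kde = gaussian_kde(pts.T)        # density of the current network
            dens = kde(pts.T)
            pts = np.delete(pts, np.argmax(dens), axis=0)
        return pts

    # Example: reduce a 65-station network to 35 stations
    stations = np.random.default_rng(1).uniform(0, 10, size=(65, 2))
    reduced = thin_network(stations, 65 - 35)
    print(len(reduced), "stations kept")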
Forester, James D; Im, Hae Kyung; Rathouz, Paul J
2009-12-01
Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to modeling resource selection is easily implemented using common statistical tools and promises to provide deeper insight into the movement ecology of animals.
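A minimal sketch of fitting an SSF by case-control logistic regression with step length (distance) as a covariate, as the authors recommend. All data here are synthetic, and the plain (unconditional) logistic fit is a simplification; a matched conditional logistic regression is the more faithful choice for stratified used/available designs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Hypothetical step data: each observed step (case, y = 1) is paired with
    # several available steps (controls, y = 0) drawn from an empirical
    # step-length distribution, one of the sampling schemes the paper tests.
    n_steps, n_controls = 200, 10
    used_res   = rng.normal(1.0, 1.0, n_steps)       # covariate at used point
    used_dist  = rng.exponential(1.0, n_steps)       # step length (km)
    avail_res  = rng.normal(0.0, 1.0, n_steps * n_controls)
    avail_dist = rng.exponential(1.2, n_steps * n_controls)

    X = np.column_stack([
        np.concatenate([used_res, avail_res]),
        np.concatenate([used_dist, avail_dist]),     # distance as a covariate
    ])
    y = np.concatenate([np.ones(n_steps), np.zeros(n_steps * n_controls)])

    ssf = LogisticRegression(max_iter=1000).fit(X, y)
    print("resource coefficient:", ssf.coef_[0][0])
    print("distance coefficient:", ssf.coef_[0][1])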
Distribution majorization of corner points by reinforcement learning for moving object detection
NASA Astrophysics Data System (ADS)
Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang
2018-04-01
Corner points play an important role in moving object detection, especially in the case of a freely moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by preceding and succeeding frames can also be used. We utilize this information to focus on the more valuable areas and ignore the less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be processed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the detection performance is regarded as the state. Corner points are assigned to blocks separated from the original whole image. Experimentally, we select a conventional method that uses matching and the Random Sample Consensus algorithm to obtain objects as the main framework and utilize our algorithm to improve its results. A comparison between the conventional method and the same method with our algorithm shows that our algorithm reduces false detections by 70%.
Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran
2010-05-01
Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
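A sketch of the point-sampling estimator for Shannon's diversity on a categorical land-cover raster (synthetic data; the plug-in estimator shown here carries the small-sample bias discussed in the abstract, which shrinks as the number of points grows):

    import numpy as np

    def shannon_from_points(landcover, n_points, rng):
        # Estimate Shannon's diversity by classifying the cover at random
        # points instead of mapping the whole raster.
        rows = rng.integers(0, landcover.shape[0], n_points)
        cols = rng.integers(0, landcover.shape[1], n_points)
        _, counts = np.unique(landcover[rows, cols], return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    rng = np.random.default_rng(3)
    raster = rng.integers(0, 7, size=(500, 500))  # toy 7-class land-cover map
    print(shannon_from_points(raster, 400, rng))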
NASA Astrophysics Data System (ADS)
Buat, V.; Takeuchi, T. T.; Iglesias-Páramo, J.; Xu, C. K.; Burgarella, D.; Boselli, A.; Barlow, T.; Bianchi, L.; Donas, J.; Forster, K.; Friedman, P. G.; Heckman, T. M.; Lee, Y.-W.; Madore, B. F.; Martin, D. C.; Milliard, B.; Morissey, P.; Neff, S.; Rich, M.; Schiminovich, D.; Seibert, M.; Small, T.; Szalay, A. S.; Welsh, B.; Wyder, T.; Yi, S. K.
2007-12-01
We select far-infrared (FIR: 60 μm) and far-ultraviolet (FUV: 1530 Å) samples of nearby galaxies in order to discuss the biases encountered by monochromatic surveys (FIR or FUV). Very different volumes are sampled by each selection, and much care is taken to apply volume corrections to all the analyses. The distributions of the bolometric luminosity of young stars are compared for both samples: they are found to be consistent with each other for galaxies of intermediate luminosities, but some differences are found for high (>5×1010 Lsolar) luminosities. The shallowness of the IRAS survey prevents us from securing a comparison at low luminosities (<2×109 Lsolar). The ratio of the total infrared (TIR) luminosity to the FUV luminosity is found to increase with the bolometric luminosity in a similar way for both samples up to 5×1010 Lsolar. Brighter galaxies are found to have a different behavior according to their selection: the LTIR/LFUV ratio of the FUV-selected galaxies brighter than 5×1010 Lsolar reaches a plateau, whereas LTIR/LFUV continues to increase with the luminosity of bright galaxies selected in FIR. The volume-averaged specific star formation rate (SFR per unit galaxy stellar mass, SSFR) is found to decrease toward massive galaxies within each selection. The mean values of the SSFR are found to be larger than those measured for optical and NIR-selected samples over the whole mass range for the FIR selection, and for masses larger than 1010 Msolar for the FUV selection. Luminous and massive galaxies selected in FIR appear as active as galaxies with similar characteristics detected at z~0.7.
Health information needs of professional nurses required at the point of care.
Ricks, Esmeralda; ten Ham, Wilma
2015-06-11
Professional nurses work in dynamic environments and need to keep up to date with relevant information for nursing practice in order to render quality patient care. Keeping up to date with current information is often challenging because of heavy workloads, diverse information needs and the accessibility of the required information at the point of care. The aim of the study was to explore and describe the information needs of professional nurses at the point of care in order to make recommendations to stakeholders for developing a mobile library accessible by means of smartphones when needed. The researcher utilised a quantitative, descriptive survey design to conduct this study. The target population comprised 757 professional nurses employed at a state hospital. Simple random sampling was used to select a sample of the wards, units and departments for inclusion in the study. A convenience sample of 250 participants was selected, and 250 structured self-administered questionnaires were distributed amongst the participants. Descriptive statistics were used to analyse the data. A total of 136 completed questionnaires were returned. The findings highlighted the types and accessible sources of information. The information needs identified included extremely drug-resistant tuberculosis, multi-drug-resistant tuberculosis, HIV, antiretrovirals and chronic lifestyle diseases. This study enabled the researcher to identify the information professional nurses need at the point of care to enhance the delivery of patient care, and the results were used to develop a mobile library accessible to professional nurses.
Efficacy of adrenal venous sampling is increased by point of care cortisol analysis
Viste, Kristin; Grytaas, Marianne A; Jørstad, Melissa D; Jøssang, Dag E; Høyden, Eivind N; Fotland, Solveig S; Jensen, Dag K; Løvås, Kristian; Thordarson, Hrafnkell; Almås, Bjørg; Mellgren, Gunnar
2013-01-01
Primary aldosteronism (PA) is a common cause of secondary hypertension and is caused by unilateral or bilateral adrenal disease. Treatment options depend on whether the disease is lateralized or not, which is preferably evaluated with selective adrenal venous sampling (AVS). This procedure is technically challenging, and obtaining representative samples from the adrenal veins can prove difficult. Unsuccessful AVS procedures often require reexamination. Analysis of cortisol during the procedure may enhance the success rate. We invited 21 consecutive patients to participate in a study with intra-procedural point of care cortisol analysis. When this assay showed nonrepresentative sampling, new samples were drawn after redirection of the catheter. The study patients were compared with the 21 preceding procedures. The intra-procedural cortisol assay increased the success rate from 10/21 patients in the historical cohort to 17/21 patients in the study group. In four of the 17 successful procedures, repeated samples needed to be drawn. The rate of successful sampling at the first attempt improved from the first seven to the last seven study patients. Point of care cortisol analysis during AVS improves the success rate and reduces the need for reexaminations, in accordance with previous studies. Successful AVS is crucial when deciding which patients with PA will benefit from surgical treatment. PMID:24169597
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease and sleep disorders. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
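A rough sketch of the SRS-plus-SFS pipeline. The summary statistics used as features are an assumption (the paper's exact SRS feature definition is not reproduced here), and scikit-learn has no LS-SVM, so a standard SVC stands in for the paper's LS_SVM classifier.

    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)

    def srs_features(signal, n_points=64):
        # Simple random sampling: summarize a random subset of time-domain
        # points with a few statistics.
        sub = rng.choice(signal, size=n_points, replace=False)
        return np.array([sub.mean(), sub.std(), sub.min(), sub.max()])

    # Toy two-class "EEG" segments (class shifts the mean; 100 per class)
    X = np.array([srs_features(rng.normal(loc=c, scale=1.0, size=1024))
                  for c in (0, 1) for _ in range(100)])
    y = np.repeat([0, 1], 100)

    # Sequential (forward) feature selection wrapped around the classifier
    sfs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=2)
    sfs.fit(X, y)
    print("selected features:", sfs.get_support())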
Assessment of the trace element distribution in soils in the parks of the city of Zagreb (Croatia).
Roje, Vibor; Orešković, Marko; Rončević, Juraj; Bakšić, Darko; Pernar, Nikola; Perković, Ivan
2018-02-07
This paper presents the results of the preliminary testing of the selected trace elements in the soils of several parks in the city of Zagreb, Republic of Croatia. In each park, the samples were taken from several points-at various distances from the roads. The samples were taken at two different depths: 0-5 and 30-45 cm. Composite samples were done for each sampling point. Microwave-assisted wet digestion of the soil samples was performed and the determination by ICP-AES technique was done. Results obtained for Al, As, Ba, Mn, Ti, V, and K are in a good agreement with the results published in the scientific literature so far. The mass fraction values of Cd, Cr, Cu, Ni, Pb, and Zn are somewhat higher than the maximum values given in the Croatian Directive on agricultural land protection against pollution. Be, Mo, Sb, Se, and Tl in the samples were present in the concentrations that are lower than their method detection limit values.
Thurison, Tine; Christensen, Ib J; Lund, Ida K; Nielsen, Hans J; Høyer-Hansen, Gunilla
2015-01-15
High levels of circulating forms of the urokinase-type plasminogen activator receptor (uPAR) are significantly associated to poor prognosis in cancer patients. Our aim was to determine biological variations and reference intervals of the uPAR forms in blood, and in addition, to test the clinical relevance of using these as cut-points in colorectal cancer (CRC) prognosis. uPAR forms were measured in citrated and EDTA plasma samples using time-resolved fluorescence immunoassays. Diurnal, intra- and inter-individual variations were assessed in plasma samples from cohorts of healthy individuals. Reference intervals were determined in plasma from healthy individuals randomly selected from a Danish multi-center cross-sectional study. A cohort of CRC patients was selected from the same cross-sectional study. The reference intervals showed a slight increase with age and women had ~20% higher levels. The intra- and inter-individual variations were ~10% and ~20-30%, respectively and the measured levels of the uPAR forms were within the determined 95% reference intervals. No diurnal variation was found. Applying the normal upper limit of the reference intervals as cut-point for dichotomizing CRC patients revealed significantly decreased overall survival of patients with levels above this cut-point of any uPAR form. The reference intervals for the different uPAR forms are valid and the upper normal limits are clinically relevant cut-points for CRC prognosis. Copyright © 2014 Elsevier B.V. All rights reserved.
MAGENCO: A map generalization controller for Arc/Info
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.; Cashwell, J.W.
The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line "character" while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, and then apply various tolerances to the sample. The results are shown in multiple display windows arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. The authors analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.
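For reference, the Douglas-Peucker algorithm that GENERALIZE implements is compact enough to sketch (a generic textbook implementation in Python, not Arc/Info or AML code):

    import numpy as np

    def douglas_peucker(points, tol):
        # Keep the point farthest from the chord between the endpoints; if
        # even it lies within tol, the whole span collapses to the chord.
        pts = np.asarray(points, dtype=float)
        a, b = pts[0], pts[-1]
        dx, dy = b - a
        norm = np.hypot(dx, dy)
        if norm == 0.0:  # degenerate chord: use distance from the endpoint
            d = np.hypot(pts[:, 0] - a[0], pts[:, 1] - a[1])
        else:            # perpendicular distance to the chord
            d = np.abs(dx * (pts[:, 1] - a[1]) - dy * (pts[:, 0] - a[0])) / norm
        i = int(np.argmax(d))
        if d[i] <= tol:
            return [tuple(a), tuple(b)]
        return (douglas_peucker(pts[:i + 1], tol)[:-1]
                + douglas_peucker(pts[i:], tol))

    line = [(0, 0), (1, 0.1), (2, -0.1), (3, 4.9), (4, 6), (5, 7.1), (6, 9)]
    print(douglas_peucker(line, tol=0.5))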
A new adaptive light beam focusing principle for scanning light stimulation systems.
Bitzer, L A; Meseth, M; Benson, N; Schmechel, R
2013-02-01
In this article a novel principle for achieving optimal focusing conditions, i.e. the smallest possible beam diameter, in scanning light stimulation systems is presented. It is based on the following methodology: First, a reference point on a camera sensor is introduced at which optimal focusing conditions are adjusted, and the distance between the light-focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted using optimal focusing conditions at each measurement point on the sample surface, determined from the height difference between the camera sensor and the sample topography. This principle is independent of the measured values, the optical or electrical properties of the sample, the light source used, and the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. Here the principle is implemented in an optical beam induced current system, but it can basically be applied to any other scanning light stimulation system. Measurements demonstrating its operation are shown, using a polycrystalline silicon solar cell.
Makino, Yoshinori; Watanabe, Michiko; Makihara, Reiko Ando; Nokihara, Hiroshi; Yamamoto, Noboru; Ohe, Yuichiro; Sugiyama, Erika; Sato, Hitoshi; Hayashi, Yoshikazu
2016-09-01
Limited sampling points for both amrubicin (AMR) and its active metabolite amrubicinol (AMR-OH) were simultaneously optimized using Akaike's information criterion (AIC) calculated by pharmacokinetic modeling. In this pharmacokinetic study, 40 mg/m(2) of AMR was administered as a 5-min infusion on three consecutive days to 21 Japanese lung cancer patients. Blood samples were taken at 0, 0.08, 0.25, 0.5, 1, 2, 4, 8 and 24 h after drug infusion, and AMR and AMR-OH concentrations in plasma were quantitated by high-performance liquid chromatography. The pharmacokinetic profile of AMR was characterized using a three-compartment model and that of AMR-OH using a one-compartment model following a first-order absorption process. These pharmacokinetic profiles were then integrated into one pharmacokinetic model for simultaneous fitting of AMR and AMR-OH. After fitting to the pharmacokinetic model, 65 combinations of four sampling points from the concentration profiles were evaluated for their AICs. Stepwise regression analysis was applied to select the sampling points that best predict the areas under the concentration-time curves (AUCs) for AMR and AMR-OH. Of the three combinations that yielded favorable AIC values, 0.25, 2, 4 and 8 h yielded the best AUC prediction for both AMR (R(2) = 0.977) and AMR-OH (R(2) = 0.886). The prediction error for AUC was less than 15%. The optimal limited sampling points for AMR and AMR-OH after AMR infusion were found to be 0.25, 2, 4 and 8 h, enabling less frequent blood sampling in further expanded pharmacokinetic studies of both AMR and AMR-OH.
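The combinatorial search itself is easy to reproduce in outline. The sketch below enumerates all C(8,4) = 70 four-point subsets of the eight post-dose times (the paper restricted these to 65 combinations) and scores each by the AIC of an ordinary least-squares fit predicting the full-profile AUC; the toy mono-exponential profiles and the OLS scoring are simplifications of the paper's compartmental modeling.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)
    times = np.array([0.08, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])  # hours

    # Toy profiles for 21 patients: mono-exponential decay with random
    # elimination rate, plus measurement noise.
    ke = rng.uniform(0.1, 0.4, size=21)
    clean = 10.0 * np.exp(-np.outer(ke, times))
    auc_ref = np.trapz(clean, times, axis=1)   # reference AUC (full profile)
    conc = clean + rng.normal(0, 0.2, size=clean.shape)

    def aic(y, yhat, k):
        n = len(y)
        rss = float(((y - yhat) ** 2).sum())
        return n * np.log(rss / n) + 2 * k

    best_idx, best_score = None, np.inf
    for idx in combinations(range(len(times)), 4):
        X = np.column_stack([np.ones(len(conc)), conc[:, idx]])
        beta, *_ = np.linalg.lstsq(X, auc_ref, rcond=None)
        score = aic(auc_ref, X @ beta, k=X.shape[1])
        if score < best_score:
            best_idx, best_score = idx, score

    print("best limited sampling times (h):", times[list(best_idx)])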
NASA Astrophysics Data System (ADS)
Huang, Jian; Liu, Gui-xiong
2016-09-01
The identification of targets varies in different surge tests. A multi-color-space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for identifying the status of equipment under test was proposed, because the previous feature-matching approach to status identification had to train new patterns every time before testing. First, the color space used for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) is selected according to the ratios of high-luminance points and white-luminance points in the image. Second, an unknown-class sample S_r is classified by the k-NN algorithm with training set T_z according to its feature vector, formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Last, when the classification confidence coefficient equals k, S_r is added as a sample of the pre-training set T_z'. The training set T_z grows to T_z+1 by merging T_z' once T_z' is saturated. On nine series of illuminant, indicator light, screen, and disturbance samples (21,600 frames in total), the algorithm achieved 98.65% identification accuracy and autonomously enlarged the training set from T_0 to T_5 using five selected groups of samples.
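A sketch of the confidence-gated self-learning loop described above, on synthetic feature vectors: samples whose k nearest neighbors vote unanimously (confidence coefficient equal to k) are banked as pseudo-labelled data and merged into the training set once the bank saturates. The batch size and Euclidean distance metric are assumptions.

    import numpy as np

    def knn_self_learning(train_X, train_y, stream_X, k=5, batch=50):
        # Classify a stream of feature vectors with k-NN; unanimous votes
        # are banked and merged into the training set when the bank fills.
        bank_X, bank_y, labels = [], [], []
        for x in stream_X:
            d = np.linalg.norm(train_X - x, axis=1)
            nn = train_y[np.argsort(d)[:k]]
            label = int(np.bincount(nn).argmax())
            labels.append(label)
            if (nn == label).all():          # confidence coefficient == k
                bank_X.append(x)
                bank_y.append(label)
                if len(bank_X) == batch:     # bank saturated: grow T_z
                    train_X = np.vstack([train_X, np.array(bank_X)])
                    train_y = np.concatenate([train_y, np.array(bank_y)])
                    bank_X, bank_y = [], []
        return np.array(labels), train_X, train_y

    rng = np.random.default_rng(9)
    train_X = rng.normal(size=(40, 4)) + np.repeat([[0.0], [3.0]], 20, axis=0)
    train_y = np.repeat([0, 1], 20)
    stream_X = rng.normal(size=(500, 4)) + rng.choice([0.0, 3.0], size=(500, 1))
    labels, grown_X, grown_y = knn_self_learning(train_X, train_y, stream_X)
    print("training set grew from 40 to", len(grown_X))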
A sampling strategy to estimate the area and perimeter of irregularly shaped planar regions
Timothy G. Gregoire; Harry T. Valentine
1995-01-01
The length of a randomly oriented ray emanating from an interior point of a planar region can be used to unbiasedly estimate the region's area and perimeter. Estimators and corresponding variance estimators under various selection strategies are presented.
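For a region that is star-shaped about the interior point, the area claim follows from A = (1/2)∫ r(θ)² dθ over [0, 2π), so π r(θ)² with θ uniform is an unbiased area estimator. Below is a Monte Carlo check on an ellipse; the paper's perimeter estimator is more involved and omitted here.

    import numpy as np

    rng = np.random.default_rng(6)
    a, b = 3.0, 1.0  # ellipse semi-axes; true area = pi*a*b

    def ray_length(theta):
        # Length of the ray from the ellipse center in direction theta
        return a * b / np.hypot(b * np.cos(theta), a * np.sin(theta))

    theta = rng.uniform(0, 2 * np.pi, size=10_000)
    area_hat = np.pi * ray_length(theta) ** 2  # unbiased for star-shaped regions
    print(area_hat.mean(), "vs true", np.pi * a * b)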
Luksamijarulkul, Pipat; Suknongbung, Siranee; Vatanasomboon, Pisit; Sujirarut, Dusit
2017-03-01
A large number of migrants have move to cities in Thailand seeking employment. These people may be at increased risk for environmental health problems. We studied the health status, environmental living conditions and microbial indoor air quality (IAQ) among selected groups of migrant workers and their households in Mueang District, Samut Sakhon, central Thailand. We conducted a cross sectional study of 240 migrant workers and their households randomly selected by multistage sampling. The person responsible for hygiene at each studied household was interviewed using a structured questionnaire. Two indoor air samples were taken from each household (480 indoor air samples) to determine bacterial and fungal counts using a Millipore air tester; 240 outdoor air samples were collected for comparison. Ninety-nine point six percent of study subjects were Myanmar, 74.2% were aged 21-40 years, 91.7% had a primary school level education or lower and 53.7% had stayed in Thailand less than 5 years. Eight point three percent had a history of an underlying disease, 20.8% had a recent history of pulmonary tuberculosis in a family member within the previous year. Forty-three point eight percent had a current illness related to IAQ during a previous month. Twenty-one point three were current cigarette smokers, 15.0% were current alcohol consumers, and 5.0% exercises ≥3 times per week. Forty-nine point two percent never opened the windows of their bedrooms or living rooms for ventilation, 45% never cleaned their window screens, and 38.3% never put their pillows or mattresses in the sunlight. The mean(±SD) air bacterial count was 230(±229) CFU/m3 (outdoor air = 128±82 CFU/ m3), and the mean fungal count was 630(±842) CFU/m3 (outdoor air = 138±94 CFU/ m3). When the bacterial and fungal counts were compared with the guidelines of the American Conference of Governmental Industrial Hygienists, the bacterial counts in 6.5% of houses surveyed and the fungal counts in 28.8% of house surveyed were higher than the recommended levels (<500 CFU/m3). Bacterial and fungal counts in the sample households were not significantly correlated with household hygiene practice scores (p>0.05). There was a positive correlation between bacterial counts and fungal counts in household air samples, r=0.28, p<0.001.
The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000
Fitzpatrick-Lins, Katherine
1980-01-01
Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
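The accuracy test reduces to a one-sided binomial confidence bound on the proportion of correctly interpreted points. A sketch using the exact Clopper-Pearson limit (the 1980 analysis may have used tables or a normal approximation instead):

    from scipy.stats import beta

    def lower_cl_95(n_correct, n_sampled):
        # One-sided 95% lower confidence limit (Clopper-Pearson) on accuracy
        if n_correct == 0:
            return 0.0
        return beta.ppf(0.05, n_correct, n_sampled - n_correct + 1)

    # e.g. 92 of 100 sampled points interpreted correctly:
    print(round(lower_cl_95(92, 100), 3))  # about 0.86 -> meets 85% criterion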
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D points onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as the hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computation cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
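A toy illustration of the BaySAC loop on 2D line fitting rather than point-cloud registration: the hypothesis set is chosen deterministically as the currently most probable inliers, and the members of a failed hypothesis are downweighted with the simplified Bayes update. The thresholds, uniform priors and 2-point line model are illustrative assumptions, not the paper's 3D setup.

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy correspondences: points on a line plus gross outliers (mismatches)
    x = rng.uniform(0, 10, 60)
    good = np.column_stack([x, 2.0 * x + 1.0 + rng.normal(0, 0.03, 60)])
    bad = rng.uniform(0, 20, size=(40, 2))
    pts = np.vstack([good, bad])

    p = np.full(len(pts), 0.5)  # prior inlier probabilities
    tol, model = 0.3, None

    for _ in range(200):
        h = np.argsort(p)[-2:]  # hypothesis set = 2 most probable inliers
        (x0, y0), (x1, y1) = pts[h]
        if x0 == x1:
            continue
        slope = (y1 - y0) / (x1 - x0)
        resid = np.abs(pts[:, 1] - (y0 + slope * (pts[:, 0] - x0)))
        inl = resid < tol
        if inl.sum() >= 0.5 * len(pts):  # consensus reached: accept and stop
            model = np.polyfit(pts[inl, 0], pts[inl, 1], 1)
            break
        # Hypothesis failed: simplified Bayes update for its members,
        # P(i inlier | failure) = p_i (1 - prod_{j != i} p_j) / (1 - prod_j p_j)
        prod_all = p[h].prod()
        for i in h:
            p[i] = p[i] * (1 - prod_all / p[i]) / (1 - prod_all)

    print("recovered line (slope, intercept):", model)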
On the interpolation of volumetric water content in research catchments
NASA Astrophysics Data System (ADS)
Dlamini, Phesheya; Chaplot, Vincent
Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared with traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation of data points affect prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points with subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for estimation of θg and ρb. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE, of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). It was found that approach 1 underestimated θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% and improved the prediction accuracy by only 1.3% compared with approach 1. Such a clear benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected, considering that a higher sampling density (~14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches. In the context of much lower sampling densities, as generally encountered in environmental studies, approach 2 can thus be expected to yield significantly greater accuracy than approach 1. Approach 2 seems promising and can be further tested for DSM of other secondary variables.
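The two approaches are easy to contrast with a bare-bones IDW interpolator (synthetic data; the study's best interpolator for ρb was actually ordinary kriging, and IDW is used throughout here only to keep the sketch short). The neighbor counts follow the abstract: IDW12 for θg, IDW3 for the direct θv map.

    import numpy as np

    def idw(xy_obs, z_obs, xy_new, n_neighbors=3, power=2.0):
        # Inverse-distance-weighted interpolation (IDW3 when n_neighbors=3)
        d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                      # avoid division by zero
        idx = np.argsort(d, axis=1)[:, :n_neighbors]  # nearest neighbors
        w = 1.0 / np.take_along_axis(d, idx, axis=1) ** power
        z = np.take_along_axis(np.broadcast_to(z_obs, d.shape), idx, axis=1)
        return (w * z).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(8)
    xy = rng.uniform(0, 100, size=(317, 2))     # sampled data point locations
    theta_g = rng.uniform(5, 25, 317)           # gravimetric water content (%)
    rho_b = rng.uniform(0.9, 1.5, 317)          # bulk density (g cm^-3)
    grid = rng.uniform(0, 100, size=(70, 2))    # validation locations

    # Approach 1: interpolate the product directly
    tv1 = idw(xy, theta_g * rho_b, grid, n_neighbors=3)
    # Approach 2: interpolate each primary variable, then multiply the maps
    tv2 = idw(xy, theta_g, grid, n_neighbors=12) * idw(xy, rho_b, grid)
    print(tv1[:3], tv2[:3])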
Northrup, Joseph M.; Hooten, Mevin B.; Anderson, Charles R.; Wittemyer, George
2013-01-01
Habitat selection is a fundamental aspect of animal ecology, the understanding of which is critical to management and conservation. Global positioning system data from animals allow fine-scale assessments of habitat selection and typically are analyzed in a use-availability framework, whereby animal locations are contrasted with random locations (the availability sample). Although most use-availability methods are in fact spatial point process models, they often are fit using logistic regression. This framework offers numerous methodological challenges, for which the literature provides little guidance. Specifically, the size and spatial extent of the availability sample influences coefficient estimates potentially causing interpretational bias. We examined the influence of availability on statistical inference through simulations and analysis of serially correlated mule deer GPS data. Bias in estimates arose from incorrectly assessing and sampling the spatial extent of availability. Spatial autocorrelation in covariates, which is common for landscape characteristics, exacerbated the error in availability sampling leading to increased bias. These results have strong implications for habitat selection analyses using GPS data, which are increasingly prevalent in the literature. We recommend researchers assess the sensitivity of their results to their availability sample and, where bias is likely, take care with interpretations and use cross validation to assess robustness.
Zhang, Xue-Lei; Feng, Wan-Wan; Zhong, Guo-Min
2011-01-01
A GIS-based 500 m x 500 m soil sampling grid of 248 points was set at Wenshu Town of Yuzhou County in central Henan Province, where typical Ustic Cambosols are located. Using digital soil data, a spatial database was established, from which the latitude and longitude of the sampling points were produced for GPS guidance in the field. Soil samples (0-20 cm) were collected from 202 points; bulk density measurements were conducted for 34 randomly selected points, and the ten soil properties used as factors for soil quality assessment (organic matter, available K, available P, pH, total N, total P, soil texture, cation exchange capacity (CEC), slowly available K, and bulk density) were analyzed for the other points. The soil property data were checked with statistical tools and then classified against standard criteria from home and abroad. Factor weights were assigned by the analytic hierarchy process (AHP) method, and the spatial variation of the ten major soil properties, as well as the soil quality classes and the areas they occupy, were mapped by Kriging interpolation. The results showed that the arable Ustic Cambosols in the study area were of good quality: over 95% ranked in the good and medium classes and less than 5% in the poor class.
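A small sketch of the AHP weighting step with a hypothetical 3-factor pairwise comparison matrix (the study weighted ten factors; the judgments below are illustrative only):

    import numpy as np

    # Hypothetical pairwise comparisons: organic matter vs total N vs
    # available P, on Saaty's 1-9 scale.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal eigenvector -> weights
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency check: CI = (lambda_max - n)/(n - 1), RI = 0.58 for n = 3
    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
    print("weights:", w.round(3), "CR:", round(ci / 0.58, 3))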
Search for Point Sources of High Energy Neutrinos with Final Data from AMANDA-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
IceCube Collaboration; Klein, Spencer
2009-03-06
We present a search for point sources of high energy neutrinos using 3.8 years of data recorded by AMANDA-II during 2000-2006. After reconstructing muon tracks and applying selection criteria designed to optimally retain neutrino-induced events originating in the Northern Sky, we arrive at a sample of 6595 candidate events, predominantly from atmospheric neutrinos with primary energy 100 GeV to 8 TeV. Our search of this sample reveals no indications of a neutrino point source. We place the most stringent limits to date on E^-2 neutrino fluxes from points in the Northern Sky, with an average upper limit of E^2 Φ(ν_μ + ν_τ) ≤ 5.2 x 10^-11 TeV cm^-2 s^-1 on the sum of ν_μ and ν_τ fluxes, assumed equal, over the energy range from 1.9 TeV to 2.5 PeV.
Kidwell, Katherine M; Kozikowski, Chelsea; Roth, Taylor; Lundahl, Alyssa; Nelson, Timothy D
2018-06-01
To examine the associations among negative/reactive temperament, feeding styles, and selective eating in a sample of preschoolers because preschool eating behaviors likely have lasting implications for children's health. A community sample of preschoolers aged 3-5 years (M = 4.49 years, 49.5% female, 75.7% European American) in the Midwest of the United States was recruited to participate in the study (N = 297). Parents completed measures of temperament and feeding styles at two time points 6 months apart. A series of regressions indicated that children who had temperaments high in negative affectivity were significantly more likely to experience instrumental and emotional feeding styles. They were also significantly more likely to be selective eaters. These associations were present when examined both concurrently and after 6 months. This study provides a novel investigation of child temperament and eating behaviors, allowing for a better understanding of how negative affectivity is associated with instrumental feeding, emotional feeding, and selective eating. These results inform interventions to improve child health.
Succinonitrile Purification Facility
NASA Technical Reports Server (NTRS)
2003-01-01
The Succinonitrile (SCN) Purification Facility provides succinonitrile and succinonitrile alloys to several NRA-selected investigations for flight and ground research at various levels of purity. The purification process employed includes both distillation and zone refining. Once the appropriate purification process is completed, samples are characterized to determine the liquidus and/or solidus temperature, which is then related to sample purity. The lab has various methods for measuring these temperatures with accuracies in the millikelvin to tenths-of-millikelvin range. The ultra-pure SCN produced in our facility is indistinguishable from the standard material provided by NIST, to well within the stated ±1.5 mK of the NIST triple point cells. In addition to delivering material to various investigations, our current activities include process improvement, characterization of impurities, and triple point cell design and development. The purification process is being evaluated for each of the four vendors to determine the efficacy of each purification step. We are also collecting samples of the remainder from distillation and zone refining for analysis of the constituent impurities. The large triple point cells developed will contain SCN with a melting point of 58.0642 °C ± 1.5 mK for use as a calibration standard for Standard Platinum Resistance Thermometers (SPRTs).
Bacteriological study of juvenile periodontitis in China.
Han, N M; Xiao, X R; Zhang, L S; Ri, X Q; Zhang, J Z; Tong, Y H; Yang, M R; Xiao, Z R
1991-09-01
The predominant cultivable bacteria associated with juvenile periodontitis (JP) in China were studied for the first time. Subgingival plaque samples were taken on paper points from 23 diseased sites in 15 JP patients and from 7 healthy sites in 7 control subjects. Serially diluted plaque samples were plated on nonselective blood agar and on MGB agar, a selective medium for the isolation of Actinobacillus actinomycetemcomitans. Fifteen or more isolated colonies from each sample (in sequence without selection) were purified for identification. The results indicated that the microflora in healthy sulci of the 7 control subjects was significantly different from that in diseased sites of JP patients. The predominant species in healthy sulci were Streptococcus spp. and Capnocytophaga gingivalis. In JP patients, Eubacterium sp. was found in significantly higher frequency and proportion. Actinobacillus actinomycetemcomitans was not detected in any samples. It appears that this species is not associated with juvenile periodontitis in China.
Gaikowski, M.P.; Larson, W.J.; Steuer, J.J.; Gingerich, W.H.
2004-01-01
Accurate estimates of drug concentrations in hatchery effluent are critical to assess the environmental risk of hatchery drug discharge resulting from disease treatment. This study validated two simple dilution models to estimate chloramine-T environmental introduction concentrations, using the US Geological Survey's Upper Midwest Environmental Sciences Center aquaculture facility effluent as an example and comparing measured and predicted chloramine-T concentrations. The hydraulic characteristics of our treated raceway and effluent, and the accuracy of our water flow rate measurements, were confirmed with the marker dye rhodamine WT. We also used the rhodamine WT data to develop dilution models that would (1) estimate the chloramine-T concentration at a given time and location in the effluent system and (2) estimate the average chloramine-T concentration at a given location over the entire discharge period. To test our models, we predicted the chloramine-T concentration at two sample points based on effluent flow and the maintenance of chloramine-T at 20 mg/l for 60 min in the same raceway used with rhodamine WT. The effluent sample points selected (sample points A and B) represented 47 and 100% of the total effluent flow, respectively. Sample point B is analogous to the discharge of a hatchery that does not have a detention lagoon, i.e. the sample site was downstream of the last dilution water addition following treatment. We then applied four chloramine-T flow-through treatments at 20 mg/l for 60 min and measured the chloramine-T concentration in water samples collected every 15 min for about 180 min from the treated raceway and sample points A and B during and after application. The predicted chloramine-T concentration at each sampling interval was similar to the measured chloramine-T concentration at sample points A and B and was generally bounded by the measured 90% confidence intervals. The predicted average chloramine-T concentrations at sample points A and B (2.8 and 1.3 mg/l, respectively) were not significantly different (P > 0.05) from the average measured chloramine-T concentrations (2.7 and 1.3 mg/l, respectively). The close agreement between our predicted and measured chloramine-T concentrations indicates either of the dilution models could be used to adequately predict the chloramine-T environmental introduction concentration in Upper Midwest Environmental Sciences Center effluent.
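The dilution logic can be sketched as a steady-state mass balance (illustrative only; the study's models were validated against rhodamine WT hydraulics, and the idealized square pulse below ignores raceway mixing and tailing):

    def plateau_concentration(c_treat, flow_fraction):
        # Instantaneous concentration once treated water fully mixes with
        # the remaining effluent flow.
        return c_treat * flow_fraction

    def period_average(c_treat, flow_fraction, t_exposure, t_discharge):
        # Average concentration over the whole discharge period, assuming
        # an idealized square concentration pulse.
        return plateau_concentration(c_treat, flow_fraction) * t_exposure / t_discharge

    # Sample point A carries 47% of total flow; treatment 20 mg/l for 60 min,
    # discharge monitored over ~180 min (values from the abstract).
    print(plateau_concentration(20.0, 0.47))        # plateau estimate, mg/l
    print(period_average(20.0, 0.47, 60.0, 180.0))  # rough period average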
Improving the quality of extracting dynamics from interspike intervals via a resampling approach
NASA Astrophysics Data System (ADS)
Pavlova, O. N.; Pavlov, A. N.
2018-04-01
We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.
Drop-off Detection with the Long Cane: Effects of Different Cane Techniques on Performance
Kim, Dae Shik; Emerson, Robert Wall; Curtis, Amy
2010-01-01
This study compared the drop-off detection performance with the two-point touch and constant contact cane techniques using a repeated-measures design with a convenience sample of 15 cane users with visual impairments. The constant contact technique was superior to the two-point touch technique in the drop-off detection rate and the 50% detection threshold. The findings may help an orientation and mobility instructor select an appropriate technique for a particular client or training situation. PMID:21209791
Elements of Motivational Structure for Studying Mechanical Engineering
ERIC Educational Resources Information Center
Dubreta, Nikša; Miloš, Damir
2017-01-01
The article presents the findings on students' reasons for studying mechanical engineering. These reasons were covered in terms of extrinsic and intrinsic motivation additionally related to selected independent variables of the sample--students' secondary school Grade Point Average, their gender and the socio-economic status. The research was…
Catholic High Schools and Their Finances, 1980.
ERIC Educational Resources Information Center
Bredeweg, Frank H.
The information contained in this report was drawn from data provided by a national sample of 200 Catholic high schools. The schools were selected to reflect types (private, Catholic, diocesan, and parish schools), enrollment sizes, and geographic location. The report addresses these areas. First, information is provided to point out the financial…
40 CFR 63.11466 - What are the performance test requirements for new and existing sources?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (Appendix A-1) to select sampling port locations and the number of traverse points in each stack or duct... of the stack gas. (iii) Method 3, 3A, or 3B (Appendix A-2) to determine the dry molecular weight of...
Preparation of Solid Derivatives by Differential Scanning Calorimetry.
ERIC Educational Resources Information Center
Crandall, E. W.; Pennington, Maxine
1980-01-01
Describes the preparation of solid derivatives of selected aldehydes and ketones, alcohols, amines, phenols, haloalkanes, and tertiary amines by differential scanning calorimetry. The technique is advantageous because formation of the reaction product occurs and the melting point of the product is obtained on the same sample in a short time with no additional purification…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a "smoking gun" signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interactions inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (~100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consists only of atmospheric background.
Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN
2010-08-03
A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes and creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location in the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
Rubio-Arias, Hector; Rey, Nora I; Quintana, Rey M; Nevarez, G Virginia; Palacios, Oskar
2011-06-01
Lago de Colina (Colina Lake) is located about 180 km south of the city of Chihuahua (Mexico), and during the Semana Santa (Holy Week) vacation period its recreational use is high. The objective of this study was to quantify coliform and heavy metal levels in this water body before and after the Holy Week vacation period in 2010. Twenty sampling points were randomly selected, and two water samples were collected at each point: near the surface (0.30 m) and at 1 m depth. After the Holy Week vacation the same twenty points were sampled at the same depths. A total of 80 water samples were therefore analyzed for fecal and total coliforms and for levels of the following metals: Al, As, B, Ca, Cr, Cu, Fe, K, Mg, Mn, Na, Ni, Pb, Se, Si and Zn. It was hypothesized that domestic tourism contaminates this water body and, as a consequence, could have a negative impact on visitor health. An analysis of variance (ANOVA) was performed for each element and its interactions under a factorial design in which factor A was sampling date and factor B was sampling depth. Fecal coliforms were detected at only eight sampling points in the first week, but after Holy Week both fecal and total coliforms were detected at most sampling points. The concentrations of Al, B, Na, Ni and Se differed significantly only by date (factor A). The levels of Cr, Cu, K and Mg differed by both date and depth, but the two-factor interaction was not significant. The amounts of Ca and Zn differed significantly by date, depth and their interaction. No significant differences were found for any factor or interaction for As, Fe and Mn. Given these consistent results, it is concluded that local tourism is contaminating the recreational area of Colina Lake, Chihuahua, Mexico.
Chandler, Mark A.; Goggin, David J.; Horne, Patrick J.; Kocurek, Gary G.; Lake, Larry W.
1989-01-01
For making rapid, non-destructive permeability measurements in the field, a portable minipermeameter of the kind having a manually-operated gas injection tip is provided with a microcomputer system which operates a flow controller to precisely regulate gas flow rate to a test sample, and reads a pressure sensor which senses the pressure across the test sample. The microcomputer system automatically turns on the gas supply at the start of each measurement, senses when a steady-state is reached, collects and records pressure and flow rate data, and shuts off the gas supply immediately after the measurement is completed. Preferably temperature is also sensed to correct for changes in gas viscosity. The microcomputer system may also provide automatic zero-point adjustment, sensor calibration, over-range sensing, and may select controllers, sensors, and set-points for obtaining the most precise measurements. Electronic sensors may provide increased accuracy and precision. Preferably one microcomputer is used for sensing instrument control and data collection, and a second microcomputer is used which is dedicated to recording and processing the data, selecting the sensors and set-points for obtaining the most precise measurements, and instructing the user how to set-up and operate the minipermeameter. To provide mass data collection and user-friendly operation, the second microcomputer is preferably a lap-type portable microcomputer having a non-volatile or battery-backed CMOS memory.
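The "senses when a steady-state is reached" step can be pictured with a simple numerical test. The sketch below is one plausible approach, not the instrument's firmware: it declares steady state once the relative spread of recent pressure readings falls below a tolerance; the window length and tolerance are assumptions.

import numpy as np

def steady_state(readings, window=20, tol=0.002):
    # declare steady when the relative drift over the last `window`
    # samples falls below `tol`; all numbers are assumptions
    if len(readings) < window:
        return False
    recent = np.asarray(readings[-window:])
    drift = np.ptp(recent) / max(abs(recent.mean()), 1e-12)
    return drift < tol

# usage with a synthetic pressure trace that settles exponentially
trace = list(100.0 * (1.0 - np.exp(-np.linspace(0, 8, 200))))
print(steady_state(trace))   # True once the trace has flattened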
Wang, X; Chauvat, M-P; Ruterana, P; Walther, T
2017-12-01
We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
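The point-by-point selection of a k*-factor from the local Ga K/L ratio can be pictured as a per-pixel lookup. The sketch below interpolates an invented calibration table; the real method derives its self-consistent k*-factors from the absorption physics, so every number here is a placeholder.

import numpy as np

# Invented calibration: Ga K/L intensity ratios and the matching k*-factors.
ratio_table = np.array([2.0, 3.0, 4.5, 6.0, 8.0])       # Ga K/L ratios (assumed)
kstar_table = np.array([1.10, 1.18, 1.30, 1.45, 1.65])  # k*-factors (assumed)

# stand-in for a measured 64 x 64 ratio map from a spectrum image
measured_ratio = np.random.uniform(2.0, 8.0, size=(64, 64))
kstar_map = np.interp(measured_ratio, ratio_table, kstar_table)  # per-pixel k*

# composition then follows from the Cliff-Lorimer relation,
# C_In / C_Ga = k* x I_In / I_Ga, applied pixel by pixel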
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select as the next sample point the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need for and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
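The greedy sampling loop described here reduces, in skeleton form, to: fit a surrogate on the selected points, score every grid point by relative error against the benchmark, and add the worst point until a tolerance or budget is hit. A minimal sketch using scikit-learn's Gaussian process regressor as a stand-in Kriging model on a toy scalar output; the real framework compares frequency responses across all input-output channels, and the grid, bounds, and benchmark here are invented.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
grid = rng.uniform([0.3, 5000.0], [0.9, 40000.0], size=(200, 2))  # (Mach, altitude), assumed
grid_n = (grid - grid.min(0)) / np.ptp(grid, 0)                   # normalized features for the GP
benchmark = lambda Xn: np.sin(3 * Xn[:, 0]) + 0.5 * Xn[:, 1]      # toy stand-in for the database

selected = list(rng.choice(len(grid), size=5, replace=False))     # seed samples
budget, tol = 40, 1e-3
while len(selected) < budget:
    gp = GaussianProcessRegressor(normalize_y=True).fit(grid_n[selected], benchmark(grid_n[selected]))
    truth = benchmark(grid_n)
    rel_err = np.abs(gp.predict(grid_n) - truth) / (np.abs(truth) + 1e-9)
    rel_err[selected] = 0.0                 # never re-pick an already-selected point
    worst = int(np.argmax(rel_err))         # greedy: point with worst relative error
    if rel_err[worst] < tol:
        break                               # tolerance met before the budget
    selected.append(worst)
print(f"kept {len(selected)} of {len(grid)} grid points")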
NASA Astrophysics Data System (ADS)
Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem
2017-07-01
All of the quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of triggers and the LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0 (nonoccurrence) and 1 (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1's and 0's to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used for testing the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1's and their surrounding neighborhood; a randomly selected group of landslide sites and their neighborhood are considered in the analyses similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for model performance.
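Of the three strategies, proportional random sampling is the simplest to state in code: fix the 1/0 balance of the training set rather than inherit it from the inventory. A minimal sketch with invented data; the function name and the 50/50 default are assumptions, not the authors' settings.

import numpy as np

def proportional_random_sample(X, y, n_total, ratio_ones=0.5, seed=0):
    # draw a training set whose 1/0 balance is set by `ratio_ones`
    # instead of by the raw landslide inventory
    rng = np.random.default_rng(seed)
    ones, zeros = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    n1 = min(int(round(n_total * ratio_ones)), len(ones))
    n0 = min(n_total - n1, len(zeros))
    idx = np.concatenate([rng.choice(ones, n1, replace=False),
                          rng.choice(zeros, n0, replace=False)])
    rng.shuffle(idx)
    return X[idx], y[idx]

# usage on synthetic data: ~200 occurrences among 5000 map cells
X = np.random.rand(5000, 6)
y = (np.random.rand(5000) < 0.04).astype(int)
Xtr, ytr = proportional_random_sample(X, y, n_total=400)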
Kazmerski, Lawrence L.
1989-01-01
A method and apparatus is disclosed for obtaining and mapping chemical compositional data for solid devices. It includes a SIMS mass analyzer or similar system capable of being rastered over a surface of the solid to sample the material at a pattern of selected points, as the surface is being eroded away by sputtering or a similar process. The data for each point sampled in a volume of the solid is digitally processed and indexed by element or molecule type, exact spatial location within the volume, and the concentration levels of the detected element or molecule types. This data can then be recalled and displayed for any desired planar view in the volume.
Johnson, Jared M; Im, Soohyun; Windl, Wolfgang; Hwang, Jinwoo
2017-01-01
We propose a new scanning transmission electron microscopy (STEM) technique that can realize the three-dimensional (3D) characterization of vacancies and of lighter and heavier dopants with high precision. Using multislice STEM imaging and diffraction simulations of β-Ga2O3 and SrTiO3, we show that selecting a small range of low scattering angles can make the contrast of the defect-containing atomic columns substantially more depth-dependent. The origin of the depth dependence is the de-channeling of electrons due to the existence of a point defect in the atomic column, which creates extra "ripples" at low scattering angles. The highest contrast of the point defect can be achieved when the de-channeling signal is captured using the 20-40 mrad detection angle range. The effects of sample thickness, crystal orientation, local strain, probe convergence angle, and experimental uncertainty on the depth-dependent contrast of the point defect are also discussed. The proposed technique therefore opens new possibilities for highly precise 3D structural characterization of individual point defects in functional materials. Copyright © 2016 Elsevier B.V. All rights reserved.
Comparative analytical study of the selected wine varieties grown in Montenegro.
Đorđević, Neda O; Novaković, Miroslav M; Pejin, Boris; Mutić, Jelena J; Vajs, Vlatka E; Pajović, Snežana B; Tešević, Vele V
2017-08-01
Samples of the selected red wine varieties grown in Montenegro (Merlot, Cabernet Sauvignon and Vranac; vintages 2010-2012) were compared according to total phenolic content, anti-DPPH radical activity, phenolic profile and elemental composition. All the samples showed profound anti-DPPH radical activity, owing to their high content of total phenolic compounds (R = 0.92). The most abundant phenolics were catechin and gallic acid, with the highest values recorded for Merlot 2012 (43.22 and 28.65 mg/L, respectively). In addition, the content of essential elements, including the potentially toxic ones, was within a healthy (safe) level for all the samples analysed. Overall, the study points to Merlot as the best-quality variety of the three, though all three may be used as safe and health-promoting nutritional products.
O'Quinn, Travis G; Brooks, J Chance; Miller, Markus F
2015-02-01
A consumer study was conducted to determine palatability ratings of beef tenderloin steaks from USDA Choice, USDA Select, and USDA Select with marbling scores from Slight 50 to 100 (USDA High Select) cooked to various degrees of doneness. Steaks were randomly assigned to 1 of 3 degree of doneness categories: very-rare, medium-rare, or well-done. Consumers (N = 315) were screened for preference of degree of doneness and fed 4 samples of their preferred doneness (a warm-up and one from each USDA quality grade treatment in a random order). Consumers evaluated steaks on an 8-point verbally anchored hedonic scale for tenderness, juiciness, flavor, and overall like as well as rated steaks as acceptable or unacceptable for all palatability traits. Quality grade had no effect (P > 0.05) on consumer ratings for tenderness, juiciness, flavor, and overall like scores, with all traits averaging above a 7 ("like very much") on the 8-point scale. In addition, no differences (P > 0.05) were found in the percentage of samples rated as acceptable for all palatability traits, with more than 94% of samples rated acceptable for each trait in all quality grades evaluated. Steaks cooked to well-done had lower (P < 0.05) juiciness scores than steaks cooked to very-rare or medium-rare and were rated lower for tenderness (P < 0.05) than steaks cooked to a very-rare degree of doneness. Results indicate consumers were not able to detect differences in tenderness, juiciness, flavor, or overall like among beef tenderloin steaks from USDA Choice and Select quality grades. © 2015 Institute of Food Technologists®
[Development of a microenvironment test chamber for airborne microbe research].
Zhan, Ningbo; Chen, Feng; Du, Yaohua; Cheng, Zhi; Li, Chenyu; Wu, Jinlong; Wu, Taihu
2017-10-01
Airborne microbes are one of the most important indicators of environmental cleanliness. However, the particular requirements of clean operating environments and controlled experimental environments often limit airborne microbe research. This paper describes the design and implementation of a microenvironment test chamber for airborne microbe research under normal test conditions. Numerical simulation with Fluent showed that airborne microbes were evenly dispersed in the upper part of the test chamber and had a bottom-up concentration growth distribution. Based on the simulation results, a verification experiment was carried out by selecting 5 sampling points at different spatial positions in the test chamber. Experimental results showed that average particle concentrations at all sampling points reached 10⁷ counts/m³ after 5 minutes of dispersal of Staphylococcus aureus, and all sampling points showed a consistent concentration distribution. The concentration of airborne microbes in the upper chamber was slightly higher than that in the middle chamber, which in turn was slightly higher than that in the bottom chamber. This is consistent with the results of the numerical simulation and shows that the system can be well used for airborne microbe research.
Observation of valley-selective microwave transport in photonic crystals
NASA Astrophysics Data System (ADS)
Ye, Liping; Yang, Yuting; Hong Hang, Zhi; Qiu, Chunyin; Liu, Zhengyou
2017-12-01
Recently, the discrete valley degree of freedom has attracted extensive attention in condensed matter physics. Here, we present an experimental observation of the intriguing valley transport for microwaves in photonic crystals, including the bulk valley transport and the valley-projected edge modes along the interface separating different photonic insulating phases. For both cases, valley-selective excitations are realized by a point-like chiral source located at proper locations inside the samples. Our results are promising for exploring unprecedented routes to manipulate microwaves.
Luyckx, K; Dewulf, J; Van Weyenberg, S; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K
2015-04-01
Cleaning and disinfection of the broiler stable environment is an essential part of farm hygiene management. Adequate cleaning and disinfection is essential for prevention and control of animal diseases and zoonoses. The goal of this study was to shed light on the dynamics of microbiological and non-microbiological parameters during the successive steps of cleaning and disinfection and to select the most suitable sampling methods and parameters to evaluate cleaning and disinfection in broiler houses. The effectiveness of cleaning and disinfection protocols was measured in six broiler houses on two farms through visual inspection, adenosine triphosphate hygiene monitoring and microbiological analyses. Samples were taken at three time points: 1) before cleaning, 2) after cleaning, and 3) after disinfection. Before cleaning and after disinfection, air samples were taken in addition to agar contact plates and swab samples taken from various sampling points for enumeration of total aerobic flora, Enterococcus spp., and Escherichia coli and the detection of E. coli and Salmonella. After cleaning, air samples, swab samples, and adenosine triphosphate swabs were taken and a visual score was also assigned for each sampling point. The mean total aerobic flora determined by swab samples decreased from 7.7±1.4 to 5.7±1.2 log CFU/625 cm² after cleaning and to 4.2±1.6 log CFU/625 cm² after disinfection. Agar contact plates were used as the standard for evaluating cleaning and disinfection, but in this study they were found to be less suitable than swabs for enumeration. In addition to measuring total aerobic flora, Enterococcus spp. seemed to be a better hygiene indicator to evaluate cleaning and disinfection protocols than E. coli. All stables were Salmonella negative, but the detection of its indicator organism E. coli provided additional information for evaluating cleaning and disinfection protocols. Adenosine triphosphate analyses gave additional information about the hygiene level of the different sampling points. © 2015 Poultry Science Association Inc.
Christensen, V.G.; Pope, L.M.
1997-01-01
A network of 34 stream sampling sites was established in the 1,005-square-mile Cheney Reservoir watershed, south-central Kansas, to evaluate spatial variability in concentrations of selected water-quality constituents during low flow. Land use in the Cheney Reservoir watershed is almost entirely agricultural, consisting of pasture and cropland. Cheney Reservoir provides 40 to 60 percent of the water needs for the city of Wichita, Kansas. Sampling sites were selected to determine the relative contribution of point and nonpoint sources of water-quality constituents to streams in the watershed and to identify areas of potential water-quality concern. Water-quality constituents of interest included dissolved solids and major ions, nitrogen and phosphorus nutrients, atrazine, and fecal coliform bacteria. Water from the 34 sampling sites was sampled once in June and once in September 1996 during Phase I of a two-phase study to evaluate water-quality constituent concentrations and loading characteristics in selected subbasins within the watershed and into and out of Cheney Reservoir. Information summarized in this report pertains to Phase I and was used in the selection of six long-term monitoring sites for Phase II of the study. The average low-flow constituent concentrations in water collected during Phase I from all sampling sites were 671 milligrams per liter for dissolved solids, 0.09 milligram per liter for dissolved ammonia as nitrogen, 0.85 milligram per liter for dissolved nitrite plus nitrate as nitrogen, 0.19 milligram per liter for total phosphorus, 0.20 microgram per liter for dissolved atrazine, and 543 colonies per 100 milliliters of water for fecal coliform bacteria. Generally, these constituents were of nonpoint-source origin and, with the exception of dissolved solids, probably were related to agricultural activities. Dissolved solids probably occur naturally as the result of the dissolution of rocks and ancient marine sediments containing large salt deposits. Nutrients also may have resulted from point-source discharges from wastewater-treatment plants. An examination of water-quality characteristics during low flow in the Cheney Reservoir watershed provided insight into the spatial variability of water-quality constituents and allowed for between-site comparisons under stable-flow conditions; identified areas of the watershed that may be of particular water-quality concern; provided a preliminary evaluation of contributions from point and nonpoint sources of contamination; and identified areas of the watershed where long-term monitoring may be appropriate to quantify perceived water-quality problems.
Lahou, Evy; Jacxsens, Liesbeth; Van Landeghem, Filip; Uyttendaele, Mieke
2014-08-01
Food service operations are confronted with a diverse range of raw materials and served meals. The implementation of a microbial sampling plan in the framework of verification of suppliers and of their own production process (functionality of the prerequisite and HACCP programs) demands the selection of food products and sampling frequencies. However, these are often selected without a well-described, scientifically underpinned sampling plan. Therefore, an approach for setting up a focused sampling plan, enabled by a microbial risk categorization of food products, is presented for both incoming raw materials and meals served to consumers. The sampling plan was implemented as a case study during a one-year period in an institutional food service operation to test the feasibility of the chosen approach. This resulted in 123 samples of raw materials and 87 samples of meal servings (focused on high-risk categorized food products), which were analyzed for spoilage bacteria, hygiene indicators and foodborne pathogens. Although sampling plans are intrinsically limited in assessing the quality and safety of sampled foods, the plan was shown to be useful in revealing major non-compliances and opportunities to improve the food safety management system in place. Points of attention deduced in the case study were control of Listeria monocytogenes in raw meat spread and raw fish, as well as the overall microbial quality of served sandwiches and salads. Copyright © 2014 Elsevier Ltd. All rights reserved.
Data on microscale atmospheric pollution of Bolshoy Kamen town (Primorsky region, Russia)
NASA Astrophysics Data System (ADS)
Kholodov, Aleksei; Ugay, Sergey; Drozd, Vladimir; Maiss, Natalia; Golokhvast, Kirill
2017-10-01
The paper discusses a study of atmospheric particulate matter in the town of Bolshoy Kamen by means of laser granulometry of snow water samples. Snow sampling points were selected close to major enterprises, along the main streets and roads of the town, and in the residential area. The near-ground layer of atmospheric air of the town contains particulate matter of three main size classes: under 10 microns, 10-50 microns and over 700 microns. It is shown that the atmosphere of this town is lightly polluted with particles under 10 μm (PM10). Only in 5 sampling points out of 11 did we find microparticles potentially hazardous to human health in significant quantities, from 16.2% to 34.6%. Over most of the town's territory, large particles (over 400 μm) dominate, reaching 79.2%. We conclude that, judging by the particle-size analysis of snow water samples, Bolshoy Kamen can be considered safe in terms of the presence of particles under 10 μm (PM10) in the atmosphere.
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10⁴ points within minutes with the use of an average notebook computer.
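For reference, the standard sample entropy calculation that such joint algorithms accelerate looks as follows; this plain-vanilla version is quadratic in the series length and does not include the paper's shared-template speed-ups.

import numpy as np

def sample_entropy(x, m=2, r=None):
    # SampEn(m, r): -ln(A/B), where B counts template pairs of length m
    # within Chebyshev distance r, and A does the same for length m+1
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)  # Chebyshev distance
            c += np.count_nonzero(d <= r)                         # pairs i < j only
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

print(sample_entropy(np.random.randn(1000)))   # ~2.2 for white noise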
Roohparvar, Rasool; Taher, Mohammad Ali; Mohadesi, Alireza
2008-01-01
For the simultaneous determination of nickel(II) and copper(II) in plant samples, a rapid and accurate method was developed. In this method, solid-phase extraction (SPE) and first-order derivative spectrophotometry (FDS) are combined, and the result is coupled with the H-point standard addition method (HPSAM). Compared with normal spectrophotometry, derivative spectrophotometry offers the advantages of increased selectivity and sensitivity. Because no pretreatment of the sample is needed, the spectrophotometric method is easy, but its high detection limit makes it impractical on its own. To decrease the detection limit, it is suggested to combine spectrophotometry with a preconcentration method such as SPE. In the present work, after separation and preconcentration of Ni(II) and Cu(II) on modified clinoptilolite zeolite loaded with 2-[1-(2-hydroxy-5-sulfophenyl)-3-phenyl-5-formazano]-benzoic acid monosodium salt (zincon) as a selective chromogenic reagent, FDS-HPSAM, a simple and selective spectrophotometric method, was applied for the simultaneous determination of these ions. Under optimum conditions, the detection limits in original solutions are 0.7 and 0.5 ng/mL for nickel and copper, respectively. The linear concentration ranges of the proposed method for nickel and copper ions in original solutions are 1.1 to 3.0 × 10³ and 0.9 to 2.0 × 10³ ng/mL, respectively. The recommended procedure was applied to the successful determination of Cu(II) and Ni(II) in standard and real samples.
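The H-point standard addition step admits a compact illustration: regress the signals at two wavelengths against the added standard concentration, and read the analyte concentration from the abscissa of the intersection of the two lines. A sketch on synthetic data; all signals and sensitivities are invented.

import numpy as np

added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])               # added standard, ug/mL (assumed)
A1 = 0.30 + 0.30 * added + np.random.normal(0, 0.002, 5)  # derivative signal at lambda1
A2 = 0.15 + 0.15 * added + np.random.normal(0, 0.002, 5)  # derivative signal at lambda2

m1, b1 = np.polyfit(added, A1, 1)    # slope, intercept at lambda1
m2, b2 = np.polyfit(added, A2, 1)    # slope, intercept at lambda2
c_H = (b2 - b1) / (m1 - m2)          # intersection abscissa; analyte conc = -c_H
print(f"estimated analyte concentration: {-c_H:.3f} ug/mL")   # ~1.0 by construction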
Including granulometric sediment coastal data composition into the Black Sea GIS
NASA Astrophysics Data System (ADS)
Zhuk, Elena; Khaliulin, Alexey; Krylenko, Marina; Krylenko, Viacheslav; Zodiatis, George; Nikolaidis, Marios; Nikolaidis, Andreas
2017-09-01
The modular structure of the Black Sea GIS allows its functionality to be extended by including new data types and defining new procedures for accessing them, visualizing them, and integrating them with existing data through joint processing and representation. The Black Sea GIS is released as free software; MapServer is used as the mapping service and the MySQL DBMS handles the relational data. A new feature is the ability to include coastal data obtained at SB SIO RAS. The data represent the granulometric composition of the Anapa bay-bar sediments. The Anapa bay-bar is an accumulative sand form (about 50 km long) located on the northwest Russian Black Sea coast. The entire bay-bar, and especially its southern part with sand beaches 50-200 m wide, is intensively used for recreation. This work is based on the results of field studies of 2010-2014 in the southern part of the Anapa bay-bar carried out by scientists of the Shirshov Institute of Oceanology RAS. Since the shore under consideration has no clearly pronounced reference points, "virtual" points located within 1 km of each other were selected. Transversal profiles cross these points. The granulometric composition was studied along 45 profiles. The samples taken in every profile were from the most characteristic morphological parts of the beach. In this study we used shoreline zone samples. Twenty-one granule fractions (mm) were separated in the laboratory. The module which processes coastal data allows the user to select coastal data by territory/region and granulometric sediment composition, and to visualize coastal maps with user-selected features combined with other GIS data.
Herptofaunal species richness responses to forest landscape structure in Arkansas
Craig Loehle; T. Bently Wigley; Paul A. Shipman; Stanley F. Fox; Scott Rutzmoser; Ronald E. Thill; M. Anthony Melchiors
2005-01-01
Species accumulation curves were used to study relationships between herpetofaunal richness and habitat characteristics on four watersheds in Arkansas that differed markedly with respect to management intensity. Selected habitat characteristics were estimated for stands containing the sample points and within buffers with radii of 250 m, 500 m, and 1 km surrounding the...
Raman spectral imaging for quantitative contaminant evaluation in skim milk powder
USDA-ARS?s Scientific Manuscript database
This study uses a point-scan Raman spectral imaging system for quantitative detection of melamine in milk powder. A sample depth of 2 mm and corresponding laser intensity of 200 mW were selected after evaluating the penetration of a 785 nm laser through milk powder. Horizontal and vertical spatial r...
VizieR Online Data Catalog: K giant stars along Sagittarius streams (Ren+, 2017)
NASA Astrophysics Data System (ADS)
Ren, H.-B.; Shi, W.-B.; Zhang, X.; Tang, Y.-K.; Zhang, Y.; Hou, Y.-H.; Wang, Y.-F.
2017-08-01
Law & Majewski (2010ApJ...714..229L) provided a Sgr model that has 10⁵ points with angular position, distance and radial velocity. By using the model of Law & Majewski (2010ApJ...714..229L), we selected K giant samples belonging to the leading Sgr stream. (1 data file).
How Broad Liberal Arts Training Produces Phd Economists: Carleton's Story
ERIC Educational Resources Information Center
Bourne, Jenny; Grawe, Nathan D.
2015-01-01
Several recent studies point to strong performance in economics PhD programs of graduates from liberal arts colleges. While every undergraduate program is unique and the likelihood of selection bias combines with small sample sizes to caution against drawing strong conclusions, the authors reflect on their experience at Carleton College to…
Raman spectral imaging technique on detection of melamine in skim milk powder
USDA-ARS?s Scientific Manuscript database
A point-scan Raman spectral imaging system was used for quantitative detection of melamine in milk powder. A sample depth of 2 mm and corresponding laser intensity of 200 mW were selected after evaluating the penetration of a 785 nm laser through milk powder. Horizontal and vertical spatial resoluti...
Geochemical studies of rocks from North Ray Crater, Apollo 16
NASA Technical Reports Server (NTRS)
Lindstrom, M. M.; Salpas, P. A.
1982-01-01
The samples included in the study were all collected as individual specimens from Station 11 near the rim of North Ray Crater. Samples were selected to cover the entire range of rock types from anorthosites to subophitic impact melts, giving particular attention to the feldspathic breccias which predominate at the site. The chemical composition of North Ray Crater rocks is discussed along with the compositional variations among North Ray Crater samples, and the relationships between North Ray Crater and other Apollo 16 stations. It is pointed out that the primary objective in sampling the Apollo 16 site was to characterize materials from the Cayley Plains and Descartes Highlands.
Optical selection and collection of DNA fragments
Roslaniec, Mary C.; Martin, John C.; Jett, James H.; Cram, L. Scott
1998-01-01
Optical selection and collection of DNA fragments. The present invention includes the optical selection and collection of large (>μg) quantities of clonable, chromosome-specific DNA from a sample of chromosomes. Chromosome selection is based on selective, irreversible photoinactivation of unwanted chromosomal DNA. Although more general procedures may be envisioned, the invention is demonstrated by processing chromosomes in a conventional flow cytometry apparatus, but where no droplets are generated. All chromosomes in the sample are first stained with at least one fluorescent analytic dye and bonded to a photochemically active species which can render chromosomal DNA unclonable if activated. After passing through analyzing light beam(s), unwanted chromosomes are irradiated using light which is absorbed by the photochemically active species, thereby causing photoinactivation. As desired chromosomes pass this photoinactivation point, the inactivating light source is deflected by an optical modulator; hence, desired chromosomes are not photoinactivated and remain clonable. The selection and photoinactivation processes take place on a microsecond timescale. By eliminating droplet formation, chromosome selection rates 50 times greater than those possible with conventional chromosome sorters may be obtained. Thus, usable quantities of clonable DNA from any source thereof may be collected.
Hartmann, Georg; Schuster, Michael
2013-01-25
The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NPs ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L⁻¹ are achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L⁻¹. The precision of the method, expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L⁻¹, is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies.
Dowd, J E; Weber, I; Rodriguez, B; Piret, J M; Kwok, K E
1999-05-20
The selection of medium feed rates for perfusion bioreactors represents a challenge for process optimization, particularly in bioreactors that are sampled infrequently. When the present and immediate future of a bioprocess can be adequately described, predictive control can minimize deviations from set points in a manner that can maximize process consistency. Predictive control of perfusion hollow-fiber bioreactors was investigated in a series of hybridoma cell cultures that compared operator control to computer estimation of feed rates. Adaptive software routines were developed to estimate the current and predict the future glucose uptake and lactate production of the bioprocess at each sampling interval. The current and future glucose uptake rates were used to select the perfusion feed rate in a designed response to deviations from the set point values. The routines presented a graphical user interface through which the operator was able to view the up-to-date culture performance and assess the model description of the immediate future culture performance. In addition, fewer samples were taken in the computer-estimated cultures, reducing labor and analytical expense. The use of these predictive controller routines and the graphical user interface decreased the glucose and lactate concentration variances up to sevenfold, and antibody yields increased by 10% to 43%. Copyright 1999 John Wiley & Sons, Inc.
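The core control calculation can be stripped to a one-step sketch: estimate the current glucose uptake from the last two samples, then choose the perfusion rate that steers the predicted concentration back to the set point over a chosen horizon. This is a crude single-step mass balance, not the published adaptive routines; all names and numbers are assumptions.

def next_feed_rate(c_now, c_prev, dt_h, feed_conc, volume_l, setpoint, horizon_h=24.0):
    # uptake estimated from the last two glucose samples, assumed constant (g/h)
    uptake = (c_prev - c_now) / dt_h * volume_l
    # choose F so the predicted concentration reaches the set point after
    # `horizon_h`, approximating the outflow concentration by the set point:
    # V*dc/dt = F*feed_conc - F*c - uptake
    needed = (setpoint - c_now) * volume_l / horizon_h + uptake   # g/h to supply
    return max(needed / (feed_conc - setpoint), 0.0)              # perfusion rate, L/h

# usage: 100 mL hollow-fiber culture sampled daily, glucose drifting below target
F = next_feed_rate(c_now=1.8, c_prev=2.4, dt_h=24.0, feed_conc=4.5,
                   volume_l=0.1, setpoint=2.0)
print(f"suggested perfusion rate: {F * 1000:.1f} mL/h")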
Selection and Characterization of Vegetable Crop Cultivars for use in Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Langhans, Robert W.
1997-01-01
Cultivar evaluation for controlled environments is a lengthy and multifaceted activity. The chapters of this thesis cover eight steps preparatory to yield trials, and the final step of cultivar selection after data are collected. The steps are as follows: 1. Examination of the literature on the crop and crop cultivars to assess the state of knowledge. 2. Selection of standard cultivars with which to explore crop response to major growth factors and determine set points for screening and, later, production. 3. Determination of practical growing techniques for the crop in controlled environments. 4. Design of experiments for determination of crop responses to the major growth factors, with particular emphasis on photoperiod, daily light integral and air temperature. 5. Developing a way of measuring yield appropriate to the crop type by sampling through the harvest period and calculating a productivity function. 6. Narrowing down the pool of cultivars and breeding lines according to a set of criteria and breeding history. 7. Determination of environmental set points for cultivar evaluation through calculating production cost as a function of set points and size of target facility. 8. Design of screening and yield trial experiments emphasizing efficient use of space. 9. Final evaluation of cultivars after data collection, in terms of production cost and value to the consumer. For each of the steps, relevant issues are addressed. In selecting standards to determine set points for screening, set points that optimize cost of production for the standards may not be applicable to all cultivars. Production of uniform and equivalent-sized seedlings is considered as a means of countering possible differences in seed vigor. Issues of spacing and re-spacing are also discussed.
SDSS-IV MaNGA IFS Galaxy Survey—Survey Design, Execution, and Initial Data Quality
NASA Astrophysics Data System (ADS)
Yan, Renbin; Bundy, Kevin; Law, David R.; Bershady, Matthew A.; Andrews, Brett; Cherinka, Brian; Diamond-Stanic, Aleksandar M.; Drory, Niv; MacDonald, Nicholas; Sánchez-Gallego, José R.; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Zhang, Kai; Aragón-Salamanca, Alfonso; Belfiore, Francesco; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael R.; Brownstein, Joel; Cappellari, Michele; D'Souza, Richard; Emsellem, Eric; Fu, Hai; Gaulme, Patrick; Graham, Mark T.; Goddard, Daniel; Gunn, James E.; Harding, Paul; Jones, Amy; Kinemuchi, Karen; Li, Cheng; Li, Hongyu; Maiolino, Roberto; Mao, Shude; Maraston, Claudia; Masters, Karen; Merrifield, Michael R.; Oravetz, Daniel; Pan, Kaike; Parejko, John K.; Sanchez, Sebastian F.; Schlegel, David; Simmons, Audrey; Thanjavur, Karun; Tinker, Jeremy; Tremonti, Christy; van den Bosch, Remco; Zheng, Zheng
2016-12-01
The MaNGA Survey (Mapping Nearby Galaxies at Apache Point Observatory) is one of three core programs in the Sloan Digital Sky Survey IV. It is obtaining integral field spectroscopy for 10,000 nearby galaxies at a spectral resolution of R ∼ 2000 from 3622 to 10354 Å. The design of the survey is driven by a set of science requirements on the precision of estimates of the following properties: star formation rate surface density, gas metallicity, stellar population age, metallicity, and abundance ratio, and their gradients; stellar and gas kinematics; and enclosed gravitational mass as a function of radius. We describe how these science requirements set the depth of the observations and dictate sample selection. The majority of targeted galaxies are selected to ensure uniform spatial coverage in units of effective radius (Re) while maximizing spatial resolution. About two-thirds of the sample is covered out to 1.5 Re (Primary sample), and one-third of the sample is covered to 2.5 Re (Secondary sample). We describe the survey execution with details that would be useful in the design of similar future surveys. We also present statistics on the achieved data quality, specifically the point-spread function, sampling uniformity, spectral resolution, sky subtraction, and flux calibration. For our Primary sample, the median r-band signal-to-noise ratio is ∼70 per 1.4 Å pixel for spectra stacked between 1 Re and 1.5 Re. Measurements of various galaxy properties from the first-year data show that we are meeting or exceeding the defined requirements for the majority of our science goals.
Salgueiro-González, N; Turnes-Carou, I; Viñas, L; Besada, V; Muniategui-Lorenzo, S; López-Mahía, P; Prada-Rodríguez, D
2016-05-15
Wild mussels (Mytilus galloprovincialis) were selected as bioindicators of chemical pollution to evaluate the occurrence and spatial distribution of five endocrine disrupting compounds in the Spanish Atlantic coast and Bay of Biscay. A total of 24 samples were collected in May 2011 and analysed by selective pressurized liquid extraction followed by liquid chromatography tandem mass spectrometry determination. Branched alkylphenols (4-tert-octylphenol and nonylphenol) were determined in more than 90% of the analysed samples, whereas the presence of linear alkylphenols (4-n-octylphenol and 4-n-nonylphenol) was scarcely detected (<12% of the samples). Wastewater treatment plant discharges and nautical, fishing and shipping activities were considered the primary sources of contamination by alkylphenols. Bisphenol A was found in 16% of the analysed samples, associated with point-source industrial discharges. A total endocrine disrupting compound (alkylphenols and bisphenol A) average concentration of 604 ng g⁻¹ dw was calculated, and nonylphenol was the main contributor at almost all sampling points. Copyright © 2016 Elsevier Ltd. All rights reserved.
The performance of sample selection estimators to control for attrition bias.
Grasdal, A
2001-07-01
Sample attrition is a potential source of selection bias in experimental, as well as non-experimental programme evaluation. For labour market outcomes, such as employment status and earnings, missing data problems caused by attrition can be circumvented by the collection of follow-up data from administrative registers. For most non-labour market outcomes, however, investigators must rely on participants' willingness to co-operate in keeping detailed follow-up records and statistical correction procedures to identify and adjust for attrition bias. This paper combines survey and register data from a Norwegian randomized field trial to evaluate the performance of parametric and semi-parametric sample selection estimators commonly used to correct for attrition bias. The considered estimators work well in terms of producing point estimates of treatment effects close to the experimental benchmark estimates. Results are sensitive to exclusion restrictions. The analysis also demonstrates an inherent paradox in the 'common support' approach, which prescribes exclusion from the analysis of observations outside of common support for the selection probability. The more important treatment status is as a determinant of attrition, the larger is the proportion of treated with support for the selection probability outside the range, for which comparison with untreated counterparts is possible. Copyright 2001 John Wiley & Sons, Ltd.
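The parametric two-step estimator evaluated in such studies is usually the heckit: a probit for the attrition rule, then an outcome regression augmented with the inverse Mills ratio. A minimal sketch on synthetic data, assuming statsmodels; the exclusion restriction is the instrument z, which drives attrition but not the outcome.

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
z, x = rng.normal(size=n), rng.normal(size=n)            # z: exclusion restriction
u = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=n)  # correlated errors
observed = 0.5 + 1.0 * z - 0.8 * x + u[:, 0] > 0         # attrition/selection rule
y = 2.0 + 1.5 * x + u[:, 1]                              # outcome, seen only if selected

# step 1: probit of selection on x and the instrument z
W = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(observed.astype(int), W).fit(disp=0)
xb = W @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)                      # inverse Mills ratio

# step 2: outcome regression on the selected sample plus the Mills term
X2 = sm.add_constant(np.column_stack([x[observed], mills[observed]]))
print(sm.OLS(y[observed], X2).fit().params)              # slope on x recovers ~1.5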
Kilic, Mahmut; Avci, Dilek; Uzuncakmak, Tugba
2016-01-01
The aim of this study is to examine Internet addiction among adolescents in relation to their sociodemographic characteristics, communication skills, and perceived familial social support. This cross-sectional research was conducted in high schools in several city centers in Turkey in 2013. In this study, cluster sampling was used. In each school, a class for each grade level was randomly selected, and all the students in the selected classes were included in the sample. One thousand seven hundred forty-two students aged between 14 and 20 years were included in the sample. The mean Internet Addiction Scale (IAS) score of the students was found to be 27.9 ± 21.2. According to the scores obtained from the IAS, 81.8% of the students were found to display no symptoms (<50 points), 16.9% were found to display borderline symptoms (50-79 points), and 1.3% were found to be Internet addicts (≥80 points). According to the results of the binary logistic regression, male students and students in single-sex vocational schools were found to report higher levels of borderline Internet addiction. It was also observed that the IAS score increases as the father's educational level increases and as the students' school performance worsens. On the other hand, the IAS score decreases as the student grade level, perceived family social support, and communication skills scores increase. The risk factors for Internet addiction are being male, low academic achievement, inadequate social support and communication skills, and a high paternal educational level.
Exploring a potential energy surface by machine learning for characterizing atomic transport
NASA Astrophysics Data System (ADS)
Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro
2018-03-01
We propose a machine-learning method for evaluating the potential barrier governing atomic transport based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases for proton diffusion in oxides, in comparison with a conventional nudge elastic band method.
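The core idea, generating PES realizations from a Gaussian process posterior and treating the spread of the implied barrier heights as the selection signal, can be sketched in one dimension. This uses scikit-learn as a stand-in for the paper's GP model; the toy PES and all settings are invented.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

true_pes = lambda x: 0.5 * np.sin(2 * np.pi * x) + 0.3 * x      # toy PES, eV
X_obs = np.array([[0.0], [0.3], [0.7], [1.0]])                  # evaluated configurations
y_obs = true_pes(X_obs.ravel())

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X_obs, y_obs)
grid = np.linspace(0, 1, 200)[:, None]
paths = gp.sample_y(grid, n_samples=100, random_state=0)        # PES realizations, (200, 100)

# barrier relative to the x=0 minimum, one value per posterior realization
barriers = paths.max(axis=0) - paths[0, :]
print(f"barrier = {barriers.mean():.3f} +/- {barriers.std():.3f} eV")

# next point to evaluate: where the posterior realizations disagree the most
next_x = grid[np.argmax(paths.std(axis=1))]
print(f"evaluate the PES next at x = {float(next_x):.3f}")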
A clustering algorithm for sample data based on environmental pollution characteristics
NASA Astrophysics Data System (ADS)
Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun
2015-04-01
Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
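A minimal sketch of the clustering flow as described: the first unlabelled point seeds a centre, each point joins its most similar centre when the similarity clears a user-defined threshold (otherwise it seeds a new cluster), a k-Means-like pass re-centres the clusters, and singleton clusters are flagged as outliers. The Gaussian similarity function is an assumption, not the paper's exact choice.

import numpy as np

def epc_like_cluster(X, threshold):
    sim = lambda a, b: np.exp(-np.linalg.norm(a - b) ** 2)   # similarity (assumed form)
    centres, labels = [], np.full(len(X), -1)
    for i, x in enumerate(X):
        if centres:
            s = [sim(x, c) for c in centres]
            j = int(np.argmax(s))
            if s[j] >= threshold:          # join the most similar existing cluster
                labels[i] = j
                continue
        centres.append(x)                  # otherwise seed a new cluster centre
        labels[i] = len(centres) - 1
    for j in range(len(centres)):          # single k-Means-style re-centring pass
        if np.any(labels == j):
            centres[j] = X[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=len(centres))
    outliers = np.flatnonzero(np.isin(labels, np.flatnonzero(counts == 1)))
    return labels, outliers

# usage: two tight pollution-profile groups plus one obvious outlier
X = np.vstack([np.random.normal(0, 0.1, (50, 3)),
               np.random.normal(3, 0.1, (50, 3)),
               [[10.0, 10.0, 10.0]]])
labels, outliers = epc_like_cluster(X, threshold=0.5)
print(labels.max() + 1, "clusters;", outliers, "flagged as outliers")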
[Bacteriological study on juvenile periodontitis].
Han, N
1991-02-01
The predominant cultivable microflora of 23 pockets in 15 juvenile periodontitis (JP) patients was studied for the first time in China using current anaerobic methodology. Samples were taken with sterile paper points and dispersed on a vortex mixer. The diluted samples were then plated on non-selective blood agar plates and on selective MGB medium, which favors the growth of Actinobacillus actinomycetemcomitans (Aa), and incubated in an anaerobic chamber for 5 days. From each sample 15 or more isolated colonies were picked in sequence without selection and subcultured. The isolates were identified mainly by Schreckenberger's 4-hour rapid methods for biochemical and fermentative tests and by chromatographic analysis of acid end products using ion chromatography. The results were as follows: 1. The microflora of healthy sulci of 7 healthy young subjects was significantly different from that in the pockets of JP patients. The predominant species in healthy sulci were Streptococcus spp. and Capnocytophaga gingivalis. 2. The species that increased significantly in prevalence and proportion in JP patients was Eubacterium. Other species present in high proportions were Bacteroides oris, B. melaninogenicus, B. gingivalis, Capnocytophaga sputigena, and Actinomyces meyeri, etc. 3. Actinobacillus actinomycetemcomitans was not detected in any of the samples.
Smith, W.P.; Wiedenfeld, D.A.; Hanel, P.B.; Twedt, D.J.; Ford, R.P.; Cooper, R.J.; Smith, Winston Paul
1993-01-01
To quantify the efficacy of point count sampling in bottomland hardwood forests, we examined the influence of point count duration on corresponding estimates of the number of individuals and species recorded. To accomplish this we conducted a total of 82 point counts from 7 May to 16 May 1992, distributed among three habitats (Wet, Mesic, Dry) in each of three regions within the lower Mississippi Alluvial Valley (MAV). Each point count consisted of recording the number of individual birds (all species) seen or heard during the initial three minutes and per each minute thereafter for a period totaling ten minutes. In addition, we included 384 point counts recorded during an 8-week period in each of 3 years (1985-1987) among 56 randomly selected forest patches within the bottomlands of western Tennessee. Each point count consisted of recording the number of individuals (excluding migrating species) during each of four 5-minute intervals for a period totaling 20 minutes. To estimate minimum sample size, we determined sampling variation at each level (region, habitat, and locality) with the 82 point counts from the lower MAV and applied the procedures of Neter and Wasserman (1974:493; Applied linear statistical models). Neither the cumulative number of individuals nor the number of species per sampling interval attained an asymptote after 10 or 20 minutes of sampling. For western Tennessee bottomlands, total individual and species counts relative to point count duration were similar among years and comparable to the pattern observed throughout the lower MAV. Across the MAV, we recorded a total of 1,621 birds distributed among 52 species, with the majority (872/1,621) representing 8 species. More birds were recorded within 25-50 m than in either of the other distance categories. There was significant variation in numbers of individuals and species among point counts. For both, significant differences between region and patch (nested within region) occurred; neither habitat nor the interaction between habitat and region was significant. For α = 0.05 and β = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total birds (MSE = 9.28) and species (MSE = 3.79), respectively; a precision of 25 percent of the mean could be achieved with 5 counts per factor level. Corresponding sample sizes required to detect differences among rarer species (e.g., Wood Thrush) were 500; for common species (e.g., Northern Cardinal) this same level of precision could be achieved with 100 counts.
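The minimum-sample-size calculation of the kind cited (Neter and Wasserman) can be approximated with a normal-theory formula: n per level ≈ 2(z₁₋α/₂ + z₁₋β)² · MSE / d² for a detectable difference d. A sketch using the MSE values quoted above; the chosen d values are illustrative, not the study's.

from scipy.stats import norm

def n_per_level(mse, delta, alpha=0.05, beta=0.10):
    # normal-approximation sample size per factor level for detecting a
    # difference `delta` between two level means with error variance `mse`
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return int(round(2 * (z / delta) ** 2 * mse))

print(n_per_level(mse=9.28, delta=2.0))   # total-bird counts, 2-bird difference
print(n_per_level(mse=3.79, delta=1.0))   # species counts, 1-species difference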
Radon Concentration in Groundwater in the Central Region of Gyeongju, Korea - 13130
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jung Min; Lee, A. Rim; Park, Chan Hee
Radon is a naturally occurring radioactive gas that is a well known cause of lung cancer through inhalation. Nevertheless, stomach cancer can also occur if radon-containing water is ingested. This study measured the radon concentration in groundwater for drinking or other domestic uses in the central region of Gyeongju, Korea. The groundwater samples were taken from 11 points chosen from the 11 administrative districts in the central region of Gyeongju by selecting a point per district considering the demographic distribution, including the number of tourists who visit the ancient ruins and archaeological sites. The mean radon concentrations in the groundwater samples ranged from 14.38 to 9050.73 Bq·m⁻³, which were below the recommendations by the U.S. EPA and WHO.
Defect-selective dry etching for quick and easy probing of hexagonal boron nitride domains.
Wu, Qinke; Lee, Joohyun; Park, Sangwoo; Woo, Hwi Je; Lee, Sungjoo; Song, Young Jae
2018-03-23
In this study, we demonstrate a new method to selectively etch the point defects or the boundaries of as-grown hexagonal boron nitride (hBN) films and flakes in situ on copper substrates using hydrogen and argon gases. The initial quality of the chemical vapor deposition-grown hBN films and flakes was confirmed by UV-vis absorption spectroscopy, atomic force microscopy, and transmission electron microscopy. Different gas flow ratios of Ar/H2 were then employed to etch the same quality of samples and it was found that etching with hydrogen starts from the point defects and grows epitaxially, which helps in confirming crystalline orientations. However, etching with argon is sensitive to line defects (boundaries) and helps in visualizing the domain size. Finally, based on this defect-selective dry etching technique, it could be visualized that the domains of a polycrystalline hBN monolayer merged together with many parts, even with those that grew from a single nucleation seed.
Rural drinking water at supply and household levels: quality and management.
Hoque, Bilqis A; Hallman, Kelly; Levy, Jason; Bouis, Howarth; Ali, Nahid; Khan, Feroze; Khanam, Sufia; Kabir, Mamun; Hossain, Sanower; Shah Alam, Mohammad
2006-09-01
Access to safe drinking water has been an important national goal in Bangladesh and other developing countries. While Bangladesh has almost achieved accepted bacteriological drinking water standards for water supply, high rates of diarrheal disease morbidity indicate that pathogen transmission continues through the water supply chain (and other modes). This paper investigates the association between water quality and selected management practices by users at both the supply and household levels in rural Bangladesh. Two hundred and seventy tube-well water samples and 300 water samples from household storage containers were tested for fecal coliform (FC) concentrations over three surveys (during different seasons). The tube-well water samples were tested for arsenic concentration during the first survey. Overall, FC counts were low (the median value ranged from 0 to 4 cfu/100 ml) in water at the supply point (tube-well water samples) but significantly higher in water samples stored in households. At the supply point, 61% of tube-well water samples met the Bangladesh and WHO standards for FC; however, only 37% of stored water samples met the standards during the first survey. When arsenic contamination was also taken into account, only 52% of the samples met both the minimum microbiological and arsenic content standards of safety. The contamination rate for water samples from covered household storage containers was significantly lower than that of uncovered containers. The rate of water contamination in storage containers was highest during the February-May period. It is shown that safe drinking water was achieved by combining a protected, high-quality source at the initial point with maintenance of quality from the initial supply (source) point through to final consumption. It is recommended that the government and other relevant actors in Bangladesh establish a comprehensive drinking water system that integrates water supply, quality, handling and related educational programs in order to ensure the safety of drinking water supplies.
Lee, Kathy E.; Langer, Susan K.; Barber, Larry B.; Writer, Jeff H.; Ferrey, Mark L.; Schoenfuss, Heiko L.; Furlong, Edward T.; Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Martinovic, Dalma; Woodruff, Olivia R.; Keefe, Steffanie H.; Brown, Greg K.; Taylor, Howard E.; Ferrer, Imma; Thurman, E. Michael
2011-01-01
This report presents the study design, environmental data, and quality-assurance data for an integrated chemical and biological study of selected streams or lakes that receive wastewater-treatment plant effluent in Minnesota. This study was a cooperative effort of the U.S. Geological Survey, the Minnesota Pollution Control Agency, St. Cloud State University, the University of St. Thomas, and the University of Colorado. The objective of the study was to identify distribution patterns of endocrine active chemicals, pharmaceuticals, and other organic and inorganic chemicals of concern indicative of wastewater effluent, and to identify biological characteristics of estrogenicity and fish responses in the same streams. The U.S. Geological Survey collected and analyzed water, bed-sediment, and quality-assurance samples, and measured or recorded streamflow once at each sampling location from September through November 2009. Sampling locations included surface water and wastewater-treatment plant effluent. Twenty-five wastewater-treatment plants were selected to include continuous flow and periodic release facilities with differing processing steps (activated sludge or trickling filters) and plant design flows ranging from 0.002 to 10.9 cubic meters per second (0.04 to 251 million gallons per day) throughout Minnesota in varying land-use settings. Water samples were collected from the treated effluent of the 25 wastewater-treatment plants and at one point upstream from and one point downstream from wastewater-treatment plant effluent discharges. Bed-sediment samples also were collected at each of the stream or lake locations. Water samples were analyzed for major ions, nutrients, trace elements, pharmaceuticals, phytoestrogens, alkylphenols and other neutral organic chemicals, carboxylic acids, and steroidal hormones. A subset (25 samples) of the bed-sediment samples was analyzed for carbon, wastewater-indicator chemicals, and steroidal hormones; the remaining samples were archived. Biological characteristics were determined by using an in-vitro bioassay to determine total estrogenicity in water samples and a caged fish study to determine characteristics of fish from experiments that exposed fish to wastewater effluent in 2009. St. Cloud State University deployed and processed caged fathead minnows at 13 stream sites during September 2009 for the caged fish study. Measured fish data included length, weight, body condition factor, and vitellogenin concentrations.
40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...
40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...
40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...
40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...
Francis A. Roesch
2012-01-01
In the past, the goal of forest inventory was to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design included selection probabilities based on land area observed at a discrete point in time. That is, time was not...
40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...
A Follow-Up of Subjects Scoring above 180 IQ in Terman's "Genetic Studies of Genius."
ERIC Educational Resources Information Center
Feldman, David Henry
1984-01-01
Using the Terman files, 26 subjects with scores above 180 IQ were compared with 26 randomly selected subjects from Terman's sample. Findings were generally that the extra IQ points made little difference and that extremely high IQ does not seem to indicate "genius" in the commonly understood sense. (Author/CL)
Educational Service Quality in Zanjan University of Medical Sciences from Students' Point of View
ERIC Educational Resources Information Center
Mohammadi, Ali; Mohammadi, Jamshid
2014-01-01
This study aims at evaluating perceived service quality in Zanjan University of Medical Sciences (ZUMS). This study was cross-sectional and authors surveyed educational services at ZUMS. Through stratified random sampling, 384 students were selected and an adapted SERVQUAL instrument was used for data collection. Data analysis was performed by…
Predicting the Academic Success of Community College Students in Specific Programs of Study.
ERIC Educational Resources Information Center
Yess, James P.
The intent of this study was to determine the influence of selected independent variables on the graduating grade point average (GPA) of community college students in various programs of study. A sample of 483 students from one community college represented seven programs of study: Business Administration-General, Business Administration-Transfer,…
Linguistic Precautions That to Be Considered When Translating the Holy Quran
ERIC Educational Resources Information Center
Siddiek, Ahmed Gumaa
2017-01-01
The present study is an attempt to raise some points that should be considered when translating the Quranic Text into English. We have looked into some samples of translations, selected from well-known English translations of the Holy Quran, and critically examined them. There were some errors in those translations, due to linguistic factors, owing…
Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego
2016-01-01
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 non-pedestrian samples, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%). PMID:28025565
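A minimal sketch of the classification-and-evaluation step described above, with randomly generated stand-ins for the projection features (the paper's actual feature extraction from the XY/XZ/YZ projections is not reproduced here); only the training-sample count of 1931 comes from the abstract.

```python
# Random features and labels are placeholders, so the metrics printed here
# are chance-level; with real projection features they would be meaningful.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(1931, 48))              # hypothetical projection features
y = rng.integers(0, 2, size=1931)            # 1 = pedestrian, 0 = non-pedestrian

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
```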
Self-Assessment of Hearing and Purchase of Hearing Aids by Middle-Aged and Elderly Adults
Otavio, Andressa Colares da Costa; Coradini, Patricia Pérez; Teixeira, Adriane Ribeiro
2015-01-01
Introduction Presbycusis is a consequence of aging. Prescription of hearing aids is part of the treatment, although the prevalence of use by elderly people is still small. Objective To verify whether or not self-assessment of hearing is a predictor for the purchase of hearing aids. Methods Quantitative, cross-sectional, descriptive, and observational study. Participants were subjects who sought a private hearing center for selection of hearing aids. During the diagnostic interview, subjects answered the following question: "On a scale of 1 to 10, with 1 being the worst and 10 the best, how would you rate your overall hearing ability?" After that, subjects underwent audiometry, selected a hearing aid, performed a home trial, and decided whether or not to purchase the hearing aid. The variables were associated and analyzed statistically. Results The sample comprised 32 subjects, both men and women, with a higher number of women. Mean age was 71.41 ± 12.14 years. Self-assessment of hearing ranged from 2 to 9 points. Overall, 71.9% of the subjects purchased hearing aids. There was no association between scores in the self-assessment and the purchase of hearing aids (p = 0.263). Among those who scored between 2 and 5 points, 64.7% purchased the device; between 6 and 7 points, 76.09% purchased the device; and between 8 and 9 points, 50% purchased the device. Conclusion There is evidence that low self-assessment scores lead to the purchase of hearing aids, although no significant association was observed in the sample. PMID:26722346
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of the studies have adopted interpolation procedures including kriging, moving average or inverse distance weighting (IDW), and nearest point without due consideration of their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range), using the PAleontological STatistics (PAST3) software, before the mean values of the variables were interpolated in the selected GIS software using each of kriging (simple), moving average and nearest point approaches. Further, the determined variogram variables were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary over 120.1-219.5 µS cm⁻¹ with kriging, it varied over 105.6-220.0 µS cm⁻¹ and 135.0-173.9 µS cm⁻¹ with nearest point and moving average interpolations, respectively (Figure 2). It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that interpolation procedures may affect decisions and conclusions drawn from modelling inferences.
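A minimal sketch of the kind of comparison reported above: interpolating the same scattered measurements with two different procedures and inspecting the disagreement. SciPy's griddata offers nearest-point and linear interpolants (kriging and moving-average/IDW would need other libraries, e.g., pykrige); the coordinates and conductivity values below are made up.

```python
import numpy as np
from scipy.interpolate import griddata

# hypothetical sample locations (normalized coords) and conductivity readings
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
cond = np.array([120.1, 219.5, 150.0, 180.0, 160.0])   # µS/cm, illustrative

grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]
nearest = griddata(pts, cond, (grid_x, grid_y), method="nearest")
linear = griddata(pts, cond, (grid_x, grid_y), method="linear")

# The surfaces disagree most away from the sample points -- the same
# method-dependence the study reports.
print("max disagreement:", np.nanmax(np.abs(nearest - linear)))
```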
A novel image registration approach via combining local features and geometric invariants
Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa
2018-01-01
Image registration is widely used in many fields, but the adaptability of existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified Oriented FAST and Rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse image details. Then, we propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
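A minimal sketch of the ORB-plus-robust-estimation backbone the abstract builds on, using plain OpenCV RANSAC in place of the paper's geometric-invariant matching and progressive sample consensus stages; the image pair is synthetic.

```python
import cv2
import numpy as np

# synthetic pair: a noise texture and a shifted copy, stand-ins for real frames
rng = np.random.default_rng(0)
img1 = (rng.random((240, 320)) * 255).astype(np.uint8)
M = np.float32([[1, 0, 15], [0, 1, 7]])            # known (15, 7)-pixel shift
img2 = cv2.warpAffine(img1, M, (320, 240))

orb = cv2.ORB_create(nfeatures=1000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:200]   # rough pre-selection

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
if H is not None:
    print(H.round(2))          # should recover roughly the (15, 7) translation
```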
Salas-Reyes, Isela Guadalupe; Arriaga-Jordán, Carlos Manuel; Rebollar-Rebollar, Samuel; García-Martínez, Anastacio; Albarrán-Portillo, Benito
2015-08-01
The objective of this study was to assess the sustainability of 10 dual-purpose cattle farms in a subtropical area of central Mexico. The IDEA method (Indicateurs de Durabilité des Exploitations Agricoles) was applied, which includes the agroecological, socio-territorial and economic scales (scores from 0 to 100 points per scale). A sample of 47 farms from a total of 91 registered in the local livestock growers association was analysed with principal component analysis and cluster analysis. From these results, 10 farms were selected for the in-depth study reported herein, the selection criterion being continuous milk production throughout the year. Farms scored 88 and 86 points on the agroecological scale in the rainy and dry seasons, respectively. On the socio-territorial scale, scores were 73 points for both seasons, with employment and services being the strongest component. Scores on the economic scale were 64 and 56 points for the rainy and dry seasons, respectively, when no economic cost for family labour is charged, decreasing to 59 and 45 points when an opportunity cost for family labour is considered. Dual-purpose farms in the subtropical area of central Mexico have medium sustainability, with the economic scale being the limiting factor and an area of opportunity.
Wilde, Sandra; Timpson, Adrian; Kirsanow, Karola; Kaiser, Elke; Kayser, Manfred; Unterländer, Martina; Hollfelder, Nina; Potekhina, Inna D; Schier, Wolfram; Thomas, Mark G; Burger, Joachim
2014-04-01
Pigmentation is a polygenic trait encompassing some of the most visible phenotypic variation observed in humans. Here we present direct estimates of selection acting on functional alleles in three key genes known to be involved in human pigmentation pathways--HERC2, SLC45A2, and TYR--using allele frequency estimates from Eneolithic, Bronze Age, and modern Eastern European samples and forward simulations. Neutrality was overwhelmingly rejected for all alleles studied, with point estimates of selection ranging from around 2-10% per generation. Our results provide direct evidence that strong selection favoring lighter skin, hair, and eye pigmentation has been operating in European populations over the last 5,000 y.
Integration of imagery and cartographic data through a common map base
NASA Technical Reports Server (NTRS)
Clark, J.
1983-01-01
Several disparate data types are integrated by using control points as the basis for spatially registering the data to a map base. The data are reprojected to match the coordinates of the reference UTM (Universal Transverse Mercator) map projection, as expressed in lines and samples. Control point selection is the most critical aspect of integrating the Thematic Mapper Simulator MSS imagery with the cartographic data. It is noted that control points chosen from the imagery are subject to error from mislocated points, either points that did not correlate well to the reference map or minor pixel offsets because of interactive cursoring errors. Errors are also introduced in map control points when points are improperly located and digitized, leading to inaccurate latitude and longitude coordinates. Nonsystematic aircraft platform variations, such as yaw, pitch, and roll, affect the spatial fidelity of the imagery in comparison with the quadrangles. Features in adjacent flight paths do not always correspond properly owing to the systematic panorama effect and alteration of flightline direction, as well as platform variations.
Knopman, Debra S.; Voss, Clifford I.
1989-01-01
Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
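A minimal sketch of the enumeration step described above: score every candidate design on the three objectives and keep the noninferior (Pareto-optimal) set. The 120 designs and their objective values here are random stand-ins for the paper's physically derived coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
# 120 candidate designs x 3 objectives, all expressed as minimizations
# (negated discrimination info, negated parameter-sensitivity info, cost)
scores = rng.random((120, 3))

def noninferior(scores):
    """Indices of designs not dominated by any other design."""
    keep = []
    for i, s in enumerate(scores):
        dominated = any(
            np.all(t <= s) and np.any(t < s)
            for j, t in enumerate(scores) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

print(noninferior(scores))   # the approximate noninferior set
```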
The Impact of Soil Sampling Errors on Variable Rate Fertilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Hoskinson; R C. Rope; L G. Blackwood
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
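A minimal sketch of the resampling-simulation idea on simulated data: thin a complete census of impact locations to progressively coarser systematic intervals and watch the frequency-of-occurrence estimate degrade. The trail length, impact count, and 5 m detection window are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
trail_m = 10_000
impacts = np.sort(rng.choice(trail_m, size=150, replace=False))  # census (m)

for interval in (20, 50, 100, 200, 500):
    pts = np.arange(0, trail_m, interval)        # systematic sample points
    # an impact is detected if some sample point falls within 5 m of it
    near = np.min(np.abs(impacts[:, None] - pts[None, :]), axis=1) <= 5
    print(f"{interval:>4} m interval: {near.sum():>3}/150 impacts detected")
```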
Environment-based selection effects of Planck clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosyra, R.; Gruen, D.; Seitz, S.
2015-07-24
We investigate whether the large-scale structure environment of galaxy clusters imprints a selection bias on Sunyaev–Zel'dovich (SZ) catalogues. Such a selection effect might be caused by line of sight (LoS) structures that add to the SZ signal or contain point sources that disturb the signal extraction in the SZ survey. We use the Planck PSZ1 union catalogue in the Sloan Digital Sky Survey (SDSS) region as our sample of SZ-selected clusters. We calculate the angular two-point correlation function (2pcf) for physically correlated, foreground and background structure in the RedMaPPer SDSS DR8 catalogue with respect to each cluster. We compare our results with an optically selected comparison cluster sample and with theoretical predictions. In contrast to the hypothesis of no environment-based selection, we find a mean 2pcf for background structures of -0.049 on scales of ≲40 arcmin, significantly non-zero at ~4σ, which means that Planck clusters are more likely to be detected in regions of low background density. We hypothesize this effect arises either from background estimation in the SZ survey or from radio sources in the background. We estimate the defect in SZ signal caused by this effect to be negligibly small, of the order of ~10⁻⁴ of the signal of a typical Planck detection. Analogously, there are no implications for X-ray mass measurements. However, the environmental dependence has important consequences for weak lensing follow up of Planck galaxy clusters: we predict that projection effects account for half of the mass contained within a 15 arcmin radius of Planck galaxy clusters. We did not detect a background underdensity of CMASS LRGs, which also leaves a spatially varying redshift dependence of the Planck SZ selection function as a possible cause for our findings.
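A minimal sketch of an angular two-point correlation measurement of the kind used above, with the simple Peebles-Hauser estimator DD/RR − 1 on a small flat patch (the study's exact estimator and survey geometry are not specified here); positions are uniform random, so w should come out near zero.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
data = rng.uniform(0, 1, size=(2000, 2))     # hypothetical positions (deg)
rand = rng.uniform(0, 1, size=(20000, 2))    # matching random catalogue

theta = 0.1  # pair-separation scale (deg)
# count ordered pairs within theta, excluding self-pairs
dd = cKDTree(data).count_neighbors(cKDTree(data), theta) - len(data)
rr = cKDTree(rand).count_neighbors(cKDTree(rand), theta) - len(rand)
nd, nr = len(data), len(rand)
w = (dd / (nd * (nd - 1))) / (rr / (nr * (nr - 1))) - 1
print(round(w, 4))   # ~0 for an unclustered field
```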
Campbell, J.P.; Lyford, F.P.; Willey, Richard E.
2002-01-01
A mixed plume of contaminants in ground water, including volatile organic compounds (VOCs), semi-volatile organic compounds (SVOCs), and metals, near the former Nyanza property in Ashland, Massachusetts, discharges to the Sudbury River upstream and downstream of Mill Pond and a former mill raceway. Polyethylene-membrane vapor-diffusion (PVD) samplers were installed in river-bottom sediments to determine if PVD samplers provide an alternative to ground-water sampling from well points for identifying areas of detectable concentrations of contaminants in sediment pore water near the ground-water and surface-water interface. In August and September 2000, the PVD samplers were installed near well points at depths of 8 to 12 inches in both fine and coarse sediments, whereas the well points were installed at depths of 1 to 5 feet in coarse sediments only. Comparisons between vapor and water samples at 29 locations upstream from Mill Pond show that VOC vapor concentrations from PVD samplers in coarse river-bottom sediments are more likely to correspond to ground-water concentrations from well points than PVD samplers installed in fine sediments. Significant correlations based on Kendall's Tau were shown between vapor and ground-water concentrations for trichloroethylene and chlorobenzene for PVD samplers installed in coarse sediments where the fine organic layer that separated the two sampling depths was 1 foot or less in thickness. VOC concentrations from vapor samples also were compared to VOC, SVOC, and metals concentrations from ground-water samples at 10 well points installed upstream and downstream from Mill Pond, and in the former mill raceway. Chlorobenzene vapor concentrations correlated significantly with ground-water concentrations for 5 VOCs, 2 SVOCs, and 10 metals. Trichloroethylene vapor concentrations did not correlate with any of the other ground-water constituents analyzed at the 10 well points. Chlorobenzene detected by use of PVD samplers appears to be a strong indicator of the presence of VOCs, SVOCs, and metals in ground water sampled from well points at this site. Results from PVD samplers indicate that contaminant concentrations in water from well points installed 1 to 5 feet below fine sediments may not reflect concentrations in pore water less than 1 foot below the river bottom. There is insufficient information available to determine if VOC concentrations detected in PVD samplers are useful for identifying detectable aqueous concentrations of SVOCs and metals in sediment pore water at this site. Samples of pore water from a similar depth as PVD samplers are needed for confirmation of this objective.
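A minimal sketch of the agreement test used above: Kendall's tau between PVD-sampler vapor readings and well-point water concentrations. The paired values are illustrative, not the study's data.

```python
from scipy.stats import kendalltau

vapor = [0.4, 1.2, 3.1, 0.9, 5.6, 2.2]   # hypothetical PVD vapor responses
water = [0.3, 1.5, 2.8, 1.1, 6.0, 2.0]   # hypothetical well-point mg/L

tau, p = kendalltau(vapor, water)
print(f"tau = {tau:.2f}, p = {p:.3f}")   # monotone agreement and its p-value
```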
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
NASA Technical Reports Server (NTRS)
Strahler, A. H.; Woodcock, C. E.; Logan, T. L.
1983-01-01
A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.
On the universality of the two-point galaxy correlation function
NASA Technical Reports Server (NTRS)
Davis, Marc; Meiksin, Avery; Strauss, Michael A.; Da Costa, L. Nicolaci; Yahil, Amos
1988-01-01
The behavior of the two-point galaxy correlation function in volume-limited subsamples of three complete redshift surveys is investigated. The correlation length is shown to scale approximately as the square root of the distance limit in both the CfA and Southern Sky catalogs, but to be independent of the distance limit in the IRAS sample. This effect is found to be due to factors such as the large positive density fluctuations in the foreground of the optically selected catalogs biasing the correlation length estimate downward, and the brightest galaxies appearing to be more strongly clustered than the mean.
VizieR Online Data Catalog: Face-on disk galaxies photometry. I. (de Jong+, 1994)
NASA Astrophysics Data System (ADS)
de Jong, R. S.; van der Kruit, P. C.
1995-07-01
We present accurate surface photometry in the B, V, R, I, H and K passbands of 86 spiral galaxies. The galaxies in this statistically complete sample of undisturbed spirals were selected from the UGC to have minimum diameters of 2' and minor over major axis ratios larger than 0.625. This sample has been selected in such a way that it can be used to represent a volume limited sample. The observation and reduction techniques are described in detail, especially the not often used driftscan technique for CCDs and the relatively new techniques using near-infrared (near-IR) arrays. For each galaxy we present radial profiles of surface brightness. Using these profiles we calculated the integrated magnitudes of the galaxies in the different passbands. We performed internal and external consistency checks for the magnitudes as well as the luminosity profiles. The internal consistency is well within the estimated errors. Comparisons with other authors indicate that measurements from photographic plates can show large deviations in the zero-point magnitude. Our surface brightness profiles agree within the errors with other CCD measurements. The comparison of integrated magnitudes shows a large scatter, but a consistent zero-point. These measurements will be used in a series of forthcoming papers to discuss central surface brightnesses, scalelengths, colors and color gradients of disks of spiral galaxies. (9 data files).
Wollert, Richard; Cramer, Elliot
2011-01-01
Psychiatrist and Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) text editor Michael First has criticized the addition of victim counts to criteria proposed by the Paraphilia Sub-Workgroup for inclusion in DSM-5 because they will increase false-positive diagnoses. Psychologist and Chair of the DSM-5 Paraphilia Sub-Workgroup Ray Blanchard responded by publishing a study of pedohebephiles and teleiophiles which seemed to show that victim counts could accurately identify pedohebephiles who were selected per self-report and phallometric testing. His analysis was flawed because it did not conform to conventional clinical practice and because he sampled groups at opposite ends of the clinical spectrum. In an analysis of his full sample, we found the false-positive rate for pedohebephilia at the recommended victim count selection points was indeed very large. Why? Because data analyses that eliminate intermediate data points will generate inflated estimates of correlation coefficients, base rates, and the discriminative capacity of predictor variables. This principle is also relevant for understanding the flaws in previous research that led Hanson and Bussiere to conclude that sexual recidivism was correlated with "sexual interest in children as measured by phallometric assessment." The credibility of mental health professionals rests on the reliability of their research. Conducting, publishing, and citing research that reflects… Copyright © 2011 John Wiley & Sons, Ltd.
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
A New Factor in UK Students' University Attainment: The Relative Age Effect Reversal?
ERIC Educational Resources Information Center
Roberts, Simon J.; Stott, Tim
2015-01-01
Purpose: The purpose of this paper is to study relative age effects (RAEs) in a selected sample of university students. The majority of education systems across the globe adopt age-related cut-off points for eligibility. This strategy has received criticism for (dis)advantaging those older children born closer to the "cut-off" date for…
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2012 CFR
2012-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2014 CFR
2014-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2013 CFR
2013-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2011 CFR
2011-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
Yılmaz Isıkhan, Selen; Karabulut, Erdem; Alpar, Celal Reha
2016-01-01
Background/Aim. Evaluating the success of dose prediction based on genetic or clinical data has substantially advanced recently. The aim of this study is to predict various clinical dose values from DNA gene expression datasets using data mining techniques. Materials and Methods. Eleven real gene expression datasets containing dose values were included. First, important genes for dose prediction were selected using iterative sure independence screening. Then, the performances of regression trees (RTs), support vector regression (SVR), RT bagging, SVR bagging, and RT boosting were examined. Results. The results demonstrated that a regression-based feature selection method substantially reduced the number of irrelevant genes from raw datasets. Overall, the best prediction performance in nine of 11 datasets was achieved using SVR; the second most accurate performance was provided by a gradient-boosting machine (GBM). Conclusion. Analysis of various dose values based on microarray gene expression data identified common genes found in our study and the referenced studies. According to our findings, SVR and GBM can be good predictors of dose-gene datasets. Another result of the study was the identification of a sample size of n = 25 as a cutoff point for RT bagging to outperform a single RT.
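A minimal sketch of the model comparison reported above: SVR against a gradient-boosting machine on a high-dimensional expression matrix, with a simple univariate screen standing in for iterative sure independence screening. All data are simulated; in a real analysis the screen would go inside the cross-validation loop to avoid leakage.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# stand-in for a microarray: few samples, many genes
X, y = make_regression(n_samples=60, n_features=2000, n_informative=20,
                       noise=5.0, random_state=0)
X_sel = SelectKBest(f_regression, k=50).fit_transform(X, y)  # crude screen

for model in (SVR(C=10.0), GradientBoostingRegressor(random_state=0)):
    r2 = cross_val_score(model, X_sel, y, cv=5, scoring="r2").mean()
    print(type(model).__name__, round(r2, 3))
```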
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to develop the statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
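A rough sketch of the weighted finite population Bayesian bootstrap idea behind the proposed method, under one common Polya-urn formulation (Dirichlet weights proportional to w − 1); the paper's actual algorithm differs in detail, and all weights and values here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=500)                 # observed sample values
w = rng.uniform(1, 10, size=500)         # design weights (1 / selection prob)
N = int(w.sum())                         # implied population size

# Polya-urn style draw: Dirichlet weights proportional to w - 1 undo the
# unequal selection probabilities (one common formulation; details vary).
probs = rng.dirichlet(w - 1 + 1e-9)
synthetic = rng.choice(y, size=N, replace=True, p=probs)

# the synthetic-population mean should track the design-weighted mean
print(synthetic.mean(), np.average(y, weights=w))
```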
Bouchet, Sophie; Pot, David; Deu, Monique; Rami, Jean-François; Billot, Claire; Perrier, Xavier; Rivallan, Ronan; Gardes, Laëtitia; Xia, Ling; Wenzl, Peter; Kilian, Andrzej; Glaszmann, Jean-Christophe
2012-01-01
Population structure, extent of linkage disequilibrium (LD) as well as signatures of selection were investigated in sorghum using a core sample representative of worldwide diversity. A total of 177 accessions were genotyped with 1122 informative physically anchored DArT markers. The properties of DArTs to describe sorghum genetic structure were compared to those of SSRs and of previously published RFLP markers. Model-based (STRUCTURE software) and Neighbor-Joining diversity analyses led to the identification of 6 groups and confirmed previous evolutionary hypotheses. Results were globally consistent between the different marker systems. However, DArTs appeared more robust in terms of data resolution and bayesian group assignment. Whole genome linkage disequilibrium as measured by mean r² decreased from 0.18 (between 0 and 10 kb) to 0.03 (between 100 kb and 1 Mb), stabilizing at 0.03 after 1 Mb. Effects on LD estimations of sample size and genetic structure were tested using i. random sampling, ii. the Maximum Length SubTree algorithm (MLST), and iii. structure groups. Optimizing population composition by the MLST reduced the biases in small samples and seemed to be an efficient way of selecting samples to make the best use of LD as a genome mapping approach in structured populations. These results also suggested that more than 100,000 markers may be required to perform genome-wide association studies in collections covering worldwide sorghum diversity. Analysis of DArT markers differentiation between the identified genetic groups pointed out outlier loci potentially linked to genes controlling traits of interest, including disease resistance genes for which evidence of selection had already been reported. In addition, evidence of selection near a homologous locus of FAR1 concurred with sorghum phenotypic diversity for sensitivity to photoperiod. PMID:22428056
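A minimal sketch of the LD-decay computation behind the r² figures above: pairwise r² between biallelic markers, binned by physical distance. Genotype calls and positions are simulated; only the accession count of 177 comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_acc, n_mrk = 177, 300
geno = rng.integers(0, 2, size=(n_acc, n_mrk)).astype(float)  # 0/1 calls
pos = np.sort(rng.integers(0, 2_000_000, size=n_mrk))          # bp positions

bins = {"0-10kb": [], "10-100kb": [], "100kb-1Mb": [], ">1Mb": []}
for i in range(n_mrk):
    for j in range(i + 1, n_mrk):
        d = pos[j] - pos[i]
        r2 = np.corrcoef(geno[:, i], geno[:, j])[0, 1] ** 2
        key = ("0-10kb" if d <= 10_000 else "10-100kb" if d <= 100_000
               else "100kb-1Mb" if d <= 1_000_000 else ">1Mb")
        bins[key].append(r2)

# unlinked simulated markers give a flat profile; real data would show decay
for k, v in bins.items():
    print(k, round(float(np.mean(v)), 3))
```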
Keshavarz, Yousef; Ghaedi, Sina; Rahimi-Kashani, Mansure
2012-01-01
Background The twelve-step program is one of the programs administered for overcoming drug abuse. In this study, the effectiveness of a chemical dependency counseling course was investigated using a hybrid model. Methods In a survey with a sample size of 243, participants were selected using a stratified random sampling method. A questionnaire was used for collecting data, and a one-sample t-test was employed for data analysis. Findings Chemical dependency counseling courses were effective from the point of view of graduates, chiefs of rehabilitation centers, rescuers and their families, and ultimately managers of the rebirth society, but not from the point of view of professors and lecturers. The last group evaluated the effectiveness of the chemical dependency counseling courses only at the performance level. Conclusion It seems that the chemical dependency counseling courses had appropriate effectiveness and led to changed attitudes, increased awareness, combined knowledge and experience, and ultimately increased the efficiency of counseling. PMID:24494132
Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F
2009-05-01
Many aromatic compounds can be found in the environment as a result of anthropogenic activities, and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for preconcentration of a 50-mL sample volume were 0.10 µg L⁻¹ for PNP, 0.20 µg L⁻¹ for PAP, and 0.16 µg L⁻¹ for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.
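A minimal sketch of two figures of merit quoted above: the enrichment factor as a ratio of calibration slopes with and without cloud point extraction, and the 3-sigma limit of detection. The slopes and blank noise are invented so that the outputs land near the reported ranges.

```python
# All inputs are hypothetical; real values come from calibration curves.
slope_direct = 0.021   # signal per (µg/L), no preconcentration
slope_cpe = 4.05       # signal per (µg/L) after cloud point extraction
sd_blank = 0.14        # standard deviation of the blank signal

enrichment_factor = slope_cpe / slope_direct   # sensitivity gain from CPE
lod = 3 * sd_blank / slope_cpe                 # 3-sigma detection limit
print(f"EF ~ {enrichment_factor:.0f}, LOD ~ {lod:.2g} µg/L")
```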
The OVIRS Visible/IR Spectrometer on the OSIRIS-Rex Mission
NASA Technical Reports Server (NTRS)
Reuter, D. C.; Simon-Miller, A. A.
2012-01-01
The OSIRIS-REx (Origins Spectral Interpretation Resource Identification Security Regolith Explorer) Mission is a planetary science mission to study, and return a sample from, the carbonaceous asteroid 1999 RQ36. The third mission selected under NASA's New Frontiers Program, it is scheduled to be launched in 2016. It is led by PI Dante Lauretta at the University of Arizona and managed by NASA's Goddard Space Flight Center. The spacecraft and the asteroid sampling mechanism, TAGSAM (Touch-And-Go Sample Acquisition Mechanism), will be provided by Lockheed Martin Space Systems. Instruments for studying the asteroid include: OCAMS (the OSIRIS-REx Camera Suite), OLA (the OSIRIS-REx Laser Altimeter, a scanning LIDAR), OTES (the OSIRIS-REx Thermal Emission Spectrometer, a 4-50 micron point spectrometer) and OVIRS (the OSIRIS-REx Visible and IR Spectrometer, a 0.4 to 4.3 micron point spectrometer). The payload also includes REXIS (the Regolith X-ray Imaging Spectrometer), a student-provided experiment. This paper presents a description of the OVIRS instrument.
Use of Naturally Available Reference Targets to Calibrate Airborne Laser Scanning Intensity Data
Vain, Ants; Kaasalainen, Sanna; Pyysalo, Ulla; Krooks, Anssi; Litkey, Paula
2009-01-01
We have studied the possibility of calibrating airborne laser scanning (ALS) intensity data using land targets typically available in urban areas. For this purpose, a test area around Espoonlahti Harbor, Espoo, Finland, for which a long time series of ALS campaigns is available, was selected. Different target samples (beach sand, concrete, asphalt, different types of gravel) were collected and measured in the laboratory. Using tarps with known backscattering properties, the natural samples were calibrated and studied, taking into account the atmospheric effect, incidence angle and flying height. Using data from different flights and altitudes, a time series for the natural samples was generated. By studying the stability of the samples, we could obtain information on the most suitable types of natural targets for ALS radiometric calibration. Using the selected natural samples as reference, the ALS points of typical land targets were calibrated again and examined. Results showed the need for more accurate ground reference data before natural samples can be used in ALS intensity data calibration. An NIR camera-based field system was also used for collecting ground reference data. This system proved to be a good means for collecting in situ reference data, especially for targets with inhomogeneous surface reflection properties. PMID:22574045
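A minimal sketch of reference-target calibration as described above: correct raw ALS intensities to a common range and scale them by a calibration tarp of known backscatter to obtain apparent reflectance. All numbers, and the use of an inverse range-squared term as the only range effect, are illustrative assumptions.

```python
import numpy as np

raw = np.array([812., 790., 845.])     # returns from a natural target
rng_m = np.array([510., 505., 512.])   # sensor-to-target ranges (m)
ref_raw, ref_rng, ref_reflectance = 905., 500., 0.25   # calibration tarp

# range-squared correction to the tarp's reference range, then scale by
# the tarp's known reflectance-to-intensity ratio
corr = raw * (rng_m / ref_rng) ** 2
reflectance = ref_reflectance * corr / ref_raw
print(reflectance.round(3))
```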
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.
Species conservation profiles of a random sample of world spiders I: Agelenidae to Filistatidae
Seppälä, Sini; Henriques, Sérgio; Draney, Michael L; Foord, Stefan; Gibbons, Alastair T; Gomez, Luz A; Kariko, Sarah; Malumbres-Olarte, Jagoba; Milne, Marc; Vink, Cor J; Cardoso, Pedro
2018-01-01
The IUCN Red List of Threatened Species is the most widely used information source on the extinction risk of species. One of the uses of the Red List is to evaluate and monitor the state of biodiversity, and a possible approach for this purpose is the Red List Index (RLI). For many taxa, mainly hyperdiverse groups, it is not possible within available resources to assess all known species. In such cases, a random sample of species might be selected for assessment and the results derived from it extrapolated for the entire group - the Sampled Red List Index (SRLI). With the current contribution and the three following papers, we intend to create the first point in time of a future spider SRLI encompassing 200 species distributed across the world. A sample of 200 species of spiders was randomly selected from the World Spider Catalogue, an updated global database containing all recognised species names for the group. The 200 selected species were divided taxonomically at the family level and the families were ordered alphabetically. In this publication, we present the conservation profiles of 46 species belonging to the families alphabetically arranged between Agelenidae and Filistatidae, which encompassed Agelenidae, Amaurobiidae, Anyphaenidae, Araneidae, Archaeidae, Barychelidae, Clubionidae, Corinnidae, Ctenidae, Ctenizidae, Cyatholipidae, Dictynidae, Dysderidae, Eresidae and Filistatidae. PMID:29725239
Galloway, Joel M.; Haggard, Brian E.; Meyers, Michael T.; Green, W. Reed
2005-01-01
The U.S. Geological Survey, in cooperation with the University of Arkansas and the U.S. Department of Agriculture, Agricultural Research Service, collected data in 2004 to determine the occurrence of pharmaceuticals and other organic wastewater constituents, including many constituents of emerging environmental concern, in selected streams in northern Arkansas. Samples were collected in March and April 2004 from 17 sites located upstream and downstream from wastewater-treatment plant effluent discharges on 7 streams in northwestern Arkansas and at 1 stream site in a relatively undeveloped basin in north-central Arkansas. Additional samples were collected at three of the sites in August 2004. The targeted organic wastewater constituents and sample sites were selected because wastewater-treatment plant effluent discharge provides a potential point source of these constituents and analytical techniques have improved to accurately measure small amounts of these constituents in environmental samples. At least 1 of the 108 pharmaceutical or other organic wastewater constituents was detected at all sites in 2004, except at Spavinaw Creek near Maysville, Arkansas. The number of detections generally was greater at sites downstream from municipal wastewater-treatment plant effluent discharges (mean = 14) compared to sites not influenced by wastewater-treatment plants (mean = 3). Overall, 42 of the 108 constituents targeted in the collected water-quality samples were detected. The most frequently detected constituents included caffeine, phenol, para-cresol, and acetyl hexamethyl tetrahydro naphthalene.
Microbiologic endodontic status of young traumatized tooth.
Baumotte, Karla; Bombana, Antonio C; Cai, Silvana
2011-12-01
Traumatic dental injuries can expose the dentin, and even the pulp, to the oral environment, making their contamination possible. The presence of microorganisms causes pulpal disease and, further, tissue damage in the periradicular region. The therapy of periradicular pathosis follows from a correct diagnosis, which depends on knowledge of the nature and complexity of endodontic infections. As there is no information on the microbiology of primary endodontic infection in young teeth, the aim of the current study was to investigate the microbiologic status of root canals from young permanent teeth with primary endodontic infection. Twelve patients in need of endodontic treatment participated in the study. The selected teeth were single-rooted, had incomplete root formation, and had untreated necrotic pulp. After access preparation, nineteen microbiologic samples were obtained from the root canals with sterile paper points. Afterwards, the paper points were pooled in a sterile tube containing 2 ml of prereduced transport fluid. The samples were diluted and spread onto plates with selective media for Enterococcus spp. and for yeast species and onto plates with non-selective medium. A quantitative analysis was performed. The mean number of cultivable bacterial cells in the root canals was 5.7 × 10(6). In four samples (21.05%), black-pigmented species were recovered, with a mean cell count of 6.5 × 10(5). One specimen (5.25%) showed growth of Enterococcus species, with a mean cell count of 1.5 × 10(4). The results showed a root canal microbiota similar in composition to that seen in completely formed teeth. © 2011 John Wiley & Sons A/S.
Selective propagation and beam splitting of surface plasmons on metallic nanodisk chains.
Hu, Yuhui; Zhao, Di; Wang, Zhenghan; Chen, Fei; Xiong, Xiang; Peng, Ruwen; Wang, Mu
2017-05-01
Manipulating the propagation of surface plasmons (SPs) on a nanoscale is a fundamental issue of nanophotonics. By using a focused electron beam, SPs can be excited with high spatial accuracy. Here we report on the propagation of SPs on a chain of gold nanodisks studied with cathodoluminescence (CL) spectroscopy. Experimental evidence for the propagation of SPs excited by the focused electron beam is demonstrated. The wavelength of the transmitted SPs depends on the geometrical parameters of the nanodisk chain. Furthermore, we design and fabricate a beam splitter, which selectively transmits SPs of certain wavelengths to a specific direction. By scanning the sample surface point by point and collecting the CL spectra, we obtain the spectral mapping and identify that the chain of smaller nanodisks can efficiently transport SPs at shorter wavelengths. This Letter provides a unique approach to manipulate in-plane propagation of SPs.
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, this paper uses the fundamental matrix generated by the 8-point algorithm as the model; the sample is selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy but also greatly reduces computation and improves matching speed.
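For orientation, the baseline pipeline that these modifications build on (SIFT matching followed by RANSAC estimation of the fundamental matrix) can be sketched with OpenCV; the block-based sampling and SPRT refinements described in the abstract are not included here, and the image file names are hypothetical:

    import cv2
    import numpy as np

    img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Extract SIFT keypoints and descriptors
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Rough matching on SIFT descriptors with Lowe's ratio test
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC estimation of the fundamental matrix as the geometric model
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    good = [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]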
Mansour, Fotouh R; Danielson, Neil D
2017-08-01
Dispersive liquid-liquid microextraction (DLLME) is a special type of microextraction in which a mixture of two solvents (an extracting solvent and a disperser) is injected into the sample. The extraction solvent is then dispersed as fine droplets throughout the cloudy sample through manual or mechanical agitation. The sample is then centrifuged to break the formed emulsion, and the extracting solvent is manually separated. The organic solvents commonly used in DLLME are halogenated hydrocarbons that are highly toxic. These solvents are heavier than water, so they sink to the bottom of the centrifuge tube, which makes the separation step difficult. By using solvents of low density, the organic extractant floats on the sample surface. If the selected solvent, such as undecanol, has a freezing point in the range 10-25°C, the floating droplet can be solidified using a simple ice bath and then transferred out of the sample matrix; this step is known as solidification of the floating organic droplet (SFOD). Coupling DLLME to SFOD combines the advantages of both approaches. The DLLME-SFOD process is controlled by the same variables as conventional liquid-liquid extraction. The organic solvents used as extractants in DLLME-SFOD must be immiscible with water and have a density lower than water, low volatility, a high partition coefficient, and low melting and freezing points. The extraction efficiency of DLLME-SFOD is affected by the types and volumes of organic extractant and disperser, salt addition, pH, temperature, stirring rate, and extraction time. This review discusses the principle, optimization variables, advantages and disadvantages, and selected applications of DLLME-SFOD in water, food, and biomedical analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
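Two standard figures of merit from the microextraction literature (not defined explicitly above) summarize DLLME-SFOD performance, with C_sed and V_sed the analyte concentration in and the volume of the collected solidified droplet, and C_0 and V_aq those of the original aqueous sample:

    EF = \frac{C_{sed}}{C_0}, \qquad
    ER(\%) = \frac{C_{sed}\, V_{sed}}{C_0\, V_{aq}} \times 100
           = EF \times \frac{V_{sed}}{V_{aq}} \times 100

The enrichment factor EF measures preconcentration, while the extraction recovery ER accounts for the volume ratio, which is why small extractant volumes can give high EF even at moderate recovery.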
Sindik, Josko; Nazor, Damir
2011-09-01
Identification of differences in individual conative characteristics and in perceived group cohesion among basketball players playing in different positions in the team could provide guidelines for better selection of basketball players and better coaching work. The aim of our study was to determine the differences in relation to the positions of guards and forwards/centres, and the four major positions in the team. The final sample of subjects (74 basketball players) was selected from an initial sample of 107 subjects, drawn from nine men's senior basketball teams that played in the A-1 Croatian men's basketball league championship in 2006/2007. The results showed no statistically significant differences between basketball players who play in different positions in the team, either in relation to the two basic positions (guards as opposed to forwards/centres) or in relation to the four positions in the team (point guard, shooting guard, small forward, power forward/centre).
Foveal analysis and peripheral selection during active visual sampling
Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.
2014-01-01
Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
Prediction and standard error estimation for a finite universe total when a stratum is not sampled
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, T.
1994-01-01
In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample, with stratification used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
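A minimal numerical sketch of the prediction idea, with made-up numbers and a simple ratio model standing in for the paper's detailed estimator (the actual plan also yields a standard error and an optimal allocation):

    import numpy as np

    # Hypothetical example: stratum totals on occasion 1 are known for all
    # strata; on occasion 2 only strata 0 and 1 are sampled, stratum 2 is not.
    occ1_totals = np.array([120.0, 300.0, 80.0])   # known first-occasion totals
    occ2_estimates = np.array([150.0, 330.0])      # design-based estimates, strata 0-1

    # Predict the unsampled stratum with a ratio model: assume it changed at
    # the same average rate observed in the sampled strata.
    growth = occ2_estimates.sum() / occ1_totals[:2].sum()
    pred_stratum2 = growth * occ1_totals[2]

    total_occ2 = occ2_estimates.sum() + pred_stratum2
    print(total_occ2)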
Fuzzy support vector machine for microarray imbalanced data classification
NASA Astrophysics Data System (ADS)
Ladayya, Faroh; Purnami, Santi Wulan; Irhamah
2017-11-01
DNA microarrays are data containing gene expression with small sample sizes and a high number of features. Furthermore, class imbalance is a common problem in microarray data, occurring when a dataset is dominated by a class with significantly more instances than the minority classes. A classification method is therefore needed that can handle both high-dimensional and imbalanced data. Support Vector Machine (SVM) is one of the classification methods capable of handling large or small samples, nonlinearity, high dimensionality, overfitting, and local minima. SVM has been widely applied to DNA microarray data classification, and it has been shown to provide the best performance among machine learning methods. However, imbalanced data remain a problem because SVM treats all samples as equally important, biasing the results against the minority class. To overcome the imbalance, Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM so that different input points contribute differently to the classifier. Minority-class samples receive large fuzzy memberships, so FSVM can pay more attention to them. Because DNA microarray data are high dimensional with a very large number of features, feature selection is first performed using the Fast Correlation-Based Filter (FCBF). In this study, the data are analyzed with SVM and FSVM, with and without FCBF, and the classification performance of each is compared. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
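The weighting idea at the heart of FSVM can be approximated in scikit-learn via per-sample weights; the sketch below uses synthetic data and a simple inverse-frequency rule in place of the paper's fuzzy membership function:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Toy imbalanced, high-dimensional data standing in for microarray expression
    X = rng.normal(size=(120, 500))
    y = np.array([0] * 108 + [1] * 12)     # 1 = minority class
    X[y == 1] += 0.6                       # weak class signal

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # FSVM idea: per-sample memberships weight the SVM objective; here the
    # majority class is downweighted so minority errors cost relatively more.
    membership = np.where(y_tr == 1, 1.0,
                          np.sum(y_tr == 1) / np.sum(y_tr == 0))

    clf = SVC(kernel="rbf", C=10.0)
    clf.fit(X_tr, y_tr, sample_weight=membership)
    print(clf.score(X_te, y_te))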
Investigation of water quality parameters at selected points on the Tennessee River
NASA Technical Reports Server (NTRS)
Manger, M. C.
1973-01-01
Physical, chemical, and biological water quality parameters have been investigated at the Widow's Creek steam plant. The water quality parameters and field site locations were selected so as to be compatible with the interests and needs of the Environmental Application Office at Marshall Space Flight Center. All sampling and testing was conducted as directed in the 13th Edition of Standard Methods of Analysis for Water and Waste Water or as suggested by NASA's Technical Officer. Data are presented in a form compatible with that presently being collected by other agencies.
NASA Astrophysics Data System (ADS)
Sánchez, Clara I.; Niemeijer, Meindert; Kockelkorn, Thessa; Abràmoff, Michael D.; van Ginneken, Bram
2009-02-01
Computer-aided Diagnosis (CAD) systems for the automatic identification of abnormalities in retinal images are gaining importance in diabetic retinopathy screening programs. A huge number of retinal images is collected during these programs, providing a starting point for the design of machine learning algorithms. However, manual annotations of retinal images are scarce and expensive to obtain. This paper proposes a dynamic CAD system based on active learning for the automatic identification of hard exudates, cotton wool spots and drusen in retinal images. An uncertainty sampling method is applied to select samples that need to be labeled by an expert from an unlabeled set of 4000 retinal images. It reduces the number of training samples needed to obtain an optimum accuracy by dynamically selecting the most informative samples. Results show that the proposed method increases the classification accuracy compared to alternative techniques, achieving an area under the ROC curve of 0.87, 0.82 and 0.78 for the detection of hard exudates, cotton wool spots and drusen, respectively.
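The uncertainty sampling step can be sketched as follows, with a logistic regression standing in for the CAD classifier and hypothetical helper names (grow_training_set, oracle) for the labeling loop:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def uncertainty_sampling(model, X_unlabeled, batch_size=10):
        # Pick the unlabeled samples whose predicted class probability is
        # closest to 0.5, i.e. those the current model is least sure about.
        proba = model.predict_proba(X_unlabeled)[:, 1]
        uncertainty = -np.abs(proba - 0.5)      # higher = more uncertain
        return np.argsort(uncertainty)[-batch_size:]

    # Hypothetical loop: start from a small labeled seed, repeatedly ask an
    # expert (here an oracle function) to label the most informative images.
    # model = LogisticRegression().fit(X_seed, y_seed)
    # idx = uncertainty_sampling(model, X_pool)
    # X_seed, y_seed = grow_training_set(X_seed, y_seed, X_pool[idx], oracle(idx))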
Community structure of aquatic insects in the Esparza River, Costa Rica.
Herrera-Vásquez, Jonathan
2009-01-01
This study focused on the structure of the aquatic insect community at spatial and temporal scales in the Esparza River. The river was sampled for one full year throughout 2007. During the low-flow months of the dry season, five sampling points were selected in two different habitats (currents and pools), with five replicates per sample site. During the wet season, at peak rainfall, only the "current" habitat was sampled at each site. Specimens present in the different substrates were collected and preserved in situ. A nested ANOVA was then applied to the data, with richness and density as the response variables. Variations at temporal and spatial scales were analyzed using the width, depth, and discharge of the river, again with a nested ANOVA. Only 51% similarity in richness was found; at the spatial scale, richness showed significant variation between sampling sites, but not between habitats, whereas at the temporal scale there were significant differences between habitats. Density differed between sites and habitats during the dry season at the spatial scale, while at the temporal scale significant variation was found between sampling sites. Width varied between habitats during the dry season, but not between sampling points. Depth showed differences between sampling sites and seasons. This work highlights the importance of the community structure of aquatic insects in rivers and its relevance to the water quality of rivers and streams.
Dunn, Abe; Liebman, Eli; Rittmueller, Lindsey; Shapiro, Adam Hale
2017-04-01
To provide guidelines to researchers measuring health expenditures by disease and compare these methodologies' implied inflation estimates. A convenience sample of commercially insured individuals over the 2003 to 2007 period from Truven Health. Population weights are applied, based on age, sex, and region, to make the sample of over 4 million enrollees representative of the entire commercially insured population. Different methods are used to allocate medical-care expenditures to distinct condition categories. We compare the estimates of disease-price inflation by method. Across a variety of methods, the compound annual growth rate stays within the range 3.1 to 3.9 percentage points. Disease-specific inflation measures are more sensitive to the selected methodology. The selected allocation method impacts aggregate inflation rates, but considering the variety of methods applied, the differences appear small. Future research is necessary to better understand these differences in other population samples and to connect disease expenditures to measures of quality. © Health Research and Educational Trust.
Quality of selected coals of Hungary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landis, E.R.; Rohrbacher, T.J.; Gluskoter, H.J.
2000-07-01
As part of the activities conducted under the US-Hungarian Science and Technology Fund, a total of 39 samples from five coal mines in five geologically distinct coal areas in Hungary were selected for proximate and ultimate analyses. In addition, the heat value, forms of sulfur, free-swelling index, equilibrium moisture, Hardgrove grindability index, four-point ash fusion temperatures (both oxidizing and reducing), and apparent specific gravity were determined for each sample. Standard procedures established by the American Society for Testing and Materials (ASTM, 1999) were used. The analytical results will be available in the International Coal Quality Data Base of the USGS. Results of the program provide data for comparison with coal quality test data from Europe and information of value to potential investors or cooperators in the coal industry of Hungary and Central Europe.
NASA Astrophysics Data System (ADS)
Wijaya, I. M. W.; Soedjono, E. S.
2018-03-01
Municipal wastewater is a main contributor to diverse water pollution problems. In order to prevent pollution risks, wastewater has to be treated before being discharged to receiving waters. Selection of an appropriate treatment process requires information on wastewater characteristics as a design consideration. This study aims to analyse the physicochemical characteristics of municipal wastewater from the inlet and outlet of anaerobic baffled reactor (ABR) units around Surabaya City. Medokan Semampir and Genteng Candi Rejo were selected as wastewater sampling points. The samples were analysed in the laboratory for parameters such as pH, TSS, COD, BOD, NH4+, NO3-, NO2-, P, and detergent. The results showed that all parameters at both locations are below the national discharge standard limits; in other words, the treated water can be safely discharged to the river.
Sharp, T G
1984-02-01
The study was designed to determine whether any one of seven selected variables, or a combination of them, is predictive of performance on the State Board Test Pool Examination (SBTPE). The selected variables were: high school grade point average (HSGPA); The University of Tennessee, Knoxville, College of Nursing grade point average (GPA); and American College Test Assessment (ACT) standard scores (English, ENG; mathematics, MA; social studies, SS; natural sciences, NSC; composite, COMP). Data were from graduates of the baccalaureate program of The University of Tennessee, Knoxville, College of Nursing from 1974 through 1979. The sample of 322 was selected from a total population of 572. The Statistical Analysis System (SAS) was used to analyze the predictive relationship of each of the seven selected variables to SBTPE performance (pass or fail); a stepwise discriminant analysis was used to determine the predictive relationship of the strongest combination of the independent variables to overall SBTPE performance (pass or fail); and stepwise multiple regression analysis was used to determine the strongest predictive combination of selected variables for each of the five subexams of the SBTPE. Each of the selected variables was found to be predictive of SBTPE performance (pass or fail). The strongest combination for predicting SBTPE performance (pass or fail) was found to be GPA, MA, and NSC.
Ling, Shaoping; Hu, Zheng; Yang, Zuyu; Yang, Fang; Li, Yawei; Lin, Pei; Chen, Ke; Dong, Lili; Cao, Lihua; Tao, Yong; Hao, Lingtong; Chen, Qingjian; Gong, Qiang; Wu, Dafei; Li, Wenjie; Zhao, Wenming; Tian, Xiuyun; Hao, Chunyi; Hungate, Eric A; Catenacci, Daniel V T; Hudson, Richard R; Li, Wen-Hsiung; Lu, Xuemei; Wu, Chung-I
2015-11-24
The prevailing view that the evolution of cells in a tumor is driven by Darwinian selection has never been rigorously tested. Because selection greatly affects the level of intratumor genetic diversity, it is important to assess whether intratumor evolution follows the Darwinian or the non-Darwinian mode of evolution. To provide the statistical power, many regions in a single tumor need to be sampled and analyzed much more extensively than has been attempted in previous intratumor studies. Here, from a hepatocellular carcinoma (HCC) tumor, we evaluated multiregional samples from the tumor, using either whole-exome sequencing (WES) (n = 23 samples) or genotyping (n = 286) under both the infinite-site and infinite-allele models of population genetics. In addition to the many single-nucleotide variations (SNVs) present in all samples, there were 35 "polymorphic" SNVs among samples. High genetic diversity was evident as the 23 WES samples defined 20 unique cell clones. With all 286 samples genotyped, clonal diversity agreed well with the non-Darwinian model with no evidence of positive Darwinian selection. Under the non-Darwinian model, MALL (the number of coding region mutations in the entire tumor) was estimated to be greater than 100 million in this tumor. DNA sequences reveal local diversities in small patches of cells and validate the estimation. In contrast, the genetic diversity under a Darwinian model would generally be orders of magnitude smaller. Because the level of genetic diversity will have implications on therapeutic resistance, non-Darwinian evolution should be heeded in cancer treatments even for microscopic tumors.
[Determination of LF-VD refining furnace slag by X ray fluorescence spectrometry].
Kan, Bin; Cheng, Jian-ping; Song, Zu-feng
2004-10-01
Eight components, i.e., TFe, CaO, MgO, Al2O3, SiO2, TiO2, MnO and P2O5, in refining furnace slag were determined by X-ray fluorescence spectrometry. Because the content of CaO was high, the authors selected 12 national and departmental grade slag standard samples and prepared a series of synthetic standard samples by adding spectrally pure reagents to them. The calibration curve is suitable for sample analysis of CaO, MgO and SiO2 over a wide concentration range, and the points on the curve are evenly distributed. The samples were prepared at high temperature by adding Li2B4O7 as flux. Experiments for the selection of sample preparation conditions regarding stripping agents, melting temperature and dilution ratio were carried out. The matrix absorption and enhancement effects were corrected by means of the PH model and theoretical alpha coefficients. Moreover, precision and accuracy experiments were performed. In comparison with the chemical analysis method, the quantitative analytical results for each component are satisfactory. The method has proven rapid, precise and simple.
Detection of adulterated commercial Spanish beeswax.
Serra Bonvehi, J; Orantes Bermejo, F J
2012-05-01
The physical and chemical parameters (melting point and saponification number) and the fractions of hydrocarbons, monoesters, acids and alcohols have been determined in 90 samples of Spanish commercial beeswax from Apis mellifera L. Adulteration with paraffins of different melting points, cow tallow, stearic acid, and carnauba wax was determined by HTGC-FID/MS detection, with the research focused mainly on paraffin and microcrystalline waxes. In general, an added adulterant can be identified by the presence of components not naturally found in beeswax and by differences in the values of selected components between pure and adulterated beeswax. The detection limits were determined using pure beeswax and beeswax adulterated with different amounts of added waxes (5%, 10%, 20% and 30%). Percentages higher than 1-5% of each adulterant can be detected in the mixtures. Paraffin waxes were confirmed in 33 of the 90 samples analysed, at concentrations between 5% and 30%. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wuytack, Tatiana; Verheyen, Kris; Wuyts, Karen; Kardel, Fatemeh; Adriaenssens, Sandy; Samson, Roeland
2010-12-01
In this study, we assess the potential of white willow (Salix alba L.) as bioindicator for monitoring of air quality. Therefore, shoot biomass, specific leaf area, stomatal density, stomatal pore surface, and stomatal resistance were assessed from leaves of stem cuttings. The stem cuttings were introduced in two regions in Belgium with a relatively high and a relatively low level of air pollution, i.e., Antwerp city and Zoersel, respectively. In each of these regions, nine sampling points were selected. At each sampling point, three stem cuttings of white willow were planted in potting soil. Shoot biomass and specific leaf area were not significantly different between Antwerp city and Zoersel. Microclimatic differences between the sampling points may have been more important to plant growth than differences in air quality. However, stomatal pore surface and stomatal resistance of white willow were significantly different between Zoersel and Antwerp city. Stomatal pore surface was 20% lower in Antwerp city due to a significant reduction in both stomatal length (-11%) and stomatal width (-14%). Stomatal resistance at the adaxial leaf surface was 17% higher in Antwerp city because of the reduction in stomatal pore surface. Based on these results, we conclude that stomatal characteristics of white willow are potentially useful indicators for air quality.
Hyperspectral imaging with laser-scanning sum-frequency generation microscopy
Hanninen, Adam; Shu, Ming Wai; Potma, Eric O.
2017-01-01
Vibrationally sensitive sum-frequency generation (SFG) microscopy is a chemically selective imaging technique sensitive to non-centrosymmetric molecular arrangements in biological samples. The routine use of SFG microscopy has been hampered by the difficulty of integrating the required mid-infrared excitation light into a conventional, laser-scanning nonlinear optical (NLO) microscope. In this work, we describe minor modifications to a regular laser-scanning microscope to accommodate SFG microscopy as an imaging modality. We achieve vibrationally sensitive SFG imaging of biological samples with sub-μm resolution at image acquisition rates of 1 frame/s, almost two orders of magnitude faster than attained with previous point-scanning SFG microscopes. Using the fast scanning capability, we demonstrate hyperspectral SFG imaging in the CH-stretching vibrational range and point out its use in the study of molecular orientation and arrangement in biologically relevant samples. We also show multimodal imaging by combining SFG microscopy with second-harmonic generation (SHG) and coherent anti-Stokes Raman scattering (CARS) on the same imaging platform. This development underlines that SFG microscopy is a unique modality with a spatial resolution and image acquisition time comparable to those of other NLO imaging techniques, making point-scanning SFG microscopy a valuable member of the NLO imaging family. PMID:28966861
Diversity of human small intestinal Streptococcus and Veillonella populations.
van den Bogert, Bartholomeus; Erkus, Oylum; Boekhorst, Jos; de Goffau, Marcus; Smid, Eddy J; Zoetendal, Erwin G; Kleerebezem, Michiel
2013-08-01
Molecular and cultivation approaches were employed to study the phylogenetic richness and temporal dynamics of Streptococcus and Veillonella populations in the small intestine. Microbial profiling of human small intestinal samples collected from four ileostomy subjects at four time points displayed abundant populations of Streptococcus spp. most affiliated with S. salivarius, S. thermophilus, and S. parasanguinis, as well as Veillonella spp. affiliated with V. atypica, V. parvula, V. dispar, and V. rogosae. Relative abundances varied per subject and time of sampling. Streptococcus and Veillonella isolates were cultured using selective media from ileostoma effluent samples collected at two time points from a single subject. The richness of the Streptococcus and Veillonella isolates was assessed at species and strain level by 16S rRNA gene sequencing and genetic fingerprinting, respectively. A total of 160 Streptococcus and 37 Veillonella isolates were obtained. Genetic fingerprinting differentiated seven Streptococcus lineages from ileostoma effluent, illustrating the strain richness within this ecosystem. The Veillonella isolates were represented by a single phylotype. Our study demonstrated that the small intestinal Streptococcus populations displayed considerable changes over time at the genetic lineage level because only representative strains of a single Streptococcus lineage could be cultivated from ileostoma effluent at both time points. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
The case for planetary sample return missions. 2. History of Mars.
Gooding, J L; Carr, M H; McKay, C P
1989-08-01
Principal science goals for exploration of Mars are to establish the chemical, isotopic, and physical state of Martian material, the nature of major surface-forming processes and their time scales, and the past and present biological potential of the planet. Many of those goals can only be met by detailed analyses of atmospheric gases and carefully selected samples of fresh rocks, weathered rocks, soils, sediments, and ices. The high-fidelity mineral separations, complex chemical treatments, and ultrasensitive instrument systems required for key measurements, as well as the need to adapt analytical strategies to unanticipated results, point to Earth-based laboratory analyses on returned Martian samples as the best means for meeting the stated objectives.
Saghir, Shakil A; Mendrala, Alan L; Bartels, Michael J; Day, Sue J; Hansen, Steve C; Sushynski, Jacob M; Bus, James S
2006-03-15
Strategies were developed for the estimation of systemically available daily doses of chemicals, diurnal variations in blood levels, and rough elimination rates in subchronic feeding/drinking water studies, utilizing a minimal number of blood samples. Systemic bioavailability of chemicals was determined by calculating area under the plasma concentration curve over 24 h (AUC-24 h) using complete sets of data (> or =5 data points) and also three, two, and one selected time points. The best predictions of AUC-24 h were made when three time points were used, corresponding to Cmax, a mid-morning sample, and C(min). These values were found to be 103 +/- 10% of the original AUC-24 h, with 13 out of 17 values ranging between 96 and 105% of the original. Calculation of AUC-24 h from two samples (Cmax and Cmin) or one mid-morning sample afforded slightly larger variations in the calculated AUC-24 h (69-136% of the actual). Following drinking water exposure, prediction of AUC-24 h using 3 time points (Cmax, mid-morning, and Cmin) was very close to actual values (80-100%) among mice, while values for rats were only 63% of the original due to less frequent drinking behavior of rats during the light cycle. Collection and analysis of 1-3 blood samples per dose may provide insight into dose-proportional or non-dose-proportional differences in systemic bioavailability, pointing towards saturation of absorption or elimination or some other phenomenon warranting further investigation. In addition, collection of the terminal blood samples from rats, which is usually conducted after 18 h of fasting, will be helpful in rough estimation of blood/plasma half-life of the compound. The amount of chemical(s) and/or metabolite(s) in excreta and their possible use as biomarkers in predicting the daily systemic exposure levels are also discussed. Determining these parameters in the early stages of testing will provide critical information to improve the appropriate design of other longer-term toxicity studies.
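The AUC-24 h calculation from sparse samples amounts to trapezoidal integration, closing the 24-h cycle under an assumed diurnal periodicity; a minimal sketch with hypothetical concentration values:

    import numpy as np

    def auc_24h(times_h, conc):
        # Trapezoidal AUC from sparse plasma samples, e.g. the three-point
        # design (Cmax, mid-morning, Cmin) discussed above.
        t = np.asarray(times_h, dtype=float)
        c = np.asarray(conc, dtype=float)
        auc = np.sum(0.5 * (c[1:] + c[:-1]) * (t[1:] - t[:-1]))
        # Close the 24-h cycle (assumed diurnal periodicity): carry the last
        # sample forward to the first sample of the next day.
        auc += 0.5 * (c[-1] + c[0]) * (24.0 - (t[-1] - t[0]))
        return auc

    # Hypothetical values: Cmax at 06:00, mid-morning at 10:00, Cmin at 18:00
    print(auc_24h([6, 10, 18], [4.2, 2.9, 1.1]))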
Spatial distribution of Legionella pneumophila MLVA-genotypes in a drinking water system.
Rodríguez-Martínez, Sarah; Sharaby, Yehonatan; Pecellín, Marina; Brettar, Ingrid; Höfle, Manfred; Halpern, Malka
2015-06-15
Bacteria of the genus Legionella cause water-based infections, resulting in severe pneumonia. To improve our knowledge about Legionella spp. ecology, its prevalence and its relationships with environmental factors were studied. Seasonal samples were taken from both water and biofilm at seven sampling points of a small drinking water distribution system in Israel. Representative isolates were obtained from each sample and identified to the species level. Legionella pneumophila was further determined to the serotype and genotype level. High resolution genotyping of L. pneumophila isolates was achieved by Multiple-Locus Variable number of tandem repeat Analysis (MLVA). Within the studied water system, Legionella plate counts were higher in summer and highly variable even between adjacent sampling points. Legionella was present in six out of the seven selected sampling points, with counts ranging from 1.0 × 10(1) to 5.8 × 10(3) cfu/l. Water counts were significantly higher in points where Legionella was present in biofilms. The main fraction of the isolated Legionella was L. pneumophila serogroup 1. Serogroup 3 and Legionella sainthelensis were also isolated. Legionella counts were positively correlated with heterotrophic plate counts at 37 °C and negatively correlated with chlorine. Five MLVA-genotypes of L. pneumophila were identified at different buildings of the sampled area. The presence of a specific genotype, "MLVA-genotype 4", consistently co-occurred with high Legionella counts and seemed to "trigger" high Legionella counts in cold water. Our hypothesis is that both the presence of L. pneumophila in biofilm and the presence of specific genotypes, may indicate and/or even lead to high Legionella concentration in water. This observation deserves further studies in a broad range of drinking water systems to assess its potential for general use in drinking water monitoring and management. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Professor Feng Run-Shen's essential experience in penetration needling method].
Feng, Mu-Lan
2009-04-01
Professor Feng Run-Shen has been engaged in medicine for more than 60 years. He pays attention to medical ethics and has superb medical skill. He energetically advocates the combination of acupuncture with medication and stresses a holistic view in the selection of acupoints and treatments, particularly the clinical application of point properties. Clinically, he is accomplished in penetration needling, in which one needle acts on two or more points, enlarging the range of needling sensation, so it has very good therapeutic effects on many diseases. In this paper, case examples of penetration needling from his clinical practice are summarized and introduced.
The colligative properties of fruit juices by photopyroelectric calorimetry
NASA Astrophysics Data System (ADS)
Frandas, A.; Surducan, V.; Nagy, G.; Bicanic, D.
1999-03-01
The photopyroelectric method was used to study the depression of the freezing point in juices prepared from selected apple and orange juice concentrates. By using models for real solutions, the effective molecular weight of the dissolved solids was obtained. The acid concentration in the fruit juice is reflected both in the equivalent molecular weight (by lowering it) and in the interaction coefficients b and C. Using the data for the molecular weight and the characteristic coefficients, prediction curves for the samples investigated can be used in practice. Freezing-point depression can also be used as an indicator of the degree of spoilage of fruit juices.
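For reference, in the ideal dilute limit the freezing-point depression relates to the effective molecular weight through the standard colligative relation (the real-solution models used in the paper add interaction corrections such as the coefficients b and C mentioned above):

    \Delta T_f = K_f \, m = K_f \, \frac{1000 \, w_s}{M_{eff} \, w_w}

where K_f is the cryoscopic constant of water (about 1.86 K kg mol^-1), m the molality, w_s and w_w the masses of dissolved solids and of water, and M_eff the effective molecular weight; fitting measured values of Delta T_f against concentration yields M_eff.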
Pesko, Michael F; Kenkel, Donald S; Wang, Hua; Hughes, Jenna M
2016-04-01
To estimate the effect of potential regulations of electronic nicotine delivery systems (ENDS) among adult smokers, including increasing taxes, reducing flavor availability and adding warning labels communicating various levels of risk. We performed a discrete choice experiment (DCE) among a national sample of 1200 adult smokers. We examined heterogeneity in policy responses by age, cigarette quitting interest and current ENDS use. Our experiment overlapped January 2015 by design, providing exogenous variation in cigarette quitting interest from New Year resolutions. KnowledgePanel, an online panel of recruited respondents. A total of 1200 adult smokers from the United States. Hypothetical purchase choice of cigarettes, nicotine replacement therapy and a disposable ENDS. Increasing ENDS prices from $3 to $6 was associated with a 13.6 percentage point reduction in ENDS selection (P < 0.001). Restricting flavor availability in ENDS to tobacco and menthol was associated with a 2.1 percentage point reduction in ENDS selection (P < 0.001). The proposed Food and Drug Administration (FDA) warning label was associated with a 1.1 percentage point reduction in ENDS selection (P < 0.05) and the MarkTen warning label with a 5.1 percentage point reduction (P < 0.001). We estimated an ENDS price elasticity of -1.8 (P < 0.001) among adult smokers. Statistically significant interaction terms (P < 0.001) imply that price responsiveness was higher among adult smokers 18-24 years of age, smokers who have vaped over the last month and smokers with above the median quitting interest. Young adult smokers were 3.7 percentage points more likely to choose ENDS when multiple flavors were available than older adults (P < 0.001). Young adult smokers and those with above the median cigarette quitting interest were also more likely to reduce cigarette selection and increase ENDS selection in January 2015 (P < 0.001), potentially in response to New Year's resolutions to quit smoking. Increased taxes, a proposed US Food and Drug Administration warning label for electronic nicotine delivery systems and a more severe warning label may discourage adult smokers from switching to electronic nicotine delivery systems. Reducing the availability of flavors may reduce ENDS use by young adult smokers. © 2015 Society for the Study of Addiction.
An overview of the thematic mapper geometric correction system
NASA Technical Reports Server (NTRS)
Beyer, E. P.
1983-01-01
Geometric accuracy specifications for LANDSAT 4 are reviewed and the processing concepts which form the basis of NASA's thematic mapper geometric correction system are summarized for both the flight and ground segments. The flight segment includes the thematic mapper instrument, attitude measurement devices, attitude control, and ephemeris processing. For geometric correction the ground segment uses mirror scan correction data, payload correction data, and control point information to determine where TM detector samples fall on output map projection systems. Then the raw imagery is reformatted and resampled to produce image samples on a selected output projection grid system.
The Relevance of Emotional Intelligence in Personnel Selection for High Emotional Labor Jobs
Hock, Michael; Schütz, Astrid
2016-01-01
Although a large number of studies have pointed to the potential of emotional intelligence (EI) in the context of personnel selection, research in real-life selection contexts is still scarce. The aim of the present study was to examine whether EI would predict Assessment Center (AC) ratings of job-relevant competencies in a sample of applicants for the position of a flight attendant. Applicants’ ability to regulate emotions predicted performance in group exercises. However, there were inconsistent effects of applicants’ ability to understand emotions: Whereas the ability to understand emotions had a positive effect on performance in interview and role play, the effect on performance in group exercises was negative. We suppose that the effect depends on task type and conclude that tests of emotional abilities should be used judiciously in personnel selection procedures. PMID:27124201
Floyd A. Johnson
1961-01-01
This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...
Supercritical Fluids: Nanotechnology and Select Emerging Applications
2006-01-01
power law functions with respect to the critical point parameters, see Seibert et al. (2001). This has a very important consequence that any results... catalyst support (silica-doped alumina) was prepared via the sol-gel approach using two drying methods, leading to xerogel and aerogel. The sol-gel... alumina samples doped with silicon sustain thermal treatment at 1200 °C, or more, for several hours. The active phase (palladium as catalyst) was...
Salmonella contamination risk points in broiler carcasses during slaughter line processing.
Rivera-Pérez, Walter; Barquero-Calvo, Elías; Zamora-Sanabria, Rebeca
2014-12-01
Salmonella is one of the foodborne pathogens most commonly associated with poultry products. The aim of this work was to identify and analyze key sampling points creating risk of Salmonella contamination in a chicken processing plant in Costa Rica and perform a salmonellosis risk analysis. Accordingly, the following examinations were performed: (i) qualitative testing (presence or absence of Salmonella), (ii) quantitative testing (Salmonella CFU counts), and (iii) salmonellosis risk analysis, assuming consumption of contaminated meat from the processing plant selected. Salmonella was isolated in 26% of the carcasses selected, indicating 60% positive in the flocks sampled. The highest Salmonella counts were observed after bleeding (6.1 log CFU per carcass), followed by a gradual decrease during the subsequent control steps. An increase in the percentage of contamination (10 to 40%) was observed during evisceration and spray washing (after evisceration), with Salmonella counts increasing from 3.9 to 5.1 log CFU per carcass. According to the prevalence of Salmonella-contaminated carcasses released to trade (20%), we estimated a risk of 272 cases of salmonellosis per year as a result of the consumption of contaminated chicken. Our study suggests that the processes of evisceration and spray washing represent a risk of Salmonella cross-contamination and/or recontamination in broilers during slaughter line processing.
Organic solutes in ground water at the Idaho National Engineering Laboratory
Leenheer, Jerry A.; Bagby, Jefferson C.
1982-01-01
In August 1980, the U.S. Geological Survey started a reconnaissance survey of organic solutes in drinking water sources, ground-water monitoring wells, perched water table monitoring wells, and in select waste streams at the Idaho National Engineering Laboratory (INEL). The survey was to be a two-phase program. In the first phase, 77 wells and 4 potential point sources were sampled for dissolved organic carbon (DOC). Four wells and several potential point sources of insecticides and herbicides were sampled for insecticides and herbicides. Fourteen wells and four potential organic sources were sampled for volatile and semivolatile organic compounds. The results of the DOC analyses indicate no high level (>20 mg/L DOC) organic contamination of ground water. The only detectable insecticide or herbicide was a DDT concentration of 10 parts per trillion (0.01 microgram per liter) in one observation well. The volatile and semivolatile analyses do not indicate the presence of hazardous organic contaminants in significant amounts (>10 micrograms per liter) in the samples taken. Due to the lack of any significant organic ground-water contamination in this reconnaissance survey, the second phase of the study, which was to follow up the first phase by additional sampling of any contaminated wells, was canceled.
Influenza virus drug resistance: a time-sampled population genetics perspective.
Foll, Matthieu; Poh, Yu-Ping; Renzette, Nicholas; Ferrer-Admetlla, Anna; Bank, Claudia; Shim, Hyunjin; Malaspinas, Anna-Sapfo; Ewing, Gregory; Liu, Ping; Wegmann, Daniel; Caffrey, Daniel R; Zeldovich, Konstantin B; Bolon, Daniel N; Wang, Jennifer P; Kowalik, Timothy F; Schiffer, Celia A; Finberg, Robert W; Jensen, Jeffrey D
2014-02-01
The challenge of distinguishing genetic drift from selection remains a central focus of population genetics. Time-sampled data may provide a powerful tool for distinguishing these processes, and we here propose approximate Bayesian, maximum likelihood, and analytical methods for the inference of demography and selection from time course data. Utilizing these novel statistical and computational tools, we evaluate whole-genome datasets of an influenza A H1N1 strain in the presence and absence of oseltamivir (an inhibitor of neuraminidase) collected at thirteen time points. Results reveal a striking consistency amongst the three estimation procedures developed, showing strongly increased selection pressure in the presence of drug treatment. Importantly, these approaches re-identify the known oseltamivir resistance site, successfully validating the approaches used. Enticingly, a number of previously unknown variants have also been identified as being positively selected. Results are interpreted in the light of Fisher's Geometric Model, allowing for a quantification of the increased distance to optimum exerted by the presence of drug, and theoretical predictions regarding the distribution of beneficial fitness effects of contending mutations are empirically tested. Further, given the fit to expectations of the Geometric Model, results suggest the ability to predict certain aspects of viral evolution in response to changing host environments and novel selective pressures.
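The forward model underlying such time-sampled inference can be illustrated with a minimal Wright-Fisher simulation; methods such as ABC compare observed allele-frequency trajectories against many simulated ones like these (the parameter values below are illustrative, not the paper's estimates):

    import numpy as np

    def wf_trajectory(p0, s, n_e, generations, rng):
        # Allele-frequency trajectory under Wright-Fisher drift with
        # selection coefficient s and effective population size n_e.
        p = p0
        traj = [p]
        for _ in range(generations):
            w = p * (1 + s) / (p * (1 + s) + (1 - p))  # deterministic selection
            p = rng.binomial(n_e, w) / n_e             # binomial sampling drift
            traj.append(p)
        return np.array(traj)

    rng = np.random.default_rng(1)
    # Compare trajectories with and without drug pressure (s values assumed)
    drug = wf_trajectory(0.01, 0.1, 1000, 50, rng)
    no_drug = wf_trajectory(0.01, 0.0, 1000, 50, rng)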
Black-pigmented anaerobic rods in closed periapical lesions.
Bogen, G; Slots, J
1999-05-01
This study determined the frequency of Porphyromonas endodontalis, Porphyromonas gingivalis, Prevotella intermedia and Prevotella nigrescens in 20 closed periapical lesions associated with symptomatic and asymptomatic refractory endodontic disease. To delineate possible oral sources of P. endodontalis, the presence of the organism was assessed in selected subgingival sites and saliva in the same study patients. Periapical samples were obtained by paper points during surgical endodontic procedures using methods designed to minimize contamination by non-endodontic microorganisms. Subgingival plaque samples were obtained by paper points from three periodontal pockets and from the pocket of the tooth associated with the closed periapical lesion. Unstimulated saliva was collected from the surface of the soft palate. Bacterial identification was performed using a species-specific polymerase chain reaction (PCR) detection method. P. endodontalis was not identified in any periapical lesion, even though subgingival samples from eight patients (40%) revealed the P. endodontalis-specific amplicon. P. gingivalis occurred in one periapical lesion that was associated with moderate pain. P. nigrescens and P. intermedia were not detected in any periapical lesion studied. Black-pigmented anaerobic rods appear to be infrequent inhabitants of the closed periapical lesion.
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume from a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions with the corrections from the plume strength are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
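The key property the method exploits is that downstream concentration is linear in source strength, so strength reduces to a one-dimensional least-squares rescaling while location is a shape-matching problem. The toy sketch below illustrates only this split on an invented 1-D Gaussian profile; the actual method builds inverse interpolation functions from two full numerical simulations and iterates predictor and corrector steps, none of which is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def plume(x, strength, x0, sigma=1.0):
    # Toy Gaussian plume profile, standing in for a numerical simulation.
    return strength * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

samples_x = np.array([3.0, 4.5, 6.0])               # downstream sampling points
measured = plume(samples_x, strength=2.5, x0=1.2)   # synthetic "measurements"

# Location step: concentration is linear in source strength, so the *shape*
# of the sampled profile is strength-independent; match normalized profiles.
def shape_mismatch(c):
    b = plume(samples_x, 1.0, c)
    return np.sum((b / np.linalg.norm(b) - measured / np.linalg.norm(measured)) ** 2)

x0 = minimize_scalar(shape_mismatch, bounds=(-5.0, 5.0), method="bounded").x

# Strength step: with the location fixed, linearity gives the strength by
# a one-dimensional least-squares rescaling.
basis = plume(samples_x, 1.0, x0)
q = measured @ basis / (basis @ basis)

print(f"estimated strength={q:.3f}, location={x0:.3f}")
```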
Direct sampling from N dimensions to N dimensions applied to porous media
NASA Astrophysics Data System (ADS)
Adler, Pierre; Nguyen, Thang; Coelho, Daniel; Robinet, Jean Charles; Wendling, Jacques
2014-05-01
The reconstruction of porous media starting from experimental data is still a very challenging problem in terms of random geometry, and a very attractive one because of its innumerable industrial applications. The development of Computed Microtomography (CMT) has not diminished the need for reconstruction methods, and the availability of three-dimensional data has considerably facilitated the reconstruction of porous media. In the past, several techniques were used, such as thresholded Gaussian fields [1], simulated annealing [2] and Boolean models where polydisperse and penetrable spheres are generated randomly (see [3] for a combination with correlation functions). Recently, [4] developed the Direct Sampling method (DSM) as an alternative to multiple-point simulations. The purpose of the present work is to develop DSM and to apply it to the reconstruction of porous media made of one or several minerals [5]. Application of this method only requires a sample of the medium to reproduce, called the Training Image (TI). The main feature of DSM can be summarized as follows. Suppose that n points (x1,…,xn) are already known in the Simulated Medium (SM) and that one wants to determine the value of an extra point x; the TI is searched in order to find a configuration (y1,…,yn) where these points have the same colors and relative positions as (x1,…,xn) in the SM; then, the value of the point y in the TI which is in the same relative position with respect to (y1,…,yn) as x is with respect to (x1,…,xn) is given to x in the SM. The algorithm and its main features are briefly described. Important advantages of DSM are that it can easily generate media with several phases, which may or may not be spatially periodic. The searching process - i.e. the selected points y in the TI and the corresponding determined points x in the SM - will be illustrated by some short movies. The properties of the resulting SMs (such as the phase probabilities and the correlation functions) will be qualitatively and quantitatively compared to those of the TI. The major numerical parameters that influence the results and the calculation time are the size of the TI, the radius of the selection window and the acceptance threshold. They are studied and recommendations are made for their choice. For instance, the size of the TI should be at least twice the largest correlation length found in it. Some features necessitate a special analysis, such as the number of isolated points of one phase in another phase, the influence of the choice of the initial points, the influence of a modified voxel in the course of the simulation, and the generation of phases with a small probability in the TI. For the real TIs that were analysed, the number of isolated points was always smaller than 0.5%; they can be suppressed with a very small influence on the statistical characteristics of the SM. The choice of the initial points has no consequences in a statistical sense. Finally, some initial tests show that the permeabilities of the simulated samples and of the TI are close. REFERENCES [1] Adler P.M., Jacquin C.G. & Quiblier J.A., Int. J. Multiphase Flow, 16 (1990), 691. [2] Hazlett R.D., Math. Geol. 29 (1997), 801. [3] Thovert J.-F., Adler P.M., Phys. Rev. E, 83 (2011), 031104. [4] Mariethoz G., Renard P. and Straubhaar J., Water Resour. Res., 46, 10.1029/2008WR007621 (2010). [5] Nguyen Kim T., Direct sampling applied to porous media. Ph.D. Thesis, University P. and M. Curie, Paris (2013).
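The pixel-copying loop at the heart of DSM, as summarized above, can be sketched in a few lines. The following is a minimal 1-D two-phase illustration with an exact-match acceptance criterion; the training image, neighborhood size, and scan limit are arbitrary choices rather than values from the study, and the real method operates on 3-D multi-mineral images with a distance threshold and a selection-window radius.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D two-phase training image (0 = pore, 1 = solid);
# the study uses 3-D CMT images of real media.
ti = (np.sin(np.arange(400) / 7.0) > 0).astype(int)

n_sm, n_neigh, n_tries = 200, 4, 300
sm = np.full(n_sm, -1)                      # -1 marks unsimulated points
path = rng.permutation(n_sm)                # random simulation path

for x in path:
    known = np.flatnonzero(sm >= 0)
    if known.size == 0:
        sm[x] = ti[rng.integers(ti.size)]   # seed the first point at random
        continue
    # Use the n nearest already-simulated points as the conditioning pattern.
    neigh = known[np.argsort(np.abs(known - x))[:n_neigh]]
    lags, vals = neigh - x, sm[neigh]
    # Scan random locations y in the TI for a matching configuration.
    for _ in range(n_tries):
        y = rng.integers(max(0, -lags.min()), ti.size - max(0, lags.max()))
        if np.array_equal(ti[y + lags], vals):   # exact match (threshold 0)
            sm[x] = ti[y]
            break
    else:
        sm[x] = ti[rng.integers(ti.size)]   # fall back if no match is found

print("simulated solid fraction:", sm.mean(), "TI solid fraction:", ti.mean())
```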
Subrandom methods for multidimensional nonuniform sampling.
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
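One common subrandom construction is the additive-recurrence (golden ratio) sequence, which is fully deterministic and therefore needs no seed. The sketch below maps such a sequence through the inverse CDF of an exponential weighting envelope onto a Nyquist grid; the envelope and decay constant are illustrative stand-ins, not the specific sequences or weighting the paper evaluates.

```python
import numpy as np

def subrandom_schedule(n_grid, n_samples, decay=2.0):
    """Pick points from an n_grid Nyquist grid using a deterministic
    additive-recurrence (golden ratio) sequence, weighted toward early
    evolution times by an assumed exponential envelope exp(-decay * t)."""
    phi = (np.sqrt(5) - 1) / 2                        # golden ratio conjugate
    u = np.mod(np.arange(1, n_samples + 1) * phi, 1.0)  # subrandom in [0, 1)
    # Inverse CDF of the (truncated) exponential density on [0, 1].
    t = -np.log1p(-u * (1 - np.exp(-decay))) / decay
    # Rounding collisions may leave slightly fewer than n_samples points.
    return np.unique(np.round(t * (n_grid - 1)).astype(int))

print(subrandom_schedule(128, 32))   # identical on every run: no seed needed
```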
Schultz, M.M.; Furlong, E.T.; Kolpin, D.W.; Werner, S.L.; Schoenfuss, H.L.; Barber, L.B.; Blazer, V.S.; Norris, D.O.; Vajda, A.M.
2010-01-01
Antidepressant pharmaceuticals are widely prescribed in the United States; release of municipal wastewater effluent is a primary route introducing them to aquatic environments, where little is known about their distribution and fate. Water, bed sediment, and brain tissue from native white suckers (Catostomus commersoni) were collected upstream and at points progressively downstream from outfalls discharging to two effluent-impacted streams, Boulder Creek (Colorado) and Fourmile Creek (Iowa). A liquid chromatography/tandem mass spectrometry method was used to quantify antidepressants, including fluoxetine, norfluoxetine (degradate), sertraline, norsertraline (degradate), paroxetine, citalopram, fluvoxamine, duloxetine, venlafaxine, and bupropion in all three sample matrices. Antidepressants were not present above the limit of quantitation in water samples upstream from the effluent outfalls but were present at points downstream at ng/L concentrations, even at the farthest downstream sampling site 8.4 km downstream from the outfall. The antidepressants with the highest measured concentrations in both streams were venlafaxine, bupropion, and citalopram and typically were observed at concentrations of at least an order of magnitude greater than the more commonly investigated antidepressants fluoxetine and sertraline. Concentrations of antidepressants in bed sediment were measured at ng/g levels; venlafaxine and fluoxetine were the predominant chemicals observed. Fluoxetine, sertraline, and their degradates were the principal antidepressants observed in fish brain tissue, typically at low ng/g concentrations. A qualitatively different antidepressant profile was observed in brain tissue compared to streamwater samples. This study documents that wastewater effluent can be a point source of antidepressants to stream ecosystems and that the qualitative composition of antidepressants in brain tissue from exposed fish differs substantially from the compositions observed in streamwater and sediment, suggesting selective uptake. © 2010 American Chemical Society.
Sample similarity analysis of angles of repose based on experimental results for DEM calibration
NASA Astrophysics Data System (ADS)
Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu
2017-06-01
As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of their irregular shape and varying particle size distribution, calculating the angle from a single measurement is neither straightforward nor decisive. In previous studies, only one section of the uneven slope is chosen in most cases, although standard methods for defining a representative section are scarce. Hence, we present an efficient and reliable method based on 3D scanning, which digitizes the surface of a heap and generates its point cloud. Two tangential lines of any selected section are then calculated through linear least-squares regression (LLSR), from which the left and right angles of repose of the pile are derived (a minimal sketch of this step follows below). Next, a number of sections are selected stochastically and the calculation is repeated to obtain a sample of angles, which is plotted in Cartesian coordinates as a scatter diagram. Different samples are then acquired through various selections of sections. By analyzing the similarities and differences of these samples, the reliability of the proposed method is verified. These initial results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, and will be compared with simulation results in future work.
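The LLSR step referenced above is the quantitative core of the method: fit a straight line to each flank of a heap cross-section and convert the slopes to angles. A minimal sketch on a synthetic noisy cross-section (all values are illustrative, standing in for a scanned point-cloud section):

```python
import numpy as np

# Synthetic cross-section of a heap: radial coordinate r vs. height z,
# with noise standing in for scanned point-cloud data (true slope ~31 deg).
rng = np.random.default_rng(1)
r = np.linspace(-1.0, 1.0, 200)
z = 0.6 * (1.0 - np.abs(r)) + rng.normal(0, 0.01, r.size)

left, right = r < 0, r > 0
# Linear least-squares regression (LLSR) on each flank.
slope_left = np.polyfit(r[left], z[left], 1)[0]
slope_right = np.polyfit(r[right], z[right], 1)[0]
angle_left = np.degrees(np.arctan(slope_left))     # rising left flank
angle_right = np.degrees(np.arctan(-slope_right))  # negate the falling flank

print(f"left {angle_left:.1f} deg, right {angle_right:.1f} deg")
```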
Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data
NASA Astrophysics Data System (ADS)
Mobli, Mehdi
2015-07-01
The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
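Jittered sampling, proposed above as the remedy, stratifies the sampling density so every schedule draws exactly one point per stratum; the seed then only jitters points within strata instead of deciding coverage. A minimal sketch under an assumed exponential signal envelope (the decay constant and grid sizes are illustrative, not the paper's settings):

```python
import numpy as np

def jittered_nus(n_grid, n_samples, t2=0.3, seed=0):
    """Jittered NUS sketch: stratify the inverse CDF of an assumed
    exponential envelope exp(-t/t2) into n_samples bins and draw one
    grid point per bin, so coverage barely depends on the seed."""
    rng = np.random.default_rng(seed)
    # One uniform draw per stratum: (k + U)/n for k = 0..n-1.
    u = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
    # Inverse CDF of the envelope truncated to [0, 1].
    c = 1 - np.exp(-1 / t2)
    t = -t2 * np.log1p(-u * c)
    return np.unique(np.round(t * (n_grid - 1)).astype(int))

# Two different seeds give schedules with very similar grid coverage.
print(jittered_nus(256, 64, seed=1))
print(jittered_nus(256, 64, seed=2))
```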
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm, on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
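The distinction between the two initial designs is easy to make concrete: both place exactly one point in each of n strata per dimension and differ only in whether the point sits at the interval midpoint or uniformly at random inside it. A minimal sketch (the subsequent space-filling optimization, e.g. under a maximin criterion, is not shown and is our assumed context):

```python
import numpy as np

def lhs(n, d, midpoint=False, seed=0):
    """Latin hypercube design: one point per stratum in every dimension.
    midpoint=True centres each point in its interval (midpoint LHS);
    False places it uniformly at random in the interval (random LHS)."""
    rng = np.random.default_rng(seed)
    offset = 0.5 if midpoint else rng.random((n, d))
    # Independent random permutation of the n strata for every dimension.
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + offset) / n

# Illustrative 5-point designs in 2 dimensions.
print(lhs(5, 2, midpoint=True))
print(lhs(5, 2, midpoint=False, seed=3))
```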
Sampling challenges in a study examining refugee resettlement.
Sulaiman-Hill, Cheryl Mr; Thompson, Sandra C
2011-03-15
As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95), 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation.
Chapter A6. Section 6.6. Alkalinity and Acid Neutralizing Capacity
Rounds, Stewart A.; Wilde, Franceska D.
2002-01-01
Alkalinity (determined on a filtered sample) and Acid Neutralizing Capacity (ANC) (determined on a whole-water sample) are measures of the ability of a water sample to neutralize strong acid. Alkalinity and ANC provide information on the suitability of water for uses such as irrigation, determining the efficiency of wastewater processes, determining the presence of contamination by anthropogenic wastes, and maintaining ecosystem health. In addition, alkalinity is used to gain insights on the chemical evolution of an aqueous system. This section of the National Field Manual (NFM) describes the USGS field protocols for alkalinity/ANC determination using either the inflection-point or Gran function plot methods, including calculation of carbonate species, and provides guidance on equipment selection.
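Of the two determination methods named above, the Gran function plot lends itself to a compact illustration: past the equivalence point the Gran function F = (V0 + V) * 10^(-pH) grows linearly with titrant volume, and extrapolating the linear tail back to F = 0 recovers the equivalence volume. The titration data, acid normality, and pH cutoff below are fabricated for illustration and do not reproduce the NFM protocol details:

```python
import numpy as np

def gran_alkalinity(v0_ml, acid_normality, v_ml, ph):
    """Gran function plot: F = (V0 + V) * 10**(-pH) is linear past the
    equivalence point; extrapolating F to zero gives the equivalence
    volume Ve, and ANC = Ve * C_acid / V0 (reported in ueq/L)."""
    v_ml, ph = np.asarray(v_ml), np.asarray(ph)
    f = (v0_ml + v_ml) * 10.0 ** (-ph)
    tail = ph < 4.0                      # assumed linear-tail criterion
    slope, intercept = np.polyfit(v_ml[tail], f[tail], 1)
    ve = -intercept / slope              # volume where F extrapolates to 0
    return ve * acid_normality / v0_ml * 1e6

# Synthetic titration of a 50 mL sample with 0.16 N acid (illustrative only).
v = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
ph = np.array([4.5, 3.9, 3.65, 3.5, 3.4, 3.3])
print(f"ANC ~ {gran_alkalinity(50.0, 0.16, v, ph):.0f} ueq/L")
```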
Sensor-triggered sampling to determine instantaneous airborne vapor exposure concentrations.
Smith, Philip A; Simmons, Michael K; Toone, Phillip
2018-06-01
It is difficult to measure transient airborne exposure peaks by means of integrated sampling for organic chemical vapors, even with very short-duration sampling. Selection of an appropriate time to measure an exposure peak through integrated sampling is problematic, and short-duration time-weighted average (TWA) values obtained with integrated sampling are not likely to accurately determine actual peak concentrations attained when concentrations fluctuate rapidly. Laboratory analysis for integrated exposure samples is preferred from a certainty standpoint over results derived in the field from a sensor, as a sensor user typically must overcome specificity issues and a number of potential interfering factors to obtain similarly reliable data. However, sensors are currently needed to measure intra-exposure period concentration variations (i.e., exposure peaks). In this article, the digitized signal from a photoionization detector (PID) sensor triggered collection of whole-air samples when toluene or trichloroethylene vapors attained pre-determined levels in a laboratory atmosphere generation system. Analysis by gas chromatography-mass spectrometry of whole-air samples (with both 37 and 80% relative humidity) collected using the triggering mechanism with rapidly increasing vapor concentrations showed good agreement with the triggering set point values. Whole-air samples (80% relative humidity) in canisters demonstrated acceptable 17-day storage recoveries, and acceptable precision and bias were obtained. The ability to determine exceedance of a ceiling or peak exposure standard by laboratory analysis of an instantaneously collected sample, and to simultaneously provide a calibration point to verify the correct operation of a sensor was demonstrated. This latter detail may increase the confidence in reliability of sensor data obtained across an entire exposure period.
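The triggering mechanism itself reduces to a threshold comparison on the digitized sensor signal. A sketch follows; the debounce guard (requiring several consecutive readings at or above the set point) is our own addition to suppress single-sample noise spikes, not a feature described in the article:

```python
import numpy as np

def trigger_sample(pid_volts, setpoint_volts, debounce=3):
    """Return the index at which the canister valve would fire: the first
    point where the digitized PID signal has stayed at or above the set
    point for `debounce` consecutive readings (debounce is assumed)."""
    run = 0
    for i, v in enumerate(pid_volts):
        run = run + 1 if v >= setpoint_volts else 0
        if run >= debounce:
            return i
    return None                          # set point never attained

# Synthetic rapidly rising vapor concentration with sensor noise.
signal = np.concatenate([np.full(50, 0.1), np.linspace(0.1, 2.0, 30)])
signal += np.random.default_rng(2).normal(0, 0.02, signal.size)
print("triggered at sample", trigger_sample(signal, setpoint_volts=1.0))
```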
Bagner, Daniel M; Sheinkopf, Stephen J; Miller-Loncar, Cynthia; LaGasse, Linda L; Lester, Barry M; Liu, Jing; Bauer, Charles R; Shankaran, Seetha; Bada, Henrietta; Das, Abhik
2009-03-01
To examine the relationship between early parenting stress and later child behavior in a high-risk sample and measure the effect of drug exposure on the relationship between parenting stress and child behavior. A subset of child-caregiver dyads (n=607) were selected from the Maternal Lifestyle Study (MLS), which is a large sample of children (n=1,388) with prenatal cocaine exposure and a comparison sample unexposed to cocaine. Of the 607 dyads, 221 were prenatally exposed to cocaine and 386 were unexposed to cocaine. Selection was based on the presence of a stable caregiver at 4 and 36 months with no evidence of change in caregiver between those time points. Parenting stress at 4 months significantly predicted child externalizing behavior at 36 months. These relations were unaffected by cocaine exposure suggesting the relationship between parenting stress and behavioral outcome exists for high-risk children regardless of drug exposure history. These results extend the findings of the relationship between parenting stress and child behavior to a sample of high-risk children with prenatal drug exposure. Implications for outcome and treatment are discussed.
Evaluation of the hydrometer for testing immunoglobulin G1 concentrations in Holstein colostrum.
Pritchett, L C; Gay, C C; Hancock, D D; Besser, T E
1994-06-01
Hydrometer measurements of globulin concentration and IgG1 concentrations measured by the radial immunodiffusion technique were compared for 915 samples of first-milking colostrum from Holstein cows. Least squares analysis of the relationship between hydrometer measurement and IgG1 concentration was improved by log transformation of IgG1 concentration and resulted in a significant linear relationship between hydrometer measurement and log10 IgG1 concentration (r2 = .469). At 50 mg of globulin/ml of colostrum, the recommended hydrometer cutoff point for colostrum selection, the sensitivity of the hydrometer as a test of IgG1 concentration in Holstein colostrum was 26%, and the negative predictive value was 67%. The negative predictive value and sensitivity of the hydrometer as a test of IgG1 in Holstein colostrum were improved, and the cost of misclassification of colostrum was minimized, when the cutoff point for colostrum selection was increased above the recommended 50 mg/ml.
Brichta-Harhay, Dayna M.; Kalchayanand, Norasak; Bosilevac, Joseph M.; Shackelford, Steven D.; Wheeler, Tommy L.; Koohmaraie, Mohammad
2012-01-01
The objective of this study was to characterize Salmonella enterica contamination on carcasses in two large U.S. commercial pork processing plants. The carcasses were sampled at three points, before scalding (prescald), after dehairing/polishing but before evisceration (preevisceration), and after chilling (chilled final). The overall prevalences of Salmonella on carcasses at these three sampling points, prescald, preevisceration, and after chilling, were 91.2%, 19.1%, and 3.7%, respectively. At one of the two plants, the prevalence of Salmonella was significantly higher (P < 0.01) for each of the carcass sampling points. The prevalences of carcasses with enumerable Salmonella at prescald, preevisceration, and after chilling were 37.7%, 4.8%, and 0.6%, respectively. A total of 294 prescald carcasses had Salmonella loads of >1.9 log CFU/100 cm2, but these carcasses were not equally distributed between the two plants, as 234 occurred at the plant with higher Salmonella prevalences. Forty-one serotypes were identified on prescald carcasses with Salmonella enterica serotypes Derby, Typhimurium, and Anatum predominating. S. enterica serotypes Typhimurium and London were the most common of the 24 serotypes isolated from preevisceration carcasses. The Salmonella serotypes Johannesburg and Typhimurium were the most frequently isolated serotypes of the 9 serotypes identified from chilled final carcasses. Antimicrobial susceptibility was determined for selected isolates from each carcass sampling point. Multiple drug resistance (MDR), defined as resistance to three or more classes of antimicrobial agents, was identified for 71.2%, 47.8%, and 77.5% of the tested isolates from prescald, preevisceration, and chilled final carcasses, respectively. The results of this study indicate that the interventions used by pork processing plants greatly reduce the prevalence of Salmonella on carcasses, but MDR Salmonella was isolated from 3.2% of the final carcasses sampled. PMID:22327585
Buczinski, S; Vandeweerd, J M
2016-09-01
Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be alternatively used to select good quality colostrum (sample with Brix ≥22%) or to discard poor quality colostrum (sample with Brix <18%). When sample results are between these 2 values, colostrum supplementation should be considered. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
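The posttest probabilities quoted above follow from Bayes' theorem applied through likelihood ratios. A point-estimate version is sketched below; the study itself propagated uncertainty by stochastic simulation, which is why the negative-result figure comes out near, but not exactly at, the reported 22.7%:

```python
def posttest_probability(se, sp, pretest, positive=True):
    """Posttest probability of good colostrum from test Se/Sp and the
    pretest (prevalence-based) probability, via likelihood ratios."""
    lr = se / (1 - sp) if positive else (1 - se) / sp
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Point estimates from the review: Brix >= 22% (Se 0.802, Sp 0.826),
# Brix >= 18% (Se 0.961, Sp 0.545), median prevalence 0.779.
print(posttest_probability(0.802, 0.826, 0.779, positive=True))   # ~0.94
print(posttest_probability(0.961, 0.545, 0.779, positive=False))  # ~0.20
```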
Multi-locus analysis of genomic time series data from experimental evolution.
Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S
2015-04-01
Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
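The inference machinery itself (the Gaussian process approximation with conditioned Wright-Fisher moments) is beyond a short sketch, but the data-generating process it targets is easy to simulate: a selected allele drifts in a finite population and is observed through binomial sequencing noise at the sampled generations. All parameter values in the sketch below are illustrative, not taken from the study:

```python
import numpy as np

def wright_fisher(n_e, s, p0, generations, sample_every, depth, seed=0):
    """Simulate one selected site under the Wright-Fisher model and return
    binomially sampled allele counts at the sampled generations, mimicking
    the sequencing of an E&R time course (all parameters illustrative)."""
    rng = np.random.default_rng(seed)
    p, obs = p0, []
    for g in range(generations + 1):
        if g % sample_every == 0:
            obs.append((g, rng.binomial(depth, p)))    # coverage-depth noise
        w = p * (1 + s) / (p * (1 + s) + (1 - p))      # selection step
        p = rng.binomial(2 * n_e, w) / (2 * n_e)       # drift step
    return obs

for gen, count in wright_fisher(n_e=300, s=0.1, p0=0.05,
                                generations=60, sample_every=10, depth=100):
    print(f"gen {gen:3d}: {count}/100 reads carry the allele")
```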
Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets
ERIC Educational Resources Information Center
Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin
2011-01-01
Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…
Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco
2013-07-01
In the frame of a process aiming at harmonizing National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In case of shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between exclusion of former sample trees and inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameter at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest that both a revision of FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample composition in terms of DBH (and likely age and structure) are desirable in Italy. As the adoption of a plot-based approach will keep a large share of the trees formerly selected, direct tree-by-tree comparison will remain possible, thus limiting the impact on the time series comparability. In addition, the plot-based design will favour the integration with NFI_2.
Meso-Scale Wetting of Paper Towels
NASA Astrophysics Data System (ADS)
Abedsoltan, Hossein
In this study, a new experimental approach is proposed to investigate the absorption properties of selected retail paper towels. The samples were drawn from two important manufacturing processes: conventional wet pressing (CWP), considered value products, and through-air drying (TAD), considered high-end or premium products. The tested liquids were water, decane, dodecane, and tetradecane, with total volumes in the microliter range. The method involves point-source injection of liquid at different volumetric flow rates in the nanoliter-per-second range. The local injection site was chosen arbitrarily on the sample surface. The absorption process was monitored and recorded as the liquid advanced, using two distinct imaging methods, infrared imaging and optical imaging. The microscopic images were analyzed to calculate the wetted regions during the absorption test, and absorption diagrams were generated. These absorption diagrams were dissected to illustrate the absorption phenomenon and the absorption properties of the samples. The local (regional) absorption rates were computed for Mardi Gras and Bounty Basic, the representative samples for CWP and TAD respectively, for comparison with the absorption capacity of these two samples. The absorption capacity was then chosen as an index factor to compare the absorption properties of all the tested paper towels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyai, K.; Oura, T.; Kawashima, M.
1978-11-01
A simple and reliable method of paired TSH assay was developed and used in screening for neonatal primary hypothyroidism. In this method, a paired assay is first done. Equal parts of the extracts of dried blood spots on filter paper (9 mm diameter) from two infants 4 to 7 days old are combined and assayed for TSH by double antibody RIA. If the value obtained is over the cut-off point, the extracts are assayed separately for TSH in a second assay to identify the abnormal sample. Two systems, A and B, with different cut-off points were tested. On the basis of reference blood samples (serum levels of TSH, 80 µU/ml in system A and 40 µU/ml in system B), the cut-off point was selected as follows: upper 5 (A) or 4 (B) percentile in the paired assay and values of reference blood samples in the second individual assay. Four cases (2 in A and 2 in B) of neonatal primary hypothyroidism were found among 25 infants (23 in A and 2 in B) who were recalled from a general population of 41,400 infants (24,200 in A and 17,200 in B) by 22,700 assays. This paired TSH assay system saves labor and expense for screening neonatal hypothyroidism.
Design and field results of a walk-through EDS
NASA Astrophysics Data System (ADS)
Wendel, Gregory J.; Bromberg, Edward E.; Durfee, Memorie K.; Curby, William A.
1997-01-01
A walk-through portal sampling module which incorporates active sampling has been developed. The module uses opposing wands that actively brush the subject's exterior clothing to disturb explosive traces. These traces are entrained in an air stream and transported to a high-speed GC-chemiluminescence explosives detection system. This combination provides automatic screening of passengers at rates of 10 per minute. The system exhibits sensitivity and selectivity that equal or better those of commercially available manual equipment. The system has been developed for deployment at border crossings, airports and other security screening points. Detailed results of laboratory tests and airport field trials are reviewed.
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with a sampling intensity of 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%; SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
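Stereological point counting rests on the Cavalieri principle: volume is estimated as (number of grid points hitting the structure) x (area associated with each point) x (spacing between sampled slices). A minimal sketch on synthetic masks follows, assuming an isotropic 1 mm pixel size; the grid step, slice gap, and masks are illustrative and do not reproduce the study's settings:

```python
import numpy as np

def point_count_volume(masks, grid_step_mm, slice_gap_mm):
    """Cavalieri estimator with point counting: overlay a square grid of
    points on every sampled slice, count hits inside the tissue mask, and
    estimate V = hits * (area per point) * (slice spacing). Assumes an
    isotropic 1 px = 1 mm image grid for simplicity."""
    area_per_point = grid_step_mm ** 2
    hits = 0
    for mask in masks:                       # one boolean mask per slice
        hits += mask[::int(grid_step_mm), ::int(grid_step_mm)].sum()
    return hits * area_per_point * slice_gap_mm

# Toy example: a circular "fat" region of radius 40 px on 10 sampled slices.
yy, xx = np.mgrid[:128, :128]
disk = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
vol = point_count_volume([disk] * 10, grid_step_mm=4, slice_gap_mm=8.0)
print(f"estimated {vol/1000:.1f} mL vs true {np.pi*40**2*10*8/1000:.1f} mL")
```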
Godden, S M; Royster, E; Timmerman, J; Rapnicki, P; Green, H
2017-08-01
Study objectives were to (1) describe the diagnostic test characteristics of an automated milk leukocyte differential (MLD) test and the California Mastitis Test (CMT) to identify intramammary infection (IMI) in early- (EL) and late-lactation (LL) quarters and cows when using 3 different approaches to define IMI from milk culture, and (2) describe the repeatability of MLD test results at both the quarter and cow level. Eighty-six EL and 90 LL Holstein cows were sampled from 3 Midwest herds. Quarter milk samples were collected for a cow-side CMT test, milk culture, and MLD testing. Quarter IMI status was defined by 3 methods: culture of a single milk sample, culture of duplicate samples with parallel interpretation, and culture of duplicate samples with serial interpretation. The MLD testing was completed in duplicate within 8 h of sample collection; MLD results (positive/negative) were reported at each possible threshold setting (1-18 for EL; 1-12 for LL) and CMT results (positive/negative) were reported at each possible cut-points (trace, ≥1, ≥2, or 3). We created 2 × 2 tables to compare MLD and CMT results to milk culture, at both the quarter and cow level, when using each of 3 different definitions of IMI as the referent test. Paired MLD test results were compared with evaluate repeatability. The MLD test showed excellent repeatability. The choice of definition of IMI from milk culture had minor effects on estimates of MLD and CMT test characteristics. For EL samples, when interpreting MLD and CMT results at the quarter level, and regardless of the referent test used, both tests had low sensitivity (MLD = 11.7-39.1%; CMT = 0-52.2%) but good to very good specificity (MLD = 82.1-95.2%; CMT = 68.1-100%), depending on the cut-point used. Sensitivity improved slightly if diagnosis was interpreted at the cow level (MLD = 25.6-56.4%; CMT = 0-72.2%), though specificity generally declined (MLD = 61.8-100%; CMT = 25.0-100%) depending on the cut-point used. For LL samples, when interpreted at the quarter level, both tests had variable sensitivity (MLD = 46.6-84.8%; CMT = 9.6-72.7%) and variable specificity (MLD = 59.2-79.8%; CMT = 52.5-97.3%), depending on the cut-point used. Test sensitivity improved if interpreted at the cow level (MLD = 59.6-86.4%; CMT = 19.1-86.4%), though specificity declined (MLD = 32.4-56.8%; CMT = 14.3-92.3%). Producers considering adopting either test for LL or EL screening programs will need to carefully consider the goals and priorities of the program (e.g., whether to prioritize test sensitivity or specificity) when deciding on the level of interpretation (quarter or cow) and when selecting the optimal cut-point for interpreting test results. Additional validation studies and large randomized field studies will be needed to evaluate the effect of adopting either test in selective dry cow therapy or fresh cow screening programs on udder health, antibiotic use, and economics. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Shrestha, Akina; Sharma, Subodh; Gerold, Jana; Erismann, Séverine; Sagar, Sanjay; Koju, Rajendra; Schindler, Christian; Odermatt, Peter; Utzinger, Jürg; Cissé, Guéladio
2017-01-18
This study assessed drinking water quality, sanitation, and hygiene (WASH) conditions among 708 schoolchildren and 562 households in Dolakha and Ramechhap districts of Nepal. Cross-sectional surveys were carried out in March and June 2015. A Delagua water quality testing kit was employed on 634 water samples obtained from 16 purposively selected schools, 40 community water sources, and 562 households to examine water quality. A flame atomic absorption spectrophotometer was used to test lead and arsenic content of the same samples. Additionally, a questionnaire survey was conducted to obtain WASH predictors. A total of 75% of school drinking water source samples and 76.9% point-of-use samples (water bottles) at schools, 39.5% water source samples in the community, and 27.4% point-of-use samples at household levels were contaminated with thermo-tolerant coliforms. The values of water samples for pH (6.8-7.6), free and total residual chlorine (0.1-0.5 mg/L), mean lead concentration (0.01 mg/L), and mean arsenic concentration (0.05 mg/L) were within national drinking water quality standards. The presence of domestic animals roaming inside schoolchildren's homes was significantly associated with drinking water contamination (adjusted odds ratio: 1.64; 95% confidence interval: 1.08-2.50; p = 0.02). Our findings call for an improvement of WASH conditions at the unit of school, households, and communities.
[Laser Raman spectral investigations of the carbon structure of LiFePO4/C cathode material].
Yang, Chao; Li, Yong-Mei; Zhao, Quan-Feng; Gan, Xiang-Kun; Yao, Yao-Chun
2013-10-01
In the present paper, laser Raman spectroscopy was used to study the carbon structure of LiFePO4/C cathode material. The samples were also characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), selected area electron diffraction (SAED) and resistivity testing. The results indicate that, compared with the sp2/sp3 peak area ratios, the I(D)/I(G) ratios are not only more uniform but also exhibit similar trends. However, both the I(D)/I(G) ratios and the sp2/sp3 peak area ratios differ among different points within the same sample. Compared with the samples using citric acid or sucrose alone as the carbon source, the sample synthesized with a mixed carbon source (citric acid and sucrose) exhibits higher I(D)/I(G) ratios and sp2/sp3 peak area ratios; moreover, the point-to-point differences in these ratios within the same sample are smaller than in the single-carbon-source samples. In the scanning and transmission electron microscopy images, an uneven distribution of the carbon coating can be observed on the primary and secondary particles, which may be the main reason for the non-uniform data within the same sample. This pronounced scatter will affect the routine use of Raman spectroscopy in such tests.
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
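Why a per-point random detector phase resolves the sign ambiguity can be seen in a toy calculation: with a single fixed phase, cosine detection cannot distinguish +f from -f, but once the (known) phase varies from point to point, only the correct sign reproduces the data. The sketch below illustrates the principle only, not the pulse-sequence implementation or the reconstruction used with sampling artifacts:

```python
import numpy as np

rng = np.random.default_rng(4)
true_freq = +3.0                                 # the sign we must recover
t = np.arange(32) / 32.0

# Fixed single-phase detection: cosine is even, so +f and -f are identical.
fixed = np.cos(2 * np.pi * true_freq * t)
print(np.allclose(fixed, np.cos(2 * np.pi * -true_freq * t)))   # True

# Random phase detection: one detector phase per point, chosen at random.
phases = rng.choice([0, np.pi / 2], size=t.size)
signal = np.cos(2 * np.pi * true_freq * t + phases)

# Only the correct sign hypothesis fits the randomly phased data.
for f in (+true_freq, -true_freq):
    resid = np.sum((signal - np.cos(2 * np.pi * f * t + phases)) ** 2)
    print(f"frequency {f:+.1f}: residual {resid:.3f}")
```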
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
NASA Technical Reports Server (NTRS)
Evans, Cindy; Todd, Nancy
2014-01-01
The Astromaterials Acquisition & Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. Today, the suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from Japan's Hayabusa mission, solar wind atoms collected during the Genesis mission, and space-exposed hardware from several missions. To support planetary science research on these samples, JSC's Astromaterials Curation Office hosts NASA's Astromaterials Curation digital repository and data access portal [http://curator.jsc.nasa.gov/], providing descriptions of the missions and collections, and critical information about each individual sample. Our office is designing and implementing several informatics initiatives to better serve the planetary research community. First, we are re-hosting the basic database framework by consolidating legacy databases for individual collections and providing a uniform access point for information (descriptions, imagery, classification) on all of our samples. Second, we continue to upgrade and host digital compendia that summarize and highlight published findings on the samples (e.g., lunar samples, meteorites from Mars). We host high resolution imagery of samples as it becomes available, including newly scanned images of historical prints from the Apollo missions. Finally we are creating plans to collect and provide new data, including 3D imagery, point cloud data, micro CT data, and external links to other data sets on selected samples. Together, these individual efforts will provide unprecedented digital access to NASA's Astromaterials, enabling preservation of the samples through more specific and targeted requests, and supporting new planetary science research and collaborations on the samples.
NASA Astrophysics Data System (ADS)
Mori, K.; Kanaya, G.
2016-02-01
Serious injuries occurred among residents who consumed fish and shellfish from Minamata Bay, which was polluted by high concentrations of methylmercury in the 1950s. Pollution has since fallen to a safe level owing to the pollution prevention project (dredging, etc.) carried out from 1977 to 1990. Since 2010 we have been researching the bioaccumulation of mercury in several fishes in Minamata Bay and surrounding areas. We selected several sampling points with different environmental conditions, species compositions, and food web patterns. To determine the feeding types of 60 fish species (600 samples) collected by gill net, we measured the mercury level of each sample, directly checked the food items in the gut, and distinguished carnivores, omnivores, herbivores, and detritivores. We also introduced stable isotope analysis to examine the food history and feeding habits of dominant fish. For about 300 individuals of 30 dominant fish species selected from the 600 samples, we measured the stable nitrogen and carbon isotope ratios (δ15N, δ13C) of each sample. From the gut contents, more than 80% of the fishes were carnivorous and showed different selectivities for food items such as fish and crustaceans. The stable isotope results showed that benthic fish tended to have a higher δ13C; benthic microalgae usually show a higher δ13C than planktonic microalgae, and this ratio is conserved through the food chain. In general, δ15N increases through the food chain, with +3 to +4 ‰ enrichment per trophic step. In these data, carnivorous fishes of benthic and pelagic types showed medium and high δ15N, respectively. Comparing the stable isotope ratios with the mercury concentrations, all of the high-mercury fishes belonged to benthic, carnivorous types. We consider the combined method of food web analysis and stable isotope analysis useful for understanding the mechanism of mercury bioaccumulation through the food web.
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
The shoulder line is a significant terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) Ground points are selected using a grid filter to remove most noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired over the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Owing to limited computing performance, the test area was divided into 60 blocks, and 13 blocks around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relates filter grid size to point density. The optimal grid size was determined from this relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
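A minimal sketch of steps i-iii follows, assuming the point cloud is an (N, 3) array and the DEM a 2-D array. The lowest-point-per-cell filter and the fixed slope threshold are simplifications (the paper uses Natural Breaks to split the slope map), so the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def grid_ground_filter(points, cell_size):
    """Keep the lowest point in each XY grid cell as a ground candidate
    (step i). points: (N, 3) array of x, y, z coordinates."""
    ij = np.floor(points[:, :2] / cell_size).astype(np.int64)
    keys = ij[:, 0] * 10**6 + ij[:, 1]        # flatten the 2-D cell index
    order = np.lexsort((points[:, 2], keys))  # within each cell, sort by z
    ks = keys[order]
    first = np.ones(len(ks), dtype=bool)
    first[1:] = ks[1:] != ks[:-1]             # lowest point per cell
    return points[order[first]]

def shoulder_candidates(dem, cell_size, slope_break_deg=25.0):
    """Steps ii-iii: two-class slope split of a DEM; cells adjoining the
    other class are shoulder-line candidates. The fixed threshold stands
    in for the Natural Breaks classification used in the paper."""
    dzdy, dzdx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    steep = slope > slope_break_deg
    boundary = np.zeros_like(steep)
    boundary[:-1, :] |= steep[:-1, :] != steep[1:, :]  # class change downward
    boundary[:, :-1] |= steep[:, :-1] != steep[:, 1:]  # class change rightward
    return boundary
```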
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
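The abstract's error model is easy to reproduce in miniature: draw points from class templates plus independent noise, cluster them, and count misassignments under the best cluster-to-class matching. The sketch below uses K-means on two hypothetical four-point profiles; all numeric values are illustrative, not from the toolbox.

```python
import numpy as np
from itertools import permutations
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# two "random processes": template expression profiles plus independent noise
means = np.array([[0.0, 2.0, 4.0, 2.0],    # hypothetical profile A
                  [3.0, 1.0, 0.0, 1.0]])   # hypothetical profile B
sigma, n_per_class = 1.0, 50
X = np.vstack([m + rng.normal(0.0, sigma, (n_per_class, means.shape[1]))
               for m in means])
truth = np.repeat([0, 1], n_per_class)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# clustering error: misassigned points under the best label permutation
error = min(np.mean(np.array(p)[labels] != truth)
            for p in permutations(range(2)))
print(f"clustering error: {error:.1%}")
```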
Matijasevich, Alicia; Munhoz, Tiago N; Tavares, Beatriz Franck; Barbosa, Ana Paula Pereira Neto; da Silva, Diego Mello; Abitante, Morgana Sonza; Dall'Agnol, Tatiane Abreu; Santos, Iná S
2014-10-08
Standardized questionnaires designed for the identification of depression are useful for monitoring individual as well as population mental health. The Edinburgh Postnatal Depression Scale (EPDS) was originally developed to assist primary care health professionals in detecting postnatal depression, but several authors recommend its use outside the postpartum period. In Brazil, the use of the EPDS for screening depression outside the postpartum period and among non-selected populations has not been validated. The present study aimed to assess the validity of the EPDS as a screening instrument for major depressive episode (MDE) among adults from the general population. This is a validation study that used population-based sampling to select the participants. The study was conducted in the city of Pelotas, Brazil. Households were randomly selected by two-stage cluster sampling with probability proportional to size. The EPDS was administered to 447 adults (≥20 years). Approximately 17 days later, participants were reinterviewed by psychiatrists and psychologists using a structured diagnostic interview (Mini International Neuropsychiatric Interview, MINI). We calculated the sensitivity and specificity of each cutoff point of the EPDS, and the values were plotted as a receiver operating characteristic curve. The best cutoff point for screening depression was ≥8, with 80.0% (64.4-90.9%) sensitivity and 87.0% (83.3-90.1%) specificity. Among women, the best cutoff point was also ≥8, with sensitivity and specificity of 84.4% (67.2-94.7%) and 81.3% (75.5-86.1%), respectively. Among men, the best cutoff point was ≥7 (75% sensitivity and 89% specificity). The EPDS was shown to be suitable for screening MDE among adults in the community.
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
Motamarri, Srinivas; Boccelli, Dominic L
2012-09-15
Users of recreational waters may be exposed to elevated pathogen levels from various point and non-point sources. Typical daily notifications rely on microbial analysis of indicator organisms (e.g., Escherichia coli) that require 18 or more hours to provide an adequate response. Modeling approaches, such as multivariate linear regression (MLR) and artificial neural networks (ANN), have been utilized to provide quick predictions of microbial concentrations for classification purposes, but generally suffer from high false negative rates. This study introduces the use of learning vector quantization (LVQ), a direct classification approach, for comparison with MLR and ANN approaches, and integrates input selection into model development with respect to primary and secondary water quality standards within the Charles River Basin (Massachusetts, USA) using meteorologic, hydrologic, and microbial explanatory variables. Integrating input selection into model development showed that discharge variables were the most important explanatory variables, while antecedent rainfall and time since previous events were also important. With respect to classification, all three models adequately represented the non-violated samples (>90%). The MLR approach had the highest false negative rates when classifying violated samples (41-62%, vs 13-43% for ANN and <16% for LVQ) when using five or more explanatory variables. The ANN performance was more similar to LVQ when a larger number of explanatory variables was utilized, but degraded toward MLR performance as explanatory variables were removed. Overall, the use of LVQ as a direct classifier provided the best overall classification ability with respect to violated/non-violated samples for both standards. Copyright © 2012 Elsevier Ltd. All rights reserved.
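LVQ classifies directly by moving labeled prototype vectors toward same-class samples and away from other-class samples. The sketch below is a minimal LVQ1 variant for illustration only; it is not the configuration used in the study, and all parameter values (prototype counts, learning rate, epochs) are assumptions.

```python
import numpy as np

def lvq1_train(X, y, n_protos=2, lr=0.05, epochs=50, seed=0):
    """Minimal LVQ1: each class gets prototype vectors that are pulled
    toward same-class samples and pushed away from other-class samples.
    Assumes X is standardized and y holds integer class labels."""
    rng = np.random.default_rng(seed)
    P, Py = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_protos, replace=False)
        P.append(X[idx]); Py.append(np.full(n_protos, c))
    P, Py = np.vstack(P).astype(float), np.concatenate(Py)
    for epoch in range(epochs):
        a = lr * (1 - epoch / epochs)          # decaying learning rate
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))   # winner prototype
            step = a * (X[i] - P[j])
            P[j] += step if Py[j] == y[i] else -step       # attract / repel
    return P, Py

def lvq1_predict(P, Py, X):
    """Assign each sample the label of its nearest prototype."""
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return Py[np.argmin(d, axis=1)]
```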
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2017-07-01
This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, and the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, but improves upon two of its significant drawbacks. First, the need to generate samples on the shape's surface is eliminated: instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by constructing a kernel density estimator and finding its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. Relative to sample consensus (SAC) cylinder fitting, the proposed voting framework improves detection completeness by up to 10 percentage points while maintaining the correctness rate.
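One intrinsic geometric property such pairwise votes can exploit is that every surface normal of a cylinder is perpendicular to its axis, so two non-parallel normals determine the axis direction via their cross product. The sketch below illustrates only this orientation-voting step with a KDE mode search (Scott's rule standing in for data-driven bandwidth selection); the paper's full framework votes for more parameters and handles pair selection and merging, so all names and thresholds here are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def axis_votes(normals, n_pairs=5000, seed=0):
    """Pairs of surface normals vote for the cylinder axis direction:
    on a cylinder both normals are perpendicular to the axis, so the
    axis is parallel to their cross product."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(normals), n_pairs)
    j = rng.integers(0, len(normals), n_pairs)
    v = np.cross(normals[i], normals[j])
    n = np.linalg.norm(v, axis=1)
    v = v[n > 1e-6] / n[n > 1e-6, None]   # drop near-parallel normal pairs
    v[v[:, 2] < 0] *= -1                  # fold to one hemisphere (+a = -a)
    return v

def axis_mode(votes):
    """Mode of the continuous vote density; gaussian_kde's Scott rule
    plays the role of automatic bandwidth selection."""
    kde = gaussian_kde(votes.T)           # no parameter-space discretization
    return votes[np.argmax(kde(votes.T))]
```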
NASA Astrophysics Data System (ADS)
Michot, Didier; Fouad, Youssef; Pascal, Pichelin; Viaud, Valérie; Soltani, Inès; Walter, Christian
2017-04-01
The aims of this study are: i) to assess the SOC content distribution according to the global soil map (GSM) project recommendations in a heterogeneous landscape; ii) to compare the prediction performance of digital soil mapping (DSM) and visible-near infrared (Vis-NIR) spectroscopy approaches. The 140 ha study area, located at Plancoët, surrounds the only mineral spring water source of Brittany (Western France). It is a hillock characterized by a heterogeneous landscape mosaic with different types of forest, permanent pastures and wetlands along a small coastal river. We acquired two independent datasets: j) 50 points selected using conditioned Latin hypercube sampling (cLHS); jj) 254 points corresponding to the GSM grid. Soil samples were collected in three layers (0-5, 20-25 and 40-50 cm) for both sampling strategies. SOC content was measured only in the cLHS soil samples, while Vis-NIR spectra were measured on all the collected samples. For the DSM approach, a machine-learning algorithm (Cubist) was applied to the cLHS calibration data to build rule-based models linking soil carbon content in the different layers with environmental covariates derived from a digital elevation model, geological variables, land use data and existing large-scale soil maps. For the spectroscopy approach, we used two calibration datasets: k) the local cLHS; kk) a subset selected from the regional spectral database of Brittany after a PCA with hierarchical clustering analysis, spiked with the local cLHS spectra. The PLS regression algorithm with "leave-one-out" cross validation was applied to both calibration datasets. SOC contents for the 3 layers of the GSM grid were predicted using the different approaches and compared with each other. Prediction performance was evaluated with the following parameters: R2, RMSE and RPD. Both approaches led to satisfactory predictions of SOC content, with an advantage for the spectral approach, particularly as regards the pertinence of the variation range.
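For the spectroscopy arm, the stated figures of merit are straightforward to reproduce. Below is a minimal sketch of leave-one-out cross-validated PLS reporting R2, RMSE and RPD with scikit-learn; the number of latent variables and the synthetic arrays are assumptions, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def pls_loo_report(X, y, n_components=10):
    """LOO-cross-validated PLS with the study's figures of merit:
    R2, RMSE, and RPD (SD of the reference values / RMSE)."""
    y_hat = cross_val_predict(PLSRegression(n_components=n_components),
                              X, y, cv=LeaveOneOut()).ravel()
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    rpd = y.std(ddof=1) / rmse
    return r2, rmse, rpd

# illustrative call on synthetic spectra (50 samples x 200 wavelengths)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=50)   # mock SOC content
print(pls_loo_report(X, y, n_components=5))
```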
Gaudreau, Patrick; Amiot, Catherine E; Vallerand, Robert J
2009-03-01
This study examined longitudinal trajectories of positive and negative affective states with a sample of 265 adolescent elite hockey players followed across 3 measurement points during the 1st 11 weeks of a season. Latent class growth modeling, incorporating a time-varying covariate and a series of predictors assessed at the onset of the season, was used to chart out distinct longitudinal trajectories of affective states. Results provided evidence for 3 trajectories of positive affect and 3 trajectories of negative affect. Two of these trajectories were deflected by team selection, a seasonal turning point occurring after the 1st measurement point. Furthermore, the trajectories of positive and negative affective states were predicted by theoretically driven predictors assessed at the start of the season (i.e., self-determination, need satisfaction, athletic identity, and school identity). These results contribute to a better understanding of the motivational, social, and identity-related processes associated with the distinct affective trajectories of athletes participating in elite sport during adolescence.
Measuring Generalized Trust: An Examination of Question Wording and the Number of Scale Points.
Lundmark, Sebastian; Gilljam, Mikael; Dahlberg, Stefan
2016-01-01
Survey institutes recently have changed their measurement of generalized trust from the standard dichotomous scale to an 11-point scale. Additionally, numerous survey institutes use different question wordings: where most rely on the standard, fully balanced question (asking if "most people can be trusted or that you need to be very careful in dealing with people"), some use minimally balanced questions, asking only if it is "possible to trust people." By using two survey-embedded experiments, one with 12,009 self-selected respondents and the other with a probability sample of 2,947 respondents, this study evaluates the generalized trust question in terms of question wording and number of scale points used. Results show that, contrary to the more commonly used standard question format (used, for example, by the American National Election Studies and the General Social Survey), generalized trust is best measured with a minimally balanced question wording accompanied with either a seven- or an 11-point scale.
NASA Astrophysics Data System (ADS)
Lazic, V.; De Ninno, A.
2017-11-01
Laser-induced plasma spectroscopy was applied to particles attached to a substrate, namely a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, detected here by a compact spectrometer array. Variations of the sample mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and initial image calibration for different sample loadings. Consequently, the spectral lines from particles show extreme intensity fluctuations from one sampling point to another, in some cases ranging between the detection threshold and detector saturation. In such conditions the common calibration approach based on averaged spectra, even when considering ratios of the element lines, i.e. concentrations, produces errors too large for measuring sample compositions. On the other hand, the intensities of an analytical line and a reference line in single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is weakly sensitive to fluctuations of the plasma temperature within the data set. Using the slopes to construct the calibration graphs significantly reduces the error bars, but does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying couples of transitions less sensitive to variations of the plasma temperature, which was achieved by simple theoretical simulations. Such a selection of the analytical lines minimizes the matrix effect and, together with the chosen calibration approach, allows the relative element concentrations to be measured even in highly unstable laser-induced plasmas.
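The slope-based calibration can be sketched in a few lines: for each standard, regress the single-shot analytical line intensity on the reference line intensity (through the origin, so shot-to-shot mass swings cancel to first order), then regress those slopes on the known concentration ratios. All numbers below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def line_ratio_slope(analyte, reference):
    """Zero-intercept least-squares slope of analyte vs. reference line
    intensity over the single-shot spectra of one sample."""
    x = np.asarray(reference, float)
    y = np.asarray(analyte, float)
    return np.sum(x * y) / np.sum(x * x)

rng = np.random.default_rng(1)
conc_ratio = np.array([0.1, 0.5, 1.0, 2.0])             # known standards (mock)
slopes = []
for c in conc_ratio:
    ref = rng.lognormal(mean=5.0, sigma=1.0, size=100)  # >1 decade mass swings
    ana = c * ref * rng.normal(1.0, 0.05, size=100)     # analyte tracks reference
    slopes.append(line_ratio_slope(ana, ref))

calib = np.polyfit(conc_ratio, np.array(slopes), 1)     # calibration graph
print(calib)   # slope ~1, intercept ~0 for this synthetic case
```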
AGES: THE AGN AND GALAXY EVOLUTION SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochanek, C. S.; Eisenstein, D. J.; Caldwell, N.
2012-05-01
The AGN and Galaxy Evolution Survey (AGES) is a redshift survey covering, in its standard fields, 7.7 deg² of the Boötes field of the NOAO Deep Wide-Field Survey. The final sample consists of 23,745 redshifts. There are well-defined galaxy samples in 10 bands (the B_W, R, I, J, K, IRAC 3.6, 4.5, 5.8, and 8.0 μm, and MIPS 24 μm bands) to a limiting magnitude of I < 20 mag for spectroscopy. For these galaxies, we obtained 18,163 redshifts from a sample of 35,200 galaxies, where random sparse sampling was used to define statistically complete sub-samples in all 10 photometric bands. The median galaxy redshift is 0.31, and 90% of the redshifts are in the range 0.085 < z < 0.66. Active galactic nuclei (AGNs) were selected as radio, X-ray, IRAC mid-IR, and MIPS 24 μm sources to fainter limiting magnitudes (I < 22.5 mag for point sources). Redshifts were obtained for 4764 quasars and galaxies with AGN signatures, with 2926, 1718, 605, 119, and 13 above redshifts of 0.5, 1, 2, 3, and 4, respectively. We detail all the AGES selection procedures and present the complete spectroscopic redshift catalogs and spectral energy distribution decompositions. Photometric redshift estimates are provided for all sources in the AGES samples.
The Assessment of Selectivity in Different Quadrupole-Orbitrap Mass Spectrometry Acquisition Modes
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Wegh, Robin S.; Meijer, Thijs; Nielen, Michel W. F.
2015-02-01
Selectivity of the confirmation of identity in liquid chromatography (tandem) mass spectrometry using Q-Orbitrap instrumentation was assessed for different acquisition modes, based on a representative experimental data set constructed from 108 samples, including six different matrix extracts and containing over 100 analytes each. Single-stage full scan, all-ion fragmentation, and product ion scanning were applied. By generating reconstructed ion chromatograms using a unit mass window in targeted MS2, selected reaction monitoring (SRM), as regularly applied on triple-quadrupole instruments, was mimicked. This facilitated the comparison of single-stage full scan, all-ion fragmentation, (mimicked) SRM, and product ion scanning applying a mass window down to 1 ppm. Single-factor analysis of variance was carried out on the variance (s2) of the mass error to determine which factors and interactions are significant parameters with respect to selectivity. We conclude that selectivity is related to the target compound (mainly the mass defect), the matrix, sample clean-up, concentration, and mass resolution. Selectivity of the different instrumental configurations was quantified by counting the number of interfering peaks observed in the chromatograms. We conclude that precursor ion selection contributes significantly to selectivity: monitoring a single product ion at high mass accuracy with a 1 Da precursor ion window proved equally or more selective than monitoring two transition products in mimicked SRM. In contrast, monitoring a single fragment in all-ion fragmentation mode results in significantly lower selectivity than mimicked SRM. After a thorough inter-laboratory evaluation study, the results of this study can be used for a critical reassessment of the current identification points system and contribute to the next generation of evidence-based and robust performance criteria in residue analysis and sports doping.
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei
2003-03-01
An automated method for selecting corresponding point candidates is developed. This method has three features: 1) the RIN-net is employed for corresponding point candidate selection; 2) multi-resolution analysis with the Haar wavelet transform is employed to improve selection accuracy and noise tolerance; 3) context information about corresponding point candidates is employed for screening the selected candidates. Here, 'RIN-net' denotes a back-propagation-trained, feed-forward, 3-layer artificial neural network that takes rotation invariants as input data. In our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Experiments were conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than the conventional method.
Ozçift, Akin
2011-05-01
Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification performance point of view. In general, multiclass datasets with an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample size, and it is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmia, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resulting accuracy of the classifier is 90.0%, which is quite a high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
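A minimal sketch of the two-part strategy follows, using scikit-learn. The simple correlation filter stands in for the paper's correlation-based feature selection (CFS), and balanced over-sampling stands in for its simple-random-sampling step; the feature count, tree count and all names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

def correlation_filter(X, y, k=25):
    """Keep the k features most correlated with the class label; a simple
    stand-in for correlation-based feature selection (CFS)."""
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(np.nan_to_num(r))[-k:]

def balanced_resample(X, y, seed=0):
    """Random over-sampling so every class reaches the majority-class size;
    a stand-in for the paper's resampling of the unbalanced classes."""
    classes, counts = np.unique(y, return_counts=True)
    n = counts.max()
    Xs, ys = [], []
    for c in classes:
        Xs.append(resample(X[y == c], replace=True, n_samples=n, random_state=seed))
        ys.append(np.full(n, c))
    return np.vstack(Xs), np.concatenate(ys)

# usage sketch on a feature matrix X (n_samples x n_features), integer labels y:
# cols = correlation_filter(X, y)
# Xb, yb = balanced_resample(X[:, cols], y)
# clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xb, yb)
```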
Application of the Optimized Summed Scored Attributes Method to Sex Estimation in Asian Crania.
Tallman, Sean D; Go, Matthew C
2018-05-01
The optimized summed scored attributes (OSSA) method was recently introduced and validated for nonmetric ancestry estimation between American Black and White individuals. The method proceeds by scoring, dichotomizing, and subsequently summing ordinal morphoscopic trait scores to maximize between-group differences. This study tests the applicability of the OSSA method to sex estimation using five cranial traits, given the methodological similarities between classifying sex and ancestry. A large sample of documented crania from Japan and Thailand (n = 744 males, 320 females) is used to develop a heuristically selected OSSA sectioning point of ≤1 separating males and females. This sectioning point is validated using a holdout sample of Japanese, Thai, and Filipino individuals (n = 178 males, 82 females). The results indicate a general correct classification rate of 82% using all five traits, and 81% when excluding the mental eminence. Designating an OSSA score of 2 as indeterminate is recommended. © 2017 American Academy of Forensic Sciences.
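The OSSA scoring itself is simple enough to show in a few lines. In the sketch below, the per-trait cut-offs, the direction of coding (1 taken as the male-typical state) and the example scores are hypothetical; only the sectioning logic (≤1 versus higher, with a score of 2 flagged indeterminate) follows the text above.

```python
def ossa_score(trait_scores, cut_offs):
    """Dichotomize each ordinal trait at its optimized cut-off (1 if the
    score exceeds the cut-off, else 0) and sum the results.
    Cut-offs here are hypothetical, not the published values."""
    return sum(int(s > c) for s, c in zip(trait_scores, cut_offs))

# five cranial traits; hypothetical ordinal scores and cut-offs
cuts = [2, 1, 2, 2, 1]
total = ossa_score([3, 1, 2, 3, 2], cuts)

# sectioning point <= 1, with a score of 2 treated as indeterminate
sex = "female" if total <= 1 else "indeterminate" if total == 2 else "male"
print(total, sex)
```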
Horká, Marie; Karásek, Pavel; Salplachta, Jiří; Růžička, Filip; Vykydalová, Marie; Kubesová, Anna; Dráb, Vladimír; Roth, Michal; Slais, Karel
2013-07-25
In this study, the combination of capillary isoelectric focusing (CIEF) in a tapered fused silica (FS) capillary with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is presented as an efficient approach for the unambiguous identification of probiotic bacteria in a real sample. For this purpose, bacteria within the genus Lactobacillus were selected as model bioanalytes and cow's milk was selected as the biological sample. CIEF analysis of both the cultivated bacteria and the bacteria in the milk was optimized, and the isoelectric points characterizing the examined bacteria were subsequently determined independently of the origin of the bacterial sample. The use of the tapered FS capillary significantly enhanced the separation capacity and efficiency of the CIEF analyses. In addition, the number of cells injected into the tapered FS capillary was quantified, and excellent linearity of the calibration curves was achieved, which enabled quantitative analysis of the bacteria by CIEF with UV detection. The minimum detectable number of bacterial cells was 2×10(6) mL(-1). Finally, cow's milk spiked with the selected bacterium was analyzed by CIEF in the tapered FS capillary; the focused and detected bacterial cells were collected from the capillary, deposited onto the cultivation medium, and subsequently identified using MALDI-TOF MS. Our results reveal that the proposed procedure can be advantageously used for the unambiguous identification of probiotic bacteria in a real sample. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kassem, Mohammed A.; Amin, Alaa S.
2015-02-01
A new method for estimating rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by cloud point extraction with the nonionic surfactant Triton X-114, which extracts the complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ≤1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in different complex materials such as synthetic alloy mixtures and environmental water samples.
VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS
NASA Technical Reports Server (NTRS)
Rizzi, S. A.
1994-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data is sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data is sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of disk space is recommended. The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech International, Inc.; Wellesley, MA) which includes a TMS320C30 DSP processor, 256Kb zero wait state SRAM, and a daughter board with 8Mb one wait state DRAM. Please contact COSMIC for additional information on required hardware and software. In order to compile the provided VPI source code, a Microsoft C version 6.0 compiler, a Texas Instruments' TMS320C30 assembly language compiler, and the Spirit-30 run time libraries are required. A math co-processor is highly recommended. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. VPI was developed in 1991-1992.
Perfluorochemicals and human semen quality: the LIFE study.
Louis, Germaine M Buck; Chen, Zhen; Schisterman, Enrique F; Kim, Sungduk; Sweeney, Anne M; Sundaram, Rajeshwari; Lynch, Courtney D; Gore-Langton, Robert E; Barr, Dana Boyd
2015-01-01
The relation between persistent environmental chemicals and semen quality is evolving, although limited data exist for men recruited from general populations. We examined the relation between perfluorinated chemicals (PFCs) and semen quality among 501 male partners of couples planning pregnancy. Using population-based sampling strategies, we recruited 501 couples discontinuing contraception from two U.S. geographic regions from 2005 through 2009. Baseline interviews and anthropometric assessments were conducted, followed by blood collection for the quantification of seven serum PFCs (perfluorosulfonates, perfluorocarboxylates, and perfluorosulfonamides) using tandem mass spectrometry. Men collected a baseline semen sample and another approximately 1 month later. Semen samples were shipped with freezer packs, and analyses were performed on the day after collection. We used linear regression to estimate the difference in each semen parameter associated with a one unit increase in the natural log-transformed PFC concentration after adjusting for confounders and modeling repeated semen samples. Sensitivity analyses included optimal Box-Cox transformation of semen quality end points. Six PFCs [2-(N-methyl-perfluorooctane sulfonamido) acetate (Me-PFOSA-AcOH), perfluorodecanoate (PFDeA), perfluorononanoate (PFNA), perfluorooctane sulfonamide (PFOSA), perfluorooctane sulfonate (PFOS), and perfluorooctanoic acid (PFOA)] were associated with 17 semen quality end points before Box-Cox transformation. PFOSA was associated with smaller sperm head area and perimeter, a lower percentage of DNA stainability, and a higher percentage of bicephalic and immature sperm. PFDeA, PFNA, PFOA, and PFOS were associated with a lower percentage of sperm with coiled tails. Select PFCs were associated with certain semen end points, with the most significant associations observed for PFOSA but with results in varying directions.
Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.
2016-01-01
Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628
Dlamini, S G; Mathuthu, M M; Tshivhase, V M
2016-03-01
High concentrations of radionuclides and toxic elements in gold mine tailings facilities present a health hazard to the environment and to people living near such areas. Soil and water samples from selected areas around the Princess Mine dump were collected. Soil sampling was done at the surface (15 cm) and also 100 cm below the surface. Water samples were taken from near the dump, mid-stream, and the flowing part of the stream (drainage pipe) passing through Roodepoort from the mine dump. Soil samples were analyzed by gamma-ray spectroscopy using an HPGe detector to determine the activity concentrations of (238)U, (232)Th and (40)K from the activities of the daughter nuclides in the respective decay chains. The average activity concentrations of uranium and thorium in soil were calculated to be 129 ± 36.1 Bq/kg and 18.1 ± 4.01 Bq/kg, respectively. Water samples were analyzed using an Inductively Coupled Plasma Mass Spectrometer. Transfer factors for uranium and thorium from soil to water (at point A, closest to the dump) were calculated to be 0.494 and 0.039, respectively. At point Z2, which is furthest from the dump, they were calculated to be 0.121 and 0.004, respectively. These transfer factors indicate that there is less translocation of the radionuclides as the water flows. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Selection of RR Lyrae Stars Using POSS and SDSS
NASA Astrophysics Data System (ADS)
Fraser, Oliver J.; Barton, J. R.; Oldfield, B. J.; Biesiadzinski, T. P.; Horning, D. A.; Baerny, J. K.; Kiuchi, F.; Krogsrud, D.; Longhurst, D. S.; McCommas, L. P.; Scheidt, J. A.; Covarrubias, R.; Covey, K.; Laws, C.; Sesar, B.; Ivezic, Z.
2006-12-01
We test a method for identifying candidate RR Lyrae stars based on a comparison of POSS and SDSS photometry (Sesar et al. 2005). Our candidate stars range in SDSS g magnitude from 14.4-16, corresponding to distances of 6-12 kpc. Follow-up photometry obtained at Manastash Ridge Observatory typically includes 30-40 points per light curve. We find that at least two thirds of our sample of 23 objects are clearly variable, with light curves consistent with RR Lyrae. Candidate RR Lyrae were selected from stars that had brightened by at least 0.3 magnitudes between POSS and SDSS, and which had SDSS magnitudes and colors consistent with the cuts in Ivezic et al. 2004.
Overdensities of SMGs around WISE-selected, ultraluminous, high-redshift AGNs
NASA Astrophysics Data System (ADS)
Jones, Suzy F.; Blain, Andrew W.; Assef, Roberto J.; Eisenhardt, Peter; Lonsdale, Carol; Condon, James; Farrah, Duncan; Tsai, Chao-Wei; Bridge, Carrie; Wu, Jingwen; Wright, Edward L.; Jarrett, Tom
2017-08-01
We investigate extremely luminous dusty galaxies in the environments around Wide-field Infrared Survey Explorer (WISE)-selected hot dust-obscured galaxies (Hot DOGs) and WISE/radio-selected active galactic nuclei (AGNs) at average redshifts of z = 2.7 and 1.7, respectively. Previous observations have detected overdensities of companion submillimetre-selected sources around 10 Hot DOGs and 30 WISE/radio AGNs, with overdensities of ˜2-3 and ˜5-6, respectively. We find the space densities in both samples to be overdense compared to normal star-forming galaxies and submillimetre galaxies (SMGs) in the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). Both samples of companion sources have mid-infrared (mid-IR) colours and mid-IR to submm ratios consistent with SMGs. The brighter population around WISE/radio AGNs could be responsible for the higher overdensity reported. We also find that the star formation rate densities are higher than in the field, but consistent with clusters of dusty galaxies. WISE-selected AGNs appear to be good signposts for protoclusters at high redshift on arcmin scales. The results reported here provide an upper limit to the strength of angular clustering using the two-point correlation function. Monte Carlo simulations show no angular correlation, which could indicate protoclusters on scales larger than the SCUBA-2 1.5-arcmin scale maps.
ERIC Educational Resources Information Center
Al-Madi, Bayan
2013-01-01
The purpose of this study is to identify the level of practicing academic freedom by the faculty members of Al al-Bayt University. The study population included all the faculty members (297) of Al al-Bayt University, during the academic year, 2010/2011. The study sample was randomly selected and included 250 faculty members. To achieve the aims of…
Release of (and lessons learned from mining) a pioneering large toxicogenomics database.
Sandhu, Komal S; Veeramachaneni, Vamsi; Yao, Xiang; Nie, Alex; Lord, Peter; Amaratunga, Dhammika; McMillian, Michael K; Verheyen, Geert R
2015-07-01
We release the Janssen Toxicogenomics database. This rat liver gene-expression database was generated using Codelink microarrays and has been used over the past years within Janssen to derive signatures for multiple end points and to classify proprietary compounds. The release consists of gene-expression responses to 124 compounds, selected to give broad coverage of liver-active compounds. A selection of the compounds was also analyzed on Affymetrix microarrays. The release includes the results of an in-house reannotation pipeline to Entrez gene annotations, which classifies probes into different confidence classes. High-confidence, unambiguously annotated probes were used to create gene-level data, which served as the starting point for cross-platform comparisons. Connectivity-map-based similarity methods show excellent agreement between Codelink and Affymetrix runs of the same samples. We also compared our dataset with the Japanese Toxicogenomics Project and observed reasonable agreement, especially for compounds with stronger gene signatures. We describe an R package containing the gene-level data and show how it can be used for expression-based similarity searches. Comparing the same biological samples run on the Affymetrix and Codelink platforms, good correspondence is observed using connectivity mapping approaches. As expected, this correspondence is smaller when the data are compared with an independent dataset such as TG-GATEs. We hope that this collection of gene-expression profiles will be incorporated into the toxicogenomics pipelines of users.
Gill, C O; McGinnis, J C; Bryant, J
1998-07-21
The microbiological effects on the product of the series of operations for skinning the hindquarters of beef carcasses at three packing plants were assessed. Samples were obtained at each plant from randomly selected carcasses by swabbing specified sites related to the opening cuts, rump skinning or flank skinning operations, randomly selected sites along the lines of the opening cuts, or randomly selected sites on the skinned hindquarters of carcasses. A set of 25 samples of each type was collected at each plant, with a single sample collected from each selected carcass. Aerobic counts, coliforms and Escherichia coli were enumerated in each sample, and a log mean value was estimated for each set of 25 counts on the assumption that the counts follow a log-normal distribution. The data indicated that the hindquarters skinning operations at plant A were hygienically inferior to those at the other two plants, with mean numbers of coliforms and E. coli about two orders of magnitude greater, and aerobic counts an order of magnitude greater, on the skinned hindquarters of carcasses from plant A than on those from plants B or C. The data further indicated that the operation for cutting open the skin at plant C was hygienically superior to the equivalent operation at plant B, but that the operations for skinning the rump and flank at plant B were hygienically superior to the equivalent operations at plant C. The findings suggest that objective assessment of the microbiological effects of beef carcass dressing processes will be required to ensure that Hazard Analysis Critical Control Point (HACCP) and Quality Management Systems are operated so as to control the microbiological condition of carcasses.
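A log mean under an assumed log-normal distribution is simply the mean of the log10-transformed counts. The sketch below shows the estimate; the CFU values are invented for illustration, and the back-correction to an arithmetic-scale mean is noted only as an aside.

```python
import numpy as np

def log_mean_count(counts):
    """Log mean (and log SD) of microbial counts assuming log-normality:
    the mean of log10-transformed counts, as used for each set of 25 swabs."""
    logs = np.log10(np.asarray(counts, float))
    return logs.mean(), logs.std(ddof=1)

# invented CFU counts for one set of swab samples
counts = [120, 3400, 560, 89, 15000, 240, 780, 1900]
mu, sd = log_mean_count(counts)
print(f"log mean = {mu:.2f}, log SD = {sd:.2f}")
# aside: the arithmetic-scale mean of a log-normal is 10**(mu + 1.1513*sd**2)
```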
Methodological issues with adaptation of clinical trial design.
Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T
2006-01-01
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, even though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other using the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one of the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common and difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practicality. Changing the originally intended hypothesis for testing with the internal data generates great concern among clinical trial researchers.
Gutnick, Damara; Siegel, Carole; Laska, Eugene; Wanderling, Joseph; Wagner, Ellen Cogen; Haugland, Gary; Conlon, Mary K
We examined whether the cut-point of 10 for the Patient Health Questionnaire-9 (PHQ-9) depression screen used in primary care populations is equally valid for Mexicans (M), Ecuadorians (E), Puerto Ricans (PR) and non-Hispanic whites (W) from inner-city hospital-based primary care clinics, and whether stressful life events elevate scores and the probability of major depressive disorder (MDD). Over 18 months, a sample of persons from hospital clinics with a positive initial PHQ-2 and a subsequent PHQ-9 were administered a stressful life event questionnaire and a structured clinical interview to establish an MDD diagnosis, with oversampling of those scoring between 8 and 12 (n=261: 75 E, 71 M, 51 PR, 64 W). For analysis, the sample was weighted using chart review (n=368) to represent a typical clinic population. Receiver operating characteristic (ROC) analysis selected cut-points maximizing sensitivity (Sn) plus specificity (Sp). The optimal cut-point for all groups was 13, with corresponding Sn and Sp estimates of E=(Sn 73%, Sp 71%), M=(76%, 81%), PR=(81%, 63%) and W=(80%, 74%). Stressful life events affected both screen scores and MDD diagnosis. Elevating the PHQ-9 cut-point for inner-city Latinos as well as whites is suggested, to avoid high false positive rates that lead to improper treatment with clinical and economic consequences. Copyright © 2016 Elsevier Inc. All rights reserved.
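Selecting the cut-point that maximizes Sn + Sp is equivalent to maximizing the Youden index on the ROC curve. A minimal sketch with scikit-learn follows; the score and diagnosis arrays are placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_cut_point(scores, mdd):
    """Cut-point maximizing sensitivity + specificity (Youden's J).
    scores: PHQ-9 totals; mdd: 1 if interview-confirmed MDD, else 0."""
    fpr, tpr, thresholds = roc_curve(mdd, scores)
    j = tpr - fpr                      # equals Sn + Sp - 1
    k = int(np.argmax(j))
    return thresholds[k], tpr[k], 1.0 - fpr[k]

# placeholder data: 200 screened patients with made-up scores/diagnoses
rng = np.random.default_rng(3)
mdd = rng.integers(0, 2, 200)
scores = np.clip(rng.normal(8 + 5 * mdd, 3), 0, 27).round()
cut, sn, sp = best_cut_point(scores, mdd)
print(f"cut-point {cut:.0f}: Sn {sn:.0%}, Sp {sp:.0%}")
```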
NASA Astrophysics Data System (ADS)
Pourmajidian, Maedeh; McDermid, Joseph R.
2018-03-01
The present study investigates the selective oxidation of a 0.1C-6Mn-2Si medium-Mn advanced high-strength steel during austenitization annealing heat treatments as a function of process atmosphere oxygen partial pressure and annealing time. It was determined that the surface oxide growth kinetics followed a parabolic rate law, with the minimum rate belonging to the lowest oxygen partial pressure atmosphere at a dew point of 223 K (-50 °C). The chemistry of the surface and subsurface oxides was studied using STEM + EELS on the sample cross sections, and it was found that the surface oxides formed under the 223 K (-50 °C) dew-point atmosphere consisted of a layered configuration of SiO2, MnSiO3, and MnO, while in the case of the higher pO2 process atmospheres, only MnO was detected at the surface. Consistent with the Wagner calculations, it was shown that the transition to internal oxidation for Mn occurred under the 243 K (-30 °C) and 278 K (+5 °C) dew-point atmospheres. However, the predictions of the external-to-internal oxidation transition for Si using the Wagner model did not correlate well with the experimental findings, nor did the predictions of the Mataigne et al. model for multi-element alloys. Investigations of the internal oxide network at the grain boundaries revealed a multilayer oxide structure composed of amorphous SiO2 and crystalline MnSiO3 at the oxide core and outer shell, respectively. A mechanism for the formation of the observed oxide morphologies, based on kinetic and thermodynamic factors, is proposed. It is expected that only the fine, nodule-like MnO oxides formed on the surface of the samples annealed under the 278 K (+5 °C) dew-point process atmosphere for 60 and 120 seconds are sufficiently thin and of the desired dispersed morphology to promote reactive wetting by the molten galvanizing bath.
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.
Bieri, Stefan; Ilias, Yara; Bicchi, Carlo; Veuthey, Jean-Luc; Christen, Philippe
2006-04-21
An effective combination of focused microwave-assisted extraction (FMAE) with solid-phase microextraction (SPME) prior to gas chromatography (GC) is described for the selective extraction and quantitative analysis of cocaine from coca leaves (Erythroxylum coca). This approach required switching from an organic extraction solvent to an aqueous medium more compatible with SPME liquid sampling. SPME was performed in the direct immersion mode with a universal 100 microm polydimethylsiloxane (PDMS) coated fibre. Parameters influencing this extraction step, such as solution pH, sampling time and temperature are discussed. Furthermore, the overall extraction process takes into account the stability of cocaine in alkaline aqueous solutions at different temperatures. Cocaine degradation rate was determined by capillary electrophoresis using the short end injection procedure. In the selected extraction conditions, less than 5% of cocaine was degraded after 60 min. From a qualitative point of view, a significant gain in selectivity was obtained with the incorporation of SPME in the extraction procedure. As a consequence of SPME clean-up, shorter columns could be used and analysis time was reduced to 6 min compared to 35 min with conventional GC. Quantitative results led to a cocaine content of 0.70 +/- 0.04% in dry leaves (RSD <5%) which agreed with previous investigations.
Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus
2018-07-12
The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T² and/or Q-Residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T² and Q-Residuals when used in combination with the proposed PH test. Furthermore, we found that active selection of samples by active learning (AL) used for subsequent model adaptation is advantageous compared to passive (random) selection in case a drift leads to persistent prediction bias, allowing more rapid adaptation at lower reference measurement rates. Fully unsupervised adaptation using FLEXFIS-PLS could improve predictive accuracy significantly for light drifts but was not able to fully compensate for prediction bias in case of significant lack of fit w.r.t. the latent variable space. Copyright © 2018 Elsevier B.V. All rights reserved.
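The drift-detection core can be sketched compactly: feed the stream of committee-disagreement values (the variance of the PLS-ensemble predictions for each sample) into a faded Page-Hinkley test and flag a drift when the faded cumulative deviation exceeds its running minimum by a threshold. The fading is one common way to down-weight old evidence, and the fading factor, tolerance and threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def committee_disagreement(preds):
    """Variance across an ensemble's predictions for one sample;
    preds is a 1-D array with one prediction per PLS committee member."""
    return float(np.var(preds))

def faded_page_hinkley(stream, delta=0.005, lam=0.1, fade=0.99):
    """Faded Page-Hinkley test for an upward change in the stream mean.
    fade < 1 down-weights old evidence; delta is the tolerated deviation
    and lam the alarm threshold (all three values are assumptions)."""
    mean, m, m_min, n = 0.0, 0.0, np.inf, 0
    for t, x in enumerate(stream):
        n += 1
        mean += (x - mean) / n              # running mean of the stream
        m = fade * m + (x - mean - delta)   # faded cumulative deviation
        m_min = min(m_min, m)
        if m - m_min > lam:
            return t                        # drift flagged at sample index t
    return None                             # no drift detected
```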
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J.; Mohr, J.; Saro, A.
2015-02-25
We use microwave observations from the South Pole Telescope (SPT) to examine the Sunyaev–Zel'dovich effect (SZE) signatures of a sample of 46 X-ray selected groups and clusters drawn from ~6 deg² of the XMM–Newton Blanco Cosmology Survey. These systems extend to redshift z = 1.02 and probe the SZE signal to the lowest X-ray luminosities (≥10⁴² erg s⁻¹) yet; these sample characteristics make this analysis complementary to previous studies. We develop an analysis tool, using X-ray luminosity as a mass proxy, to extract selection-bias-corrected constraints on the SZE significance and Y_500 mass relations. The former is in good agreement with an extrapolation of the relation obtained from high-mass clusters. However, the latter, at low masses, while in good agreement with the extrapolation from the high-mass SPT clusters, is in tension at 2.8σ with the Planck constraints, indicating that the low-mass systems exhibit lower SZE signatures in the SPT data. We also present an analysis of potential sources of contamination. For the radio galaxy point source population, we find that 18 of our systems have 843 MHz Sydney University Molonglo Sky Survey sources within 2 arcmin of the X-ray centre, and three of these are also detected at significance >4 by SPT. Of these three, two are associated with the group brightest cluster galaxies, and the third is likely an unassociated quasar candidate. We examine the impact of these point sources on our SZE scaling relation analyses and find no evidence of biases. We also examine the impact of dusty galaxies using constraints from the 220 GHz data. The stacked sample provides 2.8σ significant evidence of dusty galaxy flux, which would correspond to an average underestimate of the SPT Y_500 signal of (17 ± 9) per cent in this sample of low-mass systems. Finally, we explore the impact of future data from SPTpol and XMM-XXL, showing that they will lead to a factor of 4 to 5 tighter constraints on these SZE mass–observable relations.
Nishikawa, Shingo; Kimura, Hideharu; Koba, Hayato; Yoneda, Taro; Watanabe, Satoshi; Sakai, Tamami; Hara, Johsuke; Sone, Takashi; Kasahara, Kazuo; Nakao, Shinji
2018-03-01
The epidermal growth factor receptor (EGFR) T790M mutation is associated with resistance to EGFR tyrosine kinase inhibitors (EGFR-TKIs) in non-small cell lung cancer (NSCLC). However, tissues for the genotyping of the EGFR T790M mutation can be difficult to obtain in a clinical setting. The aims of this study were to evaluate a blood-based, non-invasive approach to detecting the EGFR T790M mutation in advanced NSCLC patients using the PointMan™ EGFR DNA enrichment kit, which is a novel method for the selective amplification of specific genotype sequences. Blood samples were collected from NSCLC patients who had activating EGFR mutations and who were resistant to EGFR-TKI treatment. Using cell-free DNA (cfDNA) from plasma, EGFR T790M mutations were amplified using the PointMan™ enrichment kit, and all the reaction products were confirmed using direct sequencing. The concentrations of plasma DNA were then determined using quantitative real-time PCR. Nineteen patients were enrolled, and 12 patients (63.2%) were found to contain EGFR T790M mutations in their cfDNA, as detected by the kit. T790M mutations were detected in tumor tissues in 12 cases, and 11 of these cases (91.7%) also exhibited the T790M mutation in cfDNA samples. The concentrations of cfDNA were similar between patients with the T790M mutation and those without the mutation. The PointMan™ kit provides a useful method for determining the EGFR T790M mutation status in cfDNA.
NASA Astrophysics Data System (ADS)
Kraus, Adam H.
Moisture within a transformer's insulation system has been proven to degrade its dielectric strength. When installing a transformer in situ, one method used to calculate the moisture content of the transformer insulation is to measure the dew point temperature of the internal gas volume of the transformer tank. Two instruments are commercially available for dew point temperature measurement: the Alnor Model 7000 Dewpointer and the Vaisala DRYCAP® Hand-Held Dewpoint Meter DM70. Although these instruments perform an identical task, the design technology behind each instrument is vastly different. When the Alnor Dewpointer and the Vaisala DM70 are used to measure the dew point of the internal gas volume of a pressurized transformer simultaneously, their readings have been observed to differ by as much as 30 °F. There is minimal scientific research on measuring the dew point of a gas inside a pressurized transformer, let alone on this observed phenomenon. The primary objective of this work was to determine what effect certain factors have on dew point measurements of a transformer's internal gas volume, in hopes of understanding the root cause of this phenomenon. The three factors studied were (1) human error, (2) the use of calibrated and out-of-calibration instruments, and (3) the presence of oil vapor gases in the dry air sample and their subsequent effects on the Q-value of the sampled gas. After completing this portion of testing, none of the selected variables proved to be a direct cause of the observed discrepancies between the two instruments. The secondary objective was to validate the accuracy of each instrument against its respective published range by testing against a known dew point temperature produced by a humidity generator. In a select operating range of -22 °F to -4 °F, both instruments were found to be accurate and within their specified tolerances. This temperature range is frequently encountered in oil-soaked transformers, which demonstrates that both instruments can measure accurately over a limited, yet common, range despite their different design methodologies. It is clear that another, as yet unknown, factor present in oil-soaked transformers is causing the observed discrepancy between these instruments. Future work will include testing on newly manufactured or rewound transformers in order to investigate other variables that could be causing this discrepancy.
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
Aiming at the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier samples and unstable across repeated runs, a center initialization method based on large distance and high density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the samples with both large distance and high density are selected as the initial cluster centers to improve the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm is stable and practical.
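A minimal numpy sketch of the initialization idea described above, assuming density is the reciprocal of a sample's mean distance to all others and that centers are chosen greedily by a distance-times-density score (the paper's exact weighting may differ):

```python
import numpy as np

def density_distance_init(X, k):
    """Pick k initial centers favouring high-density, well-separated samples."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    density = (n - 1) / d.sum(axis=1)        # reciprocal of mean distance to others
    centers = [int(np.argmax(density))]      # densest sample first
    for _ in range(k - 1):
        dist_to_centers = d[:, centers].min(axis=1)
        score = dist_to_centers * density    # large distance AND high density
        score[centers] = -np.inf             # never re-pick a chosen center
        centers.append(int(np.argmax(score)))
    return X[centers]

# Feed the result to any standard k-means implementation as its initial centers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
init_centers = density_distance_init(X, k=3)
```

Because the score multiplies separation by density, an isolated outlier (large distance, very low density) is unlikely to be selected, which is exactly the instability the abstract targets.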
NASA Astrophysics Data System (ADS)
Khorunzhev, G. A.; Burenin, R. A.; Meshcheryakov, A. V.; Sazonov, S. Yu.
2016-05-01
We have compiled a catalog of 903 candidates for type 1 quasars at redshifts 3 < z < 5.5 selected among the X-ray sources of the "serendipitous" XMM-Newton survey presented in the 3XMM-DR4 catalog (the median X-ray flux is ≈5 × 10^-15 erg s^-1 cm^-2 in the 0.5-2 keV energy band) and located at high Galactic latitudes |b| > 20° in Sloan Digital Sky Survey (SDSS) fields with a total area of about 300 deg^2. Photometric SDSS data as well as infrared 2MASS and WISE data were used to select the objects. We selected the point sources from the photometric SDSS catalog with a magnitude error δm_z' < 0.2 and a color i' - z' < 0.6 (to first eliminate the M-type stars). For the selected sources, we have calculated the dependences χ²(z) for various spectral templates from the library that we compiled for these purposes using the EAZY software. Based on these data, we have rejected the objects whose spectral energy distributions are better described by the templates of stars at z = 0 and obtained a sample of quasars with photometric redshift estimates 2.75 < z_phot < 5.5. The selection completeness of known quasars at z_spec > 3 in the investigated fields is shown to be about 80%. The normalized median absolute deviation (Δz = |z_spec - z_phot|) is σ_Δz/(1 + z_spec) = 0.07, while the outlier fraction is η = 9% when Δz/(1 + z_spec) > 0.2. The number of objects per unit area in our sample exceeds the number of quasars in the spectroscopic SDSS sample at the same redshifts approximately by a factor of 1.5. The subsequent spectroscopic testing of the redshifts of our selected candidates for quasars at 3 < z < 5.5 will allow the purity of this sample to be estimated more accurately.
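The two pre-selection cuts and the quality metrics quoted above are easy to express in code. A hedged Python sketch with toy catalogue columns (the array names are illustrative, not the SDSS schema, and the normalised-MAD convention shown is one common choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
i_mag = rng.uniform(18.0, 22.0, n)        # toy i' magnitudes
z_mag = i_mag - rng.normal(0.2, 0.4, n)   # toy z' magnitudes
dm_z = np.abs(rng.normal(0.10, 0.05, n))  # toy z'-band magnitude errors

# The two pre-selection cuts quoted in the abstract: a reliable z'-band
# measurement and a blue i'-z' colour that removes most M-type stars.
candidates = (dm_z < 0.2) & ((i_mag - z_mag) < 0.6)

def photo_z_quality(z_spec, z_phot):
    """sigma_dz/(1+z_spec) and outlier fraction eta, per the abstract's definitions."""
    dz = np.abs(z_spec - z_phot) / (1 + z_spec)
    sigma = 1.4826 * np.median(dz)  # normalised MAD (one common convention)
    eta = np.mean(dz > 0.2)         # outliers: dz/(1+z_spec) > 0.2
    return sigma, eta
```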
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
...; formerly Docket No. 2007D-0290] Draft Guidance for Industry: Cell Selection Devices for Point of Care Production of Minimally Manipulated Autologous Peripheral Blood Stem Cells; Withdrawal of Draft Guidance...: Cell Selection Devices for Point of Care Production of Minimally Manipulated Autologous Peripheral...
Risk score to predict the outcome of patients with cerebral vein and dural sinus thrombosis.
Ferro, José M; Bacelar-Nicolau, Helena; Rodrigues, Teresa; Bacelar-Nicolau, Leonor; Canhão, Patrícia; Crassard, Isabelle; Bousser, Marie-Germaine; Dutra, Aurélio Pimenta; Massaro, Ayrton; Mackowiack-Cordiolani, Marie-Anne; Leys, Didier; Fontes, João; Stam, Jan; Barinagarrementeria, Fernando
2009-01-01
Around 15% of patients die or become dependent after cerebral vein and dural sinus thrombosis (CVT). We used the International Study on Cerebral Vein and Dural Sinus Thrombosis (ISCVT) sample (624 patients, with a median follow-up time of 478 days) to develop a Cox proportional hazards regression model to predict outcome, dichotomised by a modified Rankin Scale score >2. From the model hazard ratios, a risk score was derived and a cut-off point selected. The model and the score were tested in 2 validation samples: (1) the prospective Cerebral Venous Thrombosis Portuguese Collaborative Study Group (VENOPORT) sample with 91 patients; (2) a sample of 169 consecutive CVT patients admitted to 5 ISCVT centres after the end of the ISCVT recruitment period. Sensitivity, specificity, c statistics and overall efficiency to predict outcome at 6 months were calculated. The model (hazard ratios: malignancy 4.53; coma 4.19; thrombosis of the deep venous system 3.03; mental status disturbance 2.18; male gender 1.60; intracranial haemorrhage 1.42) had overall efficiencies of 85.1, 84.4 and 90.0%, in the derivation sample and validation samples 1 and 2, respectively. Using the risk score (range from 0 to 9) with a cut-off of ≥3 points, overall efficiency was 85.4, 84.4 and 90.1% in the derivation sample and validation samples 1 and 2, respectively. Sensitivity and specificity in the combined samples were 96.1 and 13.6%, respectively. The CVT risk score has a good estimated overall rate of correct classifications in both validation samples, but its specificity is low. It can be used to avoid unnecessary or dangerous interventions in low-risk patients, and may help to identify high-risk CVT patients. © 2009 S. Karger AG, Basel.
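A sketch of how such a score is applied. The 0-9 range with six predictors suggests 2 points for the three strongest hazard ratios and 1 point for the rest; that weighting is an assumption made here for illustration, and the published score should be consulted for the exact values:

```python
def cvt_risk_score(malignancy, coma, deep_cvt, mental_disturbance, male, ich):
    """Illustrative CVT risk score (0-9); >= 3 flags a high-risk patient.

    Point weights are an ASSUMPTION for illustration only: 2 points each for
    the three predictors with the largest hazard ratios (malignancy, coma,
    deep venous system thrombosis), 1 point each for the remaining three.
    Inputs are 0/1 indicators.
    """
    score = 2 * (malignancy + coma + deep_cvt) + (mental_disturbance + male + ich)
    return score, score >= 3

score, high_risk = cvt_risk_score(malignancy=0, coma=1, deep_cvt=0,
                                  mental_disturbance=1, male=1, ich=0)
```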
NASA Astrophysics Data System (ADS)
Shariati, Mohsen; Ghafouri, Vahid
2014-02-01
The synthesis of In2O3 nanostructures grown on Si substrates by resistive evaporation of metallic indium granules followed by a dry oxidation process is described. To prepare nucleation growth sites, selected samples were pre-annealed near the indium melting point in an oxygen-free atmosphere; to then fabricate 1-D nanostructures, they were annealed in a horizontal thermal furnace in the presence of argon and oxygen. For comparison, one sample of the same origin as the pre-annealed samples was excluded from the pre-annealing step but included in the annealing step. Characterization of the products with FESEM revealed that the nanostructures obtained from pre-annealed samples are mostly nanorods and nanowires with different morphologies. For the comparative sample, no 1-D structures were obtained. X-ray diffraction (XRD) patterns indicated that the pre-annealed samples are crystalline whereas the comparative one is polycrystalline. Photoluminescence (PL) measurements carried out at room temperature revealed that the emission band shifted to shorter wavelengths from the pre-annealed samples to the comparative one.
Where Do I Start (Beginning the Investigation)?
NASA Astrophysics Data System (ADS)
Kornacki, Jeffrey L.
No doubt some will open directly to this chapter, because your product is contaminated with an undesirable microbe, or perhaps you have been asked to conduct such an investigation at another company's facility not previously observed by you, and naturally you want tips on how to find where the contaminant is getting into the product stream. This chapter takes the reader through the process of beginning the investigation, including understanding the process and the production schedule and critically reviewing previously generated laboratory data. Understanding the critical control points and the validity of their critical limits is also important. Scoping the extent of the problem is next. It is always a good idea for the factory to have a rigorously validated cleaning and sanitation procedure that provides a documented "sanitation breakpoint," which can be useful in the scoping process, although some contamination events may extend past these break-points. Touring the facility is next, wherein preliminary pre-selection of areas for future sampling can be done. Operational samples and observations in non-food-contact areas can be taken at this time. Then the operations personnel need to be consulted and plans made for an appropriate amount of time to observe equipment breakdown for "post-operational" sampling and "pre-operational" investigational sampling. Hence the chapter further discusses preparing operations personnel for the disruptions that accompany these investigations and assembling the sampling team. The chapter concludes with a discussion of post-startup observations after an investigation and sampling.
Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail
2015-08-01
The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and are found at high concentrations in such samples. This communication describes a new cloud-point extraction (CPE) method for the determination, by UV-visible spectrophotometry (UV-Vis), of low quantities of arsenic species in samples purchased from the local market. The method is based on the selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S blank/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1) with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
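The quoted detection limit follows the 3S(blank)/m convention: three times the standard deviation of replicate blank measurements divided by the calibration slope. A minimal Python sketch with toy blank absorbances and slope (both assumed values, not the paper's data):

```python
import numpy as np

blank = np.array([0.0021, 0.0019, 0.0024, 0.0018, 0.0022,
                  0.0020, 0.0023, 0.0019, 0.0021, 0.0020])  # toy blank replicates
slope = 0.0052  # toy calibration slope, absorbance per (ug/L)

lod = 3 * blank.std(ddof=1) / slope    # limit of detection, ug/L
loq = 10 * blank.std(ddof=1) / slope   # limit of quantification, ug/L
print(f"LOD = {lod:.2f} ug/L, LOQ = {loq:.2f} ug/L")
```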
NASA Astrophysics Data System (ADS)
Tosolin, A.; Souček, P.; Beneš, O.; Vigier, J.-F.; Luzzi, L.; Konings, R. J. M.
2018-05-01
PuF3 was synthetized by hydro-fluorination of PuO2 and subsequent reduction of the product by hydrogenation. The obtained PuF3 was analysed by X-Ray Diffraction (XRD) and found phase-pure. High purity was also confirmed by the melting point analysis using Differential Scanning Calorimetry (DSC). PuF3 was then used for thermodynamic assessment of the PuF3-LiF system. Phase equilibrium points and enthalpy of fusion of the eutectic composition were measured by DSC. XRD analyses of selected samples after DSC measurement confirm that after solidification from the liquid, the system returns to a mixture of LiF and PuF3.
Igne, Benoît; de Juan, Anna; Jaumot, Joaquim; Lallemand, Jordane; Preys, Sébastien; Drennen, James K; Anderson, Carl A
2014-10-01
The implementation of a blend monitoring and control method based on a process analytical technology such as near infrared spectroscopy requires the selection and optimization of numerous criteria that will affect the monitoring outputs and expected blend end-point. Using a five component formulation, the present article contrasts the modeling strategies and end-point determination of a traditional quantitative method based on the prediction of the blend parameters employing partial least-squares regression with a qualitative strategy based on principal component analysis and Hotelling's T(2) and residual distance to the model, called Prototype. The possibility to monitor and control blend homogeneity with multivariate curve resolution was also assessed. The implementation of the above methods in the presence of designed experiments (with variation of the amount of active ingredient and excipients) and with normal operating condition samples (nominal concentrations of the active ingredient and excipients) was tested. The impact of criteria used to stop the blends (related to precision and/or accuracy) was assessed. Results demonstrated that while all methods showed similarities in their outputs, some approaches were preferred for decision making. The selectivity of regression based methods was also contrasted with the capacity of qualitative methods to determine the homogeneity of the entire formulation. Copyright © 2014. Published by Elsevier B.V.
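The qualitative route described above (PCA with Hotelling's T(2) and a residual distance to the model) can be sketched as follows; the spectra, component count and the omitted control limits are toy assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X_ref = rng.normal(size=(60, 200))  # toy NIR spectra of blends known to be homogeneous
x_new = rng.normal(size=(1, 200))   # toy spectrum acquired during blending

pca = PCA(n_components=3).fit(X_ref)
scores = pca.transform(x_new)

# Hotelling's T^2: distance within the score space, weighted by component variances.
t2 = float(np.sum(scores**2 / pca.explained_variance_))

# Residual distance to the model (Q): variation the PCA subspace cannot explain.
residual = x_new - pca.inverse_transform(scores)
q = float(np.sum(residual**2))

# The blend is declared at its end-point once T^2 and Q stay below limits
# derived from the reference population (limit derivation omitted here).
```

The design choice the abstract contrasts is visible here: this model never predicts a concentration, it only asks whether new spectra look like the reference "good blend" population, which is why it judges the whole formulation rather than one analyte.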
Gene selection with multiple ordering criteria.
Chen, James J; Tsai, Chen-An; Tzeng, Shengli; Chen, Chun-Houh
2007-03-05
A microarray study may select different differentially expressed gene sets because of different selection criteria. For example, the fold-change and p-value are two commonly known criteria to select differentially expressed genes under two experimental conditions. These two selection criteria often result in incompatible selected gene sets. Also, in a two-factor, say, treatment by time experiment, the investigator may be interested in one gene list that responds to both treatment and time effects. We propose three layer ranking algorithms, point-admissible, line-admissible (convex), and Pareto, to provide a preference gene list from multiple gene lists generated by different ranking criteria. Using the public colon data as an example, the layer ranking algorithms are applied to the three univariate ranking criteria, fold-change, p-value, and frequency of selections by the SVM-RFE classifier. A simulation experiment shows that for experiments with small or moderate sample sizes (less than 20 per group) and detecting a 4-fold change or less, the two-dimensional (p-value and fold-change) convex layer ranking selects differentially expressed genes with generally lower FDR and higher power than the standard p-value ranking. Three applications are presented. The first application illustrates a use of the layer rankings to potentially improve predictive accuracy. The second application illustrates an application to a two-factor experiment involving two dose levels and two time points. The layer rankings are applied to selecting differentially expressed genes relating to the dose and time effects. In the third application, the layer rankings are applied to a benchmark data set consisting of three dilution concentrations to provide a ranking system from a long list of differentially expressed genes generated from the three dilution concentrations. The layer ranking algorithms are useful to help investigators in selecting the most promising genes from multiple gene lists generated by different filter, normalization, or analysis methods for various objectives.
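A minimal Python sketch of the Pareto layer ranking on two criteria where larger is better, e.g. |log fold-change| and -log10 p (an illustration of the general idea; the paper's point-admissible and convex variants are not shown):

```python
import numpy as np

def pareto_layers(criteria):
    """Assign each gene a Pareto layer; criteria is (n_genes, n_criteria), larger better.

    Layer 1 holds the non-dominated genes; removing them, layer 2 holds the genes
    non-dominated among the rest, and so on.
    """
    c = np.asarray(criteria, dtype=float)
    layer = np.zeros(len(c), dtype=int)
    remaining = np.arange(len(c))
    current = 1
    while remaining.size:
        sub = c[remaining]
        # gene i is dominated if some gene is >= on all criteria and > on at least one
        dominated = np.array([
            np.any(np.all(sub >= sub[i], axis=1) & np.any(sub > sub[i], axis=1))
            for i in range(len(sub))
        ])
        layer[remaining[~dominated]] = current
        remaining = remaining[dominated]
        current += 1
    return layer

# Example with fold-change and p-value as the two ordering criteria.
rng = np.random.default_rng(3)
fc = np.abs(rng.normal(size=100))
p = rng.uniform(1e-6, 1, size=100)
layers = pareto_layers(np.column_stack([fc, -np.log10(p)]))
top_genes = np.flatnonzero(layers == 1)  # genes no other gene beats on both criteria
```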
Spectra from the IRS of Bright Oxygen-Rich Evolved Stars in the SMC
NASA Astrophysics Data System (ADS)
Kraemer, Kathleen E.; Sloan, Greg; Wood, Peter
2016-06-01
We have used Spitzer's Infrared Spectrograph (IRS) to obtain spectra of stars in the Small Magellanic Cloud (SMC). The targets were chosen from the Point Source Catalog of the Mid-Course Space Experiment (MSX), which detected the 243 brightest infrared sources in the SMC. Our SMC sample of oxygen-rich evolved stars shows more dust than found in previous samples, and the dust tends to be dominated by silicates, with little contribution from alumina. Both results may arise from the selection bias in the MSX sample and our sample toward more massive stars. Additionally, several sources show peculiar spectral features such as PAHs, crystalline silicates, or both carbon-rich and silicate features. The spectrum of one source, MSX SMC 145, is a combination of an ordinary AGB star and a background galaxy at z~0.16, rather than an OH/IR star as previously suggested.
Lee, Casey J.; Mau, D.P.; Rasmussen, T.J.
2005-01-01
Water and sediment samples were collected by the U.S. Geological Survey in 12 watersheds in Johnson County, northeastern Kansas, to determine the effects of nonpoint and selected point contaminant sources on stream-water quality and their relation to varying land use. The streams studied were located in urban areas of the county (Brush, Dykes Branch, Indian, Tomahawk, and Turkey Creeks), developing areas of the county (Blue River and Mill Creek), and in more rural areas of the county (Big Bull, Captain, Cedar, Kill, and Little Bull Creeks). Two base-flow synoptic surveys (73 total samples) were conducted in 11 watersheds, a minimum of three stormflow samples were collected in each of six watersheds, and 15 streambed-sediment sites were sampled in nine watersheds from October 2002 through June 2004. Discharge from seven wastewater treatment facilities (WWTFs) were sampled during base-flow synoptic surveys. Discharge from these facilities comprised greater than 50 percent of streamflow at the farthest downstream sampling site in six of the seven watersheds during base-flow conditions. Nutrients, organic wastewater-indicator compounds, and prescription and nonprescription pharmaceutical compounds generally were found in the largest concentrations during base-flow conditions at sites at, or immediately downstream from, point-source discharges from WWTFs. Downstream from WWTF discharges streamflow conditions were generally stable, whereas nutrient and wastewater-indicator compound concentrations decreased in samples from sites farther downstream. During base-flow conditions, sites upstream from WWTF discharges had significantly larger fecal coliform and Escherichia coli densities than downstream sites. Stormflow samples had the largest suspended-sediment concentrations and indicator bacteria densities. Other than in samples from sites in proximity to WWTF discharges, stormflow samples generally had the largest nutrient concentrations in Johnson County streams. Discharge from WWTFs with trickling-filter secondary treatment processes had the largest concentrations of many potential contaminants during base-flow conditions. Samples from two of three trickling-filter WWTFs exceeded Kansas Department of Health and Environment pH- and temperature-dependent chronic aquatic-life criteria for ammonia when early-life stages of fish are present. Discharge from trickling-filter facilities generally had the most detections and largest concentrations of many organic wastewater-indicator compounds in Johnson County stream-water samples. Caffeine (stimulant), nonylphenol-diethoxylate (detergent surfactant), and tris(2-butoxyethyl) phosphate (floor polish, flame retardant, and plasticizer) were found at concentrations larger than maximum concentrations in comparable studies. Land use and seasonality affected the occurrence and magnitude of many potential water-quality contaminants originating from nonpoint sources. Base-flow samples from urban sites located upstream from WWTF discharges had larger indicator bacteria densities and wastewater-indicator compound concentrations than did base-flow samples from sites in nonurban areas. Dissolved-solids concentrations were the largest in winter stormflow samples from urban sites and likely were due to runoff from road-salt application. 
One sample from an urban watershed had a chloride concentration of 1,000 milligrams per liter, which exceeded the Kansas Department of Health and Environment's acute aquatic-life use criterion (860 milligrams per liter) likely due to effects from road-salt application. Pesticide concentrations were the largest in spring stormflow samples collected in nonurban watersheds. Although most wastewater-indicator compounds were found at the largest concentrations in samples from WWTF discharges, the compounds 9-10, anthraquinone (bird repellent), caffeine (stimulant), carbazole (component of coal tar, petroleum products), nonylphenol-diethoxylate (detergent surfactant),
Griffitt, Kimberly J; Grimes, D Jay
2013-08-01
A new selective and differential medium, Vibrio vulnificus X-Gal (VVX), was developed for direct enumeration of V. vulnificus (Vv) from oyster samples. This agar utilizes cellobiose and lactose as carbon sources, and the antibiotics colistin and polymyxin B as selective agents. Hydrolysis of 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside (X-Gal), used in the agar as a lactose analog, produces an insoluble blue dye that makes lactose-positive colonies easily distinguishable from any non-lactose-fermenting bacteria. Various bacterial species were spot plated onto thiosulfate-citrate-bile salts-sucrose agar (TCBS) and CHROMagar Vibrio (two vibrio-specific selective agars), non-selective agar, and VVX to compare the selectivity of VVX with that of other widely used media. A V. vulnificus pure culture was serially diluted on VVX and non-selective agar to determine the VVX percent recovery. Water and oyster samples were spread plated on VVX agar and incubated for 16-18 h at 33 °C. Blue and white colonies from VVX agar were picked and screened by end-point PCR for the Vv hemolysin vvhA. VVX agar showed a significant improvement over TCBS and CHROMagar at preventing non-target growth. There was an 87.5% recovery compared to non-selective plating and a 98% positivity rate of blue colonies picked from oyster tissue plating. The findings suggest that this new agar is a fast, distinctive, and accurate method for enumeration of V. vulnificus from the environment. Copyright © 2013 Elsevier B.V. All rights reserved.
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different features (such as point density, distribution and complexity). Some filtering algorithms for airborne LiDAR data have been used directly on mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
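The quoted figures are total errors. A minimal sketch of the usual filter-benchmarking metrics (the Type I/II definitions below are the conventional ones, assumed here since the abstract quotes only totals):

```python
import numpy as np

def filtering_errors(is_ground_truth, is_ground_pred):
    """Type I, Type II and total error of a ground/non-ground separation.

    Type I: reference ground points wrongly removed.
    Type II: reference non-ground points wrongly kept as ground.
    Total: fraction of all points misclassified (the figure quoted above).
    """
    t = np.asarray(is_ground_truth, dtype=bool)
    p = np.asarray(is_ground_pred, dtype=bool)
    type1 = np.sum(t & ~p) / max(np.sum(t), 1)
    type2 = np.sum(~t & p) / max(np.sum(~t), 1)
    total = np.mean(t != p)
    return type1, type2, total
```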
Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong
2007-01-01
An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEPs). To implement the ANC efficiently in a hardware system, a fixed-point ANC can achieve fast, cost-efficient construction and low power consumption in an FPGA design. However, it remains questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that achieved by the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANCs applied to SEP signals. The selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed higher distortion from the real SEP signals than those of the floating-point ANC. However, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can obtain results as good as the floating-point algorithm.
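A minimal numpy sketch of an LMS-based ANC in which fixed-point arithmetic is emulated by rounding to a 2^-b grid; the tap count, signals and Q-format are toy assumptions, but the structure shows how the step size mu and quantization interact:

```python
import numpy as np

def lms_anc(d, x, mu, frac_bits=None):
    """LMS adaptive noise canceller.

    d: primary input (SEP + correlated noise); x: noise reference; mu: step size.
    frac_bits: if given, every product and coefficient is rounded to a grid of
    2**-frac_bits, emulating a hypothetical fixed-point (Q) format.
    """
    n_taps = 16
    q = (lambda v: np.round(v * 2**frac_bits) / 2**frac_bits) if frac_bits else (lambda v: v)
    w = np.zeros(n_taps)
    out = np.zeros_like(d)
    for n in range(n_taps, len(d)):
        xn = x[n - n_taps:n][::-1]
        y = q(np.dot(w, xn))        # filter output (noise estimate)
        e = d[n] - y                # error = enhanced SEP estimate
        w = q(w + mu * e * xn)      # LMS update, kept on the fixed-point grid
        out[n] = e
    return out

rng = np.random.default_rng(4)
noise = rng.normal(size=4000)
sep = 0.1 * np.sin(2 * np.pi * np.arange(4000) / 200)     # toy evoked component
d = sep + np.convolve(noise, [0.5, 0.3, 0.2], mode="same")
float_out = lms_anc(d, noise, mu=0.01)
fixed_out = lms_anc(d, noise, mu=0.01, frac_bits=12)       # 12 fractional bits
```

With few fractional bits, updates smaller than the grid step are rounded away, which is one intuition for the abstract's observation that the fixed-point distortion shrinks as μ grows.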
NASA Astrophysics Data System (ADS)
Quadri, Ryan; Marchesini, Danilo; van Dokkum, Pieter; Gawiser, Eric; Franx, Marijn; Lira, Paulina; Rudnick, Gregory; Urry, C. Megan; Maza, José; Kriek, Mariska; Barrientos, L. Felipe; Blanc, Guillermo A.; Castander, Francisco J.; Christlein, Daniel; Coppi, Paolo S.; Hall, Patrick B.; Herrera, David; Infante, Leopoldo; Taylor, Edward N.; Treister, Ezequiel; Willis, Jon P.
2007-09-01
We present deep near-infrared JHK imaging of four 10' × 10' fields. The observations were carried out as part of the Multiwavelength Survey by Yale-Chile (MUSYC) with ISPI on the CTIO 4 m telescope. The typical point-source limiting depths are J ~ 22.5, H ~ 21.5, and K ~ 21 (5 σ Vega). The effective seeing in the final images is ~1.0″. We combine these data with MUSYC UBVRIz imaging to create K-selected catalogs that are unique for their uniform size, depth, filter coverage, and image quality. We investigate the rest-frame optical colors and photometric redshifts of galaxies that are selected using common color selection techniques, including distant red galaxies (DRGs), star-forming and passive BzKs, and the rest-frame UV-selected BM, BX, and Lyman break galaxies (LBGs). These techniques are effective at isolating large samples of high-redshift galaxies, but none provide complete or uniform samples across the targeted redshift ranges. The DRG and BM/BX/LBG criteria identify populations of red and blue galaxies, respectively, as they were designed to do. The star-forming BzKs have a very wide redshift distribution, extending down to z ~ 1, a wide range of colors, and may include galaxies with very low specific star formation rates. In comparison, the passive BzKs are fewer in number, have a different distribution of K magnitudes, and have a somewhat different redshift distribution. By combining either the DRG and BM/BX/LBG criteria, or the star-forming and passive BzK criteria, it appears possible to define a reasonably complete sample of galaxies to our flux limit over specific redshift ranges. However, the redshift dependence of both the completeness and sampled range of rest-frame colors poses an ultimate limit to the usefulness of these techniques.
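For reference, the BzK split used in such studies is an algebraic cut on three magnitudes. A sketch using the standard criteria of Daddi et al. (2004), which BzK samples conventionally follow (the survey's exact photometric systems and corrections are not reproduced here):

```python
import numpy as np

def bzk_classes(B, z, K):
    """Split a K-selected sample into star-forming and passive BzK galaxies.

    BzK = (z - K) - (B - z); star-forming: BzK >= -0.2;
    passive: BzK < -0.2 and (z - K) > 2.5 (Daddi et al. 2004 convention).
    """
    bzk = (z - K) - (B - z)
    star_forming = bzk >= -0.2
    passive = (bzk < -0.2) & ((z - K) > 2.5)
    return star_forming, passive

# Toy magnitudes for three objects.
B = np.array([24.0, 25.5, 23.0])
z = np.array([22.5, 23.0, 22.4])
K = np.array([20.5, 20.0, 21.5])
sf, passive = bzk_classes(B, z, K)
```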
Quantification of 4 antidepressants and a metabolite by LC-MS for therapeutic drug monitoring.
Choong, Eva; Rudaz, Serge; Kottelat, Astrid; Haldemann, Sophie; Guillarme, Davy; Veuthey, Jean-Luc; Eap, Chin B
2011-06-01
A liquid chromatography method coupled to mass spectrometry was developed for the quantification of bupropion, its metabolite hydroxy-bupropion, moclobemide, reboxetine and trazodone in human plasma. The validation of the analytical procedure was assessed according to Société Française des Sciences et Techniques Pharmaceutiques and the latest Food and Drug Administration guidelines. The sample preparation was performed with 0.5 mL of plasma extracted on a cation-exchange solid phase 96-well plate. The separation was achieved in 14 min on a C18 XBridge column (2.1 mm×100 mm, 3.5 μm) using a 50 mM ammonium acetate pH 9/acetonitrile mobile phase in gradient mode. The compounds of interest were analysed in the single ion monitoring mode on a single quadrupole mass spectrometer working in positive electrospray ionisation mode. Two ions were selected per molecule to increase the number of identification points and to avoid as much as possible any false positives. Since selectivity is always a critical point for routine therapeutic drug monitoring, more than sixty common comedications for the psychiatric population were tested. For each analyte, the analytical procedure was validated to cover the common range of concentrations measured in plasma samples: 1-400 ng/mL for reboxetine and bupropion, 2-2000 ng/mL for hydroxy-bupropion, moclobemide, and trazodone. For all investigated compounds, reliable performance in terms of accuracy, precision, trueness, recovery, selectivity and stability was obtained. One year after its implementation in a routine process, this method demonstrated a high robustness with accurate values over the wide concentration range commonly observed among a psychiatric population. Copyright © 2011 Elsevier B.V. All rights reserved.
Impact of speciation on the electron charge transfer properties of nanodiamond drug carriers
NASA Astrophysics Data System (ADS)
Sun, Baichuan; Barnard, Amanda S.
2016-07-01
Unpassivated diamond nanoparticles (bucky-diamonds) exhibit a unique surface reconstruction involving graphitization of certain crystal facets, giving rise to hybrid core-shell particles containing both aromatic and aliphatic carbon. Considerable effort is directed toward eliminating the aromatic shell, but persistent graphitization of subsequent subsurface-layers makes perdurable purification a challenge. In this study we use some simple statistical methods, in combination with electronic structure simulations, to predict the impact of different fractions of aromatic and aliphatic carbon on the charge transfer properties of the ensembles of bucky-diamonds. By predicting quality factors for a variety of cases, we find that perfect purification is not necessary to preserve selectivity, and there is a clear motivation for purifying samples to improve the sensitivity of charge transfer reactions. This may prove useful in designing drug delivery systems where the release of (selected) drugs needs to be sensitive to specific conditions at the point of delivery.
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). To ensure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, two factors largely affect the performance of gesture recognition: (1) feature selection (or model establishment) and (2) training from samples. For (1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable in practice. For (2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability density of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Because the LCA method balances learning between global and local features, a large number of samples can be used in learning efficiently.
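A heavily simplified sketch of a lobe-component-style update, with lateral inhibition reduced to hard winner-take-all and an amnesic-average-like per-neuron learning rate; Weng's full CCI LCA algorithm differs in detail, so treat this only as an illustration of the mechanism named above:

```python
import numpy as np

def lca_update(W, x, counts):
    """One winner-take-all Hebbian step over c lobe components.

    W: (c, d) lobe component vectors; x: (d,) input; counts: per-component
    update counters that give each neuron its own decaying learning rate.
    """
    responses = W @ x
    j = int(np.argmax(responses))   # lateral inhibition collapsed to one winner
    counts[j] += 1
    lr = 1.0 / counts[j]            # amnesic-average-style rate
    W[j] = (1 - lr) * W[j] + lr * responses[j] * x  # Hebbian: response times input
    W[j] /= np.linalg.norm(W[j]) + 1e-12
    return j

rng = np.random.default_rng(5)
W = rng.normal(size=(8, 12))        # 8 lobe components for 12-D feature vectors
counts = np.zeros(8, dtype=int)
for x in rng.normal(size=(1000, 12)):
    lca_update(W, x, counts)
```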
NASA Astrophysics Data System (ADS)
Wang, Lin-zhi; Wang, Sen; Wu, Jiao-jiao
2017-11-01
The effects of laser energy density (LED) on the density and surface roughness of AlSi10Mg samples processed by selective laser melting (SLM) were studied. The densification behaviors of the SLM-manufactured AlSi10Mg samples at different LEDs were characterized by a solid densitometer and an industrial X-ray and CT detection system. A field emission scanning electron microscope, an automatic optical measuring system, and a surface profiler were used for measurements of surface roughness. The results show that relatively high density can be obtained with a point distance of 80-105 μm and an exposure time of 140-160 μs. The LED also has an important influence on the surface morphology of the formed part: too high an LED may lead to the balling effect, while too low an LED tends to produce defects such as porosity and microcracks, ultimately affecting the surface roughness and porosity of the parts.
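For pulsed, point-exposure scanning of the kind implied by the quoted point distance and exposure time, the laser energy density is commonly defined as below; this is an assumed standard form (laser power P, exposure time t_exp, point distance d_point, hatch spacing d_hatch, layer thickness t_layer), not necessarily the authors' exact expression:

```latex
E_v \;=\; \frac{P\, t_{\mathrm{exp}}}{d_{\mathrm{point}}\, d_{\mathrm{hatch}}\, t_{\mathrm{layer}}}
\qquad \left[\mathrm{J\,mm^{-3}}\right]
```

Under this definition the reported windows for point distance and exposure time trade off directly: a longer exposure at a wider point spacing can deliver the same volumetric energy.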
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
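A minimal sketch of a wavelet-based texture mask using the PyWavelets package: detail-coefficient energy stands in for the paper's segmentation-plus-WT pipeline, and the keep fraction is an assumed parameter:

```python
import numpy as np
import pywt

def texture_mask(img, keep_frac=0.25):
    """Return a boolean mask marking where the SLM should pass samples.

    Detail energy from a single-level Haar transform serves as the local
    texture measure; the top keep_frac pixels are selected.
    """
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    # Upsample the half-resolution detail map back to the image grid.
    detail = np.kron(detail, np.ones((2, 2)))[: img.shape[0], : img.shape[1]]
    threshold = np.quantile(detail, 1 - keep_frac)
    return detail >= threshold

rng = np.random.default_rng(6)
img = rng.random((128, 128))   # toy gray frame
mask = texture_mask(img)       # True where the scene is texture-rich
```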
Adelmanesh, Farhad; Jalali, Ali; Jazayeri Shooshtari, Seyed Mostafa; Raissi, Gholam Reza; Ketabchi, Seyed Mehdi; Shir, Yoram
2015-10-01
The objective of this study was to compare the prevalence of gluteal trigger point in patients with lumbosacral radiculopathy with that in healthy volunteers. In a cross-sectional, multistage sampling method, patients with clinical, electromyographic, and magnetic resonance imaging findings consistent with lumbosacral radiculopathy were examined for the presence of gluteal trigger point. Age- and sex-matched clusters of healthy volunteers were selected as the control group. The primary outcome of the study was the presence or absence of gluteal trigger point in the gluteal region of the patients and the control group. Of 441 screened patients, 271 met all the inclusion criteria for lumbosacral radiculopathy and were included in the study. Gluteal trigger point was identified in 207 (76.4%) of the 271 patients with radiculopathy, compared with 3 (1.9%) of 152 healthy volunteers (P < 0.001). The location of gluteal trigger point matched the side of painful radiculopathy in 74.6% of patients with a unilateral radicular pain. There was a significant correlation between the side of the gluteal trigger point and the side of patients' radicular pain (P < 0.001). Although rare in the healthy volunteers, most of the patients with lumbosacral radiculopathy had gluteal trigger point, located at the painful side. Further studies are required to test the hypothesis that specific gluteal trigger point therapy could be beneficial in these patients.
NASA Astrophysics Data System (ADS)
Zhou, Haidong; Ying, Tianqi; Wang, Xuelian; Liu, Jianbo
2016-10-01
Twelve selected pharmaceuticals including antibiotics, analgesics, antiepileptics and lipid regulators were analysed and detected in water samples collected from 18 sampling sections along the three main urban rivers in Yangpu District of Shanghai, China during four sampling campaigns. Besides, algal growth inhibition test was conducted to preliminarily assess the eco-toxicology induced by the target pharmaceuticals in the rivers. Mean levels for most of target compounds were generally below 100 ng/L at sampling sections, with the exception of caffeine and paracetamol presenting considerably high concentration. The detected pharmaceuticals in the urban rivers ranged from
Soft γ-ray selected radio galaxies: favouring giant size discovery
NASA Astrophysics Data System (ADS)
Bassani, L.; Venturi, T.; Molina, M.; Malizia, A.; Dallacasa, D.; Panessa, F.; Bazzano, A.; Ubertini, P.
2016-09-01
Using the recent INTEGRAL/IBIS and Swift/BAT surveys we have extracted a sample of 64 confirmed plus three candidate radio galaxies selected in the soft gamma-ray band. The sample covers all optical classes and is dominated by objects showing a Fanaroff-Riley type II radio morphology; a large fraction (70 per cent) of the sample is made of `radiative mode' or high-excitation radio galaxies. We measured the source size on images from the NRAO VLA Sky Survey, the Faint Images of the Radio Sky at twenty-cm and the Sydney University Molonglo Sky Survey images and have compared our findings with data in the literature obtaining a good match. We surprisingly found that the soft gamma-ray selection favours the detection of large size radio galaxies: 60 per cent of objects in the sample have size greater than 0.4 Mpc while around 22 per cent reach dimension above 0.7 Mpc at which point they are classified as giant radio galaxies (GRGs), the largest and most energetic single entities in the Universe. Their fraction among soft gamma-ray selected radio galaxies is significantly larger than typically found in radio surveys, where only a few per cent of objects (1-6 per cent) are GRGs. This may partly be due to observational biases affecting radio surveys more than soft gamma-ray surveys, thus disfavouring the detection of GRGs at lower frequencies. The main reasons and/or conditions leading to the formation of these large radio structures are still unclear with many parameters such as high jet power, long activity time and surrounding environment all playing a role; the first two may be linked to the type of active galactic nucleus discussed in this work and partly explain the high fraction of GRGs found in the present sample. Our result suggests that high energy surveys may be a more efficient way than radio surveys to find these peculiar objects.
Redler, Silke; Tazi-Ahnini, Rachid; Drichel, Dmitriy; Birch, Mary P; Brockschmidt, Felix F; Dobson, Kathy; Giehl, Kathrin A; Refke, Melanie; Kluck, Nadine; Kruse, Roland; Lutz, Gerhard; Wolff, Hans; Böhm, Markus; Becker, Tim; Nöthen, Markus M; Betz, Regina C; Messenger, Andrew
2012-05-01
Female pattern hair loss (FPHL) is a common disorder with a complex mode of inheritance. Although understanding of its etiopathogenesis is incomplete, an interaction between genetic and hormonal factors is assumed to be important. The involvement of an androgen-dependent pathway and sex steroid hormones is the most likely hypothesis. We therefore selected a total of 21 variants from the steroid-5-alpha-reductase isoforms SRD5A1 and SRD5A2, the sex steroid hormone receptors ESR1, ESR2 (oestrogen receptor) and PGR (progesterone receptor) and genotyped these in a case-control sample of 198 patients (145 UK; 53 German patients) and 329 controls (179 UK; 150 German). None of these variants showed any significant association, either in the overall UK and German samples or in the subgroup analyses. In summary, the present results, while based on a limited selection of gene variants, do not point to the involvement of SRD5A1, SRD5A2, ESR1, ESR2 or PGR in FPHL. © 2012 John Wiley & Sons A/S.
Kartal Temel, Nuket; Gürkan, Ramazan
2018-03-01
A novel ultrasound-assisted cloud-point extraction method was developed for the preconcentration and determination of V(V) in beverage samples. After complexation by pyrogallol in the presence of safranin T at pH 6.0, V(V) ions are extracted as a ternary complex into the micellar phase of Triton X-114. The complex was monitored at 533 nm by spectrophotometry. The matrix effect on the recovery of V(V) from samples spiked at 50 μg L(-1) was evaluated. Under optimized conditions, the limits of detection and quantification of the method were 0.58 and 1.93 μg L(-1), respectively, in a linear range of 2-500 μg L(-1), with a sensitivity enhancement factor of 47.7 and a preconcentration factor of 40 for preconcentration from 15 mL of sample solution. The recoveries from spiked samples were in the range of 93.8-103.2% with a relative standard deviation ranging from 2.6% to 4.1% (25, 100 and 250 μg L(-1), n: 5). The accuracy was verified by analysis of two certified samples, and the results were in good agreement with the certified values. The intra-day and inter-day precision were tested by reproducibility (3.3-3.4%) and repeatability (3.4-4.1%) analysis for five replicate measurements of V(V) in quality control samples spiked with 5, 10 and 15 μg L(-1). Trace V(V) contents of the selected beverage samples were successfully determined by the developed method.
A Novel Database to Rank and Display Archeomagnetic Intensity Data
NASA Astrophysics Data System (ADS)
Donadini, F.; Korhonen, K.; Riisager, P.; Pesonen, L. J.; Kahma, K.
2005-12-01
To understand the content and the causes of the changes in the Earth's magnetic field beyond the observatory records one has to rely on archeomagnetic and lake sediment paleomagnetic data. The regional archeointensity curves are often of different quality and temporally variable which hampers the global analysis of the data in terms of dipole vs non-dipole field. We have developed a novel archeointensity database application utilizing MySQL, PHP (PHP Hypertext Preprocessor), and the Generic Mapping Tools (GMT) for ranking and displaying geomagnetic intensity data from the last 12000 years. Our application has the advantage that no specific software is required to query the database and view the results. Querying the database is performed using any Web browser; a fill-out form is used to enter the site location and a minimum ranking value to select the data points to be displayed. The form also features the possibility to select plotting of the data as an archeointensity curve with error bars, and a Virtual Axial Dipole Moment (VADM) or ancient field value (Ba) curve calculated using the CALS7K model (Continuous Archaeomagnetic and Lake Sediment geomagnetic model) of (Korte and Constable, 2005). The results of a query are displayed on a Web page containing a table summarizing the query parameters, a table showing the archeointensity values satisfying the query parameters, and a plot of VADM or Ba as a function of sample age. The database consists of eight related tables. The main one, INTENSITIES, stores the 3704 archeointensity measurements collected from 159 publications as VADM (and VDM when available) and Ba values, including their standard deviations and sampling locations. It also contains the number of samples and specimens measured from each site. The REFS table stores the references to a particular study. The names, latitudes, and longitudes of the regions where the samples were collected are stored in the SITES table. The MATERIALS, METHODS, SPECIMEN_TYPES and DATING_METHODS tables store information about the sample materials, intensity determination methods, specimen types and age determination methods. The SIGMA_COUNT table is used indirectly for ranking data according to the number of samples measured and their standard deviations. Each intensity measurement is assigned a score (0--2) depending on the number of specimens measured and their standard deviations, the intensity determination method, the type of specimens measured and materials. The ranking of each data point is calculated as the sum of the four scores and varies between 0 and 8. Additionally, users can select the parameters that will be included in the ranking.
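A sketch of the ranking arithmetic. Only the structure is taken from the description above (four criteria, each scored 0-2, summed to a ranking of 0-8); the individual scoring rules inside this function are illustrative assumptions, not the database's actual tables:

```python
def rank_record(n_specimens, rel_std, method, specimen_type, material):
    """Total ranking 0-8 as the sum of four 0-2 scores.

    The cut-offs and category scores below are ASSUMED for illustration;
    only the four-criteria, 0-2-each, summed-to-0-8 structure is from the text.
    """
    if n_specimens >= 5 and rel_std is not None and rel_std <= 0.10:
        count_sigma = 2   # many specimens, tight standard deviation
    elif n_specimens >= 3:
        count_sigma = 1
    else:
        count_sigma = 0
    method_score = {"thellier": 2, "shaw": 1}.get(method, 0)
    specimen_score = {"core": 2, "fragment": 1}.get(specimen_type, 0)
    material_score = {"baked clay": 2, "pottery": 2, "slag": 1}.get(material, 0)
    return count_sigma + method_score + specimen_score + material_score

# Query-time filtering keeps records whose ranking meets the user's minimum.
ranking = rank_record(6, 0.08, "thellier", "core", "pottery")  # -> 8
```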
NASA Technical Reports Server (NTRS)
Baumann, P. R. (Principal Investigator)
1979-01-01
Three computer quantitative techniques for determining urban land cover patterns are evaluated. The techniques examined deal with the selection of training samples by an automated process, the overlaying of two scenes from different seasons of the year, and the use of individual pixels as training points. Evaluation is based on the number and type of land cover classes generated and the marks obtained from an accuracy test. New Orleans, Louisiana and its environs form the study area.
On simple aerodynamic sensitivity derivatives for use in interdisciplinary optimization
NASA Technical Reports Server (NTRS)
Doggett, Robert V., Jr.
1991-01-01
Low-aspect-ratio and piston aerodynamic theories are reviewed as to their use in developing aerodynamic sensitivity derivatives for use in multidisciplinary optimization applications. The basic equations relating surface pressure (or lift and moment) to normal wash are given and discussed briefly for each theory. The general means for determining selected sensitivity derivatives are pointed out. In addition, some suggestions in very general terms are included as to sample problems for use in studying the process of using aerodynamic sensitivity derivatives in optimization studies.
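As a reminder of why these derivatives come out simple, first-order piston theory gives a local, algebraic relation between surface pressure and normal wash. In one common form (assumed here; the report's own notation may differ):

```latex
\Delta p(x,t) \;=\; \rho_\infty a_\infty\, w(x,t)
\;=\; \rho_\infty a_\infty \left( \frac{\partial h}{\partial t} + U\,\frac{\partial h}{\partial x} \right)
```

Because the pressure depends only on the local normal wash w, a sensitivity derivative of lift or moment with respect to a shape parameter reduces to differentiating w, which is what makes these theories attractive inside optimization loops.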
An evaluation of the occupational health risks to workers in a hazardous waste incinerator.
Bakoğlu, Mithat; Karademir, Aykan; Ayberk, Savaş
2004-03-01
A study was conducted to evaluate the health impact of airborne pollutants on incinerator workers at IZAYDAS Incinerator, Turkey. Ambient air samples were taken from two sampling points in the incinerator area and analyzed for particulate matter, heavy metals, volatile and semi-volatile organic compounds (VOCs and SVOCs) and dioxins. The places where the maximum exposure was expected to occur were selected in determining the sampling points. The first point was placed in the front area of the rotary kiln, between the areas of barrel feeding, aqueous and liquid waste storage and solid waste feeding, and the second one was near the fly ash transfer line from the ash silo. Results were evaluated based on the regulations related to occupational health. Benzene, dibromochloropropane (DBCP) and hexachlorobutadiene (HCBD) concentrations in the ambient air of the plant were measured at levels higher than the occupational exposure limits. Dioxin concentrations were measured as 0.050 and 0.075 pg TEQ m(-3), corresponding to a daily intake between 0.007 and 0.01 pg TEQ (kg body weight)(-1) day(-1). An assessment of dioxin congener and homologue profiles suggested that gaseous fractions of dioxin congeners are higher in front of the rotary kiln, while most of them are in particle-bound phases near the ash conveyor. Finally, the necessity of further studies including occupational health and medical surveillance assessments on the health effects of the pollutants for the workers and the general population in such an industrialized area was emphasized.
MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow the specification, for regions and equipment, of occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.
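The thickness solve that such a tool performs can be illustrated with the Archer transmission model used throughout NCRP Report 147, which inverts in closed form. A minimal Python sketch; the fitting parameters are placeholders, not values from the report's tables:

```python
import numpy as np

def archer_transmission(x, a, b, g):
    """Archer model: broad-beam transmission B through a barrier of thickness x."""
    return ((1 + b / a) * np.exp(a * g * x) - b / a) ** (-1.0 / g)

def required_thickness(B, a, b, g):
    """Closed-form inversion of the Archer model for a target transmission B."""
    return np.log((B ** (-g) + b / a) / (1 + b / a)) / (a * g)

# Placeholder fitting parameters (alpha, beta, gamma) for some barrier
# material and workload spectrum -- NOT values from NCRP Report 147.
a, b, g = 2.3, 5.7, 0.6
B_needed = 0.01  # the unshielded air kerma must be reduced 100-fold
x = required_thickness(B_needed, a, b, g)
assert abs(archer_transmission(x, a, b, g) - B_needed) < 1e-9
```

A program of the kind described would evaluate this at many sample points beyond each barrier and keep the largest thickness, which is the "select the largest value" step in the conclusion above.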
Nicholls, Barry; Racey, Paul A.
2007-01-01
Large numbers of bats are killed by collisions with wind turbines, and there is at present no direct method of reducing or preventing this mortality. We therefore determine whether the electromagnetic radiation associated with radar installations can elicit an aversive behavioural response in foraging bats. Four civil air traffic control (ATC) radar stations, three military ATC radars and three weather radars were selected, each surrounded by heterogeneous habitat. Three sampling points matched for habitat type and structure, dominant vegetation species, altitude and surrounding land class were located at increasing distances from each station. A portable electromagnetic field meter measured the field strength of the radar at three distances from the source: in close proximity (<200 m) with a high electromagnetic field (EMF) strength >2 volts/metre, an intermediate point within line of sight of the radar (200–400 m) and with an EMF strength <2 v/m, and a control site out of sight of the radar (>400 m) and registering an EMF of zero v/m. At each radar station bat activity was recorded three times with three independent sampling points monitored on each occasion, resulting in a total of 90 samples, 30 of which were obtained within each field strength category. At these sampling points, bat activity was recorded using an automatic bat recording station, operated from sunset to sunrise. Bat activity was significantly reduced in habitats exposed to an EMF strength of greater than 2 v/m when compared to matched sites registering EMF levels of zero. The reduction in bat activity was not significantly different at lower levels of EMF strength within 400 m of the radar. We predict that the reduction in bat activity within habitats exposed to electromagnetic radiation may be a result of thermal induction and an increased risk of hyperthermia. PMID:17372629
In-line inspection of unpiggable buried live gas pipes using circumferential EMAT guided waves
NASA Astrophysics Data System (ADS)
Ren, Baiyang; Xin, Junjun
2018-04-01
Unpiggable buried gas pipes need to be inspected to ensure their structural integrity and safe operation. The CIRRIS XI™ robot, developed and operated by ULC Robotics, conducts in-line nondestructive inspection of live gas pipes. With its no-blow launching system, the inspection operation reduces disruption to the public and, by eliminating the need to dig trenches, minimizes the site footprint. This provides a highly time- and cost-effective solution for gas pipe maintenance. However, the current sensor on the robot performs a point-by-point measurement of the pipe wall thickness, which cannot cover the whole volume of the pipe in a reasonable timeframe. An ultrasonic guided wave technique is discussed to improve the volume coverage as well as the scanning speed. Circumferential guided waves are employed to perform axial scanning. Mode selection is discussed in terms of sensitivity to different defects and defect characterization capability. To assist with the mode selection, finite element analysis is performed to evaluate the wave-defect interaction and to identify potential defect features. Pulse-echo and through-transmission modes are evaluated and compared for their pros and cons in axial scanning. Experiments are also conducted to verify the mode selection and to detect and characterize artificial defects introduced into pipe samples.
NASA Astrophysics Data System (ADS)
Tanaka, Hidefumi; Yamamoto, Yuhji
2016-05-01
Palaeointensity experiments were carried out on a sample collection from two sections of basalt lava flow sequences of Pliocene age in north central Iceland (Chron C2An) to further refine the knowledge of the behaviour of the palaeomagnetic field. Selection of samples was mainly based on their stability of remanence to thermal demagnetization as well as good reversibility in variations of magnetic susceptibility and saturation magnetization with temperature, which would indicate the presence of magnetite as a product of deuteric oxidation of titanomagnetite. Among 167 lava flows from two sections, 44 flows were selected for the Königsberger-Thellier-Thellier experiment in vacuum. In spite of careful pre-selection of samples, an Arai plot with two linear segments, or a concave-up appearance, was often encountered during the experiments. This non-ideal behaviour was probably caused by an irreversible change in the domain state of the magnetic grains of the pseudo-single-domain (PSD) range. This is assumed because an ideal linear plot was obtained in the second run of the palaeointensity experiment in which a laboratory thermoremanence acquired after the final step of the first run was used as a natural remanence. This experiment was conducted on six selected samples, and no clear difference between the magnetic grains of the experimented and pristine sister samples was found by scanning electron microscope and hysteresis measurements, that is, no occurrence of notable chemical/mineralogical alteration, suggesting that no change in the grain size distribution had occurred. Hence, the two-segment Arai plot was not caused by the reversible multidomain/PSD effect in which the curvature of the Arai plot is dependent on the grain size. Considering that the irreversible change in domain state must have affected data points at not only high temperatures but also low temperatures, fv ≥ 0.5 was adopted as one of the acceptance criteria, where fv is a vectorially defined fraction of the linear segment. A measure of curvature k' was also used to check the linearity of the selected linear segment. We avoided, however, rejecting a result out of hand because of a large curvature k of the entire set of data points, because it might still include a linear segment with a large fraction. Combining with the results of Shaw's experiments, 52 palaeointensities were obtained out of 192 specimens, or 11 flow means were obtained out of the 44 lava flows. Most of the palaeointensities were from the upper part of the lava section (Chron C2An.1n) and ranged between 30 and 66 μT. Including two results from the bottom part of the lava section, the mean virtual dipole moment for 2.5-3.5 Ma is 6.3 ± 1.4 × 10^22 Am^2 (N = 11), which is ~19 per cent smaller than the present-day dipole moment.
Landslide susceptibility analysis with logistic regression model based on FCM sampling strategy
NASA Astrophysics Data System (ADS)
Wang, Liang-Jie; Sawada, Kazuhide; Moriguchi, Shuji
2013-08-01
Several mathematical models are used to predict the spatial distribution characteristics of landslides in order to mitigate the damage caused by landslide disasters. Although some studies have achieved excellent results around the world, few take the inter-relationships of the selected points (training points) into account. In this paper, we present the fuzzy c-means (FCM) algorithm as an optimal method for choosing appropriate landslide input points as training data. Based on different combinations of the fuzzy exponent (m) and the number of clusters (c), five groups of sampling points were derived from the original seed-cell points and applied to analyse landslide susceptibility in Mizunami City, Gifu Prefecture, Japan. A logistic regression model is applied to model the relationships between landslide-conditioning factors and landslide occurrence. The pre-existing landslide bodies and the area under the relative operating characteristic (ROC) curve were used to evaluate the performance of all the models with different m and c. The results revealed that Model no. 4 (m = 1.9, c = 4) and Model no. 5 (m = 1.9, c = 5) have significantly high classification accuracies, i.e., 90.0%. Moreover, over 30% of the landslide bodies were grouped under the very high susceptibility zone, and Model no. 4 and Model no. 5 had the higher area under the ROC curve (AUC) values, 0.78 and 0.79, respectively. Therefore, Model no. 4 and Model no. 5 offer better model results for landslide susceptibility mapping. Maps derived from Model no. 4 and Model no. 5 would offer the local authorities crucial information for city planning and development.
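A minimal sketch of the FCM-based sampling idea in Python: candidate seed-cell points are clustered with fuzzy c-means (implemented directly here), points with unambiguous cluster membership are retained, and a logistic susceptibility model is fitted. The membership threshold, the synthetic conditioning factors, and all variable names are illustrative assumptions, not the paper's data or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuzzy_cmeans(X, c=4, m=1.9, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means; returns cluster centres and membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Synthetic conditioning factors (e.g. slope, curvature, ...) for seed cells.
rng = np.random.default_rng(42)
X_seed = rng.normal(size=(500, 5))

_, U = fuzzy_cmeans(X_seed, c=4, m=1.9)
keep = U.max(axis=1) > 0.6          # retain points with unambiguous membership
X_pos = X_seed[keep]                # sampled landslide training points

# Non-landslide points sampled at random, then a logistic susceptibility model.
X_neg = rng.normal(loc=1.0, size=(len(X_pos), 5))
X = np.vstack([X_pos, X_neg])
y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```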
Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael
2017-11-01
Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. The aim is to introduce statistical approaches such as propensity score matching and mixed models into representative real-world analyses; the implementation in the statistical software R is also presented so that the results can be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) the genetic matching and covariate adjustment methods are used to adjust for selection bias. Hence, we analyse data from two population samples: the sample produced by the matching method and the full sample. The different analysis methods in this article produce different results but still point in the same direction. In our example, the estimated probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
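A sketch of the matching step, assuming simulated data: the article uses genetic matching in R, so the nearest-neighbour propensity-score matching below is only a simplified stand-in for the general idea of constructing a matched sample before fitting the outcome model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))                              # observed confounders
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # selection depends on X

# 1. Estimate propensity scores e(x) = P(treated | x).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. For each treated unit, pick the nearest control on the propensity score.
t_idx = np.flatnonzero(treated == 1)
c_idx = np.flatnonzero(treated == 0)
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3. The matched sample (treated + matched controls) is then analysed with the
#    outcome model of choice, e.g. a mixed model to handle clustering.
matched_sample = np.concatenate([t_idx, matches])
print("matched sample size:", matched_sample.size)
```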
NASA Astrophysics Data System (ADS)
Penny, Samantha J.; Masters, Karen L.; Weijmans, Anne-Marie; Westfall, Kyle B.; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Falcón-Barroso, Jesús; Law, David; Nichol, Robert C.; Thomas, Daniel; Bizyaev, Dmitry; Brownstein, Joel R.; Freischlad, Gordon; Gaulme, Patrick; Grabowski, Katie; Kinemuchi, Karen; Malanushenko, Elena; Malanushenko, Viktor; Oravetz, Daniel; Roman-Lopes, Alexandre; Pan, Kaike; Simmons, Audrey; Wake, David A.
2016-11-01
Using kinematic maps from the Sloan Digital Sky Survey (SDSS) Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, we reveal that the majority of low-mass quenched galaxies exhibit coherent rotation in their stellar kinematics. Our sample includes all 39 quenched low-mass galaxies observed in the first year of MaNGA. The galaxies are selected with Mr > -19.1, stellar masses 10^9 M⊙ < M* < 5 × 10^9 M⊙, EW(Hα) < 2 Å, and all have red colours (u - r) > 1.9. They lie on the size-magnitude and σ-luminosity relations for previously studied dwarf galaxies. Just six (15 ± 5.7 per cent) are found to have rotation speeds v_e,rot < 15 km s^-1 at ~1 R_e, and may be dominated by pressure support at all radii. Two galaxies in our sample have kinematically distinct cores in their stellar component, likely the result of accretion. Six contain ionized gas despite not hosting ongoing star formation, and this gas is typically kinematically misaligned from their stellar component. This is the first large-scale Integral Field Unit (IFU) study of low-mass galaxies selected without bias against low-density environments. Nevertheless, we find the majority of these galaxies are within ~1.5 Mpc of a bright neighbour (M_K < -23; or M* > 5 × 10^10 M⊙), supporting the hypothesis that galaxy-galaxy or galaxy-group interactions quench star formation in low-mass galaxies. The local bright galaxy density for our sample is ρ_proj = 8.2 ± 2.0 Mpc^-2, compared to ρ_proj = 2.1 ± 0.4 Mpc^-2 for a star-forming comparison sample, confirming that the quenched low-mass galaxies are preferentially found in higher density environments.
Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang
2014-11-01
Eliminating the effect of turbidity is a key technical problem in the direct spectroscopic detection of COD. UV-visible spectroscopic detection of water quality parameters depends on an accurate and effective analytical model, and turbidity is an important parameter affecting that model. In this paper, formazine turbidity standards and standard solutions of potassium hydrogen phthalate were selected to study the effect of turbidity on the UV-visible absorption spectroscopic detection of COD. At the characteristic wavelengths of 245, 300, 360 and 560 nm, the variation of absorbance with turbidity was fitted by least-squares curve fitting, allowing the absorbance-turbidity relationship to be analysed. The results show that in the ultraviolet range of 240 to 380 nm, because the particles causing turbidity interact with the organic compounds, the influence of turbidity on the ultraviolet spectrum of the water is relatively complicated; in the visible region of 380 to 780 nm, the effect of turbidity on the spectrum weakens as the wavelength increases. On this basis, the multiplicative scatter correction method was studied for calibrating water-sample spectra affected by turbidity. Comparison with the spectra before treatment shows that the baseline shifts caused by turbidity are effectively corrected, while the spectral features in the ultraviolet region are not diminished. Applying multiplicative scatter correction to the three selected UV-visible absorption spectra, the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method not only improves the signal-to-noise ratio of spectroscopic COD detection but also provides an efficient data-conditioning scheme for establishing accurate chemometric measurement methods.
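A minimal sketch of multiplicative scatter correction, assuming synthetic spectra: each spectrum is regressed against a reference spectrum (here the mean) and corrected with the fitted slope and intercept, which is the standard form of MSC rather than the authors' exact implementation.

```python
import numpy as np

def msc(spectra):
    """spectra: (n_samples, n_wavelengths). Returns MSC-corrected spectra."""
    ref = spectra.mean(axis=0)                        # reference spectrum
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        slope, intercept = np.polyfit(ref, x, deg=1)  # x ≈ slope*ref + intercept
        corrected[i] = (x - intercept) / slope
    return corrected

# Simulated absorbance spectra with turbidity-like multiplicative/additive drift.
rng = np.random.default_rng(0)
wl = np.linspace(240, 780, 271)
base = np.exp(-((wl - 280) / 40.0) ** 2)              # an organic absorption band
spectra = np.array([a * base + b
                    for a, b in rng.uniform([0.8, 0.0], [1.2, 0.3], (10, 2))])
print(msc(spectra).std(axis=0).max())                 # scatter largely removed
```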
NASA Astrophysics Data System (ADS)
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated on the basis of existing data and information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at quantifying the expected data worth based on existing models, but these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. The methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
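A first-order sketch of the combinatorial data-worth idea, under the usual linear-Bayesian assumptions: for each candidate set of observations, the posterior parameter covariance is C_post = (J^T R^-1 J + C_prior^-1)^-1 and the forecast variance is y^T C_post y, so candidate combinations can be ranked by the forecast uncertainty they would leave. All matrices below are synthetic stand-ins, not the paper's model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_par, n_obs = 6, 8
J = rng.normal(size=(n_obs, n_par))   # sensitivities of candidate obs to parameters
y = rng.normal(size=n_par)            # sensitivity of the forecast to parameters
C_prior = np.eye(n_par) * 4.0         # prior parameter covariance
obs_var = 0.1                         # observation error variance (R = obs_var * I)

def forecast_variance(subset):
    """Forecast variance remaining after collecting the observations in subset."""
    Js = J[list(subset)]
    C_post = np.linalg.inv(Js.T @ Js / obs_var + np.linalg.inv(C_prior))
    return y @ C_post @ y

# Evaluate all pairs of new observations and report the best combination.
best = min(itertools.combinations(range(n_obs), 2), key=forecast_variance)
print("best pair of observation locations:", best,
      " forecast variance:", forecast_variance(best))
```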
Active machine learning for rapid landslide inventory mapping with VHR satellite images (Invited)
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2013-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfall. Visual image interpretation is still the prevailing standard method for operational purposes, but it is time-consuming and not well suited to fully exploit the increasingly rich supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt to changing scene characteristics. Machine learning algorithms can learn classification rules for complex image patterns from labelled examples and can be adapted straightforwardly when training data are available. To reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling in many applications. The underlying idea of AL is to initialize a machine learning model with a small training set and to subsequently exploit the model state and data structure to iteratively select the most valuable samples to be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas. The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded accuracies of only 28% to 50% at equal sampling cost. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labelling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and those of the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that considering ground-truth uncertainty will be indispensable to truly achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
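A minimal uncertainty-sampling loop with a Random Forest, for illustration only: it shows the generic AL iteration described above, without the paper's region-based spatial constraint, which is the method's actual contribution. Data and names are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Small initial training set containing both classes; the rest is the pool.
labelled = list(np.flatnonzero(y == 0)[:5]) + list(np.flatnonzero(y == 1)[:5])
pool = [i for i in range(len(X)) if i not in labelled]

for it in range(20):                           # 20 query iterations
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])
    sorted_p = np.sort(proba, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # small margin = high uncertainty
    query = pool[int(np.argmin(margin))]        # most ambiguous sample
    labelled.append(query)                      # "user" labels it (oracle = y)
    pool.remove(query)

print("final accuracy:", clf.score(X, y))
```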
Gómez, María José; Herrera, Sonia; Solé, David; García-Calvo, Eloy; Fernández-Alba, Amadeo R
2012-03-15
This study aims to assess the occurrence, fate, and temporal and spatial distribution of anthropogenic contaminants in a river subjected to different pressures (industrial, agricultural, wastewater discharges). For this purpose, the Henares River basin (central Spain) can be considered a representative basin within a continental Mediterranean climate. As the studied river runs through several residential, industrial and agricultural areas, its chemical water quality would be expected to change along its course; the selection of sampling points and the timing of sample collection are therefore critical factors in the monitoring of a river basin. In this study, six monitoring campaigns were performed in 2010, and contaminants were measured at the effluent point of the main wastewater treatment plant (WWTP) in the river basin and at five points upstream and downstream of the WWTP emission point. The target compounds evaluated were personal care products (PCPs), polycyclic aromatic hydrocarbons (PAHs) and pesticides. Results show that the river is clearly influenced by wastewater discharges and also by its proximity to agricultural areas. The contaminants detected at the highest concentrations were the PCPs. The spatial distribution of the contaminants indicates that the studied contaminants persist along the river. In the time period studied, no great seasonal variations of PCPs at the river collection points were observed; in contrast, a temporal trend of pesticides and PAHs was observed. Besides the target compounds, other new contaminants were identified and evaluated in the water samples, some of them investigated for the first time in the aquatic environment. The behaviour of three important transformation products was also evaluated: 9,10-anthracenedione, galaxolide-lactone and 4-amino musk xylene. These were found at higher concentrations than their parent compounds, indicating the importance of including transformation products in monitoring programmes. Copyright © 2012 Elsevier B.V. All rights reserved.
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size, reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 against the alternative hypotheses H1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
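A small sketch of the post hoc sample-size logic, assuming illustrative effect sizes (the study's own values are not reproduced here): n is solved for 80% power at α = 0.05 under a one-sample t-test, at the estimated effect size and at the limits of its 95% CI, giving a point estimate and interval for the projected sample size.

```python
from statsmodels.stats.power import TTestPower

power = TTestPower()
# Assumed effect-size point estimate and 95% CI limits (illustrative values).
for label, es in [("ES_hat", 0.63), ("ES_lower", 0.18), ("ES_upper", 1.08)]:
    n = power.solve_power(effect_size=es, alpha=0.05, power=0.80,
                          alternative="two-sided")
    print(f"{label}: ES = {es:.2f} -> n = {n:.1f}")
```

Note the strongly asymmetric dependence of n on ES, which is why the interval on the post hoc sample size (e.g. 10-245 above) is so wide.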
Investigating Possible Outliers in the Fermi Blazar AGN Sample
NASA Astrophysics Data System (ADS)
Shrader, Chris
2018-01-01
The Fermi Gamma-Ray Space Telescope (Fermi) has cataloged over 3000 gamma-ray (>100 MeV) point sources, of which more than 1100 are likely AGN. These AGN are predominantly of the radio-loud “blazar” subclass. Recently, however, a significant sample of bright (F_15GHz > 1.5 Jy), radio-selected AGN was found to overlap with Fermi detections at only the ~80% level (Lister et al. 2015). This could be a result of some selection bias, or it could be due to deficient Doppler boosting among that ~20%. Additionally, a recent survey of high-latitude gamma-ray sources by Schinzel et al. (2017) reveals a sample of ~100 objects that are not detected in the 4-10 GHz radio band to a limiting flux of about 2 mJy. This apparent lack of radio flux is puzzling and may indicate either an extreme Compton-dominated sample or copious gamma-ray emission from a heretofore unknown population, such as a subclass of radio-quiet AGN. Speculatively, these radio-loud/gamma-quiet and gamma-loud/radio-quiet samples could be odd cases of the blazar phenomenon which reside outside of the well-known blazar sequence. To explore this problem further, we have undertaken a study to construct or constrain individual source SEDs as a first step towards their classification. In this contribution we present results from our search for emission in the Swift-BAT 15-100 keV hard X-ray band for each of these samples.
Culture adaptation of malaria parasites selects for convergent loss-of-function mutants.
Claessens, Antoine; Affara, Muna; Assefa, Samuel A; Kwiatkowski, Dominic P; Conway, David J
2017-01-24
Cultured human pathogens may differ significantly from source populations. To investigate the genetic basis of laboratory adaptation in malaria parasites, clinical Plasmodium falciparum isolates were sampled from patients and cultured in vitro for up to three months. Genome sequence analysis was performed on multiple culture time point samples from six monoclonal isolates, and single nucleotide polymorphism (SNP) variants emerging over time were detected. Out of a total of five positively selected SNPs, four represented nonsense mutations resulting in stop codons, three of these in a single ApiAP2 transcription factor gene, and one in SRPK1. To survey further for nonsense mutants associated with culture, genome sequences of eleven long-term laboratory-adapted parasite strains were examined, revealing four independently acquired nonsense mutations in two other ApiAP2 genes, and five in Epac. No mutants of these genes exist in a large database of parasite sequences from uncultured clinical samples. This implicates putative master regulator genes in which multiple independent stop codon mutations have convergently led to culture adaptation, affecting most laboratory lines of P. falciparum. Understanding the adaptive processes should guide development of experimental models, which could include targeted gene disruption to adapt fastidious malaria parasite species to culture.
Sobel Leonard, Ashley; McClain, Micah T; Smith, Gavin J D; Wentworth, David E; Halpin, Rebecca A; Lin, Xudong; Ransier, Amy; Stockwell, Timothy B; Das, Suman R; Gilbert, Anthony S; Lambkin-Williams, Robert; Ginsburg, Geoffrey S; Woods, Christopher W; Koelle, Katia
2016-12-15
Knowledge of influenza virus evolution at the point of transmission and at the intrahost level remains limited, particularly for human hosts. Here, we analyze a unique viral data set of next-generation sequencing (NGS) samples generated from a human influenza challenge study wherein 17 healthy subjects were inoculated with cell- and egg-passaged virus. Nasal wash samples collected from 7 of these subjects were successfully deep sequenced. From these, we characterized changes in the subjects' viral populations during infection and identified differences between the virus in these samples and the viral stock used to inoculate the subjects. We first calculated pairwise genetic distances between the subjects' nasal wash samples, the viral stock, and the influenza virus A/Wisconsin/67/2005 (H3N2) reference strain used to generate the stock virus. These distances revealed that considerable viral evolution occurred at various points in the human challenge study. Further quantitative analyses indicated that (i) the viral stock contained genetic variants that originated and likely were selected for during the passaging process, (ii) direct intranasal inoculation with the viral stock resulted in a selective bottleneck that reduced nonsynonymous genetic diversity in the viral hemagglutinin and nucleoprotein, and (iii) intrahost viral evolution continued over the course of infection. These intrahost evolutionary dynamics were dominated by purifying selection. Our findings indicate that rapid viral evolution can occur during acute influenza infection in otherwise healthy human hosts when the founding population size of the virus is large, as is the case with direct intranasal inoculation. Influenza viruses circulating among humans are known to rapidly evolve over time. However, little is known about how influenza virus evolves across single transmission events and over the course of a single infection. To address these issues, we analyze influenza virus sequences from a human challenge experiment that initiated infection with a cell- and egg-passaged viral stock, which appeared to have adapted during its preparation. We find that the subjects' viral populations differ genetically from the viral stock, with subjects' viral populations having lower representation of the amino-acid-changing variants that arose during viral preparation. We also find that most of the viral evolution occurring over single infections is characterized by further decreases in the frequencies of these amino-acid-changing variants and that only limited intrahost genetic diversification through new mutations is apparent. Our findings indicate that influenza virus populations can undergo rapid genetic changes during acute human infections. Copyright © 2016 Sobel Leonard et al.
Wang, Gang; Mao, Bing; Xiong, Ze-Yu; Fan, Tao; Chen, Xiao-Dong; Wang, Lei; Liu, Guan-Jian; Liu, Jia; Guo, Jia; Chang, Jing; Wu, Tai-Xiang; Li, Ting-Qian
2007-07-01
The number of randomized controlled trials (RCTs) of traditional Chinese medicine (TCM) is increasing. However, there have been few systematic assessments of the quality of reporting of these trials. This study was undertaken to evaluate the quality of reporting of RCTs in TCM journals published in mainland China from 1999 to 2004. Thirteen TCM journals were randomly selected by stratified sampling of the approximately 100 TCM journals published in mainland China. All issues of the selected journals published from 1999 to 2004 were hand-searched according to guidelines from the Cochrane Centre. All reviewers underwent training in the evaluation of RCTs at the Chinese Centre of Evidence-based Medicine. A comprehensive quality assessment of each RCT was completed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT) checklist (30 items in total) and the Jadad scale. Disagreements were resolved by consensus. Seven thousand four hundred twenty-two RCTs were identified. The proportion of published RCTs relative to all types of published clinical trials increased significantly over the period studied, from 18.6% in 1999 to 35.9% in 2004 (P < 0.001). The mean (SD) Jadad score was 1.03 (0.61) overall. One RCT had a Jadad score of 5 points, 14 had a score of 4 points, and 102 had a score of 3 points. The mean (SD) Jadad score was 0.85 (0.53) in 1999 (746 RCTs) and 1.20 (0.62) in 2004 (1634 RCTs). Across all trials, 39.4% of the items on the modified CONSORT checklist were reported, equivalent to 11.82 (5.78) of the 30 items. Some important methodologic components of RCTs were incompletely reported, such as sample-size calculation (reported in 1.1% of RCTs), randomization sequence generation (7.9%), allocation concealment (0.3%), implementation of the random-allocation sequence (0%), and intention-to-treat analysis (0%). The findings of this study indicate that the quality of reporting of RCTs of TCM has improved, but remains poor.
Bernatas, J J; Mohamed Ali, I; Ali Ismaël, H; Barreh Matan, A
2008-12-01
The purpose of this report was to describe a tuberculin survey conducted in 2001 to assess the trend in the annual risk of tuberculosis infection in Djibouti and to compare the resulting data with those obtained in a previous survey conducted in 1994. In 2001, cluster sampling allowed the selection of 5599 schoolchildren between the ages of 6 and 10 years, including 31.2% (1747/5599) without a BCG vaccination scar. In this sample, the annual risk of infection (ARI) estimates, obtained using cutoff points of 6 mm, 10 mm and 14 mm corrected by a factor of 1/0.82 and a mode value (18 mm) determined according to the "mirror" method, were 4.67%, 3.64%, 3.19% and 2.66%, respectively. The distribution of positive tuberculin skin reaction sizes was significantly different from the normal distribution. In 1994, a total of 5257 children had been selected using the same method. The distribution of positive reactions was not significantly different from a Gaussian distribution, and 28.6% (1505/5257) did not have a BCG scar. The ARI estimates obtained using cutoff points of 6 mm, 10 mm and 14 mm corrected by a factor of 1/0.82 and a mode value (17 mm) determined according to the "mirror" method were 2.68%, 2.52%, 2.75% and 3.32%, respectively. Tuberculin skin reaction size among positive skin test reactors was correlated with the presence of a BCG scar, and its mean was significantly higher among children with a BCG scar. The proportion of positive skin test reactors was also higher in the BCG scar group regardless of the cutoff point selected. Comparison of prevalence rates and ARI values did not allow any clear conclusion to be drawn, mainly because of a drastic difference in the positive reaction distribution profiles between the two studies: the distribution of skin test reaction sizes in the 1994 study could be modelled by a Gaussian distribution, whereas that of the 2001 study could not. A partial explanation for the positive reaction distribution observed in the 2001 study might be the existence of cross-reactions with environmental mycobacteria.
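For reference, the ARI is conventionally derived from the prevalence P of positive reactors at mean age a as ARI = 1 - (1 - P)^(1/a). A small sketch with assumed prevalences (the survey's per-cutoff counts are not given here):

```python
def annual_risk_of_infection(prevalence, mean_age):
    """Conventional ARI formula: ARI = 1 - (1 - P)^(1/a)."""
    return 1 - (1 - prevalence) ** (1 / mean_age)

# Assumed illustrative prevalences for children with a mean age of 8 years.
for p in (0.20, 0.25, 0.30):
    print(f"P = {p:.2f} -> ARI = {annual_risk_of_infection(p, 8.0):.2%}")
```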
Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M
2014-07-01
The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the two. Despite these developments, the results of studies directly testing this idea have been mixed. Here, we propose that this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from two different samples of job incumbents, we show that the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models are mixed, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.
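A minimal sketch contrasting the two measurement models, with illustrative parameters: a dominance item response function is monotone in the trait, whereas an ideal point function is single-peaked around the item's location, which is what allows rank orders (and hence curvilinear trait-performance links) to differ between the two scorings.

```python
import numpy as np

def dominance_irf(theta, a=1.5, b=0.0):
    """P(endorse) increases monotonically with trait level theta (2PL-type)."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def ideal_point_irf(theta, a=1.5, b=0.0):
    """P(endorse) is single-peaked: maximal when theta is near item location b."""
    return np.exp(-a * (theta - b) ** 2)

theta = np.linspace(-3, 3, 7)
print("theta      :", theta)
print("dominance  :", dominance_irf(theta).round(2))
print("ideal point:", ideal_point_irf(theta).round(2))
# Under the ideal point model, very high-theta respondents may reject a
# moderately worded item, changing rank orders relative to dominance scoring.
```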
Grey W. Pendleton
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...
Adaptive 4D PSI-Based Change Detection
NASA Astrophysics Data System (ADS)
Yang, Chia-Hsiang; Soergel, Uwe
2018-04-01
In a previous work, we proposed a PSI-based 4D change detection method to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building construction in cities. The method first divides an entire SAR image stack into several subsets by a set of break dates. PS points, which are selected based on their temporal coherences before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from the temporal coherences of the divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date at which the change occurred. Experiments validated both the feasibility and the applicability of the method. However, two questions remained. First, the selection of the temporal coherence threshold involves a trade-off between the quality and the quantity of PS points; this selection also governs the number of change points in a more complex way. Second, heuristic selection of the change index thresholds is fragile and causes loss of change points. In this study, we adapt our approach to identify change points based on the statistical characteristics of the change indices rather than on thresholding. The experiment validates this adaptive approach and shows an increase in detected change points compared with the previous version. In addition, we explore and discuss the optimal selection of the temporal coherence threshold.
Effects of changing canopy directional reflectance on feature selection
NASA Technical Reports Server (NTRS)
Smith, J. A.; Oliver, R. E.; Kilpela, O. E.
1973-01-01
The use of a Monte Carlo model for generating sample directional reflectance data for two simplified target canopies at two different solar positions is reported. Successive iterations through the model permit the calculation of a mean vector and covariance matrix for canopy reflectance for varied sensor view angles. These data may then be used to calculate the divergence between the target distributions for various wavelength combinations and for these view angles. Results of a feature selection analysis indicate that different sets of wavelengths are optimum for target discrimination depending on sensor view angle and that the targets may be more easily discriminated for some scan angles than others. The time-varying behavior of these results is also pointed out.
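A sketch of divergence-based wavelength selection under Gaussian class assumptions: given per-target mean vectors and covariance matrices (as produced by the Monte Carlo reflectance runs), the symmetric (Jeffreys) divergence is computed for every wavelength pair and the most separable pair is kept. The statistics below are synthetic placeholders.

```python
import itertools
import numpy as np

def gaussian_divergence(m1, C1, m2, C2):
    """Symmetric (Jeffreys) divergence between two multivariate normals."""
    iC1, iC2 = np.linalg.inv(C1), np.linalg.inv(C2)
    dm = (m1 - m2)[:, None]
    return 0.5 * np.trace((C1 - C2) @ (iC2 - iC1)) + \
           0.5 * (dm.T @ (iC1 + iC2) @ dm).item()

rng = np.random.default_rng(7)
n_bands = 6
mA, mB = rng.normal(size=n_bands), rng.normal(size=n_bands)   # class means
A = rng.normal(size=(n_bands, n_bands)); CA = A @ A.T + np.eye(n_bands)
B = rng.normal(size=(n_bands, n_bands)); CB = B @ B.T + np.eye(n_bands)

best = max(itertools.combinations(range(n_bands), 2),
           key=lambda p: gaussian_divergence(mA[list(p)], CA[np.ix_(p, p)],
                                             mB[list(p)], CB[np.ix_(p, p)]))
print("most separable wavelength pair:", best)
```

Repeating this per sensor view angle would reproduce the paper's observation that the optimal band set changes with viewing geometry.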
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
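A simplified sketch of the core idea, assuming synthetic data: the distinct active sets along the lasso path are treated as the model space, and each model is weighted by a BIC approximation to its posterior probability. The paper explores this space with MC3 rather than the exhaustive BIC weighting used here.

```python
import numpy as np
from sklearn.linear_model import lasso_path, LinearRegression

rng = np.random.default_rng(0)
n, p = 60, 30
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(scale=0.5, size=n)

# 1. The lasso path defines the model space: each distinct active set is a model.
alphas, coefs, _ = lasso_path(X, y)                  # coefs: (n_features, n_alphas)
models = {tuple(np.flatnonzero(coefs[:, j])) for j in range(coefs.shape[1])}
models.discard(())                                   # drop the empty model

# 2. Weight each model by a BIC approximation to its posterior probability.
fits, bics = [], []
for active in sorted(models):
    Xa = X[:, list(active)]
    fit = LinearRegression().fit(Xa, y)
    rss = float(np.sum((y - fit.predict(Xa)) ** 2))
    bics.append(n * np.log(rss / n) + len(active) * np.log(n))
    fits.append(fit.predict(Xa))

b = np.asarray(bics)
w = np.exp(-0.5 * (b - b.min()))                     # stabilised BIC weights
w /= w.sum()

# 3. Model-averaged prediction.
y_bma = np.sum(w[:, None] * np.asarray(fits), axis=0)
print("models considered:", len(models), " top model weight:", w.max().round(3))
```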
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. SRC requires only a few training samples from each class and can still achieve good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighbourhood. The entries of the LiDAR tensor are accessed via four indexes, each called a mode: three spatial modes in directions X, Y, Z and one feature mode. The sparsity algorithm finds the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, Tucker decomposition is used: it decomposes a tensor into a core tensor multiplied by a matrix along each mode. These matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as the initial elements of the dictionary, and through dictionary learning a reconstructive and discriminative structure dictionary is built along each mode. The overall structure dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by the tensor OMP (orthogonal matching pursuit) method based on the dictionaries along each mode. It is expected that the original tensor is well recovered by the sub-dictionary associated with the relevant class, while entries of the sparse tensor associated with other classes are nearly zero; SRC therefore uses the class-wise reconstruction error to classify the data. A section of airborne LiDAR points over the city of Vienna is classified into six classes: ground, roofs, vegetation, covered ground, walls and other points, using only six training samples per class. For the final classification result, ground and covered ground are merged into a single class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls and 20.39% for other objects.
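A minimal SRC sketch on vectorized samples, omitting the tensor/Tucker machinery of the paper: each test sample is sparsely coded over a dictionary of training atoms with orthogonal matching pursuit and assigned to the class whose atoms yield the smallest reconstruction error. Data and dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
n_classes, n_train, dim = 3, 6, 40                 # 6 training samples per class
means = rng.normal(scale=3.0, size=(n_classes, dim))
D = np.vstack([means[c] + rng.normal(size=(n_train, dim))
               for c in range(n_classes)])         # dictionary rows = atoms
labels = np.repeat(np.arange(n_classes), n_train)

def src_classify(x, n_nonzero=5):
    """Code x over all atoms, then pick the class with smallest residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D.T, x)
    coef = omp.coef_
    errs = [np.linalg.norm(x - D.T @ (coef * (labels == c)))
            for c in range(n_classes)]             # class-wise reconstruction error
    return int(np.argmin(errs))

test = means[1] + rng.normal(size=dim)
print("predicted class:", src_classify(test))      # expected: 1
```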
A modern soil carbon stock baseline for the conterminous United States
NASA Astrophysics Data System (ADS)
Loecke, T.; Wills, S. A.; Teachman, G.; Sequeira, C.; West, L.; Wijewardane, N.; Ge, Y.
2016-12-01
The Rapid Carbon Assessment Project was undertaken to ascertain soil carbon stocks across the conterminous US at one point in time. Sample locations were chosen randomly from the NRI (National Resources Inventory) sampling framework and cover all areas in CONUS with SSURGO-certified maps as of December 2010. The project was regionalized into 17 areas for logistical reasons. Within each region, soils were grouped by official series description properties. Sites were selected by soil group and land use/cover as indicated by NRI or NLCD (USGS National Land Cover Dataset) class, so that more extensive soil groups and/or land use/covers received more points and less extensive ones fewer (with a minimum of 5 sites). Each region had 375-400 sites, for a total of approximately 6,400 sites. At each site, basic information about land use, vegetation and management was collected as appropriate and available. Samples were collected from 5 pedons (a central pedon and 4 satellites) per site to a depth of 1 m, at 0-5 cm and by genetic horizon. A volumetric sample was collected for horizons above 50 cm to determine bulk density. For horizons below 50 cm (or when a volumetric sample could not be obtained), bulk density was modeled from morphological information. All samples were air-dried and crushed to <2 mm. The central pedon was analyzed for total and organic carbon at the Kellogg Soil Survey Laboratory in Lincoln, NE. A visible near-infrared (VNIR) spectrophotometer was used to predict organic and inorganic carbon contents for all satellite samples. A hierarchical Bayesian statistical approach was used to estimate C stocks, concentrations, and uncertainty at each sampling level (i.e., CONUS, region, soil group, land use and site). Carbon concentrations and stocks were summarized by surface horizon and depth increments for sites, soil groups, and land use/groups, and mapped by linking the values to a raster of SSURGO (January 2012) that includes map unit and NLCD classification. This modern soil C stock baseline data set will be useful for many applications in climate science and biogeochemistry.
NASA Astrophysics Data System (ADS)
Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui
2018-04-01
A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristics of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in the P-DMZMs helps achieve preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of valid nonlinearity suppression and provides better SNR performance even over a large frequency range. The proposed scheme is shown to be effective and easily implemented for real-time photonic applications.
A proposed method for world weightlifting championships team selection.
Chiu, Loren Z F
2009-08-01
The caliber of competitors at the World Weightlifting Championships (WWC) has increased greatly over the past 20 years. As the WWC are the primary qualifiers for Olympic slots (1996 to present), it is imperative for a nation to select team members who will finish with a high placing and score team points. Previous selection methods were based on a simple percentage system. Analysis of the results from the 2006 and 2007 WWC indicates a curvilinear trend in each weight class, suggesting a simple percentage system will not maximize the number of team points earned. To maximize team points, weightlifters should be selected based on their potential to finish in the top 25. A 5-tier ranking system is proposed that should ensure the athletes with the greatest potential to score team points are selected.
[Epidemiological study of cytopenia among benzene-exposed workers and its influential factors].
Peng, Juan-juan; Liu, Mei-xia; Yang, Feng; Guo, Wei-wei; Zhuang, Ran; Jia, Xian-dong
2013-03-01
To evaluate the benzene exposure level and cytopenia among benzene-exposed workers in Shanghai, China, and to analyze the factors influencing the health of benzene-exposed workers. A total of 3314 benzene-exposed workers, drawn from 85 benzene-related enterprises selected by stratified random sampling based on enterprise size and industry, were included in the study. The time-weighted average (TWA) concentration of benzene in each workshop was measured by individual sampling and fixed-point sampling, and the benzene exposure level in the workshop was evaluated accordingly. The occupational health examination results and health status of the benzene-exposed workers were collected. The median of the TWA concentrations of benzene was 0.3 mg/m3. The TWA concentrations measured at 7 (1.4%) of the 504 sampling points were above the safety limit. Of the 7 points, 3 were from large enterprises, 2 from medium enterprises, and 2 from small enterprises; 3 were from the shipbuilding industry, 1 from the chemical industry, and 3 from light industry. Of the 3314 benzene-exposed workers, 451 (13.6%) had cytopenia, including 339 males (339/2548, 13.3%) and 112 females (112/766, 14.6%). There were significant differences in the incidence rates of leukopenia and neutropenia among benzene-exposed workers of different sexes and ages (P < 0.05); there were significant differences in the incidence rate of cytopenia among benzene-exposed workers of different ages and working years (P < 0.05); and there were significant differences in the incidence of neutropenia among benzene-exposed workers of different working years (P < 0.05). Monitoring and intervention measures should be enhanced to protect benzene-exposed workers in large enterprises in the shipbuilding industry and in medium and private enterprises in the chemical industry from occupational hazards.
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Leukodepletion as a Point-of-Care Method for Monitoring HIV-1 Viral Load in Whole Blood
Titchmarsh, Logan; Zeh, Clement; Verpoort, Thierry; Allain, Jean-Pierre
2014-01-01
In order to limit the interference of HIV-1 cellular nucleic acids in estimating viral load (VL), the feasibility of leukodepletion of a small whole-blood (WB) volume to eliminate only leukocyte cell content was investigated, using a selection of filters. The efficacy of leukocyte filtration was evaluated by counting, CD45 quantitative PCR, and HIV-1 DNA quantification. Plasma HIV-1 was tested by real-time reverse transcription (RT)-PCR. A specific, miniaturized filter was developed and tested for leukocyte and plasma virus retention, WB sample dilution, and filtration parameters in HIV-1-spiked WB samples. This device proved effective to retain >99.9% of white blood cells in 100 μl of WB without affecting plasma VL. The Samba sample preparation chemistry was adapted to use a leukodepleted WB sample for VL monitoring using the point-of-care Samba-1 semiautomated system. The clinical performance of the assay was evaluated by testing 207 consecutive venous EDTA WB samples from HIV-1-infected patients attending a CD4 testing clinic. Most patients were on antiretroviral treatment (ART), but their VL status was unknown. Compared to the Roche Cobas AmpliPrep/Cobas TaqMan HIV-1 test, the new Samba assay had a concordance of 96.5%. The use of the Samba system with a VL test for WB might contribute to HIV-1 ART management and reduce loss-to-follow-up rates in resource-limited settings. PMID:25428162
Fazeli Dehkordy, Soudabeh; Hall, Kelli S; Dalton, Vanessa K; Carlos, Ruth C
2016-10-01
Research has not adequately examined the potential negative effects of perceiving routine discrimination on general healthcare utilization or health status, especially among reproductive-aged women. We sought to evaluate the association between everyday discrimination, health service use, and perceived health among a national sample of women in the United States. Data were drawn from the Women's Healthcare Experiences and Preferences survey, a randomly selected, national probability sample of 1078 U.S. women aged 18-55 years. We examined associations between everyday discrimination (via a standardized scale) on frequency of health service utilization and perceived general health status using chi-square and multivariable logistic regression modeling. Compared with women who reported healthcare visits every 3 years or less (reference group), each one-point increase in discrimination score was associated with higher odds of having healthcare visits annually or more often (odds ratio [OR] = 1.36, confidence interval [95% CI] = 1.01-1.83). Additionally, each one-point increase in discrimination score was significantly associated with lower odds of having excellent/very good perceived health (OR = 0.65; 95% CI = 0.54-0.80). Perceived discrimination was associated with increased exposure to the healthcare setting among this national sample of women. Perceived discrimination was also inversely associated with excellent/very good perceived health status.
Melo, Armindo; Pinto, Edgar; Aguiar, Ana; Mansilha, Catarina; Pinho, Olívia; Ferreira, Isabel M P L V O
2012-07-01
A monitoring program of nitrate, nitrite, potassium, sodium, and pesticides was carried out on water samples from an intensive horticulture area in a vulnerable zone in the north of Portugal. Eight collecting points were selected, and water was analyzed in five sampling campaigns during one year. Chemometric techniques, such as cluster analysis, principal component analysis (PCA), and discriminant analysis, were used to understand the impact of intensive horticultural practices on groundwater from dug and drilled wells and to study variations in the hydrochemistry of the groundwater. PCA performed on the pesticide data matrix yielded seven significant PCs explaining 77.67% of the data variance. Although PCA rendered considerable data reduction, it could not clearly group and distinguish the sample types; however, a visible differentiation between the water samples was obtained. Cluster and discriminant analysis grouped the eight collecting points into three clusters of similar characteristics pertaining to water contamination, indicating that it is necessary to improve the use of water, fertilizers, and pesticides. Inorganic fertilizers such as potassium nitrate were suspected to be the most important factor in nitrate contamination, since a highly significant Pearson correlation (r = 0.691, P < 0.01) was obtained between groundwater nitrate and potassium contents. Water from dug wells is especially prone to contamination from the grower's and their closer neighbours' practices; water from drilled wells is also contaminated by more distant practices.
Kassem, Mohammed A; Amin, Alaa S
2015-02-05
A new method to determine rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4'-nitro-2',6'-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by a cloud point extraction process using the nonionic surfactant Triton X-114 to extract the complex from aqueous solutions at pH 4.75. After phase separation at 50°C, the surfactant-rich phase was heated again at 100°C to remove water after decantation, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range 0.5-75 ng mL^-1 and the detection limit was 0.15 ng mL^-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in different complex materials such as synthetic alloy mixtures and environmental water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Hao, Zhi-hong; Yao, Jian-zhen; Tang, Rui-ling; Zhang, Xue-mei; Li, Wen-ge; Zhang, Qin
2015-02-01
A method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct-current arc full-spectrum direct-reading atomic emission spectroscopy (DC-Arc-AES) was established. The direct-current arc full-spectrum direct-reading atomic emission spectrometer, equipped with a large-area solid-state detector, provides full-spectrum direct reading and real-time background correction. New electrodes and a new buffer recipe are proposed in this paper, and a national patent has been applied for. Suitable analytical line pairs and background-correction points for the elements were selected, and the internal standard method was used, with Ge as the internal standard. A multistage current program was selected, with a different holding time set for each current step, to ensure that each element has a good signal-to-noise ratio. A continuously rising current mode was selected to effectively eliminate splashing of the sample. Argon as a shielding gas eliminates CN band generation and reduces the spectral background, and also helps stabilize the arc; an argon flow of 3.5 L min^-1 was selected. Evaporation curves for each element were recorded, showing that the evaporation behaviour of the elements is consistent; combined with the effects of different spectrographic times on intensity and background, a spectrographic time of 35 s was selected. National standard reference materials were selected as the standard series, covering different matrices and different contents suited to the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples. Under the optimum experimental conditions, the detection limits for B, Mo, Ag, Sn and Pb are 1.1, 0.09, 0.01, 0.41 and 0.56 μg g^-1, respectively, and the precisions (RSD, n = 12) for B, Mo, Ag, Sn and Pb are 4.57%-7.63%, 5.14%-7.75%, 5.48%-12.30%, 3.97%-10.46% and 4.26%-9.21%, respectively. The analytical accuracy was validated with national reference materials, and the results are in agreement with certified values. The method is simple, rapid and practical, and constitutes an advanced analytical method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples.
Salinas-Rodríguez, Aarón; Manrique-Espinoza, Betty; Acosta-Castillo, Gilberto Isaac; Franco-Núñez, Aurora; Rosas-Carrasco, Oscar; Gutiérrez-Robledo, Luis Miguel; Sosa-Ortiz, Ana Luisa
2014-01-01
To identify a valid cutoff point for the seven-item Center for Epidemiologic Studies Depression Scale (CES-D) that allows the classification of older adults according to the presence/absence of clinically significant depressive symptoms. Screening study of 229 older adults residing in two states of Mexico (Morelos and Tlaxcala), who were part of the sample of the National Survey of Health and Nutrition, 2012. We estimated the sensitivity and specificity associated with the selected cutoff points using the diagnostic criteria of the ICD-10 (International Classification of Diseases, 10th revision) and the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, fourth edition). The estimated cutoff point was CES-D = 5. According to the ICD-10, sensitivity and specificity were 83.3% and 90.2%, and the area under the ROC curve was 87%. Using the DSM-IV, the values were 85%, 83.2%, and 84%, respectively. The short version of the CES-D can be used as a screening test to identify probable cases of older adults with clinically significant depressive symptoms.
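A small sketch of the cutoff evaluation, with simulated stand-in data (not the survey's): candidate cutoffs are swept, sensitivity and specificity are computed against the gold-standard diagnosis, and the area under the ROC curve is reported.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
gold = rng.binomial(1, 0.3, size=229)                  # DSM/ICD diagnosis (0/1)
score = np.where(gold == 1, rng.normal(9, 3, 229), rng.normal(3, 2, 229))
score = np.clip(score.round(), 0, 21)                  # 7-item CES-D range (0-21)

for cutoff in range(3, 8):
    pos = score >= cutoff
    sens = (pos & (gold == 1)).sum() / (gold == 1).sum()
    spec = (~pos & (gold == 0)).sum() / (gold == 0).sum()
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
print("AUC:", roc_auc_score(gold, score).round(2))
```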
High temperature pressurized high frequency testing rig and test method
De La Cruz, Jose; Lacey, Paul
2003-04-15
An apparatus is described which permits the lubricity of fuel compositions to be evaluated at or near the temperatures and pressures experienced by compression-ignition fuel injector components during operation in a running engine. The apparatus consists of means to apply a measured force between two surfaces and to oscillate them at high frequency while they are wetted with a sample of the fuel composition heated to an operator-selected temperature. Provision is made to permit operation at or near the flash point of the fuel compositions. Additionally, a method of using the subject apparatus to simulate ASTM Test Method D6079 is disclosed, said method involving using the disclosed apparatus to contact the faces of prepared workpieces under a measured load, sealing the workface contact point into the disclosed apparatus while immersing said contact point between said workfaces in a lubricating medium to be tested, pressurizing and heating the chamber and thereby the fluid and workfaces therewithin, using the disclosed apparatus to impart a differential linear motion between the workpieces at their contact point until a measurable scar is imparted to at least one workpiece workface, and then evaluating the workface scar.
Protection reduces loss of natural land-cover at sites of conservation importance across Africa.
Beresford, Alison E; Eshiamwata, George W; Donald, Paul F; Balmford, Andrew; Bertzky, Bastian; Brink, Andreas B; Fishpool, Lincoln D C; Mayaux, Philippe; Phalan, Ben; Simonetti, Dario; Buchanan, Graeme M
2013-01-01
There is an emerging consensus that protected areas are key in reducing adverse land-cover change, but their efficacy remains difficult to quantify. Many previous assessments of protected area effectiveness have compared changes between sets of protected and unprotected sites that differ systematically in other potentially confounding respects (e.g. altitude, accessibility), have considered only forest loss or changes at single sites, or have analysed changes derived from land-cover data of low spatial resolution. We assessed the effectiveness of protection in reducing land-cover change in Important Bird Areas (IBAs) across Africa using a dedicated visual interpretation of higher resolution satellite imagery. We compared rates of change in natural land-cover over a c. 20-year period from around 1990 at a large number of points across 45 protected IBAs to those from 48 unprotected IBAs. A matching algorithm was used to select sample points to control for potentially confounding differences between protected and unprotected IBAs. The rate of loss of natural land-cover at sample points within protected IBAs was just 42% of that at matched points in unprotected IBAs. Conversion was especially marked in forests, but protection reduced rates of forest loss by a similar relative amount. Rates of conversion increased from the centre to the edges of both protected and unprotected IBAs, but rates of loss in 20-km buffer zones surrounding protected IBAs and unprotected IBAs were similar, with no evidence of displacement of conversion from within protected areas to their immediate surrounds (leakage).
[Usefulness of a screening questionnaire for post traumatic stress in a Colombian population].
Pineda, D A; Guerrero, O L; Pinilla, M L; Estupiñán, M
Rating scales for post-traumatic stress disorder (PTSD) should be consistent with DSM-IV criteria and should be validated for each culture. The aim was to validate a PTSD checklist in the population of a small Colombian town that had been partially destroyed by a guerrilla attack. A stratified, representative, randomized sample of 202 adult participants, aged over 15 years, was selected from San Joaquin (Santander, Colombia) two years after the attack. A structured interview (SCID-I), based on DSM-IV criteria, was conducted with each member of the sample. Seventy-six participants (37.6%) met criteria for PTSD, and 126 (62.4%) were classified as non-PTSD. A rating checklist of 24 PTSD symptoms was applied by self-report, with each item scored 1 to 4. The PTSD checklist had a reliability (Cronbach's alpha) of 0.97. The PTSD group scored 70.4 ± 22.9 and the non-PTSD group 37.2 ± 13.7 (p < 0.0001) on the checklist. A discriminant analysis found that the scale correctly classified 88.6% of cases (p < 0.0001). Sensitivity ranged between 76.3% for a cutoff point of 51 and 81.6% for a cutoff point of 45. Specificity ranged between 71.4% for a cutoff point of 45 and 84.4% for a cutoff point of 51. The PTSD checklist had high reliability, good discriminant capability, and good sensitivity and specificity.
NASA Astrophysics Data System (ADS)
Mardirossian, Narbe; Head-Gordon, Martin
2018-06-01
A meta-generalized gradient approximation, range-separated double hybrid (DH) density functional with VV10 non-local correlation is presented. The final 14-parameter functional form is determined by screening trillions of candidate fits through a combination of best subset selection, forward stepwise selection, and random sample consensus (RANSAC) outlier detection. The MGCDB84 database of 4986 data points is employed in this work, containing a training set of 870 data points, a validation set of 2964 data points, and a test set of 1152 data points. Following an xDH approach, orbitals from the ωB97M-V density functional are used to compute the second-order perturbation theory correction. The resulting functional, ωB97M(2), is benchmarked against a variety of leading double hybrid density functionals, including B2PLYP-D3(BJ), B2GPPLYP-D3(BJ), ωB97X-2(TQZ), XYG3, PTPSS-D3(0), XYGJ-OS, DSD-PBEP86-D3(BJ), and DSD-PBEPBE-D3(BJ). Encouragingly, the overall performance of ωB97M(2) on nearly 5000 data points clearly surpasses that of all of the tested density functionals. As a Rung 5 density functional, ωB97M(2) completes our family of combinatorially optimized functionals, complementing B97M-V on Rung 3, and ωB97X-V and ωB97M-V on Rung 4. The results suggest that ωB97M(2) has the potential to serve as a powerful predictive tool for accurate and efficient electronic structure calculations of main-group chemistry.
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) reconstructing N×N independent resolution points would ordinarily require a hologram made up of N×N sampling cells. For dependent sampling points of Fourier transform CGHs, the memory required for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram which consists of K×K subholograms, each with N×N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK×NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK×NK sample points is synthesized from K×K subholograms which are successively calculated from the data of N×N sample points and also successively plotted.
Kmiecik, Ewa; Tomaszewska, Barbara; Wątor, Katarzyna; Bodzek, Michał
2016-06-01
The aim of the study was to compare two reference methods for the determination of boron in water samples and to assess the impact of the method of sample preparation on the results obtained. Samples were collected from different stages of a desalination train consisting of ultrafiltration and a double reverse osmosis system connected in series. From each point, samples were prepared in four different ways: the first was filtered (through a 0.45 μm membrane filter) and acidified (with 1 mL of ultrapure nitric acid per 100 mL of sample) (FA); the second was unfiltered and not acidified (UFNA); the third was filtered but not acidified (FNA); and the fourth was unfiltered but acidified (UFA). All samples were analysed using two analytical methods: inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma optical emission spectrometry (ICP-OES). The results obtained were compared and correlated, and the differences between them were studied. The results show that there are statistically significant differences between the concentrations obtained using the ICP-MS and ICP-OES techniques, regardless of the method of sample preparation (filtration and preservation). Both the ICP-MS and ICP-OES methods can nevertheless be used for determination of the boron concentration in water. The differences in the boron concentrations obtained by the two methods may be caused by high analyte concentrations in selected whole-water digestates and by matrix effects. Higher concentrations of iron (1-20 mg/L) than of chromium (0.02-1 mg/L) in the analysed samples can influence boron determination. When iron concentrations are high, the emission spectrum shows a doubled, overlapping peak.
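A paired comparison of split-sample results is one conventional way to test for a systematic difference between two analytical techniques. A minimal sketch with hypothetical paired boron concentrations (the study's actual data and test choice are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical paired boron results (mg/L) for the same prepared samples,
# each analysed by both techniques.
icp_ms  = np.array([0.82, 1.10, 0.95, 1.30, 0.78, 1.05, 0.90, 1.22])
icp_oes = np.array([0.88, 1.18, 1.01, 1.41, 0.85, 1.12, 0.97, 1.30])

# Paired t-test on the differences; Wilcoxon signed-rank as a
# distribution-free alternative for small sample sizes.
t_stat, t_p = stats.ttest_rel(icp_ms, icp_oes)
w_stat, w_p = stats.wilcoxon(icp_ms, icp_oes)
print(f"paired t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```

A small p-value indicates a systematic offset between the two techniques rather than random scatter.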
Path planning during combustion mode switch
Jiang, Li; Ravi, Nikhil
2015-12-29
Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the target operating point.
Clustering properties of g -selected galaxies at z ~ 0.8
Favole, Ginevra; Comparat, Johan; Prada, Francisco; ...
2016-06-21
In current and future large redshift surveys, such as the Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey (SDSS-IV/eBOSS) or the Dark Energy Spectroscopic Instrument (DESI), we will use emission-line galaxies (ELGs) to probe cosmological models by mapping the large-scale structure of the Universe in the redshift range 0.6 < z < 1.7. We explore the halo-galaxy connection with current data by measuring three clustering properties of g-selected ELGs as matter tracers in the redshift range 0.6 < z < 1: (i) the redshift-space two-point correlation function using spectroscopic redshifts from the BOSS ELG sample and VIPERS; (ii) the angular two-point correlation function on the footprint of the CFHT-LS; (iii) the galaxy-galaxy lensing signal around the ELGs using the CFHTLenS. Furthermore, we interpret these observations by mapping them on to the latest high-resolution MultiDark Planck N-body simulation, using a novel (Sub)Halo-Abundance Matching technique that accounts for the ELG incompleteness. ELGs at z ~ 0.8 live in haloes of (1 ± 0.5) × 10^12 h^-1 M⊙, and 22.5 ± 2.5 per cent of them are satellites belonging to a larger halo. The halo occupation distribution of ELGs indicates that we are sampling the galaxies in which stars form in the most efficient way, according to their stellar-to-halo mass ratio.
Active control of acoustic field-of-view in a biosonar system.
Yovel, Yossi; Falk, Ben; Moss, Cynthia F; Ulanovsky, Nachum
2011-09-01
Active-sensing systems abound in nature, but little is known about systematic strategies that are used by these systems to scan the environment. Here, we addressed this question by studying echolocating bats, animals that have the ability to point their biosonar beam to a confined region of space. We trained Egyptian fruit bats to land on a target, under conditions of varying levels of environmental complexity, and measured their echolocation and flight behavior. The bats modulated the intensity of their biosonar emissions, and the spatial region they sampled, in a task-dependent manner. We report here that Egyptian fruit bats selectively change the emission intensity and the angle between the beam axes of sequentially emitted clicks, according to the distance to the target, and depending on the level of environmental complexity. In so doing, they effectively adjusted the spatial sector sampled by a pair of clicks: the "field-of-view." We suggest that the exact point within the beam that is directed towards an object (e.g., the beam's peak, maximal slope, etc.) is influenced by three competing task demands: detection, localization, and angular scanning, where the third factor is modulated by field-of-view. Our results suggest that lingual echolocation (based on tongue clicks) is in fact much more sophisticated than previously believed. They also reveal a new parameter under active control in animal sonar: the angle between consecutive beams. Our findings suggest that acoustic scanning of space by mammals is highly flexible and modulated much more selectively than previously recognized.
Abnormal behavior associated with a point mutation in the structural gene for monoamine oxidase A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunner, H.G.; Nelen, M.; Ropers, H.H.
1993-10-22
Genetic and metabolic studies have been done on a large kindred in which several males are affected by a syndrome of borderline mental retardation and abnormal behavior. The types of behavior that occurred include impulsive aggression, arson, attempted rape, and exhibitionism. Analysis of 24-hour urine samples indicated markedly disturbed monoamine metabolism. This syndrome was associated with a complete and selective deficiency of enzymatic activity of monoamine oxidase A (MAOA). In each of five affected males, a point mutation was identified in the eighth exon of the MAOA structural gene, which changes a glutamine to a termination codon. Thus, isolated complete MAOA deficiency in this family is associated with a recognizable behavioral phenotype that includes disturbed regulation of impulsive aggression.
Point process statistics in atom probe tomography.
Philippe, T; Duguay, S; Grancher, G; Blavette, D
2013-09-01
We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to the nearest neighbour is an attractive approach for exhibiting a non-random atomic distribution. A χ² test based on distance distributions to the nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1NN method) and the pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection is also illustrated. These statistical tools have been applied to APT experiments on microelectronics materials.
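The mean distance to the nearest neighbour can be compared against its expectation under complete spatial randomness (a homogeneous Poisson process); in 3D the expected value is Γ(4/3)·(3/(4πλ))^(1/3) for intensity λ. A minimal sketch on synthetic atom positions (edge effects are ignored, and this illustrates the general idea only, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def mean_nn_distance(points):
    """Mean distance to the first nearest neighbour (k=2: the query point
    itself is returned as the zero-distance neighbour)."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)
    return d[:, 1].mean()

# Hypothetical reconstructed atom positions in a 20 x 20 x 20 nm^3 volume.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 20, size=(5000, 3))

lam = len(pts) / 20**3                              # intensity (atoms per nm^3)
expected = gamma(4/3) * (3 / (4*np.pi*lam))**(1/3)  # CSR expectation in 3D
observed = mean_nn_distance(pts)
print(f"observed {observed:.3f} nm vs CSR expectation {expected:.3f} nm")
# A ratio well below 1 suggests clustering; well above 1 suggests ordering.
# Edge effects near the volume boundary are neglected in this sketch.
```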
Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization
Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin
2017-01-01
In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method improves RANSAC in three aspects: first, hypotheses are preferentially generated by sampling the input feature points in order of the ages and similarities of the features; second, the evaluation of hypotheses is performed with the SPRT (Sequential Probability Ratio Test), which discards bad hypotheses very quickly without verifying all the data points; third, we aggregate the three best hypotheses to get the final estimate instead of selecting only the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses in advance, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and the New Tsukuba datasets. Experimental results show that the proposed method achieves better results than RANSAC in both speed and accuracy. PMID:29027935
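A toy illustration of two of the three ideas, quality-ordered hypothesis sampling and aggregation of the best three hypotheses, using robust 2D line fitting as a stand-in for motion estimation (the SPRT early-termination step is omitted, and all data and weighting choices are hypothetical):

```python
import numpy as np

def ranked_ransac_line(points, quality, iters=200, tol=0.05, rng=None):
    """RANSAC line fit that (a) samples high-quality points preferentially
    and (b) averages the three best hypotheses instead of keeping one."""
    rng = rng or np.random.default_rng()
    order = np.argsort(-quality)                  # best features first
    weights = 1.0 / (np.arange(len(points)) + 1)  # rank-based sampling weights
    p = weights / weights.sum()
    hypotheses = []                               # (inlier_count, slope, intercept)
    for _ in range(iters):
        i, j = rng.choice(order, size=2, replace=False, p=p)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue                              # skip degenerate vertical pairs
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        resid = np.abs(points[:, 1] - (m * points[:, 0] + b))
        hypotheses.append(((resid < tol).sum(), m, b))
    top3 = sorted(hypotheses, reverse=True)[:3]   # aggregate the 3 best hypotheses
    m = np.mean([h[1] for h in top3])
    b = np.mean([h[2] for h in top3])
    return m, b

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = 0.5 * x + 1.0 + rng.normal(0, 0.02, 100)
y[:20] += rng.uniform(-5, 5, 20)                  # gross outliers
pts = np.column_stack([x, y])
quality = np.r_[np.zeros(20), np.ones(80)]        # stand-in for feature age/similarity
print(ranked_ransac_line(pts, quality, rng=rng))  # close to (0.5, 1.0)
```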
Impact assessment of treated wastewater on water quality of the receiver using the Wilcoxon test
NASA Astrophysics Data System (ADS)
Ofman, Piotr; Puchlik, Monika; Simson, Grzegorz; Krasowska, Małgorzata; Struk-Sokołowska, Joanna
2017-11-01
Wastewater treatment is a process which aims to reduce the concentration of pollutants in wastewater to the level allowed by current regulations. This is to protect the receivers, which typically are rivers, streams, and lakes. Examination of the quality of treated wastewater allows quick elimination of possible negative effects, and monitoring of the receiving water prevents excessive contamination. The paper presents the results for selected physical and chemical parameters of treated wastewater from the municipal wastewater treatment plant of Bialystok, the largest city in the region of north-eastern Poland, and of the Biała River, its receiver. Samples were taken 3-4 times a month in 2015 from two points: upstream and downstream of the discharge. The impact of the wastewater treatment plant on the quality of the receiver waters was studied using the non-parametric Wilcoxon test. This test determined whether the analyzed indicators varied significantly between the sampling points on the river, above and below the discharge of treated wastewater. These results indicate that the treated wastewater does not affect the water quality in the Biała River.
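A minimal sketch of such a paired, non-parametric comparison with SciPy, using hypothetical indicator values measured on matched dates upstream and downstream of the discharge:

```python
import numpy as np
from scipy import stats

# Hypothetical BOD5 values (mg O2/L) measured on matched dates in 2015,
# upstream and downstream of the treated-wastewater discharge.
upstream   = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.2, 3.4, 2.9])
downstream = np.array([3.2, 2.9, 3.4, 3.0, 3.5, 3.1, 2.8, 3.1, 3.6, 3.0])

# Paired, distribution-free comparison of the two sampling points.
stat, p = stats.wilcoxon(upstream, downstream)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
# A p-value above the chosen significance level gives no evidence that the
# discharge changes the indicator between the two river points.
```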
Detection of heavy metals in water in Negeri Sembilan, Malaysia: From source to consumption
NASA Astrophysics Data System (ADS)
Khalaf, Baydaa; Abdullah, Md. Pauzi; Tahrim, Nurfaizah Abu
2018-04-01
Drinking water should be free from harmful levels of impurities, such as heavy metals. The aim of this study is to investigate heavy metal concentrations in a water reticulation system of Negeri Sembilan. 25 stations were selected along Sungai Linggi (upstream of the intake point) and through the reticulation system of the Sungai Linggi Water Treatment Plant, encompassing raw water through to the last point of use. Sampling activities were carried out in June and July 2016. The samples were analysed for heavy metals using an Inductively Coupled Plasma - Optical Emission Spectrometer (ICP-OES). In addition, other water quality parameters were measured in situ (pH, water temperature, conductivity and dissolved oxygen) and analysed in the laboratory (BOD, COD, TSS, NH3-N, TOC and residual chlorine). The results showed a high level of Ca in the distribution system, while in the treatment plant it was normal; Fe decreased, and Mn also decreased after the treatment processes. The DO concentration and temperature in the tap water exceeded the standard values.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
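The recursive-subdivision idea behind the adaptive strategy can be illustrated with the rectangle-patch variant: keep splitting a patch until the surface variation inside it is small, then place one sample per leaf patch. A minimal sketch on a synthetic range image (a simplification; the paper's curvature criteria and triangle-patch variant are not reproduced):

```python
import numpy as np

def adaptive_sample(z, r0, r1, c0, c1, tol, out, min_size=2):
    """Recursively subdivide a range-image patch until the height variation
    within the patch falls below tol, then emit the patch centre."""
    patch = z[r0:r1, c0:c1]
    if patch.size == 0:
        return
    if (patch.max() - patch.min() < tol) or (r1 - r0 <= min_size) or (c1 - c0 <= min_size):
        out.append(((r0 + r1) // 2, (c0 + c1) // 2))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rs, re, cs, ce in ((r0, rm, c0, cm), (r0, rm, cm, c1),
                           (rm, r1, c0, cm), (rm, r1, cm, c1)):
        adaptive_sample(z, rs, re, cs, ce, tol, out, min_size)

# Synthetic range image: a flat plane with a sharp step edge at column 40.
z = np.zeros((64, 64))
z[:, 40:] = 10.0
samples = []
adaptive_sample(z, 0, 64, 0, 64, tol=0.5, out=samples)
print(len(samples), "sample points; they concentrate along the step edge")
```

Flat regions terminate early and contribute few points, while patches straddling the step keep subdividing, which is the qualitative behaviour the paper reports around edges and corners.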
Results of Performance Tests Performed on the John Watts WW Casing Connection on 7" Pipe
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Watts
2000-02-01
Stress Engineering Services (SES) was contracted by Mr. John Watts to test his ''WW'' threaded connection developed for oilfield oil and gas service. This work was a continuation of testing performed by SES as reported in August of 1999. The connection design tested was identified as ''WW''. The samples were all integral (no coupled connections) and contained a wedge thread form with 90{sup o} flank angles relative to the pipe centerline. The wedge thread form is a variable-width thread that primarily engages on the flanks. This thread form provides very high torque capacity and good stabbing ability and makeup. The test procedure selected for one of the samples was the newly written ISO 13679 procedure for full-scale testing of casing and tubing connections, which is currently going through the ISO acceptance process. The ISO procedure requires a variety of tests that includes makeup/breakout testing, internal gas sealability/external water sealability testing with axial tension, axial compression, bending, internal gas thermal cycle tests, and limit load (failure) tests. This test procedure was performed with one sample. Four samples were tested to failure. Table 1 contains a summary of the tasks performed by SES. The project started with the delivery of test samples by Mr. Watts. Pipe from the previous round of tests was used for the new samples. Figure 1 shows the structural and sealing results relative to the pipe body. Sample 1 was used to determine the torque capacity of the connection. Torque was applied to the capacity of SES's equipment, which was 28,424 ft-lbs. From this, an initial recommended torque range of 7,200 to 8,800 ft-lbs was selected. The sample was disassembled and, while there was no galling observed in the threads, the end of the pin had collapsed inward. Sample 2 received three makeups. Breakouts 1 and 2 also had collapsing of the pin end, with no thread galling. From these make/breaks, it was decided to reduce the amount of lubricant by applying it to the box or pin only and reducing the amount applied. Samples 3 and 4 received one makeup only. Sample 5 initially received two make/breaks to test for galling resistance before final makeup; no galling was observed. Later, three additional make/breaks were performed with no pin-end collapse and with galling over half a thread occurring on one of the breakouts. During the make/break tests, the stabbing and hand-tight makeup of the WW connection was found to be very easy and trouble-free. There was no tendency to cross-thread, even when stabbed at an angle, and it screwed together very smoothly up to hand tight. During power-tight makeup, there was no heat generated in the box (as checked by hand contact) and no jerkiness associated with any of the makeups or breakouts. Sample 2 was tested in pure compression. The maximum load obtained was 1,051 kips and the connection was beginning to significantly deform as the sample buckled. Actual pipe yield was 1,226 kips. Sample 3 was capped-end pressure tested to failure. The capped-end yield pressure of the pipe was 16,572 psi and the sample began to leak at 12,000 psi. Sample 4 was tested in pure tension. The maximum load obtained was 978 kips and the connection failed by fracture at the pin critical section. Actual pipe yield was 1,226 kips. Sample 5 was tested in combined tension/compression and internal gas pressure. The sample was assembled, set up, and tested four times.
The first time was with a torque of 7,298 ft-lbs, and the connection leaked halfway to ISO Load Point 2, with loads of 693 kips and 4,312 psi. The second time the torque was increased to 14,488 ft-lbs and a leak occurred at 849 kips and 9,400 psi, which was ISO Load Point 2. The third time the makeup torque was again increased, to 20,456 ft-lbs, and a leak occurred at 716 kips and 11,342 psi, ISO Load Point 4. The fourth test was with the same torque as before, 20,617 ft-lbs, and the connection was successfully tested up to load step 56, ISO Load Point 6 (second round), before leaking at 354 kips and 11,876 psi. At this point, time and funds prevented additional testing from being performed.
Influence of speech sample on perceptual rating of hypernasality.
Medeiros, Maria Natália Leite de; Fukushiro, Ana Paula; Yamashita, Renata Paciello
2016-07-07
To investigate the influence of the speech sample (spontaneous conversation versus sentence repetition) on intra- and inter-rater reliability of hypernasality ratings. One hundred and twenty audio-recorded speech samples (60 of spontaneous conversation and 60 of repeated sentences) from individuals with repaired cleft palate ± lip, of both genders, aged between 6 and 52 years (mean = 21 ± 10), were selected and edited. Three experienced speech and language pathologists rated hypernasality according to their own criteria using a 4-point scale (1 = absence of hypernasality, 2 = mild hypernasality, 3 = moderate hypernasality, 4 = severe hypernasality), first for the spontaneous speech samples and, 30 days later, for the sentence-repetition samples. Intra- and inter-rater agreements were calculated for both speech samples and statistically compared with the Z test at a significance level of 5%. Comparison of intra-rater agreement between the two speech samples showed an increase in the coefficients obtained for sentence repetition relative to spontaneous conversation. Comparison of inter-rater agreement showed no significant difference among the three raters for the two speech samples. Sentence repetition improved intra-rater reliability of the perceptual judgment of hypernasality. However, the speech sample had no influence on reliability among different raters.
Tracer simulation study of potential solute movement in Port Royal Sound, South Carolina
Kilpatrick, F.A.; Cummings, T. Ray
1972-01-01
A tracer study was conducted in Port Royal Sound to simulate the movement and ultimate pattern of concentration of a solute continuously injected into the flow. A total of 750 pounds of Rhodamine WT dye was injected by boat during a period of 24.8 hours in a line across the Colleton River. During the following 43 days, samples of water were taken at selected points in the sound, and the concentration of dye in the samples was determined by fluorometric analysis. The data obtained in the field study were used with theoretical models to compute the ultimate pattern of concentration of nonconservative and conservative solutes for a hypothetical continuous injection at the site on the Colleton River.
Vendor compliance with Ontario's tobacco point of sale legislation.
Dubray, Jolene M; Schwartz, Robert M; Garcia, John M; Bondy, Susan J; Victor, J Charles
2009-01-01
On May 31, 2006, Ontario joined a small group of international jurisdictions to implement legislative restrictions on tobacco point of sale promotions. This study compares the presence of point of sale promotions in the retail tobacco environment from three surveys: one prior to and two following implementation of the legislation. Approximately 1,575 tobacco vendors were randomly selected for each survey. Each regionally-stratified sample included equal numbers of tobacco vendors categorized into four trade classes: chain convenience, independent convenience and discount, gas stations, and grocery. Data regarding the six restricted point of sale promotions were collected using standardized protocols and inspection forms. Weighted estimates and 95% confidence intervals were produced at the provincial, regional and vendor trade class level using the bootstrap method for estimating variance. At baseline, the proportion of tobacco vendors who did not engage in each of the six restricted point of sale promotions ranged from 41% to 88%. Within four months following implementation of the legislation, compliance with each of the six restricted point of sale promotions exceeded 95%. Similar levels of compliance were observed one year later. Grocery stores had the fewest point of sale promotions displayed at baseline. Compliance rates did not differ across vendor trade classes at either follow-up survey. Point of sale promotions did not differ across regions in any of the three surveys. Within a short period of time, a high level of compliance with six restricted point of sale promotions was achieved.
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute in a very narrow coastal ecosystem, and was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminal activities. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels, and it suggests strategies to regulate the harbour activities. PMID:29270328
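A minimal sketch of the PCA step with a stand-in for the ratio-matching refinement (cosine similarity between a sample's standardized chemical profile and each component's loading vector; the paper's exact matching rule is not specified in the abstract, so this is only illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix: rows = 235 sediment samples, columns = 28 chemicals.
rng = np.random.default_rng(3)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(235, 28))

# Standardize each chemical, then extract the 8 leading components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=8).fit(Xs)
scores = pca.transform(Xs)          # sample coordinates on each component
loadings = pca.components_          # chemical signature of each component

# Ratio matching (simplified): compare a candidate sampling point's profile
# to each component signature via cosine similarity.
candidate = Xs[0]
sims = loadings @ candidate / (
    np.linalg.norm(loadings, axis=1) * np.linalg.norm(candidate))
print("explained variance:", np.round(pca.explained_variance_ratio_, 2))
print("similarity of sample 0 to each component:", np.round(sims, 2))
```

A sampling point whose profile aligns strongly with one component signature would be flagged as a candidate point source for that signature's dominant chemicals.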
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertesz, Vilmos
Short summary of the software's functionality:
- built-in scan feature to acquire an optical image of the surface to be analyzed
- point-and-click selection of points of interest on the surface
- support for standalone autosampler/HPLC/MS operation: after points of interest are selected, independent batch files are created for LEAPShell (autosampler control software from Leap Technologies) and Analyst® (mass spectrometry (MS) software from AB Sciex)
- support for integrated autosampler/HPLC/MS operation: after points of interest are selected, one batch file is created for all instruments controlled by Analyst® (mass spectrometry software from AB Sciex)
- creation of heatmaps of analytes of interest from collected MS files in a hands-off fashion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pignata, M.L.; Canas, M.S.; Carreras, H.A.
1997-09-01
A diagnostic study was done on Ligustrum lucidum Ait. f. tricolor (Rehd.) Rehd. in relation to atmospheric pollutants in Cordoba city, Argentina. The study area receives regional pollutants and was categorized taking into account traffic level, industrial density, type of industry, location of the sample point in relation to the street corner, treeless condition, and topographic level. The dry weight/fresh weight ratio (DW/FW) and specific leaf area (SLA) were calculated, and concentrations of chlorophylls, carotenoids, total sulfur, soluble proteins, malondialdehyde (MDA), and hydroperoxy conjugated dienes (HPCD) were determined in leaf samples. Sulfur content correlates positively with traffic density, and SLA correlates negatively with some combinations of the categorical variables; MDA correlates positively with topographic level, and total protein concentration correlates negatively with treeless condition. On the basis of our results, traffic, location of trees, type of industry, situation of a tree with respect to others, and topographic level are the environmental variables to bear in mind when selecting analogous sampling points in a passive monitoring program. An approximation to predict tree injury may be obtained by measuring the DW/FW ratio, proteins, pigments, HPCD, and MDA, as these account for the major variability of the data.
Prokešová, Radka; Brabcová, Iva; Pokojová, Radka; Bártlová, Sylva
2016-12-01
The goal of this study was to assess specific features of risk management from the point of view of nurses in leadership positions in inpatient units of Czech hospitals. The study used a quantitative research strategy, i.e., a questionnaire. The data sample was analyzed using SPSS v. 23.0. Pearson's chi-square test and analysis of adjusted residuals were used to identify associations between nominal and/or ordinal variables. 315 nurses in leadership positions working in inpatient units of Czech hospitals were included in the sample, which was created by random selection with quotas. Statistically significant relationships were identified between the respondents' education and the use of methods to identify risks, between a nurse's functional role within the system and the regular analysis and evaluation of risks, and between the type of healthcare facility and the degree of patient involvement in risk management. These correlations can be used to increase the effectiveness of risk management in inpatient units of Czech hospitals. From this perspective, the fact that patient involvement in risk management was reported by only 37.8% of respondents seems to be the most notable problem.
Lee, Hye-Seung; Burkhardt, Brant R; McLeod, Wendy; Smith, Susan; Eberhard, Chris; Lynch, Kristian; Hadley, David; Rewers, Marian; Simell, Olli; She, Jin-Xiong; Hagopian, Bill; Lernmark, Ake; Akolkar, Beena; Ziegler, Anette G; Krischer, Jeffrey P
2014-07-01
The Environmental Determinants of Diabetes in the Young planned biomarker discovery studies on longitudinal samples for persistent confirmed islet cell autoantibodies and type 1 diabetes using dietary biomarkers, metabolomics, microbiome/viral metagenomics, and gene expression. This article describes the details of planning The Environmental Determinants of Diabetes in the Young biomarker discovery studies using a nested case-control design, which was chosen as an alternative to the full cohort analysis. Within the frame of the nested case-control design, it guides the choice of matching factors, selection of controls, preparation of external quality control samples, and reduction of batch effects along with proper sample allocation. Our design reduces potential bias and retains study power while reducing costs by limiting the number of samples requiring laboratory analyses. It also covers two primary end points (the occurrence of diabetes-related autoantibodies and the diagnosis of type 1 diabetes). The resulting list of case-control matched samples for each laboratory was augmented with external quality control samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hraber, Peter; Korber, Bette; Wagh, Kshitij
Within-host genetic sequencing from samples collected over time provides a dynamic view of how viruses evade host immunity. Immune-driven mutations might stimulate neutralization breadth by selecting antibodies adapted to cycles of immune escape that generate within-subject epitope diversity. Comprehensive identification of immune-escape mutations is experimentally and computationally challenging. With current technology, many more viral sequences can readily be obtained than can be tested for binding and neutralization, making down-selection necessary. Typically, this is done manually, by picking variants that represent different time-points and branches on a phylogenetic tree. Such strategies are likely to miss many relevant mutations and combinations of mutations, and to be redundant for other mutations. Longitudinal Antigenic Sequences and Sites from Intrahost Evolution (LASSIE) uses transmitted founder loss to identify virus “hot-spots” under putative immune selection and chooses sequences that represent recurrent mutations in selected sites. LASSIE favors the earliest sequences in which mutations arise. Here, with well-characterized longitudinal Env sequences, we confirmed that selected sites were concentrated in antibody contacts and that selected sequences represented diverse antigenic phenotypes. Practical applications include rapidly identifying immune targets under selective pressure within a subject, selecting minimal sets of reagents for immunological assays that characterize evolving antibody responses, and selecting immunogens for polyvalent “cocktail” vaccines.
Islam, Md Rafiqul; Attia, John; Alauddin, Mohammad; McEvoy, Mark; McElduff, Patrick; Slater, Christine; Islam, Md Monirul; Akhter, Ayesha; d'Este, Catherine; Peel, Roseanne; Akter, Shahnaz; Smith, Wayne; Begg, Stephen; Milton, Abul Hasnat
2014-12-04
Early life exposure to inorganic arsenic may be related to adverse health effects in later life. However, there are few data on postnatal arsenic exposure via human milk. In this study, we aimed to determine arsenic levels in human milk and the correlation between arsenic in human milk and arsenic in mothers' and infants' urine. Between March 2011 and March 2012, this prospective study identified a total of 120 new mother-baby pairs from Kashiani (subdistrict), Bangladesh. Of these, 30 mothers were randomly selected for human milk samples at 1, 6 and 9 months post-natally; the same mother-baby pairs were selected for urine sampling at 1 and 6 months. Twelve urine samples from these 30 mother-baby pairs were randomly selected for arsenic speciation. Arsenic concentration in human milk was low and non-normally distributed. The median arsenic concentration in human milk at all three time points remained at 0.5 μg/L. In the mixed model estimates, arsenic concentration in human milk decreased non-significantly by 0.035 μg/L (95% CI: -0.09 to 0.02) between 1 and 6 months and between 6 and 9 months. With the progression of time, arsenic concentration in infants' urine increased non-significantly by 0.13 μg/L (95% CI: -1.27 to 1.53). Arsenic in human milk at 1 and 6 months was not correlated with arsenic in the infants' urine at the same time points (r = -0.13 at 1 month and r = -0.09 at 6 months). Arsenite (AsIII), arsenate (AsV), monomethylarsonic acid (MMA), dimethylarsinic acid (DMA) and arsenobetaine (AsB) were the constituents of total urinary arsenic; DMA was the predominant arsenic metabolite in infant urine. We observed a low arsenic concentration in human milk, below the World Health Organization's maximum permissible limit (15 μg/kg-bw/week). Our findings support the safety of breastfeeding even in arsenic-contaminated areas.
Melo, Geruza L; Miotto, Barbara; Peres, Brisa; Cáceres, Nilton C
2013-01-01
Each animal species selects specific microhabitats for protection, foraging, or microclimate. To understand the distribution patterns of small mammals on the ground and in the understorey, we investigated the use of microhabitats by small mammals in a deciduous forest of southern Brazil. Ten trap stations with seven capture points were used to sample the following microhabitats: liana, fallen log, ground litter, terrestrial ferns, simple-trunk tree, forked tree, and Piper sp. shrubs. Seven field phases were conducted, each for eight consecutive days, from September 2006 through January 2008. Four species of rodents (Akodon montensis, Sooretamys angouya, Oligoryzomys nigripes and Mus musculus) and two species of marsupials (Didelphis albiventris and Gracilinanus microtarsus) were captured. The captured species differed significantly in their microhabitat use (ANOVA, p = 0.003), particularly between ground and understorey sites. Akodon montensis positively selected terrestrial ferns and trunks, S. angouya selected lianas, D. albiventris selected fallen trunks and Piper sp., and G. microtarsus chose tree trunks and lianas. We demonstrated that the local small-mammal assemblage does select microhabitats, with different types of associations between species and habitats. Moreover, there is strong evidence of habitat selection serving to diminish predation.
Efficient generation of discontinuity-preserving adaptive triangulations from range images.
Garcia, Miguel Angel; Sappa, Angel Domingo
2004-10-01
This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two and one half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
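The first stage, sampling more densely where the surface varies strongly, can be sketched by drawing points with probability proportional to the local gradient magnitude and then triangulating them (the paper's noniterative selection scheme and the edge-flipping stage are not reproduced here):

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic range image: flat regions separated by a sharp jump edge.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
z = np.where(xx > 64, 10.0, 0.0)

# Curvature proxy: local gradient magnitude; sample with probability
# proportional to it, plus a small floor so flat regions are not empty.
gy, gx = np.gradient(z.astype(float))
weight = np.hypot(gx, gy) + 0.01
prob = (weight / weight.sum()).ravel()

rng = np.random.default_rng(4)
idx = rng.choice(h * w, size=500, replace=False, p=prob)
pts = np.column_stack([idx % w, idx // w])   # (x, y) pixel coordinates

tri = Delaunay(pts)                          # planar triangulation of the samples
print(f"{len(pts)} points, {len(tri.simplices)} triangles; "
      "sampling density peaks along the jump edge")
```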
Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji
2014-01-01
An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an innovative obstacle distance measure for spatial clustering under obstacle constraints. First, we present a path-searching algorithm to approximate the obstacle distance between two points, accounting for obstacles and facilitators. Taking obstacle distance as the similarity metric, we then propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on an artificial immune system is also applied to a public facility location problem in order to establish the practical applicability of our approach. By using the clonal selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect.
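The obstacle distance between two points can be approximated by a shortest-path search on a grid in which obstacle cells are blocked. A minimal breadth-first-search sketch (the paper's actual path-searching algorithm and its handling of facilitators are not reproduced):

```python
from collections import deque

def obstacle_distance(grid, start, goal):
    """Shortest 4-connected path length between two cells, walking around
    obstacle cells (value 1); a simple stand-in for an obstacle distance."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return float("inf")  # goal unreachable

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # a wall the path must detour around
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(obstacle_distance(grid, (0, 0), (2, 2)))  # 6 with the detour, vs 4 unobstructed
```

Using such a distance as the similarity metric makes points on opposite sides of an obstacle look far apart, so clusters do not straddle obstacles.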
Understanding Land Use Impacts on Groundwater Quality Using Chemical Analysis
NASA Astrophysics Data System (ADS)
Nitka, A.; Masarik, K.; Masterpole, D.; Johnson, B.; Piette, S.
2017-12-01
Chippewa County, in western Wisconsin, has a unique historical set of groundwater quality data. The county conducted extensive groundwater sampling of private wells in 1985 (715 wells) and 2007 (800 wells). In 2016, it collaborated with UW-Extension and UW-Stevens Point to evaluate the current status of groundwater quality in Chippewa County by sampling as many of the previously studied wells as possible. Nitrate was a primary focus of this groundwater quality inventory. Of the 744 samples collected, 60 were further analyzed for chemical indicators of agricultural and septic waste, two major sources of nitrate contamination. Wells for nitrate source analysis were selected from the 2016 participants based on certain criteria. Only wells with a Wisconsin Unique Well Number (WUWN) were considered, to ensure well construction information was available. Next, an Inverse Distance Weighting tool in ESRI ArcMap was used to assign values categorizing septic density. Two-thirds of the wells were selected in higher-density areas and one-third in lower-density areas. Equally prioritized was an even distribution of nitrate-N concentrations, with 28 of the wells having nitrate-N concentrations higher than the drinking water standard of 10 mg/L and 32 wells with concentrations between 2 and 10 mg/L. All wells with a WUWN and nitrate-N concentrations greater than 20 mg/L were selected. The results of the nitrate source analyses will aid in determining temporal changes and spatial relationships of groundwater quality to soils, geology, and land use in Chippewa County.
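A simple inverse-distance score illustrates how septic density around each candidate well might be quantified; a sum of inverse-distance weights from septic-system locations is used here as a surrogate (the county's actual ArcMap IDW settings and inputs are assumptions):

```python
import numpy as np

def idw_density(sources, queries, power=2.0, eps=1e-6):
    """Sum of inverse-distance weights: a simple score of how strongly
    nearby source points (septic systems) influence each query point (well)."""
    d = np.linalg.norm(queries[:, None, :] - sources[None, :, :], axis=2)
    return (1.0 / (d ** power + eps)).sum(axis=1)

# Hypothetical septic-system and candidate-well coordinates (km).
rng = np.random.default_rng(5)
septics = rng.uniform(0, 10, size=(50, 2))
wells = rng.uniform(0, 10, size=(5, 2))

score = idw_density(septics, wells)
print(np.round(score, 3))  # higher score = denser septic neighbourhood
```

Wells could then be binned by this score to enforce the two-thirds/one-third split between higher- and lower-density areas.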
THE CLUSTERING CHARACTERISTICS OF H I-SELECTED GALAXIES FROM THE 40% ALFALFA SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Ann M.; Giovanelli, Riccardo; Haynes, Martha P.
The 40% Arecibo Legacy Fast ALFA survey catalog (α.40) of ~10,150 H I-selected galaxies is used to analyze the clustering properties of gas-rich galaxies. By employing the Landy-Szalay estimator and a full covariance analysis for the two-point galaxy-galaxy correlation function, we obtain the real-space correlation function and model it as a power law, ξ(r) = (r/r_0)^(-γ), on scales <10 h^-1 Mpc. As the largest sample of blindly H I-selected galaxies to date, α.40 provides detailed understanding of the clustering of this population. We find γ = 1.51 ± 0.09 and r_0 = 3.3 +0.3/-0.2 h^-1 Mpc, reinforcing the understanding that gas-rich galaxies represent the most weakly clustered galaxy population known; we also observe a departure from a pure power-law shape at intermediate scales, as predicted in ΛCDM halo occupation distribution models. Furthermore, we measure the bias parameter for the α.40 galaxy sample and find that H I galaxies are severely antibiased on small scales, but only weakly antibiased on large scales. The robust measurement of the correlation function for gas-rich galaxies obtained via the α.40 sample constrains models of the distribution of H I in simulated galaxies, and will be employed to better understand the role of gas in environmentally dependent galaxy evolution.
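The Landy-Szalay estimator, ξ(r) = (DD − 2DR + RR)/RR with normalized pair counts, can be sketched with k-d tree pair counting (uniform toy catalogues; no survey mask, weights, or covariance analysis as used in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def landy_szalay(data, random, edges):
    """Landy-Szalay estimator xi(r) = (DD - 2DR + RR) / RR, with pair counts
    normalised by the total number of pairs in each catalogue."""
    td, tr = cKDTree(data), cKDTree(random)
    nd, nr = len(data), len(random)
    # Cumulative pair counts within each edge; per-bin counts by differencing.
    dd = np.diff(td.count_neighbors(td, edges)) / (nd * (nd - 1))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (nr * (nr - 1))
    dr = np.diff(td.count_neighbors(tr, edges)) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(6)
data = rng.uniform(0, 100, size=(2000, 3))   # stand-in galaxy positions (h^-1 Mpc)
rand = rng.uniform(0, 100, size=(8000, 3))   # random catalogue, same geometry
edges = np.logspace(-0.5, 1.0, 10)           # separation bin edges
print(np.round(landy_szalay(data, rand, edges), 3))  # ~0 for an unclustered sample
```

For a clustered catalogue the estimate would follow the measured power law ξ(r) = (r/r_0)^(-γ) over the fitted range.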
Influencing Food Selection with Point-of-Choice Nutrition Information.
ERIC Educational Resources Information Center
Davis-Chervin, Doryn; And Others
1985-01-01
Evaluated the effectiveness of a point-of-choice nutrition information program that used a comprehensive set of communication functions in its design. Results indicate that point-of-choice information without direct tangible rewards can (to a moderate degree) modify food-selection behavior of cafeteria patrons. (JN)
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE), and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling-point models with the best coefficients of correlation were the one-sampling-point model (8 h; r² = 0.84), the two-sampling-point model (0.5 and 8 h; r² = 0.93), the three-sampling-point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling-point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling-point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling-point model predicted the very low and very high midazolam AUCs of the validation dataset with acceptable precision and bias. The four-sampling-point model was also able to predict the geometric mean ratio of the treatment phase over baseline (with 90% confidence interval) for three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling-point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling-point models were not able to predict midazolam AUC accurately.
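The error metrics used to judge the candidate models are straightforward to compute once a regression model produces predicted AUCs. A minimal sketch on synthetic concentration data (the coefficients and data are invented; only the metric definitions follow the abstract):

```python
import numpy as np

# Hypothetical: predict midazolam AUC from concentrations at 0.5, 1, 2 and 8 h.
rng = np.random.default_rng(7)
n = 45
C = rng.lognormal(0.0, 0.4, size=(n, 4))            # concentrations (ng/mL)
auc_true = C @ np.array([1.5, 2.0, 3.0, 6.0]) + rng.normal(0, 1.0, n)

# Fit the four-sampling-point model by multiple linear regression.
X = np.column_stack([np.ones(n), C])
beta, *_ = np.linalg.lstsq(X, auc_true, rcond=None)
auc_pred = X @ beta

err = auc_pred - auc_true
mpe  = np.mean(err / auc_true) * 100                # mean prediction error: bias (%)
mae  = np.mean(np.abs(err))                         # mean absolute error: precision
rmse = np.sqrt(np.mean(err ** 2))                   # root mean squared error
print(f"MPE {mpe:.2f}%  MAE {mae:.2f}  RMSE {rmse:.2f}")
```

In a real evaluation the model would be fit on the building set and the metrics reported on the held-out validation profiles.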
Eramo, Alessia; Delos Reyes, Hannah; Fahrenfeld, Nicole L.
2017-01-01
Combined sewer overflows (CSOs) degrade water quality through the release of microbial contaminants in CSO effluent. Improved understanding of the partitioning of microbial contaminants onto settleable particles can provide insight into their fate in end-of-pipe treatment systems or following release during CSO events. Sampling was performed across the hydrograph for three storm events as well as during baseflow and wet weather in three surface waters impacted by CSO. qPCR was performed for select antibiotic resistance genes (ARG) and a marker gene for human fecal indicator organisms (BacHum) in samples processed to assess the partitioning of microbial contaminants between settleable particles and the aqueous phase. Amplicon sequencing was performed on both fractions of storm samples to further define the timing and partitioning of microbial contaminants released during CSO events. Samples collected at the CSO outfall exhibited microbial community signatures of wastewater at select time points early or late in the storm events. CSOs were found to be a source of ARG. In surrounding surface waters, sul1 was higher in samples from select locations during wet weather compared to baseflow. Otherwise, ARG concentrations were variable, with no differences between baseflow and wet weather conditions. The majority of ARG at the CSO outfall were observed in the attached fraction of samples: 64-79% of sul1 and 59-88% of tet(G). However, the timing of peak ARG and the human fecal indicator marker gene BacHum did not necessarily coincide with observation of the microbial signature of wastewater in CSO effluent. Therefore, unit processes that remove settleable particles (e.g., hydrodynamic separators) operated throughout a CSO event would achieve up to (0.5-0.9)-log removal of ARG and fecal indicators by removing the attached fraction of measured genes. Secondary treatment would be required if greater removal of these targets is needed. PMID:29104562
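The quoted log-removal range follows from simple arithmetic on the attached fractions: removing all settleable particles removes the attached fraction f of a target, for a removal of −log10(1 − f). A short worked check (the fractions are those reported for sul1 and tet(G)):

```python
import math

# If a fraction f of a gene target is attached to settleable particles,
# removing all settleable particles achieves -log10(1 - f) log removal.
for f in (0.64, 0.79, 0.59, 0.88):   # attached fractions for sul1 and tet(G)
    print(f"attached fraction {f:.0%} -> {-math.log10(1 - f):.1f}-log removal")
# 64% -> 0.4-log ... 88% -> 0.9-log, approximately the cited (0.5-0.9)-log range.
```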
Impact of the China Healthy Cities Initiative on Urban Environment.
Yue, Dahai; Ruan, Shiman; Xu, Jin; Zhu, Weiming; Zhang, Luyu; Cheng, Gang; Meng, Qingyue
2017-04-01
The China Healthy Cities initiative, a nationwide public health campaign, has been implemented for 25 years. As "Healthy China 2030" becomes the key national strategy for improving population health, this initiative is an important component. However, the effects of the initiative have not been well studied. This paper aims to explore its impact on the urban environment using a multiple time series design. We adopted a stratified and systematic sampling method to choose 15 China healthy cities across the country. For the selected healthy cities, 1:1 matched non-healthy cities were selected as the comparison group. We collected longitudinal data from 5 years before cities achieved the healthy city title up to 2012. We used hierarchical models to calculate difference-in-differences estimates for examining the impact of the initiative. We found that the China Healthy Cities initiative was associated with increases in the proportion of urban domestic sewage treated (32 percentage points), the proportion of urban domestic garbage treated (30 percentage points), and the proportion of qualified farmers' markets (40 percentage points), all of which are statistically significant (P < 0.05). No significant change was found for increases in green coverage of urban built-up area (5 percentage points), green space per capita (2 square meters), and days with Air Quality Index/Air Pollution Index ≤ 100 (25 days). In conclusion, the China Healthy Cities initiative was associated with significantly improved urban environment in terms of infrastructure construction, yet had little impact on green space and air quality.
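The effect estimates are of the difference-in-differences form: the change in treated cities minus the change in matched comparison cities. A minimal OLS sketch on an invented two-city panel (the study itself used hierarchical models across 15 matched city pairs):

```python
import numpy as np

# Hypothetical panel: rows are city-period observations of the % of
# domestic sewage treated. Columns: treated city (1/0), post period (1/0), outcome.
data = np.array([
    [1, 0, 40.0], [1, 1, 75.0],   # healthy city, before / after designation
    [0, 0, 42.0], [0, 1, 45.0],   # matched comparison city, before / after
])
treated, post, y = data[:, 0], data[:, 1], data[:, 2]

# OLS with interaction: y = b0 + b1*treated + b2*post + b3*(treated*post)
X = np.column_stack([np.ones(len(y)), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"difference-in-differences estimate: {beta[3]:.1f} percentage points")
# b3 nets out common time trends; with these invented numbers it equals
# (75-40) - (45-42) = 32, the same magnitude the study reports for sewage.
```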
Thomas B. Lynch; Jeffrey H. Gove
2013-01-01
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
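The estimator as stated reduces to a one-line computation; a minimal sketch with invented tally data (the basal area factor and units are assumptions):

```python
# Critical height sampling estimate: per-unit-area volume is the HPS basal
# area factor times the sum of critical heights of the tallied trees.
baf = 2.0                                   # basal area factor (m^2/ha per tally)
critical_heights = [14.2, 9.8, 17.5, 12.1]  # critical heights (m) of tallied trees

volume_per_ha = baf * sum(critical_heights)
print(f"{volume_per_ha:.1f} m^3/ha")
```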
Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H
2014-01-01
A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
Assessment of hygienic quality of some types of cheese sampled from retail outlets.
Prencipe, Vincenza; Migliorati, Giacomo; Matteucci, Osvaldo; Calistri, Paolo; Di Giannatale, Elisabetta
2010-01-01
The authors evaluated the prevalence of Listeria monocytogenes, Escherichia coli O157:H7, Salmonella spp., and staphylococcal enterotoxin in 2,132 samples selected from six types of cheese on the basis of recorded consumption in Italy in 2004. In L. monocytogenes-positive samples the precise level of contamination was determined. To define the physical-chemical characteristics of the selected natural cheeses, the pH values, water activity, and sodium chloride content were determined. The results suggest that blue and soft cheeses (Brie, Camembert, Gorgonzola and Taleggio) are more likely to be contaminated with L. monocytogenes. The mean prevalence of L. monocytogenes in the six types of cheese was 2.4% (from 0.2% in Asiago and Crescenza to 6.5% in Taleggio), with contamination levels of up to 460 MPN/g. No Salmonella spp. or E. coli O157 was found in any sample. Staphylococcal enterotoxin was found in 0.6% of the samples examined. Physical and chemical parameter values confirmed that all types of cheese are considered capable of supporting the growth of L. monocytogenes. The study confirmed the need for effective control at the production and sales levels to reduce the probability of contamination by L. monocytogenes. This micro-organism can attain high levels of contamination in food products such as long shelf-life cheeses, particularly when it is difficult to maintain appropriate storage temperatures both at points of sale and in the home.
Colloidal mode of transport in the Potomac River watershed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maher, I.L.; Foster, G.D.
1995-12-31
Similarly to the particulate phase, the colloidal phase may play an important role in the transport of organic contaminants downstream in a river. The colloidal phase, consisting of microparticles and macromolecules which are small enough to be mobile and large enough to attract pollutants, can sorb nonpolar organic compounds much as soil and sediment particles do. To test this hypothesis, three river water samples were analyzed for PAH content in the dissolved, colloidal, and particulate phases. The first sample was collected in the Blue Ridge province of the Potomac River watershed, at Point of Rocks; the second in the Piedmont province, at Riverbend Park; and the third in the Coastal Plain, at Dyke Marsh (Belle Haven marina). In the laboratory, each water sample was prefiltered to separate the particulate phase from the dissolved and colloidal phases. One part of the prefiltered water sample was ultrafiltered to separate colloids, while the second part was Goulden extracted. The separated colloidal phase was liquid-liquid extracted (LLE), while filters containing the suspended solids were Soxhlet extracted. The extracts of the particulate phase, the colloidal phase, and the dissolved plus colloidal phase were analyzed for selected PAHs via GC/MS. It is planned that the concentrations of selected PAHs in the three phases will be used to calculate the colloid/dissolved and particle/dissolved partition coefficients. Both partition coefficients will be compared to define the significance of organic contaminant transport by aquatic colloids.
Method and apparatus for aligning a solar concentrator using two lasers
Diver Jr., Richard Boyer
2003-07-22
A method and apparatus are provided for aligning the facets of a solar concentrator. A first laser directs a first laser beam onto a selected facet of the concentrator such that a target board positioned adjacent to the first laser at approximately one focal length behind the focal point of the concentrator is illuminated by the beam after reflection thereof off of the selected facet. A second laser, located adjacent to the vertex of the optical axis of the concentrator, is used to direct a second laser beam onto the target board at a target point thereon. By adjusting the selected facet to cause the first beam to illuminate the target point on the target board produced by the second beam, the selected facet can be brought into alignment with the target point. These steps are repeated for other selected facets of the concentrator, as necessary, to provide overall alignment of the concentrator.
Bernard, Larry C; Walsh, R Patricia
2002-10-01
The present study replicated and extended earlier research on temporal sampling effects in university subject pools. Data were obtained from 236 participants, 79 men and 157 women, in a university subject pool during a 15-wk. semester. Without knowing the purpose of the study, participants self-selected to participate earlier (Weeks 4 and 5; n = 105) or later (Weeks 14 and 15; n = 131). Three hypotheses were investigated: (1) that the personality patterns of earlier and later participants on the NEO Personality Inventory-Revised and the Personality Research Form differ significantly, with earlier participants scoring higher on Personality Research Form scales reflecting social responsibility and higher on the NEO Conscientiousness and Neuroticism scales; (2) that there are similar significant differences between participants in the earlier and later groups compared to the male and female college normative samples for the two tests; and (3) that earlier participants have higher actual Scholastic Assessment Test scores and Grade Point Averages. Also investigated was whether participants' foreknowledge that their actual Scholastic Assessment Test scores and Grade Point Averages would be obtained would affect the accuracy of their self-reports. In contrast to prior research, neither the first nor second hypothesis was supported by the current study; there do not appear to be consistent differences on personality variables. However, the third hypothesis was supported. Earlier participants had higher actual high school Grade Point Average, college Grade Point Average, and Scholastic Assessment Test Verbal scores. Foreknowledge that actual Scholastic Assessment Test scores and Grade Point Averages would be obtained did not affect the accuracy of self-report. In addition, later participants significantly over-reported their scores, and significantly more women than men and more first-year than senior-year subjects participated in the early group.
Influence of crystal habit on the compression and densification mechanism of ibuprofen
NASA Astrophysics Data System (ADS)
Di Martino, Piera; Beccerica, Moira; Joiris, Etienne; Palmieri, Giovanni F.; Gayot, Anne; Martelli, Sante
2002-08-01
Ibuprofen was recrystallized from several solvents by two different methods: addition of a non-solvent to a drug solution, and cooling of a drug solution. Four samples with different crystal habits were selected: samples A, E and T, recrystallized from acetone, ethanol and THF, respectively, by addition of water as a non-solvent, and sample M, recrystallized from methanol by temperature decrease. By SEM analysis, the samples were characterized with respect to crystal habit, mean particle diameter and elongation ratio. Sample A appears stick-shaped, sample E acicular with lamellar characteristics, and samples T and M polyhedral. DSC and X-ray diffraction studies allowed a polymorphic modification of ibuprofen during crystallization to be excluded. For all samples, micromeritic properties, densification behaviour and compression ability were analysed. Sample M shows a higher densification tendency, evidenced by its higher apparent and tapped particle densities. Its ability to densify is also indicated by the D0' value of the Heckel plot, which reflects the rearrangement of the original particles at the initial stage of compression. This behaviour is related to the crystal habit of sample M, which is characterized by strongly smoothed corners. The resulting denser packing of the powder bed permits particle-particle interaction of greater extent during the subsequent stage of compression, which allows higher tabletability and compressibility.
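The Heckel treatment mentioned above fits ln(1/(1-D)) against compression pressure P. The sketch below, with invented data, shows one common way to extract the slope, the mean yield pressure and a rearrangement term of the D0' kind; Heckel nomenclature varies between authors, so the variable names and the die-filling density here are assumptions, not the study's values.

```python
# Generic Heckel analysis sketch: ln(1/(1-D)) = K*P + A.
# D is relative density at pressure P; data below are invented.
import numpy as np

P = np.array([25, 50, 75, 100, 150, 200.0])          # pressure, MPa
D = np.array([0.70, 0.75, 0.79, 0.82, 0.86, 0.885])  # relative density
D0 = 0.45                      # relative density after die filling (assumed)

y = np.log(1.0 / (1.0 - D))    # Heckel transform
K, A = np.polyfit(P, y, 1)     # slope K and intercept A of the linear fit

Da = 1.0 - np.exp(-A)          # total densification at low pressure
Db = Da - D0                   # rearrangement contribution (the D0'-like term)
print(f"K = {K:.4f} 1/MPa, mean yield pressure Py = {1/K:.0f} MPa")
print(f"Da = {Da:.3f}, rearrangement contribution = {Db:.3f}")
```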
NASA Astrophysics Data System (ADS)
Saprykin, A. A.; Sharkeev, Yu P.; Ibragimov, E. A.; Babakova, E. V.; Dudikhin, D. V.
2016-07-01
Alloys based on the titanium-niobium system are widely used in implant production, primarily because of their low modulus of elasticity and bio-inert properties. These alloys are especially important for tooth replacement and orthopedic surgery. At present, titanium-niobium alloys are produced mainly by conventional metallurgical methods, and the subsequent subtractive manufacturing of an end product generates considerable waste, thereby increasing its cost. An alternative to these processes is additive manufacturing. Selective laser melting is a technology that makes it possible to synthesize products from metal powders and their blends. The essence of this technology is laser melting of a layer of powdered material; the sintered layer is then coated with the next layer of powder, and so on. Complex products and working prototypes are made on the basis of this technology. The authors of this paper address the issue of applying selective laser melting to synthesize a binary alloy from a composite powder based on the titanium-niobium system. A set of 10x10 mm samples was made under various process conditions on an experimental selective laser synthesis machine, the «VARISKAF-100MB». The machine provides adjustment of the following process variables: laser emission power, scanning rate and pitch, temperature of powder pre-heating, thickness of the layer to be deposited, and diameter of the laser spot focus. All samples were made in a pre-vacuumized shielding atmosphere of argon. The porosity and thickness of the sintered layer as functions of laser emission power are shown at various scanning rates. It is revealed that scanning rate and laser emission power are the adjustable process variables having the greatest effect on formation of the sintered layer.
Raina, Renata; Sun, Lina
2008-05-01
This paper describes a new analytical method for the determination of organophosphorus pesticides (OPs) and their degradation products by liquid chromatography (LC) positive-ion electrospray (ESI+) tandem mass spectrometry (MS-MS) with selective reaction monitoring (SRM). Chromatography was performed on a Gemini C6-Phenyl column (150 mm x 2.0 mm, 3 microm) with gradient elution using a water-methanol mobile phase containing 0.1% formic acid and 2 mM ammonium acetate, at a flow rate of 0.2 mL min(-1). The LC separation and MS/MS operating conditions were optimized, with a total analysis time of less than 40 minutes. Method detection limits were 0.1-5 microg L(-1) for the selected organophosphorus pesticides (OPs), OP oxon degradation products, and other degradation products: 3,5,6-trichloro-2-pyridinol (TCP); 2-isopropyl-6-methyl-4-pyrimidol (IMP); and diethyl phosphate (DEP). Some OPs, such as fenchlorphos, are less sensitive (MDL 30 microg L(-1)). Calibration curves were linear, with coefficients of correlation better than 0.995. A three-point identification approach was adopted, with the area from the first SRM transition used for quantitative analysis, while the second SRM transition, along with the ratio of the areas of the first to second transitions, is used for confirmation; the sample tolerance is established by the relative standard deviation of the ratio obtained from standards. This new method permitted the first known detection of OP oxon degradation products in atmospheric samples, including chlorpyrifos oxon at Bratt's Lake, SK, and diazinon oxon and malathion oxon at Abbotsford, BC. Atmospheric detection limits typically ranged from 0.2-10 pg m(-3).
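A minimal sketch of the three-point identification logic described above: quantify on the first SRM transition and confirm using the second transition plus the area ratio, with a tolerance derived from the RSD of standards. The function name, the tolerance multiplier and all numbers are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of ion-ratio confirmation for SRM data.
import statistics

def confirm_identity(area_srm1, area_srm2, standard_ratios, k=3.0):
    """Return True if the sample's SRM1/SRM2 area ratio falls within
    mean +/- k*stdev of the ratios measured for calibration standards."""
    mean_r = statistics.mean(standard_ratios)
    sd_r = statistics.stdev(standard_ratios)
    ratio = area_srm1 / area_srm2
    return abs(ratio - mean_r) <= k * sd_r

# Invented SRM1/SRM2 ratios from replicate standards, then a sample check
standards = [2.10, 2.05, 2.18, 2.12, 2.08]
print(confirm_identity(area_srm1=10500, area_srm2=5010,
                       standard_ratios=standards))
```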
[Participation of migrants in health surveys conducted by telephone: potential and limits].
Schenk, L; Neuhauser, H
2005-10-01
Migrants living in Germany are a large and vulnerable population subgroup that is not easily induced to participate in health surveys. Hence, achieving high participation rates of migrants in health surveys while avoiding selection bias is a difficult task. In this study, we report on the participation of migrants in the German National Health Telephone Survey 2003 (GSTel03), the first comprehensive national health survey conducted by telephone in Germany. Three migrant groups were identified: individuals with non-German citizenship (foreigners), naturalized migrants, and ethnic German immigrants (Spätaussiedler). The aim of this study is to evaluate the degree to which the GSTel03 subsample of foreigners is representative of foreigners living in Germany. We compare the prevalence of sociodemographic characteristics and selected health indicators of foreigners in the GSTel03 subsample with prevalences from national statistics and from a large national household survey ("Mikrozensus 2003"). The proportion of participants with non-German nationality in the overall GSTel03 sample was significantly lower than the proportion of foreigners in the residential population in Germany (3.7% vs. 8.9%). While there was no evidence of selection bias with regard to age and sex distribution, we found significant differences with regard to other factors, including nationality, length of stay in Germany, unemployment rate and education. The comparison of health indicators showed only moderate differences between the GSTel03 sample and the "Mikrozensus" results. However, these differences did not consistently point to a better or worse health status in the GSTel03 sample of foreigners and should therefore not be generalised to other health indicators. Our study emphasises the importance of a continuous effort to improve migrant participation in health studies and of a thorough analysis of selection bias when interpreting results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFreniere, L. M.
In September 2005, periodic sampling of groundwater was initiated by the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) in the vicinity of a grain storage facility formerly operated by the CCC/USDA at Centralia, Kansas. The sampling at Centralia is being performed on behalf of the CCC/USDA by Argonne National Laboratory, in accord with a monitoring program approved by the Kansas Department of Health and Environment (KDHE). The objective is to monitor levels of carbon tetrachloride contamination identified in the groundwater at Centralia (Argonne 2003, 2004, 2005a). Under the KDHE-approved monitoring plan (Argonne 2005b), the groundwater was sampled twice yearly from September 2005 until September 2007 for analyses for volatile organic compounds (VOCs), as well as measurement of selected geochemical parameters to aid in the evaluation of possible natural contaminant degradation (reductive dechlorination) processes in the subsurface environment. The results from the two-year sampling program demonstrated the presence of carbon tetrachloride contamination at levels exceeding the KDHE Tier 2 risk-based screening level (RBSL) of 5 µg/L for this compound in a localized groundwater plume that has shown little movement. The relative concentrations of chloroform, the primary degradation product of carbon tetrachloride, suggested that some degree of reductive dechlorination or natural biodegradation was taking place in situ at the former CCC/USDA facility on a localized scale. The CCC/USDA subsequently developed an Interim Measure Conceptual Design (Argonne 2007b), proposing a pilot test of the Adventus EHC technology for in situ chemical reduction (ISCR). The proposed interim measure (IM) was approved by the KDHE in November 2007 (KDHE 2007). Implementation of the pilot test occurred in November-December 2007. The objective was to create highly reducing conditions that would enhance both chemical and biological reductive dechlorination in the injection test area (Argonne 2009a). The KDHE (2008a) has requested that sitewide monitoring continue at Centralia until a final remedy has been selected (as part of a Corrective Action Study [CAS] evaluation) and implemented for this site. In response to this request, twice-yearly sampling of 10 monitoring wells and 6 piezometers (Figure 1.1) previously approved by the KDHE for monitoring of the groundwater at Centralia (KDHE 2005a,b) was continued in 2008. The sampling events under this extension of the two-year (2005-2007) monitoring program occurred in March and September 2008 (Argonne 2008b, 2009b). Additional piezometers specifically installed to evaluate the progress of the IM pilot test (PMP1-PMP9; Figure 1.2) were also sampled in 2008; the results of these analyses were reported and discussed separately (Argonne 2009a). On the basis of results of the 2005-2008 sitewide monitoring and the 2008 IM pilot test monitoring, the CCC/USDA recommended a revised sampling program to address both of the continuing monitoring objectives until a CAS for Centralia is developed (Section 4.2 in Argonne 2009b). The elements of this interim monitoring plan are as follows: (1) Annual sampling of twelve previously established (before the pilot test) monitoring points (locations identified in Figure 1.3) and the five outlying pilot test monitoring points (PMP4, PMP5, PMP6, PMP7, PMP9; Figure 1.4); and (2) Sampling twice yearly at the five pilot test monitoring points inside the injection area (PMP1-PMP3, PMP8, MW02; Figure 1.4).
With the approval of the KDHE (2009), groundwater sampling for analyses of VOCs and selected other geochemical parameters was conducted at Centralia under the interim monitoring program outlined above in April and October 2009. This report documents the findings of the 2009 monitoring events.
1980-07-01
[OCR fragment; recoverable content: acknowledgments thanking John Husler, staff chemist, for the chemical analyses; Appendix B gives detailed laboratory procedures for the weathering experiments; Appendix C presents whole-rock chemical analyses of selected pedogenic caliche samples.]
Sm-Nd isotopic systematics of the ancient Gneiss complex, southern Africa
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Hunter, D. R.; Barker, F.
1983-01-01
To shed new light on the question of the absolute and relative ages of the Ancient Gneiss Complex (AGC) and Onverwacht Group, a Sm-Nd whole-rock and mineral isochron study of the AGC was begun. At this point, the whole-rock study of samples from the Bimodal Suite, selected from those studied for their geochemical characteristics by Hunter et al., is complete. These results and their implications for the chronologic evolution of the Kaapvaal craton and the sources of these ancient rocks are discussed.
Mapping land use changes in the carboniferous region of Santa Catarina, report 2
NASA Technical Reports Server (NTRS)
Valeriano, D. D. (Principal Investigator); Bitencourtpereira, M. D.
1983-01-01
The techniques applied to MSS-LANDSAT data in the land-use mapping of the Criciuma region (Santa Catarina state, Brazil) are presented, along with the results of a classification accuracy estimate tested on the resulting map. The MSS-LANDSAT digital processing involves noise suppression, feature selection and a hybrid classifier. The accuracy test was made through comparisons with aerial photographs of sampled points. Digital processing is recommended for mapping the classes agricultural lands, forest lands and urban areas, while coal refuse areas should be mapped visually.
NASA Technical Reports Server (NTRS)
Helmreich, Robert L.; Sawin, Linda L.; Carsrud, Alan L.
1986-01-01
Correlations between a job performance criterion and personality measures reflecting achievement motivation and an interpersonal orientation were examined at three points in time after completion of job training for a sample of airline reservations agents. Although correlations between the personality predictors and performance were small and nonsignificant for the 3-month period after beginning the job, by the end of six and eight months a number of significant relationships had emerged. Implications for the utility of personality measures in selection and performance prediction are discussed.
A Mars orbiter/rover/penetrator mission for the 1984 opportunity
NASA Technical Reports Server (NTRS)
Hastrup, R.; Driver, J.; Nagorski, R.
1977-01-01
A point design mission is described that utilizes the 1984 opportunity to extend the exploration of Mars after the successful Viking operations and provide the additional scientific information needed before conducting a sample return mission. Two identical multi-element spacecraft are employed, each consisting of (1) an orbiter, (2) a Viking-derived landing system that delivers a heavily instrumented, semi-autonomous rover, and (3) three penetrators deployed from the approach trajectory. Selection of the orbit profiles requires consideration of several important factors in order to satisfy all of the mission goals.
NASA Astrophysics Data System (ADS)
Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.
The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem-solving, specification, and system approaches to EMC control are summarized, as are the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components. Examples of EMC design are mentioned, and the EMC design process and the selection method for EMC critical points are shown along with sample EMC test results.
NASA Astrophysics Data System (ADS)
Strozyk, Joanna
2017-12-01
The paper presents results of laboratory shear strength tests conducted on fine-grained soil samples with different grain size distributions, geological ages and stress histories. Triaxial isotropic consolidation undrained tests (TXCIU) were performed under different consolidation stresses, in normally consolidated and overconsolidated stress states, on samples with natural structure. Soil samples were selected from soil series of different age and geological origin: overconsolidated sensu stricto Mio-Pliocene silty clay (siCl) and quasi-overconsolidated Pleistocene clayey silt (clSi). The paper points out that overconsolidated sensu stricto and quasi-overconsolidated fine-grained soils can show almost similar behaviour under some stress and environmental conditions and significantly different behaviour under others. Correct evaluation of the geotechnical parameters, and the ability to predict their evolution over time, is only possible when the geological past and the processes that accompanied soil formation are appropriately recognized.
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
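For readers wanting a concrete picture of the gridding step, the following is a generic IDW interpolator with the local support of the five closest neighbours, written in Python with synthetic data; it is not the authors' implementation, and the point cloud and query locations are invented.

```python
# Generic IDW gap-infilling sketch with k = 5 nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

def idw_interpolate(xy_known, z_known, xy_query, k=5, power=2.0):
    tree = cKDTree(xy_known)
    dist, idx = tree.query(xy_query, k=k)
    dist = np.maximum(dist, 1e-12)        # avoid division by zero
    w = 1.0 / dist**power                 # inverse-distance weights
    return (w * z_known[idx]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 200, size=(500, 2))             # ground points (m)
z = 10 + 0.05 * pts[:, 0] + rng.normal(0, 0.1, 500)  # synthetic elevations
gaps = np.array([[100.0, 100.0], [50.0, 150.0]])     # points to infill
print(idw_interpolate(pts, z, gaps))
```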
Design Optimization of a Centrifugal Fan with Splitter Blades
NASA Astrophysics Data System (ADS)
Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong
2015-05-01
Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise, using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
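A compact sketch of the sampling-plus-surrogate workflow (Latin hypercube design points followed by a second-order response surface fit) is given below. The variable bounds, the stand-in objective and the noise level are assumptions replacing the CFD evaluations used in the study.

```python
# Latin hypercube sampling + quadratic response surface (scipy >= 1.7).
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
sampler = qmc.LatinHypercube(d=2, seed=1)
X = qmc.scale(sampler.random(n=20), [0.3, 0.8], [0.7, 1.2])  # assumed bounds

def objective(x):                     # placeholder for a CFD evaluation
    return -(x[:, 0] - 0.5)**2 - (x[:, 1] - 1.0)**2

y = objective(X) + rng.normal(0, 1e-3, len(X))

# Second-order polynomial basis: [1, x1, x2, x1^2, x1*x2, x2^2]
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("RSA coefficients:", np.round(coef, 4))
```

Once fitted, such a surrogate is what the evolutionary algorithm searches in place of further expensive flow solutions.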
Selective preparation of hard dental tissue: classical and laser treatments comparison
NASA Astrophysics Data System (ADS)
Dostálova, Tat'jana; Jelínkova, Helena; Němec, Michal; Koranda, Petr; Miyagi, Mitsunobu; Iwai, Katsumasa; Shi, Yi-Wei; Matsuura, Yuji
2006-02-01
For the purpose of the micro-selective preparation that is part of modern dentistry, four methods were examined: ablation by Er:YAG laser radiation (free-running or Q-switched regime), preparation of tissues by an ultrasonic round ball tip, and preparation by the classical dental drilling machine using a diamond round bur. For the Er:YAG laser application, an interaction energy of 40 mJ in a 200 µs pulse, yielding an interaction intensity of 62 kW/cm2, was used in the free-running regime, and 20 mJ in a 100 ns pulse, yielding an interaction intensity of 62 MW/cm2, in the Q-switched regime. For comparison with the classical methods, an ultrasound preparation tip (Sonixflex cariex TC, D - Sonicsys micro) and a dental drill with the usual preparation burs and a standard handpiece were used. For the interaction experiments, samples of extracted human teeth and ebony, cut into longitudinal sections and polished, were used. The thickness of the prepared samples ranged from 5 to 7 mm. The methods were compared with respect to the prepared cavity shape (SEM), the inner surface, and the possibility of selective removal of caries. A composite filling material was used to reconstruct the cavities. Dye penetration analysis was performed.
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.
1994-01-01
A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits was greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. Median difference for analyte concentration and deposition was typically 1.5-2 times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
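The precision measure reported above can be illustrated as follows. The pairing convention (relative difference expressed as a percentage of the pair mean) and the data are assumptions made for the sketch, not the program's published protocol.

```python
# Median absolute and relative differences for collocated sampler pairs.
import numpy as np

primary    = np.array([0.52, 1.10, 0.33, 2.40, 0.75])  # e.g., SO4(2-), mg/L
collocated = np.array([0.49, 1.18, 0.35, 2.31, 0.80])  # duplicate sampler

abs_diff = np.abs(primary - collocated)
rel_diff = abs_diff / ((primary + collocated) / 2) * 100  # % of pair mean

print(f"median absolute difference = {np.median(abs_diff):.3f} mg/L")
print(f"median relative difference = {np.median(rel_diff):.1f} %")
```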
Relationship between student selection criteria and learner success for medical dosimetry students
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Jamie, E-mail: jabaker@mdanderson.org; Tucker, Debra; Raynes, Edilberto
Medical dosimetry education occupies a specialized branch of allied health higher education. Noted international shortages of health care workers, reduced university funding, limitations on faculty staffing, trends in learner attrition, and increased enrollment of nontraditional students force medical dosimetry educational leadership to reevaluate current admission practices. Program officials wish to select medical dosimetry students with the best chances of successful graduation. The purpose of the quantitative ex post facto correlation study was to investigate the relationship between applicant characteristics (cumulative undergraduate grade point average (GPA), science grade point average (SGPA), prior experience as a radiation therapist, and previous academic degrees) and the successful completion of a medical dosimetry program, as measured by graduation. A key finding from the quantitative study was the statistically significant positive correlation between a student's previous degree and his or her successful graduation from the medical dosimetry program. Future research investigations could include a larger research sample, representative of more medical dosimetry student populations, and additional studies concerning the relationship of previous work as a radiation therapist and the effect on success as a medical dosimetry student. Based on the quantitative correlation analysis, medical dosimetry leadership on admissions committees could revise student selection rubrics to place less emphasis on an applicant's undergraduate cumulative GPA and increase the weight assigned to previous degrees.
Piezolelectric surgery in dentistry: a review.
Carini, F; Saggese, V; Porcaro, G; Baldoni, M
2014-01-01
In the last ten years, a significant increase in publications about piezoelectric bone surgery has been observed. The purpose of this review was to define the state of the art and to compare piezoelectric devices with manual or rotating traditional techniques, analyzing advantages and disadvantages from clinical and histological points of view for various dental procedures. The literature review was carried out using the online medical databases MEDLINE and COCHRANE LIBRARY. The authors selected 37 publications in the dental field consistent with the established inclusion criteria. From the clinical point of view, the analysis of the selected publications concerning procedures such as maxillary sinus lift, alveolar ridge expansion and harvesting of autologous bone samples showed reduced surgical trauma, especially towards soft and nervous tissues, surgical mini-invasiveness, cut precision and selectivity, and speed of learning guaranteed by piezoelectric devices compared to traditional ones. Histologically, the study of the biology and post-intervention healing of bone tissue showed a lower loss of bone with piezoelectric instruments than with conventional devices, as well as better healing quality, reducing the patient's post-surgery morbidity. The use of piezoelectric devices thus seems to simplify different sinus lift surgical procedures and to allow greater predictability, although some studies reveal no substantial differences in long-term results between conventional and piezoelectric instruments and also criticize the increase in operation time.
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.
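To make the design concrete, here is a toy simulation of the link-tracing mechanics only (an initial random wave, then tracing links into the hidden domain); it does not implement the Bayesian model, and the population size, domain size and link structure are all invented for illustration.

```python
# Toy link-tracing illustration: links raise the sampling intensity
# of a hidden domain relative to the initial random wave alone.
import random

random.seed(3)
N = 1000                                          # population size
hidden = set(random.sample(range(N), 150))        # "hidden" domain members
# assume each individual can name two acquaintances in the hidden domain
links = {i: random.sample(sorted(hidden), 2) for i in range(N)}

wave1 = random.sample(range(N), 50)               # initial random wave
traced = {j for i in wave1 for j in links[i]}     # follow links (wave 2)
sample = set(wave1) | traced

print(f"hidden members in initial wave: {len(set(wave1) & hidden)}")
print(f"hidden members after tracing:   {len(sample & hidden)}")
```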
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2011-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfall. Visual image interpretation is still the prevailing standard method for operational purposes but is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms can learn classification rules for complex image patterns from labelled examples and can be adapted straightforwardly with available training data. In order to reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas. The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded only accuracies between 28% and 50% at equal sampling cost. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground truth uncertainties to truly achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
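A simplified, point-wise version of the uncertainty-driven querying that underlies such AL strategies can be sketched as follows; the authors' region-compactness criterion is omitted, and the data, classifier settings and batch size are assumptions, not the paper's configuration.

```python
# Point-wise uncertainty sampling with a Random Forest (scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
labeled = list(range(20))                      # small initial training set
pool = list(range(20, 2000))                   # unlabeled candidates

for it in range(5):                            # five AL iterations
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)      # least-confident criterion
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]
    labeled += query                           # the "user" labels them
    pool = [i for i in pool if i not in query]
    print(f"iter {it}: labeled={len(labeled)}, "
          f"mean pool uncertainty={uncertainty.mean():.3f}")
```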
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
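Minimum sample sizes of the kind computed above are often obtained by iterating the classic relation n = (t·s/E)² until the t-quantile stabilizes. The sketch below shows that generic calculation; it is not necessarily the authors' exact procedure, and the numbers are invented.

```python
# Generic variance-based minimum sample size for a target CI half-width E.
import math
from scipy import stats

def min_sample_size(s, E, alpha=0.05, n0=30):
    """Iterate n = (t * s / E)^2, since t depends on df = n - 1."""
    n = n0
    for _ in range(100):
        t = stats.t.ppf(1 - alpha / 2, df=max(int(n) - 1, 1))
        n_new = (t * s / E) ** 2
        if abs(n_new - n) < 0.5:
            break
        n = n_new
    return math.ceil(n)

# e.g., counts with standard deviation 3.2 birds, desired half-width 1.0
print(min_sample_size(s=3.2, E=1.0))
```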
Glassmeyer, S.T.; Furlong, E.T.; Kolpin, D.W.; Cahill, J.D.; Zaugg, S.D.; Werner, S.L.; Meyer, M.T.; Kryak, D.D.
2005-01-01
The quality of drinking and recreational water is currently (2005) determined using indicator bacteria. However, the culture tests used to analyze for these bacteria require a long time to complete and do not discriminate between human and animal fecal material sources. One complementary approach is to use chemicals found in human wastewater, which would have the advantages of (1) potentially shorter analysis times than the bacterial culture tests and (2) being selected for human-source specificity. At 10 locations, water samples were collected upstream and at two successive points downstream from a wastewater treatment plant (WWTP); a treated effluent sample was also collected at each WWTP. This sampling plan was used to determine the persistence of a chemically diverse suite of emerging contaminants in streams. Samples were also collected at two reference locations assumed to have minimal human impacts. Of the 110 chemical analytes investigated in this project, 78 were detected at least once. The number of compounds in a given sample ranged from 3 at a reference location to 50 in a WWTP effluent sample. The total analyte load at each location varied from 0.018 μg/L at the reference location to 97.7 μg/L in a separate WWTP effluent sample. Although most of the compound concentrations were in the range of 0.01−1.0 μg/L, in some samples, individual concentrations were in the range of 5−38 μg/L. The concentrations of the majority of the chemicals present in the samples generally followed the expected trend: they were either nonexistent or at trace levels in the upstream samples, had their maximum concentrations in the WWTP effluent samples, and then declined in the two downstream samples. This research suggests that selected chemicals are useful as tracers of human wastewater discharge.
2016-01-01
Copper is an essential nutrient for life, but at the same time, hyperaccumulation of this redox-active metal in biological fluids and tissues is a hallmark of pathologies such as Wilson’s and Menkes diseases, various neurodegenerative diseases, and toxic environmental exposure. Diseases characterized by copper hyperaccumulation are currently challenging to identify due to costly diagnostic tools that involve extensive technical workup. Motivated to create simple yet highly selective and sensitive diagnostic tools, we have initiated a program to develop new materials that can enable monitoring of copper levels in biological fluid samples without complex and expensive instrumentation. Herein, we report the design, synthesis, and properties of PAF-1-SMe, a robust three-dimensional porous aromatic framework (PAF) densely functionalized with thioether groups for selective capture and concentration of copper from biofluids as well as aqueous samples. PAF-1-SMe exhibits a high selectivity for copper over other biologically relevant metals, with a saturation capacity reaching over 600 mg/g. Moreover, the combination of PAF-1-SMe as a material for capture and concentration of copper from biological samples with 8-hydroxyquinoline as a colorimetric indicator affords a method for identifying aberrant elevations of copper in urine samples from mice with Wilson’s disease and also tracing exogenously added copper in serum. This divide-and-conquer sensing strategy, where functional and robust porous materials serve as molecular recognition elements that can be used to capture and concentrate analytes in conjunction with molecular indicators for signal readouts, establishes a valuable starting point for the use of porous polymeric materials in noninvasive diagnostic applications. PMID:27285482
Can prospect theory explain risk-seeking behavior by terminally ill patients?
Rasiel, Emma B; Weinfurt, Kevin P; Schulman, Kevin A
2005-01-01
Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.
NASA Astrophysics Data System (ADS)
Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang
2016-11-01
Compared to remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy of identifying destroyed buildings. After an earthquake, however, damaged buildings show so many different characteristics that the most commonly used pre-processing methods cannot currently distinguish between tree points and damaged-building points. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore methods to distinguish between them. We propose a new method that searches a neighbourhood of a certain size and calculates the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. We selected point clouds of typical undamaged buildings, collapsed buildings and trees as samples from airborne LiDAR point cloud data acquired after the 2010 MW 7.0 Haiti earthquake, by means of human-computer interaction. Tests were run to determine the R value that distinguishes trees from buildings, and this R value was then applied to test areas. The experimental results show that the proposed method can effectively distinguish building points (undamaged and damaged) from tree points, but it is limited in areas where buildings are varied, damage is complex and trees are dense; the method will therefore need further improvement.
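A minimal sketch of the proposed R ratio follows, assuming a fixed k-nearest-neighbour search and synthetic arrays in place of a real LAS file; the neighbourhood size and classification threshold are placeholders that would have to be calibrated from samples, as in the study.

```python
# Per-point ratio R of neighbours with more than one return per pulse.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
xyz = rng.uniform(0, 50, size=(5000, 3))                  # coordinates (m)
num_returns = rng.choice([1, 2, 3], p=[0.7, 0.2, 0.1], size=5000)

tree = cKDTree(xyz)
_, idx = tree.query(xyz, k=20)               # 20 nearest neighbours (assumed)
R = (num_returns[idx] > 1).mean(axis=1)      # fraction of multi-return points

is_vegetation = R > 0.5                      # threshold would come from training
print(f"points classified as vegetation: {is_vegetation.sum()} / {len(R)}")
```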
Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw
2016-01-01
Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, point cloud data of the second front claw toe were acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting, yielding 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity's sampling shovel and the contours of the Himalayan marmot's claw. Verification tests showed that when the penetration angle was 45° and the sampling speed was 0.33 r/min, the bionic sampling scoop's resistance torque was 49.6% less than that of the prototype sampling scoop. When the penetration angle was 60° and the sampling speed was 0.22 r/min, the resistance torque of the bionic sampling scoop was 28.8% lower than that of the prototype sampling scoop. PMID:28127229
Genetic Mapping of Fixed Phenotypes: Disease Frequency as a Breed Characteristic
Jones, Paul; Martin, Alan; Ostrander, Elaine A.; Lark, Karl G.
2009-01-01
Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis. PMID:19321632
A long-term outcome study of selective mutism in childhood.
Steinhausen, Hans-Christoph; Wachter, Miriam; Laimböck, Karin; Metzke, Christa Winkler
2006-07-01
Controlled study of the long-term outcome of selective mutism (SM) in childhood. A sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied. The symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood. This first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.
Stardust Entry: Landing and Population Hazards in Mission Planning and Operations
NASA Technical Reports Server (NTRS)
Desai, P.; Wawrzyniak, G.
2006-01-01
The 385 kg Stardust mission was launched on Feb 7, 1999 on a mission to collect samples from the tail of comet Wild 2 and from interplanetary space. Stardust returned to Earth in the early morning of January 15, 2006. The sample return capsule landed in the Utah Test and Training Range (UTTR) southwest of Salt Lake City. Because Stardust was landing on Earth, hazard analysis was required by the National Aeronautics and Space Administration, UTTR, and the Stardust Project to ensure the safe return of the landing capsule along with the safety of people, ground assets, and aircraft. This paper focuses on the requirements affecting safe return of the capsule and safety of people on the ground by investigating parameters such as probability of impacting on UTTR, casualty expectation, and probability of casualty. This paper introduces the methods for the calculation of these requirements and shows how they affected mission planning, site selection, and mission operations. By analyzing these requirements before and during entry it allowed for the selection of a robust landing point that met all of the requirements during the actual landing event.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
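The KLT-plus-RANSAC stages can be illustrated with OpenCV as below. Note that this is a simplified, monocular, essential-matrix pipeline, not the paper's stereo objective-function minimisation with Levenberg-Marquardt; it is offered only to show how flow tracking and RANSAC outlier rejection fit together, and the parameter values are assumptions.

```python
# Monocular KLT tracking + RANSAC essential-matrix sketch (OpenCV).
import numpy as np
import cv2

def ego_motion(img0, img1, K):
    """Estimate rotation R and unit-scale translation t between two
    consecutive greyscale frames, given intrinsic matrix K (3x3)."""
    p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
    good0, good1 = p0[status == 1], p1[status == 1]   # keep tracked points
    # RANSAC rejects flow outliers while estimating the essential matrix
    E, inliers = cv2.findEssentialMat(good0, good1, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t

# Usage: pass two consecutive np.uint8 greyscale frames from a video plus
# the calibrated camera matrix K; translation is recovered up to scale.
```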
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometric data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available from some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process was implemented in purpose-written Matlab code. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
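The spectral thresholding idea reduces to a per-channel box test, as in the following sketch (written in Python rather than the study's Matlab); the RGB bounds are invented placeholders standing in for the study's calibrated class thresholds.

```python
# Spectral box test: keep points whose RGB values fall inside a class box.
import numpy as np

rng = np.random.default_rng(11)
xyz = rng.uniform(0, 10, size=(10000, 3))        # point coordinates
rgb = rng.integers(0, 256, size=(10000, 3))      # colours from the camera

# Hypothetical "bare terrain" box in RGB space (placeholder thresholds)
low, high = np.array([90, 60, 40]), np.array([180, 130, 100])
mask = np.all((rgb >= low) & (rgb <= high), axis=1)

terrain_candidates = xyz[mask]
print(f"{mask.sum()} of {len(xyz)} points pass the spectral test")
```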
40 CFR 761.247 - Sample site selection for pipe segment removal.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.247 Sample site selection for pipe segment removal. (a) General. (1) Select the pipe... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample site selection for pipe segment...
40 CFR 761.247 - Sample site selection for pipe segment removal.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Sample site selection for pipe segment... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.247 Sample site selection for pipe segment removal. (a) General. (1) Select the pipe...
Masadome, Takashi; Yamagishi, Yuichi; Takano, Masaki; Hattori, Toshiaki
2008-03-01
A potentiometric titration method using a cationic surfactant as an indicator cation and a plasticized poly(vinyl chloride) membrane electrode sensitive to the cationic surfactant is proposed for the determination of polyhexamethylene biguanide hydrochloride (PHMB-HCl), which is a cationic polyelectrolyte. A sample solution of PHMB-HCl containing an indicator cation (hexadecyltrimethylammonium ion, HTA) was titrated with a standard solution of an anionic polyelectrolyte, potassium poly(vinyl sulfate) (PVSK). The end-point was detected as a sharp potential change due to an abrupt decrease in the concentration of the indicator cation, HTA, which is caused by its association with PVSK. The effects of the concentrations of HTA ion and coexisting electrolytes in the sample solution on the degree of the potential change at the end-point were examined. A linear relationship between the concentration of PHMB-HCl and the end-point volume of the titrant exists in the concentration range from 2.0 × 10⁻⁵ to 4.0 × 10⁻⁴ M in the case that the concentration of HTA is 1.0 × 10⁻⁵ M, and that from 1.0 × 10⁻⁶ to 1.2 × 10⁻⁵ M in the case that the concentration of HTA is 5.0 × 10⁻⁶ M, respectively. The proposed method was applied to the determination of PHMB-HCl in some contact-lens detergents.
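The sharp potential change at the end-point lends itself to simple numerical detection: the end-point volume is where |dE/dV| peaks. A small sketch with synthetic data (the sigmoidal curve below is illustrative, not measured):

```python
import numpy as np

# Synthetic titration curve: potential E (mV) vs titrant volume V (mL),
# with a sigmoidal jump placed at 5.0 mL for illustration.
V = np.linspace(0.0, 10.0, 201)
E = 120.0 - 90.0 * np.tanh((V - 5.0) / 0.15)

dEdV = np.gradient(E, V)                 # numerical first derivative
end_point = V[np.argmax(np.abs(dEdV))]   # steepest point of the curve
print(f"estimated end-point volume: {end_point:.2f} mL")
```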
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.; Whiteside, T. S.
The E-Area Vadose Zone Monitoring System (VZMS) includes lysimeter sampling points at many locations alongside and angling beneath the Engineered Trench #1 (ET1) disposal unit footprint. The sampling points for ET1 were selected for this study because collectively they showed consistently higher tritium (H-3) concentrations than lysimeters associated with other trench units. The VZMS tritium dataset for ET1 from 2001 through 2015 comprises concentrations at or near background levels at approximately half of locations through time, concentrations up to about 600 pCi/mL at a few locations, and concentrations at two locations that have exceeded 1000 pCi/mL. The highest three values through 2015 were 6472 pCi/mL in 2014 and 4533 pCi/mL in 2013 at location VL-17, and 3152 pCi/mL in 2007 at location VL-15. As a point of reference, the drinking water standard for tritium and a DOE Order 435.1 performance objective in the saturated zone at the distant 100-meter facility perimeter is 20 pCi/mL. The purpose of this study is to assess whether these elevated concentrations are indicative of a general trend that could challenge 2008 E-Area Performance Assessment (PA) conclusions, or are isolated perturbations that when considered in the context of an entire disposal unit would support PA conclusions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...
Code of Federal Regulations, 2012 CFR
2012-10-01
... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...
Code of Federal Regulations, 2013 CFR
2013-10-01
... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...
Code of Federal Regulations, 2014 CFR
2014-10-01
... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...
Code of Federal Regulations, 2011 CFR
2011-10-01
... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...
Apertureless near-field optical microscopy
NASA Astrophysics Data System (ADS)
Kazantsev, D. V.; Kuznetsov, E. V.; Timofeev, S. V.; Shelaev, A. V.; Kazantseva, E. A.
2017-05-01
We discuss the operating principles of the apertureless scanning near-field optical microscope (ASNOM), in which the probe acts as a rod antenna and its electromagnetic radiation plays the role of the registered signal. The phase and amplitude of the emitted wave vary depending on the ‘grounding conditions’ of the antenna tip at the sample point under study. Weak radiation from a tiny (2-15 μm long) tip is detected using optical homo- and heterodyning and the nonlinear dependence of the tip polarizability on the tip-surface distance. The lateral resolution of ASNOMs is determined by the tip curvature radius (1-20 nm), regardless of the wavelength (500 nm-100 μm). ASNOMs are shown to be capable of providing a surface optical map with nanometer resolution and carrying out spectral- and time-resolved measurements at a selected point on the surface.
Segmentation and clustering as complementary sources of information
NASA Astrophysics Data System (ADS)
Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.
2007-03-01
This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It exploits coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations, so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although the segmentation is less efficient than clustering, it provides additional information because it incorporates textural similarity as well as homogeneity. In addition, it can be useful in determining the various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
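A toy version of this constrained clustering can be written as penalized change-point detection: segments must be contiguous, and a split is accepted only if it reduces a complexity-penalized cost, standing in here for the minimum message length criterion (the Gaussian segment cost and penalty value are illustrative assumptions):

```python
import numpy as np

def segment_cost(x):
    """Cost of one segment: Gaussian negative log-likelihood (up to constants)."""
    var = max(x.var(), 1e-9)
    return 0.5 * len(x) * np.log(var)

def binary_segmentation(x, penalty=3.0, min_len=5):
    """Recursively split at the change-point that most reduces the
    penalized cost; stop when no split is worth its penalty."""
    best_gain, best_k = 0.0, None
    for k in range(min_len, len(x) - min_len):
        gain = segment_cost(x) - segment_cost(x[:k]) - segment_cost(x[k:]) - penalty
        if gain > best_gain:
            best_gain, best_k = gain, k
    if best_k is None:
        return []
    left = binary_segmentation(x[:best_k], penalty, min_len)
    right = binary_segmentation(x[best_k:], penalty, min_len)
    return left + [best_k] + [best_k + c for c in right]

# Synthetic transect: three vegetation 'zones' with different means.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 1.0, 60) for m in (0.0, 3.0, 1.0)])
print("change-points:", sorted(binary_segmentation(x)))
```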
Paul, Angela P.; Thodal, Carl E.; Baker, Gretchen M.; Lico, Michael S.; Prudic, David E.
2014-01-01
Water in caves, discharging from springs, and flowing in streams in the upper Baker and Snake Creek drainages are important natural resources in Great Basin National Park, Nevada. Water and rock samples were collected from 15 sites during February 2009 as part of a series of investigations evaluating the potential for water-resource depletion in the park resulting from current and proposed groundwater withdrawals. This report summarizes general geochemical characteristics of water samples collected from the upper Baker and Snake Creek drainages for eventual use in evaluating possible hydrologic connections between the streams and selected caves and springs discharging in limestone terrain within each watershed.

Generally, water discharging from selected springs in the upper Baker and Snake Creek watersheds is relatively young and, in some cases, has chemical characteristics similar to water collected from the associated streams. In the upper Baker Creek drainage, geochemical data suggest possible hydrologic connections between Baker Creek and selected springs and caves along it. The analytical results for water samples collected from Wheelers Deep and Model Caves show characteristics similar to those from Baker Creek, suggesting a hydrologic connection between the creek and caves, a finding previously documented by other researchers. Generally, geochemical evidence does not support a connection between water flowing in Pole Canyon Creek and that in Model Cave, at least not to any appreciable extent. The water sample collected from Rosethorn Spring had relatively high concentrations of many of the constituents sampled as part of this study. This finding was expected because the water from the spring travelled through alluvium prior to discharging at the surface and therefore had the opportunity to interact with the soil minerals it contacted. Isotopic evidence does not preclude a connection between Baker Creek and the water discharging from Rosethorn Spring. The residence time of water discharging into the caves and from selected springs sampled as part of this study ranged from 10 to 25 years.

Within the upper Snake Creek drainage, the results of this study show geochemical similarities between Snake Creek and Outhouse Spring, Spring Creek Spring, and Squirrel Spring Cave. The strontium isotope ratio (87Sr/86Sr) of intrusive rock samples representative of the Snake Creek drainage was similar to that of carbonate rock samples. The water sample collected from Snake Creek at the pipeline discharge point had lower strontium concentrations than the sample downstream and a 87Sr/86Sr value similar to the carbonate and intrusive rocks. The chemistry of this water sample was considered representative of upstream conditions in Snake Creek and indicates minimal influence of rock dissolution. The results of this study suggest that water discharging from Outlet Spring is not hydrologically connected to Snake Creek but rather is recharged at high altitude(s) within the Snake Creek drainage. These findings for Outlet Spring largely stem from its relatively high specific conductance and chloride concentration, the lightest deuterium (δD) and oxygen-18 (δ18O) values, and the longest calculated residence time (60 to 90 years) of any sample collected as part of this study. With the exception of water sampled from Outlet Spring, the residence time of water discharging into Squirrel Spring Cave and from selected springs in the upper Snake Creek drainage was less than 30 years.
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-11-22
This study explores the ability of WorldView-2 (WV-2) imagery to map bamboo in a mountainous region in Sichuan Province, China. A large part of the study area is covered by shadows in the image, and only a few of the derived sample points were useful. In order to identify bamboo from such sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Class separability based on the training data was then calculated using a feature-space optimization method to select the features for classification. Four regular object-based classification methods were applied to both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy; it achieved producer's and user's accuracies of 82.65% and 93.10%, respectively, for the bamboo class. Canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and the resulting bamboo distribution facilitates assessment of giant panda habitat.
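The feature-reduction-plus-classification pipeline is straightforward to prototype; a sketch with scikit-learn follows (the geostatistically-weighted variant and the sample-expansion step are not reproduced, and the band count, labels and data are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pixel samples: 8 WorldView-2 bands, labels 0=other, 1=bamboo.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 8))
y = (X[:, 3] + 0.5 * X[:, 6] > 0).astype(int)   # synthetic class structure

# Reduce the multispectral bands to a few principal components,
# then classify with k nearest neighbors.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=3).fit(X_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
print("overall accuracy:", knn.score(pca.transform(X_te), y_te))
```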
A Microbial Assessment Scheme to measure microbial performance of Food Safety Management Systems.
Jacxsens, L; Kussaga, J; Luning, P A; Van der Spiegel, M; Devlieghere, F; Uyttendaele, M
2009-08-31
A Food Safety Management System (FSMS) implemented in a food processing industry is based on Good Hygienic Practices (GHP) and Hazard Analysis Critical Control Point (HACCP) principles, and should address both food safety control and assurance activities in order to guarantee food safety. An emerging challenge is to assess the performance of an existing FSMS. The objective of this work is to explain the development of a Microbial Assessment Scheme (MAS) as a tool for systematic analysis of microbial counts in order to assess the current microbial performance of an implemented FSMS. It is assumed that low numbers of microorganisms and small variations in microbial counts indicate an effective FSMS. The MAS is a procedure that defines the identification of critical sampling locations, the selection of microbiological parameters, the assessment of sampling frequency, the selection of sampling method and method of analysis, and finally data processing and interpretation. Based on the MAS assessment, microbial safety level profiles can be derived, indicating which microorganisms contribute to food safety, and to what extent, for a specific food processing company. The MAS concept is illustrated with a case study in the pork processing industry, where ready-to-eat meat products are produced (cured, cooked ham and cured, dried bacon).
Performance of a Heating Block System Designed for Studying the Heat Resistance of Bacteria in Foods
NASA Astrophysics Data System (ADS)
Kou, Xiao-Xi; Li, Rui; Hou, Li-Xia; Huang, Zhi; Ling, Bo; Wang, Shao-Jin
2016-07-01
Knowledge of bacteria's heat resistance is essential for developing effective thermal treatments, and choosing an appropriate test method is important for determining it accurately. Although the heating rate within a sample is a major factor influencing the thermo-tolerance of bacteria, it cannot be controlled in water- or oil-bath methods because it depends mainly on the sample's thermal properties. A heating block system (HBS) was designed to regulate the heating rates in liquid, semi-solid and solid foods using a temperature controller. Distilled water, apple juice, mashed potato, almond powder and beef were selected to evaluate the HBS's performance by experiment and computer simulation. The results showed that heating rates of 1, 5 and 10 °C/min, with final set-point temperatures and holding times, could be easily and precisely achieved in the five selected food materials. Good agreement in sample central temperature profiles was obtained between experiment and simulation under various heating rates. The experimental and simulated results showed that the HBS provides a sufficiently uniform heating environment in food samples. The effect of heating rate on bacterial thermal resistance was evaluated with the HBS. The system may hold potential for rapid and accurate assessment of bacterial thermo-tolerance.
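The set-point schedule such a controller follows, a linear ramp at the chosen rate to a target temperature and then a hold, is simple to generate; a sketch (the function and its default values are hypothetical, though the rates mirror those quoted above):

```python
import numpy as np

def setpoint(t_min, start_c=25.0, rate_c_per_min=5.0, target_c=70.0):
    """Set-point temperature (deg C) at time t (min): linear ramp at the
    chosen rate, then hold at the target temperature."""
    t = np.asarray(t_min, dtype=float)
    return np.minimum(start_c + rate_c_per_min * t, target_c)

t = np.linspace(0.0, 20.0, 5)            # 0, 5, 10, 15, 20 min
print(setpoint(t, rate_c_per_min=5.0))   # ramps for 9 min, then holds at 70
```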
Surface water monitoring in the mercury mining district of Asturias (Spain).
Loredo, Jorge; Petit-Domínguez, María Dolores; Ordóñez, Almudena; Galán, María Pilar; Fernández-Martínez, Rodolfo; Alvarez, Rodrigo; Rucandio, María Isabel
2010-04-15
Systematic monitoring of surface waters in the area of abandoned mine sites constitutes an essential step in the characterisation of pollution from historic mine sites. The analytical data collected throughout a hydrologic period can be used for hydrological modelling and also to select appropriate preventive and/or corrective measures to avoid pollution of watercourses. The Caudal River drains the main abandoned Hg mine sites (located in the Mieres and Pola de Lena districts) in Central Asturias (NW Spain). This paper describes systematic monitoring of physical and chemical parameters at eighteen selected sampling points within the Caudal River catchment. At each sampling station, water flow, pH, specific conductance, dissolved oxygen, salinity, temperature, redox potential and turbidity were measured in situ, and major and trace elements were analysed in the laboratory. In the Hg-mineralised areas, As is present in the form of As-rich pyrite, realgar and occasionally arsenopyrite. Mine drainage and leachates from spoil heaps in some cases exhibit acidic conditions and high As contents, and are discharged into Caudal River tributaries. Multivariate statistical analysis aids the interpretation of the spatial and temporal variations found in the sampled areas, as part of a methodology applicable to different environmental and geological studies.
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared with the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite sampling could result in significant cost savings.
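The reported comparison is essentially a two-sample t-test on the two sets of concentrations; a sketch of such a check with SciPy (the data are fabricated stand-ins, and the comparison is made on the log10 scale, a common convention for bacterial counts that the study may or may not have used):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
# Fabricated E. coli data (CFU/100 mL): averaged multi-point samples
# versus composite samples, n = 116 each.
avg_multi = 10 ** rng.normal(2.0, 0.5, 116)
composite = 10 ** rng.normal(2.0, 0.5, 116)

# Bacterial counts are roughly lognormal, so compare on the log10 scale.
t_stat, p_val = ttest_ind(np.log10(avg_multi), np.log10(composite))
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```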
Hraber, Peter; Korber, Bette; Wagh, Kshitij; ...
2015-10-21
Within-host genetic sequencing from samples collected over time provides a dynamic view of how viruses evade host immunity. Immune-driven mutations might stimulate neutralization breadth by selecting antibodies adapted to cycles of immune escape that generate within-subject epitope diversity. Comprehensive identification of immune-escape mutations is experimentally and computationally challenging. With current technology, many more viral sequences can readily be obtained than can be tested for binding and neutralization, making down-selection necessary. Typically, this is done manually, by picking variants that represent different time-points and branches on a phylogenetic tree. Such strategies are likely to miss many relevant mutations and combinations of mutations, and to be redundant for other mutations. Longitudinal Antigenic Sequences and Sites from Intrahost Evolution (LASSIE) uses transmitted founder loss to identify virus “hot-spots” under putative immune selection and chooses sequences that represent recurrent mutations in selected sites. LASSIE favors earliest sequences in which mutations arise. Here, with well-characterized longitudinal Env sequences, we confirmed selected sites were concentrated in antibody contacts and selected sequences represented diverse antigenic phenotypes. Finally, practical applications include rapidly identifying immune targets under selective pressure within a subject, selecting minimal sets of reagents for immunological assays that characterize evolving antibody responses, and for immunogens in polyvalent “cocktail” vaccines.
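The down-selection step, choosing a small set of sequences that together represent every recurrent mutation at the selected sites while favoring the earliest sequence carrying each one, has the flavor of greedy set cover. A toy sketch (the sequence encoding and tie-breaking rule are assumptions, not LASSIE's actual implementation):

```python
# Each sequence is (time_point, set_of_mutations_it_carries); fabricated data.
sequences = [
    (10, {"A101T"}),
    (30, {"A101T", "K160N"}),
    (30, {"G145E"}),
    (60, {"K160N", "G145E", "T332I"}),
]
targets = set().union(*(muts for _, muts in sequences))

selected, covered = [], set()
while covered < targets:
    # Greedy choice: most new mutations covered; earlier time breaks ties.
    best = max(sequences, key=lambda s: (len(s[1] - covered), -s[0]))
    if not best[1] - covered:
        break  # remaining mutations are not coverable
    selected.append(best)
    covered |= best[1]

for t, muts in selected:
    print(f"day {t}: covers {sorted(muts)}")
```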
Air slab-correction for Γ-ray attenuation measurements
NASA Astrophysics Data System (ADS)
Mann, Kulwinder Singh
2017-12-01
Gamma (γ)-ray shielding behaviour (GSB) of a material can be ascertained from its linear attenuation coefficient (μ, cm⁻¹). Narrow-beam transmission geometry is required for μ-measurement: a thin slab of the material is inserted between a point-isotropic γ-ray source and the detector assembly. Accurate measurement requires that the sample's optical thickness (OT) remain below 0.5 mean free path (mfp). Sometimes it is very difficult to produce a thin slab of the sample (absorber); on the other hand, for a thick absorber, i.e. OT > 0.5 mfp, the influence of the air it displaces cannot be ignored during μ-measurements. Thus, for a thick sample, a correction factor has been suggested that compensates for the air present in the transmission geometry, named the air-slab correction (ASC). Six samples of low-Z engineering materials (cement-black, clay, red-mud, lime-stone, cement-white and plaster-of-paris) were selected for investigating the effect of the ASC on μ-measurements at three γ-ray energies (661.66, 1173.24, 1332.50 keV). The measurements were made using point-isotropic γ-ray sources (Cs-137 and Co-60), a NaI(Tl) detector and a multi-channel analyser coupled to a personal computer. Theoretical values of μ were computed using the GRIC2 toolkit (a standardized computer programme). Elemental compositions of the samples were measured with a Wavelength Dispersive X-ray Fluorescence (WDXRF) analyser. Inter-comparison of measured and computed μ-values suggests that applying the ASC helps in precise μ-measurement for thick samples of low-Z materials. Thus, this hitherto widely ignored correction factor is recommended for use in similar γ-ray measurements.
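In narrow-beam transmission geometry the uncorrected coefficient follows from the Beer-Lambert law, μ = ln(I₀/I)/t. One plausible reading of the air-slab correction is that the reference (no-sample) beam traversed the column of air the slab later displaces, so that attenuation must be added back; the sketch below works under that assumption, with fabricated numbers, and is not the paper's exact ASC formula:

```python
import numpy as np

def mu_measured(I0, I, thickness_cm):
    """Uncorrected linear attenuation coefficient from the Beer-Lambert law."""
    return np.log(I0 / I) / thickness_cm

# Fabricated counts for a 3 cm slab at 661.66 keV.
I0, I, t = 120_000.0, 78_000.0, 3.0
mu_air = 1.0e-4   # cm^-1, order of magnitude for air at these energies

# Assumption: the empty-geometry reference beam crossed t cm of air that
# the slab displaces, so that attenuation is added back to the slab's mu.
mu_corr = mu_measured(I0, I, t) + mu_air
print(f"mu uncorrected        = {mu_measured(I0, I, t):.5f} cm^-1")
print(f"mu air-slab corrected = {mu_corr:.5f} cm^-1")
```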
Green, W. Reed; Haggard, Brian E.
2001-01-01
Water-quality sampling consisting of routine bimonthly (every other month) sampling and storm-event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yield. Discharge and nutrient relations indicate that water quality at Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating dilution of phosphorus from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times, and total nitrogen and dissolved nitrite plus nitrate greater than 10 to 100 times, the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with regression models used to estimate nutrient loads. Annual loads of phosphorus and nitrogen estimated using regression techniques can provide results similar to estimates using integration techniques, with much less investment.
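The regression technique amounts to fitting a rating curve, for example ln C = a + b ln Q, on the roughly 35 samples and applying it to the continuous discharge record; a sketch of that idea with a standard retransformation bias correction (Duan's smearing estimator), noting that the report's actual model form is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
# Fabricated calibration samples: discharge Q (m^3/s), concentration C (mg/L).
Q_s = 10 ** rng.uniform(-0.5, 1.5, 35)
C_s = 0.8 * Q_s ** 0.4 * np.exp(rng.normal(0.0, 0.3, 35))

# Fit ln C = a + b ln Q; Duan's smearing factor corrects the bias
# introduced by retransforming from log space.
b, a = np.polyfit(np.log(Q_s), np.log(C_s), 1)
resid = np.log(C_s) - (a + b * np.log(Q_s))
smear = np.mean(np.exp(resid))

# Apply the rating curve to a (fabricated) daily discharge record.
Q_daily = 10 ** rng.uniform(-0.5, 1.5, 365)
C_daily = smear * np.exp(a) * Q_daily ** b         # mg/L
# (mg/L) * (m^3/s) * (s/day) gives grams per day; /1000 converts to kg.
load_kg = np.sum(C_daily * Q_daily * 86400.0) / 1000.0
print(f"estimated annual load: {load_kg:.0f} kg")
```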
Satellite Power Systems (SPS) concept definition study. Volume 2: SPS system requirements
NASA Technical Reports Server (NTRS)
Hanley, G.
1978-01-01
The collected data reflect the level of definition resulting from the evaluation of a broad spectrum of SPS (satellite power system) concepts. As the various concepts matured, these requirements were updated to reflect those identified for the projected satellite system/subsystem point design(s). The study established several candidate concepts, which were presented to provide a basis for the selection of one or two approaches for more comprehensive examination. The two selected concepts were expanded and constitute the selected system point designs. Emphasis was placed on the identified system/subsystem requirements, and information on the selected point design was provided.
Neighborhood sampling: how many streets must an auditor walk?
McMillan, Tracy E; Cubbin, Catherine; Parmenter, Barbara; Medina, Ashley V; Lee, Rebecca E
2010-03-12
This study tested the representativeness of four street segment sampling protocols using the Pedestrian Environment Data Scan (PEDS) in eleven neighborhoods surrounding public housing developments in Houston, TX. The four protocols were: (1) all segments, both residential and arterial, contained within the 400 meter radius buffer from the center point of the housing development (the core) were compared with all segments contained between the 400 meter radius buffer and the 800 meter radius buffer (the ring); all residential segments in the core were compared with (2) 75%, (3) 50%, and (4) 25% samples of randomly selected residential street segments in the core. Analyses were conducted on five key variables: sidewalk presence; ratings of attractiveness and safety for walking; connectivity; and number of traffic lanes. Some differences were found when comparing all street segments, both residential and arterial, in the core to the ring. Findings suggested that sampling 25% of residential street segments within the 400 m radius of a residence sufficiently represents the pedestrian built environment. Conclusions support more cost-effective environmental data collection for physical activity research. PMID:20226052
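The 25% protocol is plain simple random sampling of the core's residential segments; a minimal sketch of the comparison logic on one key variable (the data and the 60% sidewalk rate are invented):

```python
import numpy as np

rng = np.random.default_rng(11)
# Fabricated audit of one neighborhood core: 1 = sidewalk present on segment.
core_segments = rng.binomial(1, 0.6, size=120)   # 120 residential segments

full_rate = core_segments.mean()
sample = rng.choice(core_segments, size=len(core_segments) // 4, replace=False)
print(f"sidewalk presence, full core: {full_rate:.2f}; "
      f"25% sample: {sample.mean():.2f}")
```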
Knacker, T; Schallnaß, H J; Klaschka, U; Ahlers, J
1995-11-01
The criteria for classification and labelling of substances as "dangerous for the environment" agreed upon within the European Union (EU) were applied to two sets of existing chemicals. One set (sample A) consisted of 41 randomly selected compounds listed in the European Inventory of Existing Chemical Substances (EINECS). The other set (sample B) comprised 115 substances listed in Annex I of Directive 67/548/EEC which were classified by the EU Working Group on Classification and Labelling of Existing Chemicals. The aquatic toxicity (fish mortality, Daphnia immobilisation, algal growth inhibition), ready biodegradability and n-octanol/water partition coefficient were measured for sample A by one and the same laboratory. For sample B, the available ecotoxicological data originated from many different sources and were therefore rather heterogeneous. In both samples, algal toxicity was the most sensitive effect parameter for most substances. Furthermore, it was found that classification based on a single aquatic test result differs in many cases from classification based on a complete data set, although a correlation exists between the biological end-points of the aquatic toxicity test systems.
Optimization of pressure probe placement and data analysis of engine-inlet distortion
NASA Astrophysics Data System (ADS)
Walter, S. F.
The purpose of this research is to examine methods by which quantification of inlet flow distortion may be improved. Specifically, this research investigates how data interpolation affects results, how to optimize sampling locations in the flow, and how sensitive the results are to the number of sample locations. The main parameters indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on total pressure distortion, which describes the amount of non-uniformity in the flow as it enters the engine. All engines must tolerate some level of distortion; however, too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data are discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern, yet there is no guidance on how best to convert discrete points into a continuous pattern. Using one experimental 320-probe data set and one 320-point computational data set, with three test runs each, this work compares the pressure results obtained using all 320 points of data from the original sets, both quantitatively and qualitatively, with results derived from selecting 40-grid-point subsets and interpolating to 320 grid points. Each of the two 40-point sets was interpolated to 320 grid points using four different interpolation methods, to establish the best method for interpolating small sets of data into an accurate, continuous contour map. The interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space. Spline interpolation should be used, as it results in the most accurate, precise, and visually correct predictions when compared with results from the full data sets. Researchers were also interested in whether fewer than the recommended 40 probes could be used, especially when placed in areas of high interest, while still obtaining equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and the experimental results from an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes, and a probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations brought the parameters of interest within 10% of the exact solution for almost all cases. For the two-dimensional inlet, the results were not as clear: 80 probes were required to come within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40.
The number of points falling within a 1% tolerance band of the exact solution was counted as the number of good points. The results were normalized for each data set, and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results across all data sets used in this work; however, the results can also be used by directly comparing the number of good points obtained with various numbers of probes. The sensitivity of the results is higher when fewer probes are used and gradually tapers off near 40 probes. There is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
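The Monte Carlo step can be sketched as repeated random draws of probe subsets, recomputing the distortion index each time and asking how often it lands within the tolerance band of the full-set value; the index below is a simple (mean - min)/mean surrogate, not the specific descriptor used in this work, and the pressure data are fabricated:

```python
import numpy as np

rng = np.random.default_rng(5)
p_full = rng.normal(1.0, 0.03, 320)   # fabricated pressure recoveries, 320 probes

def distortion(p):
    """Surrogate distortion index: (mean - min) / mean of total pressure."""
    return (p.mean() - p.min()) / p.mean()

exact = distortion(p_full)
for n_probes in (20, 30, 40):
    trials = np.array([distortion(rng.choice(p_full, n_probes, replace=False))
                       for _ in range(5000)])
    within = np.mean(np.abs(trials - exact) <= 0.01 * exact)
    print(f"{n_probes} probes: {within:.1%} of draws within 1% of the full-set index")
```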
Silver(I) ion-selective membrane based on Schiff base-p-tert-butylcalix[4]arene.
Mahajan, R K; Kumar, M; Sharma, V; Kaur, I
2001-04-01
A PVC membrane electrode for silver(I) ion based on Schiff base-p-tert-butylcalix[4]arene is reported. The electrode works well over a wide range of concentration (1.0 × 10⁻⁵ to 1.0 × 10⁻¹ mol dm⁻³) with a Nernstian slope of 59.7 mV per decade. The electrode shows a fast response time of 20 s and operates in the pH range 1.0-5.6. The sensor can be used for more than 6 months without any divergence in the potential. The selectivity of the electrode was studied and it was found that the electrode exhibits good selectivity for silver ion over some alkali, alkaline earth and transition metal ions. The silver ion-selective electrode was used as an indicator electrode for the potentiometric titration of silver ion in solution using a standard solution of sodium chloride; a sharp potential change occurs at the end-point. The applicability of the sensor to silver(I) ion measurement in water samples spiked with silver nitrate is illustrated.
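The quoted slope can be checked against theory: for a monovalent ion at 25 °C the Nernstian slope is 2.303RT/(zF) ≈ 59.2 mV per decade, close to the 59.7 mV measured. A one-line check in Python:

```python
# Theoretical Nernstian slope for a monovalent cation at 25 degrees C.
R, T, F, z = 8.314, 298.15, 96485.0, 1   # J/(mol K), K, C/mol, charge
slope_mV = 2.303 * R * T / (z * F) * 1000.0
print(f"{slope_mV:.1f} mV per decade")   # ~59.2, vs 59.7 measured
```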
Potential candidate genomic biomarkers of drug induced vascular injury in the rat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmas, Deidre A., E-mail: Deidre.A.Dalmas@gsk.com; Scicchitano, Marshall S., E-mail: Marshall.S.Scicchitano@gsk.com; Mullins, David, E-mail: David.R.Mullins@gsk.com
2011-12-15
Drug-induced vascular injury is frequently observed in rats but the relevance and translation to humans present a hurdle for drug development. Numerous structurally diverse pharmacologic agents have been shown to induce mesenteric arterial medial necrosis in rats, but no consistent biomarkers have been identified. To address this need, a novel strategy was developed in rats to identify genes associated with the development of drug-induced mesenteric arterial medial necrosis. Separate groups (n = 6/group) of male rats were given 28 different toxicants (30 different treatments) for 1 or 4 days with each toxicant given at 3 different doses (low, mid and high) plus corresponding vehicle (912 total rats). Mesentery was collected, frozen and endothelial and vascular smooth muscle cells were microdissected from each artery. RNA was isolated, amplified and Affymetrix GeneChip® analysis was performed on selectively enriched samples, and a novel panel of genes, representing those which showed a dose-responsive pattern for all treatments in which mesenteric arterial medial necrosis was histologically observed, was developed and verified in individual endothelial cell- and vascular smooth muscle cell-enriched samples. Data were confirmed in samples containing mesentery using quantitative real-time RT-PCR (TaqMan™) gene expression profiling. In addition, the performance of the panel was also confirmed using similarly collected samples obtained from a timecourse study in rats given a well established vascular toxicant (Fenoldopam). Although further validation is still required, a novel gene panel has been developed that represents a strategic opportunity that can potentially be used to help predict the occurrence of drug-induced mesenteric arterial medial necrosis in rats at an early stage in drug development. Highlights: • A gene panel was developed to help predict rat drug-induced mesenteric MAN. • A gene panel was identified following treatment of rats with 28 different toxicants. • There was a strong correlation of genes and histologic evidence of mesenteric MAN. • Many genes were also regulated prior to histologic evidence of arterial effects.