Sample records for point sampling methods

  1. Monte Carlo approaches to sampling forested tracts with lines or points

    Treesearch

    Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire

    2001-01-01

    Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...

  2. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
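
    The recursive-subdivision idea in strategy (a) can be illustrated with a short sketch. The code below is not the authors' inspection algorithm; the test surface, flatness tolerance, and recursion depth are all assumptions chosen only to show how subdivision concentrates sample points where the surface departs from a flat facet.

    ```python
    # Illustrative sketch only: recursive triangle subdivision that concentrates
    # sample points where a surface z = f(x, y) deviates from a flat facet.
    # The surface f, the tolerance, and the recursion depth are assumptions.
    import numpy as np

    def f(x, y):
        # hypothetical test surface with a sharp ridge near x = 0
        return np.exp(-20.0 * x**2) + 0.1 * y

    def subdivide(tri, tol, depth, samples):
        """Recursively split a triangle (3x2 array of (x, y) vertices)."""
        centroid = tri.mean(axis=0)
        z_true = f(*centroid)                     # surface height at the centroid
        z_flat = np.mean([f(*v) for v in tri])    # height predicted by a flat facet
        if depth == 0 or abs(z_true - z_flat) < tol:
            samples.append(centroid)              # flat enough: keep one point
            return
        mids = [(tri[i] + tri[(i + 1) % 3]) / 2 for i in range(3)]
        children = [
            np.array([tri[0], mids[0], mids[2]]),
            np.array([tri[1], mids[1], mids[0]]),
            np.array([tri[2], mids[2], mids[1]]),
            np.array(mids),
        ]
        for child in children:
            subdivide(child, tol, depth - 1, samples)

    samples = []
    root = np.array([[-1.0, -1.0], [1.0, -1.0], [0.0, 1.0]])
    subdivide(root, tol=1e-3, depth=6, samples=samples)
    print(len(samples), "sample points, densest near the ridge at x = 0")
    ```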

  3. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    NASA Astrophysics Data System (ADS)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function; this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
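
    As a rough illustration of sequential sampling for an RBF metamodel, the sketch below refits an RBF interpolant and adds its current minimizer as the next sample point. It uses SciPy's RBFInterpolator and omits the paper's extremum/density-function criteria; the test function, bounds, and iteration count are assumptions.

    ```python
    # Illustrative sketch: sequential sampling for an RBF metamodel of an
    # expensive function. Each iteration refits the metamodel and adds its
    # current minimizer as a new sample point. This is only the general idea,
    # not the paper's criteria.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive(x):
        # hypothetical expensive simulation, 1-D for clarity
        return np.sin(3.0 * x) + 0.5 * x**2

    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=(5, 1))      # initial design
    y = expensive(X[:, 0])

    for it in range(10):
        model = RBFInterpolator(X, y)            # fit the metamodel
        # locate the metamodel minimum from the best of a few random restarts
        starts = rng.uniform(-2.0, 2.0, size=(8, 1))
        cands = [minimize(lambda z: model(z.reshape(1, 1))[0], s,
                          bounds=[(-2.0, 2.0)]) for s in starts]
        x_new = min(cands, key=lambda r: r.fun).x.reshape(1, 1)
        if np.min(np.abs(X - x_new)) < 1e-6:
            x_new = x_new + rng.uniform(-0.05, 0.05)   # avoid duplicate points
        X = np.vstack([X, x_new])                # evaluate the true function there
        y = np.append(y, expensive(x_new[0, 0]))

    print("best sampled point:", X[np.argmin(y), 0], "value:", y.min())
    ```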

  5. THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES

    PubMed Central

    Song, Chi; Min, Xiaoyi; Zhang, Heping

    2016-01-01

    Chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single-sample based. Only a few multiple-sample methods have been proposed, using scan statistics that are computationally intensive and designed toward either common or rare change-point detection. In this paper, we propose a novel multiple-sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple-sample methods are superior to single-sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of the proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while its ability to detect CNVs is comparable or better. PMID:28090239
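
    A minimal sketch of a SaRa-style local diagnostic is shown below: at each position the means of the h points to the left and to the right are compared, and local maxima of the statistic above a threshold are screened as candidate change-points. The window size, threshold, and simulated signal are assumptions for illustration, not the paper's tuning.

    ```python
    # Illustrative sketch of a SaRa-style screening statistic: compare the mean
    # of the h points to the left with the mean of the h points to the right;
    # large local maxima suggest change-points.
    import numpy as np

    rng = np.random.default_rng(1)
    signal = np.concatenate([np.zeros(200), 0.8 * np.ones(100), np.zeros(200)])
    y = signal + rng.normal(scale=0.3, size=signal.size)

    h = 30
    stat = np.full(y.size, np.nan)
    for i in range(h, y.size - h):
        stat[i] = abs(y[i:i + h].mean() - y[i - h:i].mean())

    # screening: keep local maxima of the statistic above a threshold
    threshold = 0.4
    candidates = [i for i in range(h, y.size - h)
                  if stat[i] > threshold and stat[i] == np.nanmax(stat[i - h:i + h])]
    print("candidate change-points:", candidates)   # expect values near 200 and 300
    ```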

  6. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been proposed previously. To facilitate implementation of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  7. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  8. Effects of sampling strategy, detection probability, and independence of counts on the use of point counts

    USGS Publications Warehouse

    Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.

  9. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
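
    The contrast between randomly and systematically distributed sample points can be sketched as below. A simple rank-1 lattice stands in for Conroy's closed symmetric pattern, which is not reproduced here; the integrand and lattice generator are arbitrary choices.

    ```python
    # Illustrative comparison of random (Monte Carlo) versus systematically
    # distributed sample points for a 2-D integral over the unit square.
    import numpy as np
    from math import erf, pi, sqrt

    def f(u, v):
        return np.exp(-(u**2 + v**2))                # integrand on the unit square

    exact = (sqrt(pi) / 2 * erf(1.0)) ** 2           # closed-form reference value

    N = 1024
    rng = np.random.default_rng(2)

    # Monte Carlo: sample points drawn uniformly at random
    u, v = rng.random(N), rng.random(N)
    mc = f(u, v).mean()

    # Systematic: rank-1 lattice points (k/N, (k*g mod N)/N)
    g = 377                                          # lattice generator, arbitrary here
    k = np.arange(N)
    lattice = f(k / N, (k * g % N) / N).mean()

    print(f"exact {exact:.6f}   monte-carlo {mc:.6f}   lattice {lattice:.6f}")
    ```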

  10. A field test of point relascope sampling of down coarse woody material in managed stands in the Acadian Forest

    Treesearch

    John C. Brissette; Mark J. Ducey; Jeffrey H. Gove

    2003-01-01

    We field tested a new method for sampling down coarse woody material (CWM) using an angle gauge and compared it with the more traditional line intersect sampling (LIS) method. Permanent sample locations in stands managed with different silvicultural treatments within the Penobscot Experimental Forest (Maine, USA) were used as the sampling locations. Point relascope...

  11. Do sampling methods differ in their utility for ecological monitoring? Comparison of line-point intercept, grid-point intercept, and ocular estimate methods

    USDA-ARS's Scientific Manuscript database

    This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...

  12. Boiling point measurement of a small amount of brake fluid by thermocouple and its application.

    PubMed

    Mogami, Kazunari

    2002-09-01

    This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy that was within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.

  13. Point Intercept (PO)

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON Point Intercept (PO) method is used to assess changes in plant species cover or ground cover for a macroplot. This method uses a narrow-diameter sampling pole or sampling pins, placed at systematic intervals along line transects, to sample within-plot variation and quantify statistically valid changes in plant species cover and height over time. Plant...

  14. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Treesearch

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  15. A new mosaic method for three-dimensional surface

    NASA Astrophysics Data System (ADS)

    Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun

    2011-08-01

    Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the mosaic problem of locally unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds sequentially through random sampling with an additional shape constraint, data normalization of the point clouds, absolute orientation, data denormalization, inlier counting, and so on. After N random sampling trials the largest consensus set is selected, and the model is finally re-estimated using all points in that subset. The minimal subset consists of three non-collinear points forming a triangle, and the shape of the triangle is taken into account during random sample selection to keep the selection reasonable. A new coordinate-system transformation algorithm presented in this paper is used to avoid singularity: the whole rotation between the two coordinate systems is solved by two successive rotations expressed as Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to verify the correctness and validity of the mosaic method. The method has good noise immunity owing to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization is added to the absolute orientation. It is applicable to high-precision measurement of three-dimensional surfaces and to 3-D terrain mosaicking.
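
    A minimal sketch of a RANSAC loop for rigid alignment of two point sets is shown below. It uses the SVD-based (Kabsch) absolute-orientation solution rather than the paper's Euler-angle formulation, and a simple non-collinearity check stands in for the triangle-shape constraint; the toy data and thresholds are assumptions.

    ```python
    # Illustrative RANSAC sketch for rigid alignment of two 3-D point sets.
    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t with Q ~ R @ P + t."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cq - R @ cp

    def ransac_align(P, Q, trials=500, tol=0.05, seed=3):
        rng = np.random.default_rng(seed)
        best_inliers = np.array([], dtype=int)
        for _ in range(trials):
            idx = rng.choice(len(P), size=3, replace=False)
            a, b, c = P[idx]
            if np.linalg.norm(np.cross(b - a, c - a)) < 1e-6:
                continue                          # reject near-collinear samples
            R, t = rigid_fit(P[idx], Q[idx])
            err = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
            inliers = np.where(err < tol)[0]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        return rigid_fit(P[best_inliers], Q[best_inliers])   # re-fit on inliers

    # toy data: a known rotation/translation plus a few gross mismatches
    rng = np.random.default_rng(4)
    P = rng.random((200, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    Q = (R_true @ P.T).T + np.array([0.1, -0.2, 0.05])
    Q[:20] += rng.random((20, 3))                 # outliers / mismatched points
    R_est, t_est = ransac_align(P, Q)
    print("rotation error:", np.linalg.norm(R_est - R_true))
    ```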

  16. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute in a very narrow coastal ecosystem, and was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to identify two internal sources of pollution directly related to terminal activity. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggests strategies to regulate the harbour activities. PMID:29270328

  17. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute in a very narrow coastal ecosystem, and was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to identify two internal sources of pollution directly related to terminal activity. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggests strategies to regulate the harbour activities.

  18. Mapping of bird distributions from point count surveys

    USGS Publications Warehouse

    Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.

  19. Evaluation of the performance of a point-of-care method for total and differential white blood cell count in clozapine users.

    PubMed

    Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H

    2016-12-01

    We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell counts, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte counts and 5-part differentiation were performed by the point-of-care device and by a routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. A capillary sample was also drawn from the volunteers. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between both methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlation between the point-of-care (capillary sample) and routine (venous sample) methods for these cells was 0.772, 0.817, and 0.798, respectively. Only for leukocyte and neutrophil counts was the intra-assay reproducibility sufficient. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In the case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.

  20. A fast learning method for large scale and multi-class samples of SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class SVM (Support Vector Machine) classification based on a binary tree is presented, to address SVM's low learning efficiency when processing large-scale multi-class samples. A bottom-up method is adopted to set up the binary-tree hierarchy, and according to the resulting hierarchy, a sub-classifier learns from the samples corresponding to each node. During learning, several class clusters are generated after a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters containing two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample sets formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, guarantees high classification accuracy, greatly reduces the number of samples, and effectively improves learning efficiency.
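
    The data-reduction idea can be sketched as follows: cluster each class, keep the cluster centers as a reduced training set, and fit the SVM on those centers. scikit-learn's KMeans and SVC are used for convenience; the paper's binary-tree hierarchy and mixture-degree rules are not reproduced, and the dataset and cluster counts are assumptions.

    ```python
    # Illustrative sketch: reduce each class to cluster centers, then train the
    # SVM on the reduced set instead of all samples.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=6000, n_features=10, n_classes=3,
                               n_informative=5, random_state=0)

    centers, labels = [], []
    for cls in np.unique(y):
        km = KMeans(n_clusters=50, n_init=5, random_state=0).fit(X[y == cls])
        centers.append(km.cluster_centers_)          # representative points per class
        labels.append(np.full(50, cls))

    X_red = np.vstack(centers)
    y_red = np.concatenate(labels)

    full = SVC().fit(X, y)                           # baseline trained on all samples
    reduced = SVC().fit(X_red, y_red)                # trained on 150 centers only
    print("full-data accuracy:   ", round(full.score(X, y), 3))
    print("reduced-data accuracy:", round(reduced.score(X, y), 3))
    ```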

  1. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using the presented two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using the Bucher experimental design, with the last design point and the presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.

  2. Experimental Investigations on Subsequent Yield Surface of Pure Copper by Single-Sample and Multi-Sample Methods under Various Pre-Deformation.

    PubMed

    Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci

    2018-02-10

    Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strain conditions, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concave appearance of the measured yield surface, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) for either the single- or multi-sample method, the measured subsequent yield surfaces differ remarkably from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, the test order or the cumulative effect of residual plastic strain from the other test points, whereas these factors strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave, which can be converted to convex for the single-sample method by changing the test sequence; for the multi-sample method, however, the concave phenomenon disappears when a larger offset strain is specified.

  3. Molecular analyses of two bacterial sampling methods in ligature-induced periodontitis in rats.

    PubMed

    Fontana, Carla Raquel; Grecco, Clovis; Bagnato, Vanderlei Salvador; de Freitas, Laura Marise; Boussios, Constantinos I; Soukos, Nikolaos S

    2018-02-01

    The prevalence profile of periodontal pathogens in dental plaque can vary as a function of the detection method; however, the sampling technique may also play a role in determining dental plaque microbial profiles. We sought to determine the bacterial composition by comparing two sampling methods, one well established and a new one proposed here. In this study, a ligature-induced periodontitis model was used in 30 rats. Twenty-seven days later, ligatures were removed and microbiological samples were obtained directly from the ligatures as well as from the periodontal pockets using absorbent paper points. Microbial analysis was performed using DNA probes to a panel of 40 periodontal species in the checkerboard assay. The bacterial composition patterns were similar for both sampling methods. However, detection levels for all species were markedly higher for ligatures compared with paper points. Ligature samples provided more bacterial counts than paper points, suggesting that the technique for induction of periodontitis could also be applied for sampling in rats. Our findings may be helpful in designing studies of induced periodontal disease-associated microbiota.

  4. Strengths and weaknesses of temporal stability analysis for monitoring and estimating grid-mean soil moisture in a high-intensity irrigated agricultural landscape

    NASA Astrophysics Data System (ADS)

    Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.

    2017-01-01

    Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
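
    A sketch of classical TSA ranking is given below: for each location, the relative difference from the grid mean is computed at every time step, and locations are ranked by mean relative difference and its standard deviation. The synthetic data are assumptions, and the stratified (STSA) variant proposed in the paper is not reproduced.

    ```python
    # Illustrative sketch of classical temporal stability analysis (TSA).
    import numpy as np

    rng = np.random.default_rng(5)
    n_points, n_times = 25, 120
    grid_signal = 0.25 + 0.05 * np.sin(np.linspace(0, 6, n_times))   # true grid-mean course
    bias = rng.normal(scale=0.03, size=n_points)                     # per-location offset
    theta = grid_signal[None, :] + bias[:, None] \
            + rng.normal(scale=0.01, size=(n_points, n_times))       # soil moisture series

    grid_mean = theta.mean(axis=0)                   # spatial mean at each time step
    rel_diff = (theta - grid_mean) / grid_mean       # relative difference delta_ij
    mrd = rel_diff.mean(axis=1)                      # mean relative difference per location
    sdrd = rel_diff.std(axis=1)                      # its standard deviation over time

    # the most temporally stable point: small |MRD| and small SDRD
    best = np.argmin(np.abs(mrd) + sdrd)
    print(f"representative sampling point: {best}  MRD={mrd[best]:.3f}  SDRD={sdrd[best]:.3f}")
    ```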

  5. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Preliminary Study on Appearance-Based Detection of Anatomical Point Landmarks in Body Trunk CT Images

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni

    Anatomical point landmarks as most primitive anatomical knowledge are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmark based on appearance models, which include gray-level statistical variations at point landmarks and their surrounding area. The models are built based on results of Principal Component Analysis (PCA) of sample data sets. In addition, we employed generative learning method by transforming ROI of sample data. In this study, we evaluated our method with 24 data sets of body trunk CT images and obtained 95.8 ± 7.3 % of the average sensitivity in 28 landmarks.

  7. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    PubMed

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
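
    The underlying operation, evaluating a spectrum on non-equidistant frequency points without interpolation, can be sketched with a direct (brute-force) nonuniform discrete Fourier sum, which is what an NUFFT computes efficiently. This is only an illustration, not the paper's NUFFT implementation; the field and frequency samples are arbitrary.

    ```python
    # Illustrative sketch: evaluate a field's spectrum on non-equidistant
    # frequency points by a direct nonuniform discrete Fourier sum (O(N*M)).
    import numpy as np

    N = 256
    x = np.arange(N) - N // 2                        # uniform spatial sample positions
    field = np.exp(-(x / 30.0) ** 2).astype(complex) # a smooth test field

    # non-equidistant frequency samples, e.g. a rotated angular-spectrum grid
    rng = np.random.default_rng(6)
    freqs = np.sort(rng.uniform(-0.5, 0.5, size=200))

    # direct nonuniform DFT: F(f_m) = sum_n field[n] * exp(-2j*pi*f_m*x[n])
    spectrum = np.array([(field * np.exp(-2j * np.pi * fm * x)).sum() for fm in freqs])

    # sanity check: the peak should match the FFT peak on the uniform grid
    fft_ref = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(field)))
    print(f"peak |F| nonuniform {np.abs(spectrum).max():.2f} "
          f"vs uniform FFT {np.abs(fft_ref).max():.2f}")
    ```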

  8. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of the thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of the sampling noise level, the average performance of TPS-M compares favorably with that of smoothing TPS. Under the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
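
    For orientation, the sketch below interpolates scattered elevation samples onto a DEM grid with SciPy's RBFInterpolator using the thin-plate-spline kernel. It illustrates plain TPS surface fitting only; the OLS-based knot selection of TPS-M is not reproduced, and the synthetic terrain and smoothing value are assumptions.

    ```python
    # Illustrative sketch: thin plate spline interpolation of scattered elevation
    # samples onto a regular DEM grid.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(7)
    pts = rng.uniform(0, 100, size=(400, 2))                  # scattered (x, y) samples
    elev = np.sin(pts[:, 0] / 15.0) * 10 + 0.05 * pts[:, 1]   # synthetic terrain
    elev += rng.normal(scale=0.5, size=len(elev))             # sampling noise

    tps = RBFInterpolator(pts, elev, kernel='thin_plate_spline', smoothing=1.0)

    # evaluate the spline on a regular DEM grid
    gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))
    dem = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    print("DEM grid:", dem.shape, f"elevation range {dem.min():.1f} to {dem.max():.1f}")
    ```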

  9. Novel method of realizing metal freezing points by induced solidification

    NASA Astrophysics Data System (ADS)

    Ma, C. K.

    1997-07-01

    The freezing point of a pure metal, tf, is the temperature at which the solid and liquid phases are in equilibrium. The purest metal available is actually a dilute alloy. Normally, the liquidus point of a sample, tl, at which the amount of the solid phase in equilibrium with the liquid phase is minute, provides the closest approximation to tf. Thus the experimental realization of tf is a matter of realizing tl. The common method is to cool a molten sample continuously so that it supercools and recalesces. The highest temperature after recalescence is normally the best experimental value of tl. In the realization, supercooling of the sample at the sample container and the thermometer well is desirable for the formation of dual solid-liquid interfaces to thermally isolate the sample and the thermometer. However, the subsequent recalescence of the supercooled sample requires the formation of a certain amount of solid, which is not minute. Obviously, the plateau temperature is not the liquidus point. In this article we describe a method that minimizes supercooling. The condition that provides tl is closely approached so that the latter may be measured. As the temperature of the molten sample approaches the anticipated value of tl, a small solid of the same alloy is introduced into the sample to induce solidification. In general, solidification does not occur as long as the temperature is above or at tl, and occurs as soon as the sample supercools minutely. Thus tl can be obtained, in principle, by observing the temperature at which induced solidification begins. In case the solid is introduced after the sample has supercooled slightly, a slight recalescence results and the subsequent maximum temperature is a close approximation to tl. We demonstrate that the principle of induced solidification is indeed applicable to freezing point measurements by applying it to the design of a copper-freezing-point cell for industrial applications, in which a supercooled sample is reheated and then induced to solidify by the solidification of an auxiliary sample. Further experimental studies are necessary to assess the practical advantages and disadvantages of the induction method.

  10. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  11. Identification of driving network of cellular differentiation from single sample time course gene expression data

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen, et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: a time course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements of our methods.

  12. Correction for slope in point and transect relascope sampling of downed coarse woody debris

    Treesearch

    Goran Stahl; Anna Ringvall; Jeffrey H. Gove; Mark J. Ducey

    2002-01-01

    In this article, the effect of sloping terrain on estimates in point and transect relascope sampling (PRS and TRS, respectively) is studied. With these inventory methods, a wide angle relascope is used either from sample points (PRS) or along survey lines (TRS). Characteristics associated with line-shaped objects on the ground are assessed, e.g., the length or volume...

  13. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    PubMed Central

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm while the image quality is well preserved. PMID:23424608

  14. Study on high-resolution representation of terraces in Shanxi Loess Plateau area

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ma, Lei

    2008-10-01

    A new elevation-point sampling method, the TIN-based Sampling Method (TSM), and a new visualization method, the Elevation Addition Method (EAM), are put forth for representing the typical terraces in the Shanxi loess plateau area. The DEM Feature Points and Lines Classification (DEPLC) put forth by the authors in 2007 is refined for depicting the main path in the study area. The EAM is used to visualize the terraces and the path in the study area. 406 key elevation points and 15 feature constrained lines sampled by this method are used to construct CD-TINs, which depict the terraces and path correctly and effectively. Our case study shows that the new sampling method, TSM, is reasonable and feasible. Complicated micro-terrains such as terraces and paths can be represented with high resolution and high efficiency by use of the refined DEPLC, TSM and CD-TINs, and both the terraces and the main path are visualized very well by use of EAM, even when the terrace height is no more than 1 m.

  15. Statistical approaches to the analysis of point count data: A little extra information can go a long way

    USGS Publications Warehouse

    Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
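
    As one concrete example of the time-removal idea mentioned above, the classic two-interval removal estimator is sketched below; the counts are hypothetical, and this is only one of several estimators the authors discuss.

    ```python
    # Illustrative sketch of the two-interval (removal) estimator: split a point
    # count into two equal intervals and use the drop-off in new detections to
    # estimate detection probability and abundance.
    def removal_estimate(n1, n2):
        """n1 = new birds detected in the first interval, n2 in the second
        (requires n1 > n2)."""
        if n1 <= n2:
            raise ValueError("removal estimator needs n1 > n2")
        p = 1.0 - n2 / n1                  # per-interval detection probability
        N = n1 ** 2 / (n1 - n2)            # estimated birds present at the point
        return p, N

    p_hat, N_hat = removal_estimate(n1=14, n2=6)   # hypothetical example counts
    print(f"detection probability per interval: {p_hat:.2f}, abundance estimate: {N_hat:.1f}")
    ```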

  16. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20 °C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  17. Biological tracer method

    DOEpatents

    Strong-Gunderson, Janet M.; Palumbo, Anthony V.

    1998-01-01

    The present invention is a biological tracer method for characterizing the movement of a material through a medium, comprising the steps of: introducing a biological tracer comprising a microorganism having ice nucleating activity into a medium; collecting at least one sample of the medium from a point removed from the introduction point; and analyzing the sample for the presence of the biological tracer. The present invention is also a method for using a biological tracer as a label for material identification by introducing a biological tracer having ice nucleating activity into a material, collecting a sample of a portion of the labelled material and analyzing the sample for the presence of the biological tracer.

  18. Biological tracer method

    DOEpatents

    Strong-Gunderson, J.M.; Palumbo, A.V.

    1998-09-15

    The present invention is a biological tracer method for characterizing the movement of a material through a medium, comprising the steps of: introducing a biological tracer comprising a microorganism having ice nucleating activity into a medium; collecting at least one sample of the medium from a point removed from the introduction point; and analyzing the sample for the presence of the biological tracer. The present invention is also a method for using a biological tracer as a label for material identification by introducing a biological tracer having ice nucleating activity into a material, collecting a sample of a portion of the labelled material and analyzing the sample for the presence of the biological tracer. 2 figs.

  19. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.

  20. Non-Aqueous Titration Method for Determining Suppressor Concentration in the MCU Next Generation Solvent (NGS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.

    A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore, the TiDG concentration was determined by subtracting the TOA concentration as measured by semi-volatile organic analysis (SVOA) from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison to results obtained using the SVOA TOA subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition, and that this become the primary method for quantifying the TiDG.

  1. 40 CFR 60.74 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... select the sampling site, and the sampling point shall be the centroid of the stack or duct or at a point... the production rate (P) of 100 percent nitric acid for each run. Material balance over the production...

  2. Mapping of Bird Distributions from Point Count Surveys

    Treesearch

    John R. Sauer; Grey W. Pendleton; Sandra Orsillo

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes...

  3. Method and apparatus for millimeter-wave detection of thermal waves for materials evaluation

    DOEpatents

    Gopalsami, Nachappa; Raptis, Apostolos C.

    1991-01-01

    A method and apparatus for generating thermal waves in a sample and for measuring thermal inhomogeneities at subsurface levels using millimeter-wave radiometry. An intensity modulated heating source is oriented toward a narrow spot on the surface of a material sample and thermal radiation in a narrow volume of material around the spot is monitored using a millimeter-wave radiometer; the radiometer scans the sample point-by-point and a computer stores and displays in-phase and quadrature phase components of thermal radiations for each point on the scan. Alternatively, an intensity modulated heating source is oriented toward a relatively large surface area in a material sample and variations in thermal radiation within the full field of an antenna array are obtained using an aperture synthesis radiometer technique.

  4. Automatic initialization for 3D bone registration

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Taylor, Russell H.; Fichtinger, Gabor

    2008-03-01

    In image-guided bone surgery, sample points collected from the surface of the bone are registered to the preoperative CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques are generally very sensitive to the initial alignment of the datasets. Poor initialization significantly increases the chance of getting trapped in local minima. In order to reduce the risk of local minima, the registration is manually initialized by locating the sample points close to the corresponding points on the CT model. In this paper, we present an automatic initialization method that aligns the sample points collected from the surface of the pelvis with a CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a large number of CT scans, as prior knowledge to guide the initial alignment. The mean shape is constant for all registrations and facilitates the inclusion of application-specific information into the registration process. The CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape. This will, in turn, lead to initial alignment of the sample points with the CT model. The experiments using a dry pelvis and two cadavers show that the method can align randomly dislocated datasets closely enough for successful registration. The standard ICP has been used for final registration of the datasets.

  5. Nitric Oxide Measurement Study. Volume II. Probe Methods,

    DTIC Science & Technology

    1980-05-01

    case of the Task I study, it should be pointed out that at lower gas temperatures where much of the study was performed, the mass flow through the...third body as pointed out by Matthews, et al. (1977) but also dependent on the viscosity of the sampled gas for standard commercial units (Folsom and...substantially above the dew point (based on the maximum pressure in the sampling system and the initial water concentration) or (2) sample line and

  6. Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.

    PubMed

    Morrison, Shane A; Luttbeg, Barney; Belden, Jason B

    2016-11-01

    Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling only required two samples to prevent highly inaccurate results and to yield measurements with median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives, due to the reduced frequency of extreme values, especially given uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
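
    The comparison described above can be sketched numerically: a single pulse decaying with a 0.5-d half-life, its true 96-h TWA, grab-sample estimates from two hypothetical discrete schedules, and the ideal integrative estimate (which equals the true TWA by construction). All concentrations and sampling times are assumptions for illustration.

    ```python
    # Illustrative sketch: true 96-h TWA of a decaying pulse versus discrete
    # grab-sample estimates and an ideal integrative (continuous-average) sampler.
    import numpy as np

    half_life_d = 0.5
    k = np.log(2) / half_life_d
    t = np.linspace(0.0, 4.0, 4001)                  # 96 h expressed in days
    conc = 10.0 * np.exp(-k * t)                     # ug/L after a single pulse at t = 0

    true_twa = conc.mean()                           # true 96-h TWA (dense uniform grid)
    integrative_twa = true_twa                       # an ideal integrative sampler
                                                     # averages the whole deployment

    print(f"true / integrative 96-h TWA: {true_twa:.2f} ug/L")
    for grab_times in (np.array([0.1, 1.0, 2.0, 3.0]),
                       np.array([0.5, 1.5, 2.5, 3.5])):
        est = 10.0 * np.exp(-k * grab_times).mean()  # discrete grab-sample estimate
        print("grab samples at days", grab_times, f"-> TWA estimate {est:.2f} ug/L")
    ```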

  7. SPIRAL-SPRITE: a rapid single point MRI technique for application to porous media.

    PubMed

    Szomolanyi, P; Goodyear, D; Balcom, B; Matheson, D

    2001-01-01

    This study presents the application of a new, rapid, single point MRI technique which samples k space with spiral trajectories. The general principles of the technique are outlined along with application to porous concrete samples, solid pharmaceutical tablets and gas phase imaging. Each sample was chosen to highlight specific features of the method.

  8. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    PubMed

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of snail surveys. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method proposed in this study (SOPA) had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.

  9. A new method for mapping multidimensional data to lower dimensions

    NASA Technical Reports Server (NTRS)

    Gowda, K. C.

    1983-01-01

    A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found very economical from the point of view of storage and processing time. It has good dimensionality reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable for either a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to justify the efficacy of the proposed procedure.

  10. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
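    As a rough illustration of the gap-based idea (not the authors' exact gap equation), the sketch below builds a deterministic schedule in which gaps grow sinusoidally with position, and a scale factor is searched so that the requested number of points fits on the Nyquist grid. Grid size, point count and the sine weighting are illustrative assumptions.

```python
import numpy as np

def gap_schedule(grid_size, n_points, scale):
    """Deterministic schedule: gaps grow sinusoidally from ~1 toward 1 + scale."""
    pts, pos = [], 0
    for i in range(n_points):
        if pos >= grid_size:
            break
        pts.append(pos)
        gap = 1 + int(round(scale * np.sin((np.pi / 2) * (i + 0.5) / n_points)))
        pos += max(gap, 1)
    return pts

# Crude search for the largest gap scale that still fits n_points on the grid.
grid, n = 256, 64
scale = next(s for s in np.arange(10.0, 0.0, -0.05)
             if len(gap_schedule(grid, n, s)) == n)
schedule = gap_schedule(grid, n, scale)
print(len(schedule), schedule[:8], "...", schedule[-3:])
```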

  11. A new method of regional CBF measurement using one point arterial sampling based on microsphere model with I-123 IMP SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odano, I.; Takahashi, N.; Ohkubo, M.

    1994-05-01

    We developed a new method for quantitative measurement of rCBF with Iodine-123-IMP based on the microsphere model, which is accurate, simpler, and less invasive than the continuous withdrawal method. IMP is assumed to behave as a chemical microsphere in the brain. Regional CBF is then measured by continuous withdrawal of arterial blood and the microsphere model as follows: F = Cb(t) / [∫Ca(t)dt × N], where F is rCBF (ml/100 g/min), Cb(t) is the brain activity concentration, ∫Ca(t)dt is the total activity of the arterial whole blood withdrawn, and N is the fraction of ∫Ca(t)dt that is true tracer activity. We analyzed 14 patients. A dose of 222 MBq of IMP was injected i.v. over 1 min, and withdrawal of arterial blood was performed from 0 to 5 min (∫Ca(t)dt), after which arterial blood samples (one-point Ca(t)) were obtained at 5, 6, 7, 8, 9, and 10 min. The value of ∫Ca(t)dt was then mathematically inferred from the one-point Ca(t). When we examined the correlation between ∫Ca(t)dt × N and one-point Ca(t), and the % error of one-point Ca(t) compared with ∫Ca(t)dt × N, the minimum % error was 8.1% and the maximum correlation coefficient was 0.943, both obtained at 6 min. We concluded that 6 min is the best time to take the arterial blood sample in the one-point sampling method for estimating ∫Ca(t)dt × N. IMP SPECT studies were performed with a ring-type SPECT scanner. Compared with rCBF measured by the Xe-133 method, a significant correlation was observed for this method (r = 0.773). The one-point Ca(t) method is very easy and quick for measuring rCBF without inserting catheters and without octanol treatment of arterial blood.
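    A tiny worked example of the quoted microsphere-model expression may help fix the arithmetic; all input values are hypothetical, and the one-point variant simply replaces the measured integral with a value inferred from a single arterial sample.

```python
# All input values below are hypothetical, chosen only to show the arithmetic.
brain_activity   = 45.0    # Cb(t): brain activity concentration
integral_ca      = 120.0   # total activity of arterial whole blood withdrawn, 0-5 min
true_tracer_frac = 0.75    # N: fraction of that integral that is true tracer activity

rcbf = brain_activity / (integral_ca * true_tracer_frac)
print(f"F = Cb / (integral(Ca) * N) = {rcbf:.3f} (in the units implied by the inputs)")

# The one-point variant replaces the measured integral with a value inferred
# from a single arterial sample at 6 min, e.g. via a population calibration
# such as integral_ca ~ a * Ca(6 min) + b (a, b hypothetical).
```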

  12. Determination of residual solvents in bulk pharmaceuticals by thermal desorption/gas chromatography/mass spectrometry.

    PubMed

    Urakami, K; Saito, Y; Fujiwara, Y; Watanabe, C; Umemoto, K; Godo, M; Hashimoto, K

    2000-12-01

    Thermal desorption (TD) techniques followed by capillary GC/MS were applied for the analysis of residual solvents in bulk pharmaceuticals. Solvents desorbed from samples by heating were cryofocused at the head of a capillary column prior to GC/MS analysis. This method requires a very small amount of sample and no sample pretreatment. Desorption temperature was set at the point about 20 degrees C higher than the melting point of each sample individually. The relative standard deviations of this method tested by performing six consecutive analyses of 8 different samples were 1.1 to 3.1%, and analytical results of residual solvents were in agreement with those obtained by direct injection of N,N-dimethylformamide solution of the samples into the GC. This novel TD/GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents in bulk pharmaceuticals.

  13. Use of three-point taper systems in timber cruising

    Treesearch

    James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes

    2000-01-01

    Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...

  14. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.

  15. 40 CFR 61.32 - Emission standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... frequency of calibration. (b) Method of sample analysis. (c) Averaging technique for determining 30-day...) Plant and sampling area plots showing emission points and sampling sites. Topographic features...

  16. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  17. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is one of the most popular methods for projecting the impact of climate change on ecosystems. An SDM is basically based on the niche of a certain species, which means that presence point data are essential for finding the biological niche of the species. To run an SDM for plants, there are certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to produce vegetation data over large areas. In other words, the exact location of a presence point carries high uncertainty, because presence data are selected from polygon and raster datasets. Thus, sampling methods for vegetation presence data should be carefully selected. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from the modeling, and we included BioCLIM variables and other environmental variables as input data. As a result, despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: the random sampling method showed the lowest ROC value, while the site-index-based sampling method showed the highest ROC value. In this way, the uncertainties arising from presence data sampling methods and SDMs can be quantified.

  18. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.

  19. Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation

    Treesearch

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine

    2002-01-01

    New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxillary methods must be employed. We describe a two-stage procedure where the...

  20. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  2. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptor of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
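    The per-voxel dimensionality step mentioned above can be sketched with a standard eigenvalue-based classification: PCA of the points inside a voxel, followed by a linear/planar/volumetric label. The feature definitions and the synthetic "wall" voxel below are illustrative assumptions, not the authors' exact criteria.

```python
import numpy as np

def voxel_dimensionality(points):
    """points: (N, 3) coordinates falling inside one voxel."""
    c = points - points.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(c.T))     # ascending eigenvalues
    l1, l2, l3 = eigval[::-1]                        # l1 >= l2 >= l3
    a1d = (np.sqrt(l1) - np.sqrt(l2)) / np.sqrt(l1)  # linearity
    a2d = (np.sqrt(l2) - np.sqrt(l3)) / np.sqrt(l1)  # planarity
    a3d = np.sqrt(l3) / np.sqrt(l1)                  # scatter
    label = ["linear", "planar", "volumetric"][int(np.argmax([a1d, a2d, a3d]))]
    return label, eigvec[:, ::-1]                    # label + principal axes

rng = np.random.default_rng(5)
wall = np.c_[rng.uniform(0, 1, 500),                 # flat patch with tiny thickness
             rng.uniform(0, 1, 500),
             rng.normal(0, 0.01, 500)]
print(voxel_dimensionality(wall)[0])                 # expect "planar"
```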

  3. A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer

    NASA Astrophysics Data System (ADS)

    Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.

    2014-01-01

    The Sunset Semi-Continuous Carbon Analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such a discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the other two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction is therefore to use multi-point-corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.

  4. Methods for point-of-care detection of nucleic acid in a sample

    DOEpatents

    Bearinger, Jane P.; Dugan, Lawrence C.

    2015-12-29

    Provided herein are methods and apparatus for detecting a target nucleic acid in a sample and related methods and apparatus for diagnosing a condition in an individual. The condition is associated with presence of nucleic acid produced by certain pathogens in the individual.

  5. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
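    The greedy incremental idea mentioned in the abstract can be sketched as follows: from a dense pool of candidate unit vectors, repeatedly pick the candidate that maximizes the minimal angular separation (with antipodal symmetry) to the directions already chosen. This is only a simple greedy scheme on assumed random candidates, not the MILP or gradient-descent solvers of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def greedy_scheme(n_dirs, n_candidates=20000):
    cand = random_unit_vectors(n_candidates)
    chosen = [cand[0]]
    # angular distance with antipodal symmetry: arccos(|u . v|)
    min_ang = np.arccos(np.clip(np.abs(cand @ chosen[0]), 0, 1))
    for _ in range(n_dirs - 1):
        k = int(np.argmax(min_ang))            # candidate farthest from the chosen set
        chosen.append(cand[k])
        ang = np.arccos(np.clip(np.abs(cand @ cand[k]), 0, 1))
        min_ang = np.minimum(min_ang, ang)     # update minimal separations
    return np.array(chosen)

dirs = greedy_scheme(30)
sep = np.arccos(np.clip(np.abs(dirs @ dirs.T), 0, 1))
np.fill_diagonal(sep, np.inf)
print("minimal angular separation (deg):", np.degrees(sep.min()))
```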

  6. Point-of-Care Quantitative Measure of Glucose-6-Phosphate Dehydrogenase Enzyme Deficiency.

    PubMed

    Bhutani, Vinod K; Kaplan, Michael; Glader, Bertil; Cotten, Michael; Kleinert, Jairus; Pamula, Vamsee

    2015-11-01

    Widespread newborn screening on a point-of-care basis could prevent bilirubin neurotoxicity in newborns with glucose-6-phosphate dehydrogenase (G6PD) deficiency. We evaluated a quantitative G6PD assay on a digital microfluidic platform by comparing its performance with standard clinical methods. G6PD activity was measured quantitatively by using digital microfluidic fluorescence and the gold-standard fluorescence biochemical test on a convenience sample of 98 discarded blood samples. Twenty-four samples were designated as G6PD deficient. Mean ± SD G6PD activity for normal samples was 9.7 ± 2.8 U/g hemoglobin (Hb) with the digital microfluidic method and 11.1 ± 3.0 U/g Hb with the standard method; for G6PD-deficient samples, it was 0.8 ± 0.7 and 1.4 ± 0.9 U/g Hb, respectively. Bland-Altman analysis determined a mean difference of -0.96 ± 1.8 U/g Hb between the digital microfluidic fluorescence results and the standard biochemical test results. The lower and upper limits for the digital microfluidic platform were 4.5 to 19.5 U/g Hb for normal samples and 0.2 to 3.7 U/g Hb for G6PD-deficient samples. The lower and upper limits for the Stanford method were 5.5 to 20.7 U/g Hb for normal samples and 0.1 to 2.8 U/g Hb for G6PD-deficient samples. The measured activity discriminated between G6PD-deficient samples and normal samples with no overlap. Pending further validation, a digital microfluidics platform could be an accurate point-of-care screening tool for rapid newborn G6PD screening. Copyright © 2015 by the American Academy of Pediatrics.

  7. Composite analysis for Escherichia coli at coastal beaches

    USGS Publications Warehouse

    Bertke, E.E.

    2007-01-01

    At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
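    The premise that compositing equal aliquots is equivalent to averaging point concentrations is simple volume-weighted arithmetic; the tiny check below uses made-up counts.

```python
point_conc = [120.0, 480.0, 90.0]   # E. coli density at three beach points (CFU/100 mL)
aliquot_ml = 100.0                  # equal volume of water taken from each point

total_cfu = sum(c * aliquot_ml / 100.0 for c in point_conc)
composite_conc = total_cfu / (len(point_conc) * aliquot_ml) * 100.0
arithmetic_mean = sum(point_conc) / len(point_conc)
print(composite_conc, arithmetic_mean)   # both 230.0 CFU/100 mL
```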

  8. A comparison of cover calculation techniques for relating point-intercept vegetation sampling to remote sensing imagery

    USDA-ARS?s Scientific Manuscript database

    Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitativ...

  9. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC leads to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
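    The hypothesis-selection and Bayes-update loop described above can be sketched on a toy 2-D line-fitting problem (rather than the authors' reflectance-image registration); the likelihood values, acceptance threshold and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, n)           # inliers on a line
out = rng.random(n) < 0.3                             # ~30% gross outliers
y[out] += rng.uniform(5, 25, out.sum()) * rng.choice([-1, 1], out.sum())

p = np.full(n, 0.5)                                   # prior inlier probabilities
threshold = 1.0
for _ in range(100):
    noise = rng.uniform(0, 1e-6, n)                   # break ties among equal priors
    hyp = np.argsort(p + noise)[-2:]                  # most probable points form the hypothesis
    a, b = np.polyfit(x[hyp], y[hyp], 1)              # candidate model
    inlier = np.abs(y - (a * x + b)) < threshold
    if inlier.mean() > 0.6:                           # hypothesis accepted
        break
    # Hypothesis failed: a simplified Bayes update lowers the probability of
    # the points that produced it. Likelihood values are hypothetical.
    p_fail_in, p_fail_out = 0.3, 0.9
    p_fail = p_fail_in * p[hyp] + p_fail_out * (1 - p[hyp])
    p[hyp] = p_fail_in * p[hyp] / p_fail

a, b = np.polyfit(x[inlier], y[inlier], 1)            # refit on the accepted inliers
print(f"estimated line: y = {a:.2f} x + {b:.2f} (true: y = 2.00 x + 1.00)")
```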

  10. H-point standard additions method for simultaneous determination of sulfamethoxazole and trimethoprim in pharmaceutical formulations and biological fluids with simultaneous addition of two analytes

    NASA Astrophysics Data System (ADS)

    Givianrad, M. H.; Saber-Tehrani, M.; Aberoomand-Azar, P.; Mohagheghian, M.

    2011-03-01

    The applicability of the H-point standard additions method (HPSAM) to resolving the overlapping spectra of sulfamethoxazole and trimethoprim is verified by UV-vis spectrophotometry. The results show that the H-point standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. Applying the method showed that the two drugs could be determined simultaneously at concentration ratios of sulfamethoxazole to trimethoprim varying from 1:18 to 16:1 in the mixed samples. The limits of detection were 0.58 and 0.37 μmol L-1 for sulfamethoxazole and trimethoprim, respectively. In addition, the mean calculated RSD (%) values in synthetic mixtures were 1.63 and 2.01 for SMX and TMP, respectively. The proposed method has been successfully applied to the simultaneous determination of sulfamethoxazole and trimethoprim in some synthetic, pharmaceutical formulation and biological fluid samples.

  11. H-point standard additions method for simultaneous determination of sulfamethoxazole and trimethoprim in pharmaceutical formulations and biological fluids with simultaneous addition of two analytes.

    PubMed

    Givianrad, M H; Saber-Tehrani, M; Aberoomand-Azar, P; Mohagheghian, M

    2011-03-01

    The applicability of the H-point standard additions method (HPSAM) to resolving the overlapping spectra of sulfamethoxazole and trimethoprim is verified by UV-vis spectrophotometry. The results show that the H-point standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. Applying the method showed that the two drugs could be determined simultaneously at concentration ratios of sulfamethoxazole to trimethoprim varying from 1:18 to 16:1 in the mixed samples. The limits of detection were 0.58 and 0.37 μmol L(-1) for sulfamethoxazole and trimethoprim, respectively. In addition, the mean calculated RSD (%) values in synthetic mixtures were 1.63 and 2.01 for SMX and TMP, respectively. The proposed method has been successfully applied to the simultaneous determination of sulfamethoxazole and trimethoprim in some synthetic, pharmaceutical formulation and biological fluid samples. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows

    Treesearch

    Thomas B. Lynch; David Hamlin; Mark J. Ducey

    2016-01-01

    Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...

  13. Aeromechanics and Vehicle Configuration Demonstrations. Volume 2: Understanding Vehicle Sizing, Aeromechanics and Configuration Trades, Risks, and Issues for Next-Generations Access to Space Vehicles

    DTIC Science & Technology

    2014-01-01

    and proportional correctors. The weighting function evaluates nearby data samples to determine the utility of each correction style , eliminating the...sparse methods may be of use. As for other multi-fidelity techniques, true cokriging in the style described by geo-statisticians[93] is beyond the...sampling style between sampling points predicted to fall near the contour and sampling points predicted to be farther from the contour but with

  14. Automated Parameter Studies Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosimis, Michael J.; Nemec, Marian

    2004-01-01

    Computational Fluid Dynamics (CFD) is now routinely used to analyze isolated points in a design space by performing steady-state computations at fixed flight conditions (Mach number, angle of attack, sideslip) for a fixed geometric configuration of interest. This "point analysis" provides detailed information about the flowfield, which aids an engineer in understanding, or correcting, a design. A point analysis is typically performed using high-fidelity methods at a handful of critical design points, e.g. a cruise or landing configuration, or a sample of points along a flight trajectory.

  15. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...

  16. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.

  17. Convex Hull Aided Registration Method (CHARM).

    PubMed

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2017-09-01

    Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.

  18. Evaluation of the 5 and 8 pH point titration methods for monitoring anaerobic digesters treating solid waste.

    PubMed

    Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P

    2015-01-01

    Simple titration methods certainly deserve consideration for on-site routine monitoring of volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration as well as samples from anaerobic digesters treating three different kind of solid wastes were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimations of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimations, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.

  19. A new point contact surface acoustic wave transducer for measurement of acoustoelastic effect of polymethylmethacrylate.

    PubMed

    Lee, Yung-Chun; Kuo, Shi Hoa

    2004-01-01

    A new acoustic transducer and measurement method have been developed for precise measurement of surface wave velocity. This measurement method is used to investigate the acoustoelastic effects for waves propagating on the surface of a polymethylmethacrylate (PMMA) sample. The transducer uses two miniature conical PZT elements for acoustic wave transmitter and receiver on the sample surface; hence, it can be viewed as a point-source/point-receiver transducer. Acoustic waves are excited and detected with the PZT elements, and the wave velocity can be accurately determined with a cross-correlation waveform comparison method. The transducer and its measurement method are particularly sensitive and accurate in determining small changes in wave velocity; therefore, they are applied to the measurement of acoustoelastic effects in PMMA materials. Both the surface skimming longitudinal wave and Rayleigh surface wave can be simultaneously excited and measured. With a uniaxial-loaded PMMA sample, both acoustoelastic effects for surface skimming longitudinal wave and Rayleigh waves of PMMA are measured. The acoustoelastic coefficients for both types of surface wave motions are simultaneously determined. The transducer and its measurement method provide a practical way for measuring surface stresses nondestructively.
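    The waveform-comparison step boils down to estimating an arrival-time shift by cross-correlation and converting it into a velocity change. The sketch below does this on synthetic tone bursts; the sampling rate, propagation distance and wave speeds are made-up values, not measurements from the paper.

```python
import numpy as np

fs = 500e6                                  # sampling rate (Hz), hypothetical
t = np.arange(0.0, 12e-6, 1 / fs)
d = 0.02                                    # source-receiver distance (m), hypothetical

def burst(t0):
    """Synthetic 1 MHz tone burst arriving at time t0."""
    env = np.exp(-((t - t0) / 1e-6) ** 2)
    return env * np.sin(2 * np.pi * 1e6 * (t - t0))

v0, v1 = 2700.0, 2697.0                     # made-up unloaded / loaded wave speeds (m/s)
sig0, sig1 = burst(d / v0), burst(d / v1)

# Cross-correlate and take the lag of the peak as the arrival-time shift.
corr = np.correlate(sig1, sig0, mode="full")
lag = (np.argmax(corr) - (len(sig0) - 1)) / fs
v1_est = d / (d / v0 + lag)
print(f"time shift = {lag * 1e9:.1f} ns, estimated velocity = {v1_est:.1f} m/s")
```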

  20. Advantage of population pharmacokinetic method for evaluating the bioequivalence and accuracy of parameter estimation of pidotimod.

    PubMed

    Huang, Jihan; Li, Mengying; Lv, Yinghua; Yang, Juan; Xu, Ling; Wang, Jingjing; Chen, Junchao; Wang, Kun; He, Yingchun; Zheng, Qingshan

    2016-09-01

    This study aimed to explore the accuracy of the population pharmacokinetic method in evaluating the bioequivalence of pidotimod with sparse data profiles, and whether this method is suitable for bioequivalence evaluation in special populations, such as children, with fewer samplings. Methods: In this single-dose, two-period crossover study, 20 healthy male Chinese volunteers were randomized 1 : 1 to receive either the test or reference formulation, with a 1-week washout before receiving the alternative formulation. Noncompartmental and population compartmental pharmacokinetic analyses were conducted. Simulated data were analyzed to graphically evaluate the model and the pharmacokinetic characteristics of the two pidotimod formulations. Various sparse sampling scenarios were generated from the real bioequivalence clinical trial data and evaluated by the population pharmacokinetic method. The 90% confidence intervals (CIs) for AUC0-12h, AUC0-∞, and Cmax were 97.3 - 118.7%, 96.9 - 118.7%, and 95.1 - 109.8%, respectively, within the 80 - 125% range for bioequivalence using noncompartmental analysis. The population compartmental pharmacokinetics of pidotimod were described using a one-compartment model with first-order absorption and lag time. In the comparison of estimations from different datasets, the random three-point and fixed four-point sampling strategies provided results similar to those obtained through rich sampling. The nonlinear mixed-effects model requires fewer data points. Moreover, compared with the noncompartmental analysis method, the pharmacokinetic parameters can be estimated more accurately using the nonlinear mixed-effects model. The population pharmacokinetic modeling method was used to assess the bioequivalence of the two pidotimod formulations with relatively few sampling points and further validated the bioequivalence of the two formulations. This method may provide useful information for regulating bioequivalence evaluation in special populations.
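    The structural model named in the abstract, a one-compartment model with first-order absorption and a lag time, has a closed-form concentration profile; the sketch below evaluates it with placeholder parameters (not the pidotimod estimates) and samples it at a hypothetical fixed four-point schedule.

```python
import numpy as np

def conc_1cmt_oral(t, dose, ka, ke, V, tlag):
    """Plasma concentration after a single oral dose (first-order absorption, lag)."""
    tt = np.maximum(t - tlag, 0.0)
    return (dose * ka / (V * (ka - ke))) * (np.exp(-ke * tt) - np.exp(-ka * tt))

t = np.linspace(0, 12, 121)                  # hours after dosing
c = conc_1cmt_oral(t, dose=800, ka=1.5, ke=0.3, V=50.0, tlag=0.25)
print("Cmax ~", round(float(c.max()), 2), "at t =", float(t[c.argmax()]), "h")

# A hypothetical fixed four-point sparse design, as compared in the study.
for tp in (0.5, 1.0, 2.0, 6.0):
    ci = conc_1cmt_oral(np.array(tp), 800, 1.5, 0.3, 50.0, 0.25)
    print(f"C({tp} h) = {float(ci):.2f}")
```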

  1. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  2. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics, suggesting that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.

  3. Non-uniform sampling: post-Fourier era of NMR data collection and processing.

    PubMed

    Kazimierczuk, Krzysztof; Orekhov, Vladislav

    2015-11-01

    The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high-dimensional space. These methods often share the first step, which defines the neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded into a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pairs shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low-dimensional configurations of high-dimensional data under various data distributions.

  5. A comparison of four porewater sampling methods for metal mixtures and dissolved organic carbon and the implications for sediment toxicity evaluations

    USGS Publications Warehouse

    Cleveland, Danielle; Brumbaugh, William G.; MacDonald, Donald D.

    2017-01-01

    Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater.

  6. Distortion correction of echo planar images applying the concept of finite rate of innovation to point spread function mapping (FRIP).

    PubMed

    Nunes, Rita G; Hajnal, Joseph V

    2018-06-01

    Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.

  7. A distance limited method for sampling downed coarse woody debris

    Treesearch

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2012-01-01

    A new sampling method for down coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...

  8. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  9. Estimate Soil Erodibility Factors Distribution for Maioli Block

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Ying

    2014-05-01

    The natural conditions in Taiwan are harsh. Because of steep slopes, fast-flowing rivers and fragile geology, soil erosion has become a serious problem. It not only degrades sloping landscapes but also causes sediment disasters such as reservoir sedimentation and river obstruction. Predicting and controlling the amount of soil erosion has therefore become an important research topic. The soil erodibility factor (K) is a quantitative index of a soil's ability to resist erosional detachment and transport. Erodibility factors for 280 Taiwan soil samples were calculated by Wann and Huang (1989) using the Wischmeier and Smith nomograph. In this study, 221 samples were collected in the Maioli block in Miaoli. The coordinates of every sample point and the land use conditions were recorded, and the physical properties of each sample were analyzed. Three estimation methods, Kriging, Inverse Distance Weighted (IDW) and Spline, were applied to estimate the distribution of soil erodibility factors for the Maioli block using 181 points, with the remaining 40 points reserved for validation. SPSS regression analysis was then used to compare the accuracy of the three methods on the training and validation data, so that the best method could be determined. In the future, this method can be used to predict soil erodibility factors in other areas.

  10. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  11. A Robust False Matching Points Detection Method for Remote Sensing Image Registration

    NASA Astrophysics Data System (ADS)

    Shan, X. J.; Tang, P.

    2015-04-01

    Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points. However, the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained from its K nearest matching points, and the error of each matching point is computed using its local transformation model. Finally, the L matching points with the largest errors are identified as false matching points and removed. This process is iterated until all errors are smaller than the given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiment. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
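    A minimal sketch of the KGD idea described above, assuming NumPy; the parameter names (k, L, threshold) are illustrative choices, not values from the paper. A local affine model is fitted for each match from its K nearest matches, each match is scored by its residual under that local model, and the L worst matches are removed iteratively until every residual falls below the threshold.

    ```python
    import numpy as np

    def kgd_filter(src, dst, k=8, L=2, threshold=1.0, max_iter=100):
        """Iteratively remove false matches using local affine models fitted
        from each point's K nearest neighbours (a sketch of the KGD idea)."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        keep = np.arange(len(src))
        for _ in range(max_iter):
            if len(keep) <= k + 1:
                break
            s, d = src[keep], dst[keep]
            errors = np.empty(len(s))
            for i in range(len(s)):
                # K nearest matching points in the first image (excluding the point itself)
                dists = np.linalg.norm(s - s[i], axis=1)
                nn = np.argsort(dists)[1:k + 1]
                # local affine model: [x y 1] @ A ~= [x' y']
                A_src = np.hstack([s[nn], np.ones((len(nn), 1))])
                A, *_ = np.linalg.lstsq(A_src, d[nn], rcond=None)
                pred = np.hstack([s[i], 1.0]) @ A
                errors[i] = np.linalg.norm(pred - d[i])
            if errors.max() <= threshold:
                break
            # drop the L matches with the largest local-model error
            worst = np.argsort(errors)[-L:]
            keep = np.delete(keep, worst)
        return keep  # indices of matches retained as inliers
    ```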

  12. Geochemical, aeromagnetic, and generalized geologic maps showing distribution and abundance of molybdenum and zinc, Golconda and Iron Point quadrangles, Humboldt County, Nevada

    USGS Publications Warehouse

    Erickson, R.L.; Marsh, S.P.

    1972-01-01

    This series of maps shows the distribution and abundance of mercury, arsenic, antimony, tungsten, gold, copper, lead, and silver related to a geologic and aeromagnetic base in the Golconda and Iron Point 7½-minute quadrangles. All samples are rock samples; most are from shear or fault zones, fractures, jasperoid, breccia reefs, and altered rocks. All the samples were prepared and analyzed in truck-mounted laboratories at Winnemucca, Nevada. Arsenic, tungsten, copper, lead, and silver were determined by semiquantitative spectrographic methods by D.F. Siems and E.F. Cooley. Mercury and gold were determined by atomic absorption methods and antimony was determined by wet chemical methods by R.M. O'Leary, M.S. Erickson, and others.

  13. Development of spatial scaling technique of forest health sample point information

    NASA Astrophysics Data System (ADS)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

    Most forest health assessments are limited to monitoring at sampling sites. Forest health monitoring in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the data managed in an Oracle database. The forest health assessment in GreatBay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on evaluation of forest health by tree species, diameter at breast height, water pipe and density in summer and fall of 200. In the case of Korea, the first evaluation report on forest health vitality placed 1000 sample points in the forests on a systematic 4 km × 4 km grid and surveyed 29 items in four categories: tree health, vegetation, soil, and atmosphere. Existing research has thus relied on monitoring at the survey sample points, which makes it difficult to collect the information needed to support customized policies for regional sites. Special forests such as urban forests and major forests require policy and management appropriate to their characteristics, so the survey points used for the diagnosis and evaluation of customized forest health need to be expanded. For this reason, we construct a spatial scaling method through spatial interpolation according to the characteristics of each of the 29 indices in the sample point table of the first forest health vitality diagnosis and evaluation report. PCA and correlation analysis are conducted to construct significant indicators, weights are then selected for each index, and forest health is evaluated through statistical grading.

  14. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
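    The SILHS implementation itself is not reproduced here, but the core idea of prescribing sample densities per subgrid category and weighting each category by its area fraction can be sketched as below; the category names, samplers and rate function are purely hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stratified_estimate(category_probs, category_samplers, rate_fn, n_per_cat):
        """Estimate a grid-box mean rate by drawing a prescribed number of
        sample points from each subgrid category and weighting each category
        by its area fraction (a toy version of this kind of importance
        sampling, not the SILHS code)."""
        total = 0.0
        for cat, p_cat in category_probs.items():
            if n_per_cat[cat] == 0 or p_cat == 0.0:
                continue
            x = category_samplers[cat](n_per_cat[cat])   # points drawn inside this category
            total += p_cat * np.mean(rate_fn(x))          # p_c * E[f | category c]
        return total

    # Hypothetical two-category example: "cloudy" vs "clear" portions of a grid box.
    category_probs = {"cloudy": 0.3, "clear": 0.7}
    category_samplers = {
        "cloudy": lambda n: rng.normal(1.0, 0.2, n),   # e.g. cloud water samples
        "clear":  lambda n: np.zeros(n),
    }
    rate_fn = lambda q: q ** 2                          # a nonlinear "microphysics" rate
    # Put most sample points where the rate actually varies (the cloudy portion).
    print(stratified_estimate(category_probs, category_samplers, rate_fn,
                              {"cloudy": 50, "clear": 1}))
    ```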

  15. Guidelines for the detection of Trichinella larvae at the slaughterhouse in a quality assurance system.

    PubMed

    Rossi, Patrizia; Pozio, Edoardo

    2008-01-01

    The European Community Regulation (EC) No. 2075/2005 lays down specific rules on official controls for the detection of Trichinella in fresh meat for human consumption, recommending the pooled-sample digestion method as the reference method. The aim of this document is to provide specific guidance on implementing an appropriate Trichinella digestion method in a laboratory accredited according to the ISO/IEC 17025:2005 international standard and performing microbiological testing following the EA-04/10:2002 international guideline. Technical requirements for the correct implementation of the method, such as personnel competence, specific equipment and reagents, validation of the method, reference materials, sampling, quality assurance of results and quality control of performance, are provided, pointing out the critical control points for the correct implementation of the digestion method.

  16. Direct sampling for stand density index

    Treesearch

    Mark J. Ducey; Harry T. Valentine

    2008-01-01

    A direct method of estimating stand density index in the field, without complex calculations, would be useful in a variety of silvicultural situations. We present just such a method. The approach uses an ordinary prism or other angle gauge, but it involves deliberately "pushing the point" or, in some cases, "pulling the point." This adjusts the...

  17. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
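    As a rough illustration of a model-to-cloud distance of the kind used above (an unweighted stand-in for DistMC, not the authors' weighting), a nearest-neighbour query against a KD-tree of the point cloud suffices:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_model_to_cloud_distance(model_points, cloud_points):
        """Average nearest-neighbour distance from points sampled on the model
        to the point cloud (an unweighted stand-in for a DistMC-style measure)."""
        tree = cKDTree(np.asarray(cloud_points, float))
        d, _ = tree.query(np.asarray(model_points, float))
        return float(d.mean())

    # Toy usage with random 3D points standing in for a sampled model and a scan.
    rng = np.random.default_rng(1)
    model = rng.random((500, 3))
    cloud = rng.random((2000, 3))
    print(mean_model_to_cloud_distance(model, cloud))
    ```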

  18. Open-loop measurement of data sampling point for SPM

    NASA Astrophysics Data System (ADS)

    Wang, Yueyu; Zhao, Xuezeng

    2006-03-01

    A Scanning Probe Microscope (SPM) provides "three-dimensional images" with nanometer-level resolution, and some SPMs can be used as metrology tools. However, SPM images are commonly distorted by non-ideal properties of the SPM's piezoelectric scanner, which reduces metrological accuracy and data repeatability. In order to eliminate this limitation, an "open-loop sampling" method is presented. In this method, the positional values of the sampling points in all three directions on the surface of the sample are measured by the position sensor and recorded in the SPM's image file, which replaces the image file from a conventional SPM. Because the positions in the X and Y directions are measured at the same time as the height information in the Z direction, the image distortion caused by scanner locating errors can be reduced by a proper image processing algorithm.

  19. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.

  20. Apparatus for point-of-care detection of nucleic acid in a sample

    DOEpatents

    Bearinger, Jane P.; Dugan, Lawrence C.

    2016-04-19

    Provided herein are methods and apparatus for detecting a target nucleic acid in a sample and related methods and apparatus for diagnosing a condition in an individual. The condition is associated with presence of nucleic acid produced by certain pathogens in the individual.

  1. Protocol for monitoring forest-nesting birds in National Park Service parks

    USGS Publications Warehouse

    Dawson, Deanna K.; Efford, Murray G.

    2013-01-01

    These documents detail the protocol for monitoring forest-nesting birds in National Park Service parks in the National Capital Region Network (NCRN). In the first year of sampling, counts of birds should be made at 384 points on the NCRN spatially randomized grid, developed to sample terrestrial resources. Sampling should begin on or about May 20 and continue into early July; on each day the sampling period begins at sunrise and ends five hours later. Each point should be counted twice, once in the first half of the field season and once in the second half, with visits made by different observers, balancing the within-season coverage of points and their spatial coverage by observers, and allowing observer differences to be tested. Three observers, skilled in identifying birds of the region by sight and sound and with previous experience in conducting timed counts of birds, will be needed for this effort. Observers should be randomly assigned to ‘routes’ consisting of eight points, in close proximity and, ideally, in similar habitat, that can be covered in one morning. Counts are 10 minutes in length, subdivided into four 2.5-min intervals. Within each time interval, new birds (i.e., those not already detected) are recorded as within or beyond 50 m of the point, based on where first detected. Binomial distance methods are used to calculate annual estimates of density for species. The data are also amenable to estimation of abundance and detection probability via the removal method. Generalized linear models can be used to assess between-year changes in density estimates or unadjusted count data. This level of sampling is expected to be sufficient to detect a 50% decline in 10 years for approximately 50 bird species, including 14 of 19 species that are priorities for conservation efforts, if analyses are based on unadjusted count data, and for 30 species (6 priority species) if analyses are based on density estimates. The estimates of required sample sizes are based on the mean number of individuals detected per 10 minutes in available data from surveys in three NCRN parks. Once network-wide data from the first year of sampling are available, this and other aspects of the protocol should be re-assessed, and changes made as desired or necessary before the start of the second field season. Thereafter, changes should not be made to the field methods, and sampling should be conducted annually for at least ten years. NCRN staff should keep apprised of new analytical methods developed for analysis of point-count data.

  2. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point.
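    A small sketch of the quantity under study, assuming the usual moderated-regression model y = b0 + b1*x + b2*z + b3*x*z, for which the two simple regression lines cross at x* = -b2/b3, together with one of the six interval methods compared above (the percentile bootstrap); the data-generating values below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def crossover(x, z, y):
        """Fit y = b0 + b1*x + b2*z + b3*x*z by least squares and return the
        crossover point x* = -b2/b3 of the two simple regression lines."""
        X = np.column_stack([np.ones_like(x), x, z, x * z])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        return -b[2] / b[3]

    def percentile_bootstrap_ci(x, z, y, n_boot=2000, alpha=0.05):
        """Percentile bootstrap confidence interval for the crossover point."""
        n = len(y)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            stats.append(crossover(x[idx], z[idx], y[idx]))
        lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    # Simulated data with a disordinal interaction (true crossover at x = 0.5).
    n = 600
    x = rng.normal(size=n)
    z = rng.integers(0, 2, n).astype(float)
    y = 1.0 + 0.5 * x - 0.4 * z + 0.8 * x * z + rng.normal(scale=1.0, size=n)
    print(crossover(x, z, y), percentile_bootstrap_ci(x, z, y))
    ```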

  3. A proof of the Woodward-Lawson sampling method for a finite linear array

    NASA Technical Reports Server (NTRS)

    Somers, Gary A.

    1993-01-01

    An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.

  4. Comparison of point-of-care-compatible lysis methods for bacteria and viruses.

    PubMed

    Heiniger, Erin K; Buser, Joshua R; Mireles, Lillian; Zhang, Xiaohong; Ladd, Paula D; Lutz, Barry R; Yager, Paul

    2016-09-01

    Nucleic acid sample preparation has been an especially challenging barrier to point-of-care nucleic acid amplification tests in low-resource settings. Here we provide a head-to-head comparison of methods for lysis of, and nucleic acid release from, several pathogenic bacteria and viruses-methods that are adaptable to point-of-care usage in low-resource settings. Digestion with achromopeptidase, a mixture of proteases and peptidoglycan-specific hydrolases, followed by thermal deactivation in a boiling water bath, effectively released amplifiable nucleic acid from Staphylococcus aureus, Bordetella pertussis, respiratory syncytial virus, and influenza virus. Achromopeptidase was functional after dehydration and reconstitution, even after eleven months of dry storage without refrigeration. Mechanical lysis methods proved to be effective against a hard-to-lyse Mycobacterium species, and a miniature bead-mill, the AudioLyse, is shown to be capable of releasing amplifiable DNA and RNA from this species. We conclude that point-of-care-compatible sample preparation methods for nucleic acid tests need not introduce amplification inhibitors, and can provide amplification-ready lysates from a wide range of bacterial and viral pathogens. Copyright © 2016. Published by Elsevier B.V.

  5. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    PubMed

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  6. An RBF-FD closest point method for solving PDEs on surfaces

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ling, L.; Ruuth, S. J.

    2018-10-01

    Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.

  7. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System

    PubMed Central

    Sun, Chenglu; Li, Wei; Chen, Wei

    2017-01-01

    For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilized a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the pressure signal from all the pressure sensors embedded in the smart mat. In order to reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. By utilizing the CS-based method, 40% of the sampling time can be saved by acquiring nearly one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed respiratory waveform reached 95.54%. The experimental results demonstrated that the novel method can fit the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
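    The paper's specific CS reconstruction is not described in the abstract; as a generic stand-in, a sparse signal can be recovered from roughly one-third of the measurements with orthogonal matching pursuit, as sketched below (all sizes and the random sensing matrix are illustrative).

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        """Orthogonal Matching Pursuit: recover a sparse coefficient vector x
        with A @ x ~= y (a generic CS reconstruction, standing in for whatever
        solver the smart-mat system actually uses)."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(sparsity):
            j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    # Toy example: sense a K-sparse signal with roughly one-third of the samples.
    rng = np.random.default_rng(3)
    n, m, K = 120, 40, 5                        # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, K, replace=False)] = rng.normal(size=K)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sampling/sensing matrix
    y = Phi @ x_true
    x_hat = omp(Phi, y, K)
    print(np.linalg.norm(x_hat - x_true))       # small reconstruction error
    ```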

  8. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. Firstly, features are extracted from the point cloud using the rules of curvature extrema and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods; meanwhile, it avoids improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Above all, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.

  9. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers.

    PubMed

    Wang, X; Chauvat, M-P; Ruterana, P; Walther, T

    2017-12-01

    We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  10. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
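    A minimal sketch of the two sampling schemes being compared, applied to a hypothetical membership frame (the field names and frame contents are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simple_random_sample(records, n):
        """Simple random sampling (SRS): n records drawn without replacement."""
        idx = rng.choice(len(records), size=n, replace=False)
        return [records[i] for i in idx]

    def systematic_sample(records, n):
        """Systematic sampling (SS): every k-th record after a random start,
        from a frame assumed to be ordered only alphabetically by surname."""
        k = len(records) // n
        start = int(rng.integers(0, k))
        return records[start::k][:n]

    # Hypothetical membership frame: (age, years_in_practice) pairs.
    frame = [(30 + i % 40, i % 25) for i in range(5000)]
    for method in (simple_random_sample, systematic_sample):
        sample = method(frame, 250)
        print(method.__name__, np.mean([age for age, _ in sample]))
    ```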

  11. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…

  12. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, many strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available at GitHub under a GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, d) allow adding or deleting points to/from an existing point pattern.
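    spsann itself is an R package; as a language-neutral illustration of the underlying idea, the following Python sketch runs spatial simulated annealing over a finite set of candidate locations to minimise the MSSD criterion mentioned above (the candidate grid, cooling schedule and parameter values are arbitrary):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(5)

    def mssd(sample, candidates):
        """Mean squared shortest distance from every candidate (prediction)
        location to its nearest sample point."""
        d, _ = cKDTree(sample).query(candidates)
        return float(np.mean(d ** 2))

    def anneal_mssd(candidates, n_points, n_iter=5000, t0=1.0, cooling=0.999):
        """Spatial simulated annealing over a finite candidate set, minimising
        MSSD (a toy counterpart of one spsann criterion, not the package itself)."""
        idx = rng.choice(len(candidates), n_points, replace=False)
        energy = mssd(candidates[idx], candidates)
        t = t0
        for _ in range(n_iter):
            new_idx = idx.copy()
            new_idx[rng.integers(n_points)] = rng.integers(len(candidates))  # perturb one point
            new_energy = mssd(candidates[new_idx], candidates)
            # accept improvements always, worse states with a temperature-dependent probability
            if new_energy < energy or rng.random() < np.exp((energy - new_energy) / t):
                idx, energy = new_idx, new_energy
            t *= cooling
        return candidates[idx], energy

    grid = np.array([(i, j) for i in range(30) for j in range(30)], float)
    pattern, energy = anneal_mssd(grid, n_points=20)
    print(energy)
    ```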

  13. Grain reconstruction of porous media: application to a Bentheim sandstone.

    PubMed

    Thovert, J-F; Adler, P M

    2011-05-01

    The two-point correlation measured on a thin section can be used to derive the probability density of the radii of a population of penetrable spheres. The geometrical, transport, and deformation properties of samples derived by this method compare well with the properties of the digitized real sample and of the samples generated by the standard grain reconstruction method. © 2011 American Physical Society

  14. 40 CFR 63.7824 - What test methods and other procedures must I use to establish and demonstrate initial compliance...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...

  15. 40 CFR 63.7824 - What test methods and other procedures must I use to establish and demonstrate initial compliance...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...

  16. Using shape contexts method for registration of contra lateral breasts in thermal images.

    PubMed

    Etehadtavakol, Mahnaz; Ng, Eddie Yin-Kwee; Gheissari, Niloofar

    2014-12-10

    The aim is to achieve symmetric boundaries for the left and right breasts in thermal images by registration. The proposed registration method consists of two steps. In the first step, the shape context approach presented by Belongie and Malik is applied to register the two breast boundaries. The shape context is an approach to measuring shape similarity. Two finite sets of sample points are taken from the shape contours of the two breasts, and the correspondences between the two shapes are then found: for each sample point, the point with the most similar shape context is obtained. In this study, an aligning transformation that maps one shape onto the other is then estimated to complete the registration. The use of a thin-plate spline permits good estimation of a plane transformation capable of mapping arbitrary points from one shape onto the other. The obtained aligning transformation of the boundary points was applied successfully to map the interior points of the two breasts. Some of the advantages of using the shape context method in this work are as follows: (1) no special landmarks or key points are needed; (2) it is tolerant to all common shape deformations; and (3) although it is uncomplicated and straightforward to use, it gives a remarkably powerful descriptor for point sets, significantly upgrading point set registration. Results are very promising. The proposed algorithm was implemented for 32 cases, and boundary registration was done perfectly for 28 cases. We used the shape contexts method, which is simple and easy to implement, to achieve symmetric boundaries for the left and right breast boundaries in thermal images.
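    A compact sketch of the log-polar shape context descriptor referred to above (Belongie and Malik style), assuming NumPy; it computes one histogram per boundary sample point, which can then be matched across the two breast contours by a chi-squared cost. The bin counts and radial limits are illustrative choices.

    ```python
    import numpy as np

    def shape_contexts(points, n_r=5, n_theta=12):
        """Log-polar shape context histograms for a set of 2D boundary sample
        points; a compact illustration, not the full registration pipeline."""
        pts = np.asarray(points, float)
        n = len(pts)
        diff = pts[None, :, :] - pts[:, None, :]          # pairwise vectors
        r = np.linalg.norm(diff, axis=2)
        theta = np.arctan2(diff[..., 1], diff[..., 0])
        r_norm = r / (np.mean(r[r > 0]) + 1e-12)          # scale-normalised radii
        r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
        t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
        hists = np.zeros((n, n_r, n_theta))
        for i in range(n):
            mask = np.arange(n) != i
            h, _, _ = np.histogram2d(r_norm[i, mask], theta[i, mask],
                                     bins=[r_edges, t_edges])
            hists[i] = h
        return hists.reshape(n, -1)

    # Descriptors for a toy contour; two contours' descriptors can be compared
    # with a chi-squared cost to find corresponding boundary points.
    angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    circle = np.column_stack([np.cos(angles), np.sin(angles)])
    print(shape_contexts(circle).shape)
    ```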

  17. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    PubMed Central

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-01

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042

  18. 40 CFR 91.313 - Analyzers required.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  19. 40 CFR 90.313 - Analyzers required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  20. 40 CFR 90.313 - Analyzers required.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  1. 40 CFR 91.313 - Analyzers required.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  2. 40 CFR 91.313 - Analyzers required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  3. 40 CFR 91.313 - Analyzers required.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  4. 40 CFR 90.313 - Analyzers required.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  5. 40 CFR 90.313 - Analyzers required.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...

  6. Treatment of atomic and molecular line blanketing by opacity sampling. [atmospheric optics - stellar atmospheres

    NASA Technical Reports Server (NTRS)

    Johnson, H. R.; Krupp, B. M.

    1975-01-01

    An opacity sampling (OS) technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is presented. Tests were conducted and results show that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 500 frequency points is often adequate. The effects of atomic and molecular lines are separately studied. A test model computed by using the OS method agrees very well with a model having identical atmospheric parameters computed by the giant line (opacity distribution function) method.

  7. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) -- and employs the error metric of average point-to-surface residual to reduce the random measurement error and thus approach the real registration error. BaySAC and other basic sampling algorithms usually need a manually determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model, in order to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error resulting from a remaining error in the found rigid body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.

  8. Validation of a modification to Performance-Tested Method 010403: microwell DNA hybridization assay for detection of Listeria spp. in selected foods and selected environmental surfaces.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.

  9. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model between a power of the distance from an inspection point and the number of samples contained inside a ball with radius equal to that distance, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
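    The GLM-based estimator itself is not detailed in the abstract; the closely related classical maximum-likelihood estimator based on k-nearest-neighbour distances around an inspection point can be sketched as follows (the value of k is an illustrative choice):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def local_dimension_mle(data, inspection_point, k=20):
        """Local intrinsic dimension around an inspection point via the
        k-nearest-neighbour maximum-likelihood estimator (a related classical
        estimator, not the GLM-based method of the paper above)."""
        tree = cKDTree(data)
        d, _ = tree.query(inspection_point, k=k + 1)
        d = d[1:] if d[0] == 0.0 else d[:k]   # drop the point itself if it is in the data
        # MLE: inverse mean log-ratio of the largest k-NN distance to the others
        return (k - 1) / np.sum(np.log(d[-1] / d[:-1]))

    # Toy check: points on a 2-D plane embedded in 3-D should give an estimate near 2.
    rng = np.random.default_rng(6)
    plane = np.column_stack([rng.random(2000), rng.random(2000), np.zeros(2000)])
    print(local_dimension_mle(plane, plane[0]))
    ```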

  10. TU-AB-BRC-11: Moving a GPU-OpenCL-Based Monte Carlo (MC) Dose Engine Towards Routine Clinical Use: Automatic Beam Commissioning and Efficient Source Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Folkerts, M; Jiang, S

    Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method, and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources, through an optimization process minimizing the discrepancies between calculated dose and measurements. Six models built for Varian Truebeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control-point by control-point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle, according to the MLC leaf-open duration of each control-point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of depth dose at build-up regions by 36.2% on average, making it within 1mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease being from 1.70% to 0.37%, found at 10FFF. Our new sampling strategy was tested on a Head&Neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ∼2.8 for a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.

  11. Filtering method of star control points for geometric correction of remote sensing image based on RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Xiangli; Yang, Jungang; Deng, Xinpu

    2018-04-01

    In the process of geometric correction of remote sensing images, a large number of redundant control points can occasionally result in low correction accuracy. In order to solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
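    A generic sketch of RANSAC-based control point filtering as described above, assuming an affine correction model and NumPy; the minimal sample size, threshold and simulated control points are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ransac_affine(src, dst, n_iter=1000, threshold=1.0):
        """RANSAC filtering of control points: repeatedly estimate an affine
        model from a minimal sample (3 point pairs) and keep the largest
        consensus set."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        X = np.hstack([src, np.ones((len(src), 1))])
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(n_iter):
            sample = rng.choice(len(src), 3, replace=False)   # smallest possible data set
            A, *_ = np.linalg.lstsq(X[sample], dst[sample], rcond=None)
            residuals = np.linalg.norm(X @ A - dst, axis=1)
            inliers = residuals < threshold                   # consistent data points
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # refit on the consensus set and return it together with the model
        A, *_ = np.linalg.lstsq(X[best_inliers], dst[best_inliers], rcond=None)
        return best_inliers, A

    # Toy usage: 80 true control points under a known affine map plus 20 gross outliers.
    src = rng.random((100, 2)) * 100
    A_true = np.array([[1.01, 0.02], [-0.02, 0.99], [5.0, -3.0]])
    dst = np.hstack([src, np.ones((100, 1))]) @ A_true + rng.normal(0, 0.2, (100, 2))
    dst[:20] += rng.normal(0, 50, (20, 2))                    # false control points
    inliers, _ = ransac_affine(src, dst, threshold=1.5)
    print(inliers.sum())
    ```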

  12. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis. PMID:28029121

  13. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.

  14. Physically motivated global alignment method for electron tomography

    DOE PAGES

    Sanders, Toby; Prange, Micah; Akatay, Cem; ...

    2015-04-08

    Electron tomography is widely used for nanoscale determination of 3-D structures in many areas of science. Determining the 3-D structure of a sample from electron tomography involves three major steps: acquisition of a sequence of 2-D projection images of the sample with the electron microscope, alignment of the images to a common coordinate system, and 3-D reconstruction and segmentation of the sample from the aligned image data. The resolution of the 3-D reconstruction is directly influenced by the accuracy of the alignment, and therefore, it is crucial to have a robust and dependable alignment method. In this paper, we develop a new alignment method which avoids the use of markers and instead traces the computed paths of many identifiable ‘local’ center-of-mass points as the sample is rotated. Compared with traditional correlation schemes, the alignment method presented here is resistant to the cumulative error observed from correlation techniques, has very rigorous mathematical justification, and is very robust since many points and paths are used, all of which inevitably improves the quality of the reconstruction and confidence in the scientific results.

  15. Noncontact blood species identification method based on spatially resolved near-infrared transmission spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Sun, Meixiu; Wang, Zhennan; Li, Hongxiao; Li, Yingxin; Li, Gang; Lin, Ling

    2017-09-01

    The inspection and identification of whole blood are of crucial significance for import-export ports and inspection and quarantine departments. In our previous research, we showed that a near-infrared diffuse transmission spectroscopy method has potential for noninvasively identifying three blood species, including macaque, human and mouse, with samples measured in cuvettes. However, in open sampling cases, inspectors may be endangered by virulence factors in blood samples. In this paper, we explored noncontact measurement for classification, with blood samples measured in vacuum blood vessels. Spatially resolved near-infrared spectroscopy was used to improve the prediction accuracy. Results showed that the prediction accuracy of the model built with nine detection points was more than 90% in identification among all five species, including chicken, goat, macaque, pig and rat, far better than the performance of the model built with single-point spectra. The results fully support the idea that the spatially resolved near-infrared spectroscopy method can improve the prediction ability, and demonstrate the feasibility of this method for noncontact blood species identification in practical applications.

  16. Treatment of atomic and molecular line blanketing by opacity sampling

    NASA Technical Reports Server (NTRS)

    Johnson, H. R.; Krupp, B. M.

    1976-01-01

    A sampling technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is subjected to several tests. In this opacity sampling (OS) technique, the global opacity is sampled at only a selected set of frequencies, and at each of these frequencies the total monochromatic opacity is obtained by summing the contribution of every relevant atomic and molecular line. In accord with previous results, we find that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 100 frequency points are adequate for many purposes. The effects of atomic and molecular lines are separately studied. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters, but computed with the giant line (opacity distribution function) method.
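
    The basic OS bookkeeping, summing every line's contribution to the monochromatic opacity at a fixed set of sample frequencies, can be sketched in a few lines. The Lorentzian profile, the randomly generated line list and all numerical constants below are assumptions made purely for illustration.

      import numpy as np

      def sampled_opacity(sample_freqs, line_centers, line_strengths, gamma=0.1):
          """Total monochromatic opacity at each sample frequency as a sum over all lines."""
          nu = sample_freqs[:, None]
          lorentz = (gamma / np.pi) / ((nu - line_centers) ** 2 + gamma ** 2)
          return (line_strengths * lorentz).sum(axis=1)

      rng = np.random.default_rng(1)
      freqs = np.linspace(0.0, 100.0, 1000)            # ~1000 sample frequencies
      centers = rng.uniform(0.0, 100.0, 5000)          # fictitious line positions
      strengths = rng.lognormal(size=5000)             # fictitious line strengths
      kappa = sampled_opacity(freqs, centers, strengths)
      print(kappa.shape, kappa.mean())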

  17. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    NASA Astrophysics Data System (ADS)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
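
    The three ME construction steps (i)-(iii) can be sketched roughly as follows. The crude repulsion iteration used to spread points over the hyper-sphere and the chi-distributed choice of radii are simplifications assumed here for illustration; they are not taken from the cited work.

      import numpy as np
      from scipy.stats import chi

      def me_like_realizations(N, mean, cov, n_iter=200, step=0.05, seed=0):
          rng = np.random.default_rng(seed)
          M = len(mean)
          x = rng.normal(size=(N, M))
          x /= np.linalg.norm(x, axis=1, keepdims=True)      # (i) points on the unit M-sphere
          for _ in range(n_iter):                            # spread them by mutual repulsion
              d = x[:, None, :] - x[None, :, :]
              r = np.linalg.norm(d, axis=-1) + np.eye(N)
              x += step * (d / r[..., None] ** 3).sum(axis=1)
              x /= np.linalg.norm(x, axis=1, keepdims=True)
          radii = chi.ppf((np.arange(N) + 0.5) / N, df=M)    # (ii) representative radii
          L = np.linalg.cholesky(cov)                        # (iii) map to Gaussian hyper-ellipsoids
          return mean + (radii[:, None] * x) @ L.T

      realizations = me_like_realizations(N=20, mean=np.zeros(4), cov=np.eye(4))
      print(realizations.shape)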

  18. Optical Ptychographic Microscope for Quantitative Bio-Mechanical Imaging

    NASA Astrophysics Data System (ADS)

    Anthony, Nicholas; Cadenazzi, Guido; Nugent, Keith; Abbey, Brian

    The role that mechanical forces play in biological processes such as cell movement and death is becoming of significant interest to further develop our understanding of the inner workings of cells. The most common method used to obtain stress information is photoelasticity, which maps a sample's birefringence, or its direction-dependent refractive indices, using polarized light. However, this method only provides qualitative data, and for stress information to be useful quantitative data are required. Ptychography is a method for quantitatively determining the phase of a sample's complex transmission function. The technique relies upon the collection of multiple overlapping coherent diffraction patterns from laterally displaced points on the sample. The overlap of measurement points provides complementary information that significantly aids in the reconstruction of the complex wavefield exiting the sample and allows for quantitative imaging of weakly interacting specimens. Here we describe recent advances at La Trobe University Melbourne on achieving quantitative birefringence mapping using polarized light ptychography with applications in cell mechanics. Australian Synchrotron, ARC Centre of Excellence for Advanced Molecular Imaging.

  19. Estimating abundance and survival in the endangered Point Arena Mountain beaver using noninvasive genetic methods

    Treesearch

    William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz

    2013-01-01

    The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...

  20. Compendium of selected methods for sampling and analysis at geothermal facilities

    NASA Astrophysics Data System (ADS)

    Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.

    1984-06-01

    An independent study of the field has resulted in a compilation of the best methods for sampling, preservation and analysis of potential pollutants from geothermally fueled electric power plants. These methods are selected as the most usable over the range of application commonly experienced in the various geothermal plant sample locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, and surface and subsurface waters are described. Emphasis is placed on the use of sampling probes to extract samples from heterogeneous flows. Certain sampling points, constituents and phases of plant operation are more amenable to quality assurance improvement in the emission measurements than others and are so identified.

  1. Research study on stabilization and control: Modern sampled data control theory

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.; Yackel, R. A.

    1973-01-01

    A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point-by-point state comparison. The technique used is that of approximating a continuous data system by a sampled data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of the Skylab is presented.

  2. Statistical approaches to the analysis of point count data: a little extra information can go a long way

    Treesearch

    George L. Farnsworth; James D. Nichols; John R. Sauer; Steven G. Fancy; Kenneth H. Pollock; Susan A. Shriner; Theodore R. Simons

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point...

  3. Theoretical repeatability assessment without repetitive measurements in gradient high-performance liquid chromatography.

    PubMed

    Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki

    2016-07-08

    This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits through the stochastic properties of baseline noise with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24 s at a 50 point/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of sample injection variability in the instrument used, the theoretical evaluation is proved to give values of area measurement RSDs identical to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSD. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31). Copyright © 2016 Elsevier B.V. All rights reserved.

  4. [Frequency of Candida in root canals of teeth with primary and persistent endodontic infections].

    PubMed

    Bernal-Treviño, Angel; González-Amaro, Ana María; Méndez González, Verónica; Pozos-Guillen, Amaury

    Microbiological identification in endodontic infections has focused mainly on bacteria without giving much attention to yeasts, which, due to their virulence factors, can affect the outcomes of root canal treatment. To determine the frequency of Candida in anaerobic conditions in root canals with primary and persistent endodontic infection, as well as to evaluate a microbiological sampling method using aspiration compared to the traditional absorption method with paper points. Fifty microbiological samples were obtained from teeth of 47 patients requiring endodontic treatments, due to either primary or persistent infections. Two microbiological sampling methods were used: an aspiration method, and the traditional paper point absorption method. In each of these methods, two types of medium were used (M1-M4). Samples were cultured under anaerobic conditions until reaching 0.5 McFarland turbidity, and then inoculated on Sabouraud dextrose, as well as on anaerobic enriched blood agar plates. Macroscopic and microscopic observations of the colonies were performed. The germ-tube test, growth on CHROMagar, and biochemical identification were performed on the isolated yeasts. Fungal infection was found in 18 (36%) samples out of the 50 teeth evaluated. Of the 18 samples positive for fungal infection, 15 came from the 36 teeth with a primary infection (41.6%) and 3 from the 14 teeth with a persistent infection (21.4%). The aspiration method using Sabouraud dextrose medium recovered a greater diversity of species. Yeast frequency was higher in teeth with primary infections compared to teeth with persistent infections. The predominant yeast species was Candida albicans. The aspirating sampling method was more efficient in the recovery of Candida isolates than the traditional absorption method. Copyright © 2018 Asociación Española de Micología. Published by Elsevier España, S.L.U. All rights reserved.

  5. Point-Sampling and Line-Sampling Probability Theory, Geometric Implications, Synthesis

    Treesearch

    L.R. Grosenbaugh

    1958-01-01

    Foresters concerned with measuring tree populations on definite areas have long employed two well-known methods of representative sampling. In list or enumerative sampling the entire tree population is tallied with a known proportion being randomly selected and measured for volume or other variables. In area sampling all trees on randomly located plots or strips...

  6. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequence, this consideration is often an afterthought that occurs during the data analysis process.

  7. Definition of a new thermal contrast and pulse correction for defect quantification in pulsed thermography

    NASA Astrophysics Data System (ADS)

    Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo

    2008-01-01

    It is well known that methods of thermographic non-destructive testing based on thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point by defining the thermal contrast with respect to an ideal sound area. Although very useful at early times, DAC accuracy decreases when the heat front approaches the sample rear face. We propose a new DAC version that explicitly introduces the sample thickness using thermal quadrupoles theory, and we show that the new DAC range of validity extends to long times while preserving its validity at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.

  8. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r² = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r² ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r² ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with approximately 96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
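
    The principle behind the autocorrelation algorithm, that images of coarse sediment decorrelate more slowly under small spatial offsets than images of fine sediment, can be illustrated with a short sketch. The synthetic textures and the offset range are invented for the example, and the calibration step that converts the correlation curve into millimetres is omitted.

      import numpy as np

      def autocorrelation_curve(image, max_offset=20):
          """Correlation between an image and horizontally offset copies of itself."""
          img = (image - image.mean()) / image.std()
          curve = []
          for lag in range(1, max_offset + 1):
              a, b = img[:, :-lag], img[:, lag:]
              curve.append((a * b).mean())
          return np.array(curve)

      rng = np.random.default_rng(0)
      fine = rng.normal(size=(200, 200))                  # "fine" texture: 1 px grains
      coarse = np.repeat(np.repeat(rng.normal(size=(50, 50)), 4, axis=0), 4, axis=1)  # 4 px grains
      print(autocorrelation_curve(fine)[:3])
      print(autocorrelation_curve(coarse)[:3])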

  9. Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, R.

    1973-01-01

    Using data from RIA (radioimmunoassay), statistical procedures for dealing with two problems, the linearization of the dose response curve and the calculation of relative potency, were described. There were three methods for linearization of the dose response curve of RIA. In each method, the following parameters were shown on the horizontal and vertical axes: dose x, (B/T)^-1; c/(x + c), B/T (c: dose which makes B/T 50%); log x, logit B/T. Among them, the last method seems to be the most practical. The statistical procedures for bioassay were employed for calculating the relative potency of unknown samples compared to the standard samples from dose response curves of standard and unknown samples using the regression coefficient. It is desirable that relative potency be calculated by plotting more than 5 points on the standard curve and more than 2 points for unknown samples. For examining the statistical limit of precision of measurement, LH activity of gonadotropin in urine was measured and the relative potency, precision coefficient, and the upper and lower limits of relative potency at the 95% confidence limit were calculated. On the other hand, bioassay (by the ovarian ascorbic acid reduction method and the anterior prostate lobe weighing method) was done on the same samples, and the precision was compared with that of RIA. In these examinations, the upper and lower limits of the relative potency at the 95% confidence limit were near each other, while in bioassay, a considerable difference was observed between the upper and lower limits. The necessity of standardization and systematization of the statistical procedures for increasing the precision of RIA was pointed out. (JA)

  10. An efficient method to compute microlensed light curves for point sources

    NASA Technical Reports Server (NTRS)

    Witt, Hans J.

    1993-01-01

    We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from -infinity to +infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and surface densities close to the critical value. Furthermore, we present general rules to evaluate the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.

  11. General Constraints on Sampling Wildlife on FIA Plots

    Treesearch

    Larissa L. Bailey; John R. Sauer; James D. Nichols; Paul H. Geissler

    2005-01-01

    This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species...

  12. Determination of residual solvents in pharmaceuticals by thermal desorption-GC/MS.

    PubMed

    Hashimoto, K; Urakami, K; Fujiwara, Y; Terada, S; Watanabe, C

    2001-05-01

    A novel method for the determination of residual solvents in pharmaceuticals by thermal desorption (TD)-GC/MS has been established. A programmed temperature pyrolyzer (double-shot pyrolyzer) is applied for the TD. This method does not require any sample pretreatment and needs only very small amounts of sample. Solvents desorbed directly from intact pharmaceuticals (ca. 1 mg) in the desorption cup (5 mm x 3.8 mm i.d.) were cryofocused at the head of a capillary column prior to GC/MS analysis. The desorption temperature was set, for each sample individually, at a point about 20 degrees C higher than its melting point, and held for 3 min. The analytical results for 7 different pharmaceuticals were in agreement with those obtained by direct injection (DI) of the solution according to USP XXIII. The proposed TD-GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents. Furthermore, this method was simple, allowed rapid analysis and gave good repeatability.

  13. A novel method for rapid determination of total solid content in viscous liquids by multiple headspace extraction gas chromatography.

    PubMed

    Xin, Li-Ping; Chai, Xin-Sheng; Hu, Hui-Chao; Barnes, Donald G

    2014-09-05

    This work demonstrates a novel method for the rapid determination of total solid content in viscous liquid (polymer-enriched) samples. The method is based on multiple headspace extraction gas chromatography (MHE-GC) of a headspace vial held at a temperature above the boiling point of water. Thus, the trend of water loss from the tested liquid due to evaporation can be followed. With limited MHE-GC testing (e.g., 5 extractions) and a one-point calibration procedure (i.e., recording the weight difference before and after analysis), the total amount of water in the sample can be determined, from which the total solid content in the liquid can be calculated. A number of black liquors were analyzed by the new method, which yielded results that closely matched those of the reference method; i.e., the results of these two methods differed by no more than 2.3%. Compared with the reference method, the MHE-GC method is much simpler and more practical. Therefore, it is suitable for the rapid determination of the solid content in many polymer-containing liquid samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Rapid, sensitive and reproducible method for point-of-collection screening of liquid milk for adulterants using a portable Raman spectrometer with novel optimized sample well

    NASA Astrophysics Data System (ADS)

    Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.

    2017-02-01

    Point-of-care diagnostics are of interest in the medical, security and food industries, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide, and different methods to detect fraudulent additives have been investigated for over a century. Laboratory-based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. The rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for both point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with a focusing fibre optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (relative standard deviation) within four minutes. Limit of detection intervals for PLS calibrations ranged between 140 and 520 ppm for the four N-rich compounds and between 0.7 and 3.6% for sucrose. The portability of the system and the reliability and reproducibility of the technique open opportunities for general, reagentless adulteration screening of biological fluids as well as milk, at the point of collection.

  15. Is automated kinetic measurement superior to end-point for advanced oxidation protein product?

    PubMed

    Oguz, Osman; Inal, Berrin Bercik; Emre, Turker; Ozcan, Oguzhan; Altunoglu, Esma; Oguz, Gokce; Topkaya, Cigdem; Guvenen, Guvenc

    2014-01-01

    Advanced oxidation protein product (AOPP) was first described as an oxidative protein marker in chronic uremic patients and measured with a semi-automatic end-point method. Subsequently, a kinetic method was introduced for the AOPP assay. We aimed to compare these two methods by adapting them to a chemistry analyzer and to investigate the correlation between AOPP and fibrinogen (the key molecule responsible for human plasma AOPP reactivity), microalbumin, and HbA1c in patients with type II diabetes mellitus (DM II). The effects of EDTA- and citrate-anticoagulated tubes on these two methods were incorporated into the study. This study included 93 DM II patients (36 women, 57 men) with HbA1c levels ≥7%, who were admitted to the diabetes and nephrology clinics. The samples were collected in EDTA- and in citrate-anticoagulated tubes. Both methods were adapted to a chemistry analyzer and the samples were studied in parallel. In both types of samples, we found a moderate correlation between the kinetic and the end-point methods (r = 0.611 for citrate-anticoagulated, r = 0.636 for EDTA-anticoagulated, p = 0.0001 for both). We found a moderate correlation between fibrinogen-AOPP and microalbumin-AOPP levels only in the kinetic method (r = 0.644 and 0.520 for citrate-anticoagulated; r = 0.581 and 0.490 for EDTA-anticoagulated, p = 0.0001). We conclude that adaptation of the end-point method to automation is more difficult and it has higher between-run CV%, while application of the kinetic method is easier and it may be used in oxidative stress studies.

  16. General constraints on sampling wildlife on FIA plots

    USGS Publications Warehouse

    Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.

    2005-01-01

    This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detectability estimation and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.

  17. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
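
    The core of such a point-to-point matching step can be sketched as follows: every spectrum of the sample run is compared, over a chosen eluent-sensitive spectral window, with all spectra of a previously recorded reference (blank gradient) run, and the closest reference spectrum is subtracted. The squared-difference metric, the window position and the synthetic data are illustrative assumptions rather than the published procedure.

      import numpy as np

      def correct_background(sample_spectra, reference_spectra, window):
          """sample_spectra, reference_spectra: 2-D arrays (time x wavenumber)."""
          corrected = np.empty_like(sample_spectra)
          for t, spectrum in enumerate(sample_spectra):
              # point-to-point comparison restricted to the selected spectral range
              diffs = reference_spectra[:, window] - spectrum[window]
              best = np.argmin((diffs ** 2).sum(axis=1))
              corrected[t] = spectrum - reference_spectra[best]
          return corrected

      rng = np.random.default_rng(3)
      reference = rng.normal(size=(50, 300))                 # blank gradient run
      sample = reference[::5] + 0.1 * rng.normal(size=(10, 300))
      window = slice(100, 150)                               # eluent absorption region
      print(correct_background(sample, reference, window).shape)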

  18. Improving the power of clinical trials of rheumatoid arthritis by using data on continuous scales when analysing response rates: an application of the augmented binary method

    PubMed Central

    Jenkins, Martin

    2016-01-01

    Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084

  19. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    PubMed

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

    A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 µg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  20. Three-dimensional scene reconstruction from a two-dimensional image

    NASA Astrophysics Data System (ADS)

    Parkins, Franz; Jacobs, Eddie

    2017-05-01

    We propose and simulate a method of reconstructing a three-dimensional scene from a two-dimensional image for developing and augmenting world models for autonomous navigation. This is an extension of the Perspective-n-Point (PnP) method, which uses a sampling of 3D scene to 2D image point pairings and Random Sampling Consensus (RANSAC) to infer the pose of the object and produce a 3D mesh of the original scene. Using object recognition and segmentation, we simulate the implementation on a scene of 3D objects with an eye to implementation on embeddable hardware. The final solution will be deployed on the NVIDIA Tegra platform.
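
    A hedged sketch of the PnP-plus-RANSAC pose step is shown below, using OpenCV's solvePnPRansac on synthetic 3D to 2D point pairings; the object recognition, segmentation and mesh generation stages of the proposed pipeline are not reproduced, and the camera parameters and noise model are invented for the example.

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)
      object_pts = rng.uniform(-1.0, 1.0, size=(30, 3))          # sampled 3D scene points
      K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
      dist = np.zeros(5)                                         # assume no lens distortion
      rvec_true = np.array([0.1, -0.2, 0.05])
      tvec_true = np.array([0.0, 0.0, 5.0])

      # Project the 3D points with a known pose, then corrupt a few 2D points
      image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)
      image_pts = image_pts.reshape(-1, 2)
      image_pts[:5] += rng.normal(0.0, 20.0, size=(5, 2))        # gross outliers

      # Recover the pose robustly from the 3D-2D pairings
      ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, dist)
      print(ok, tvec.ravel(), len(inliers))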

  1. Coalescent Inference Using Serially Sampled, High-Throughput Sequencing Data from Intrahost HIV Infection

    PubMed Central

    Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.

    2016-01-01

    Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628

  2. Methyl-CpG island-associated genome signature tags

    DOEpatents

    Dunn, John J

    2014-05-20

    Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.

  3. Trail resource impacts and an examination of alternative assessment techniques

    USGS Publications Warehouse

    Marion, J.L.; Leung, Y.-F.

    2001-01-01

    Trails are a primary recreation resource facility on which recreation activities are performed. They provide safe access to non-roaded areas, support recreational opportunities such as hiking, biking, and wildlife observation, and protect natural resources by concentrating visitor traffic on resistant treads. However, increasing recreational use, coupled with poorly designed and/or maintained trails, has led to a variety of resource impacts. Trail managers require objective information on trails and their conditions to monitor trends, direct trail maintenance efforts, and evaluate the need for visitor management and resource protection actions. This paper reviews trail impacts and different types of trail assessments, including inventory, maintenance, and condition assessment approaches. Two assessment methods, point sampling and problem assessment, are compared empirically from separate assessments of a 15-mile segment of the Appalachian Trail in Great Smoky Mountains National Park. Results indicate that point sampling and problem assessment methods yield distinctly different types of quantitative information. The point sampling method provides more accurate and precise measures of trail characteristics that are continuous or frequent (e.g., tread width or exposed soil). The problem assessment method is a preferred approach for monitoring trail characteristics that can be easily predefined or are infrequent (e.g., excessive width or secondary treads), particularly when information on the location of specific trail impact problems is needed. The advantages and limitations of these two assessment methods are examined in relation to various management and research information needs. The choice and utility of these assessment methods are also discussed.

  4. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    PubMed Central

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314

  5. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    PubMed

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
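
    For orientation, a generic affine projection adaptive filter is sketched below; it illustrates the kind of weight-vector update that the MAPA-based estimator builds on, but the multi-power extension and the mapping from MFL signals to defect depth described in the paper are not reproduced, and all parameter values are illustrative.

      import numpy as np

      def affine_projection_filter(x, d, order=8, K=4, mu=0.5, eps=1e-6):
          """x: input signal, d: desired signal, K: number of stacked regressors."""
          w = np.zeros(order)
          for n in range(order + K, len(x)):
              # K most recent regressor vectors stacked into a matrix
              X = np.array([x[n - k - order:n - k][::-1] for k in range(K)])
              e = d[n - K:n][::-1] - X @ w
              w += mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(K), e)
          return w

      rng = np.random.default_rng(1)
      x = rng.normal(size=2000)
      h = np.array([1.0, 0.5, -0.3, 0.1, 0.0, 0.0, 0.0, 0.0])   # unknown system to identify
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=2000)
      print(np.round(affine_projection_filter(x, d), 2))        # should be close to h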

  6. Exploring a potential energy surface by machine learning for characterizing atomic transport

    NASA Astrophysics Data System (ADS)

    Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro

    2018-03-01

    We propose a machine-learning method for evaluating the potential barrier governing atomic transport based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases for proton diffusion in oxides, in comparison with a conventional nudged elastic band method.
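
    A loose one-dimensional sketch of the idea is given below: a Gaussian process is fitted to the energies already computed, many random surfaces are drawn from its posterior, and the grid point that most often forms the barrier (the maximum along the path) is selected as the next point to evaluate. The 1-D path, the RBF kernel and the toy data are simplifying assumptions; the actual method works in higher-dimensional configuration spaces.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      grid = np.linspace(0.0, 1.0, 200)[:, None]                 # candidate configurations
      observed_x = np.array([[0.0], [0.3], [0.7], [1.0]])        # energies already computed
      observed_e = np.array([0.0, 0.4, 0.6, 0.1])

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.15)).fit(observed_x, observed_e)
      surfaces = gp.sample_y(grid, n_samples=500, random_state=0)   # 500 random PES samples

      barrier_idx = surfaces.argmax(axis=0)                      # dominant point of each sample
      counts = np.bincount(barrier_idx, minlength=len(grid))
      print("next point to evaluate:", grid[counts.argmax(), 0])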

  7. Experimental results for the rapid determination of the freezing point of fuels

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, B.

    1984-01-01

    Two methods for the rapid determination of the freezing point of fuels were investigated: an optical method, which detected the change in light transmission from the disappearance of solid particles in the melted fuel; and a differential thermal analysis (DTA) method, which sensed the latent heat of fusion. A laboratory apparatus was fabricated to test the two methods. Cooling was done by thermoelectric modules using an ice-water bath as a heat sink. The DTA method was later modified to eliminate the reference fuel. The data from the sample were digitized and a point of inflection, which corresponds to the ASTM D-2386 freezing point (final melting point), was identified from the derivative. The apparatus was modified to cool the fuel to -60 °C and controls were added for maintaining a constant cooling rate, rewarming rate, and hold time at minimum temperature. A parametric series of tests was run for twelve fuels with freezing points from -10 °C to -50 °C, varying cooling rate, rewarming rate, and hold time. Based on the results, an optimum test procedure was established. The results showed good agreement with ASTM D-2386 freezing point and differential scanning calorimetry results.

  8. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
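
    A two-piece (split) normal detection function of the kind described has a single apex whose location does not move when the two spread parameters are scaled by covariates. The sketch below defines such a curve; the parameter names and values are illustrative only.

      import numpy as np

      def two_piece_normal(x, mode, sigma_left, sigma_right):
          """Detection probability with apex 1.0 at distance `mode` and separate spreads."""
          sigma = np.where(x < mode, sigma_left, sigma_right)
          return np.exp(-0.5 * ((x - mode) / sigma) ** 2)

      distances = np.linspace(0.0, 800.0, 9)          # perpendicular distances, e.g. metres
      print(two_piece_normal(distances, mode=150.0, sigma_left=80.0, sigma_right=250.0))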

  9. A Data Cleaning Method for Big Trace Data Using Movement Consistency

    PubMed Central

    Tang, Luliang; Zhang, Xia; Li, Qingquan

    2018-01-01

    Given the popularization of GPS technologies, the massive amount of spatiotemporal GPS traces collected by vehicles is becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while the low quality generates uncertainties when investigating human activities. Based on the concept of the error distribution law and the position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories using movement characteristic points; GPS points indicating that the motion status of the vehicle has changed from one state to another are regarded as movement characteristic points. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated through extensive experiments, using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
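
    The cleaning idea for a single sub-trajectory can be sketched as fitting a movement-consistency model with RANSAC and discarding points that disagree with it. In the sketch below a straight-line motion model per coordinate stands in for the paper's model, and the trajectory partitioning at movement characteristic points is assumed to have been done already; thresholds and data are invented.

      import numpy as np
      from sklearn.linear_model import RANSACRegressor

      def clean_subtrajectory(t, xy, residual_threshold=10.0):
          """Keep only GPS points consistent with the sub-trajectory's motion model."""
          t = t.reshape(-1, 1)
          keep = np.ones(len(t), dtype=bool)
          for k in range(2):                          # x and y coordinates separately
              model = RANSACRegressor(residual_threshold=residual_threshold).fit(t, xy[:, k])
              keep &= model.inlier_mask_
          return xy[keep]

      t = np.arange(50, dtype=float)
      xy = np.column_stack([3.0 * t, 1.5 * t]) + np.random.default_rng(0).normal(0.0, 1.0, (50, 2))
      xy[10] += [120.0, -80.0]                        # a gross GPS error
      print(len(clean_subtrajectory(t, xy)))          # the erroneous point is removed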

  10. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh

    PubMed Central

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.

    2018-01-01

    Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14–42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005

  11. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh.

    PubMed

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M

    2018-01-01

    Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae ( V. cholerae ) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds ( P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14-42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds ( p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.

  12. Irregular and adaptive sampling for automatic geophysic measure systems

    NASA Astrophysics Data System (ADS)

    Avagnina, Davide; Lo Presti, Letizia; Mulassano, Paolo

    2000-07-01

    In this paper a sampling method, based on an irregular and adaptive strategy, is described. It can be used as an automatic guide for rovers designed to explore terrestrial and planetary environments. Assuming that an exploratory vehicle is equipped with a payload able to acquire measurements of quantities of interest, the method can detect objects of interest from the measured points and realize adaptive sampling, while describing the uninteresting background only coarsely.

  13. Evaluation of methods to sample fecal indicator bacteria in foreshore sand and pore water at freshwater beaches.

    PubMed

    Vogel, Laura J; Edge, Thomas A; O'Carroll, Denis M; Solo-Gabriele, Helena M; Kushnir, Caitlin S E; Robinson, Clare E

    2017-09-15

    Fecal indicator bacteria (FIB) are known to accumulate in foreshore beach sand and pore water (referred to as foreshore reservoir) where they act as a non-point source for contaminating adjacent surface waters. While guidelines exist for sampling surface waters at recreational beaches, there is no widely-accepted method to collect sand/sediment or pore water samples for FIB enumeration. The effect of different sampling strategies in quantifying the abundance of FIB in the foreshore reservoir is unclear. Sampling was conducted at six freshwater beaches with different sand types to evaluate sampling methods for characterizing the abundance of E. coli in the foreshore reservoir as well as the partitioning of E. coli between different components in the foreshore reservoir (pore water, saturated sand, unsaturated sand). Methods were evaluated for collection of pore water (drive point, shovel, and careful excavation), unsaturated sand (top 1 cm, top 5 cm), and saturated sand (sediment core, shovel, and careful excavation). Ankle-depth surface water samples were also collected for comparison. Pore water sampled with a shovel resulted in the highest observed E. coli concentrations (only statistically significant at fine sand beaches) and lowest variability compared to other sampling methods. Collection of the top 1 cm of unsaturated sand resulted in higher and more variable concentrations than the top 5 cm of sand. There were no statistical differences in E. coli concentrations when using different methods to sample the saturated sand. Overall, the unsaturated sand had the highest amount of E. coli when compared to saturated sand and pore water (considered on a bulk volumetric basis). The findings presented will help determine the appropriate sampling strategy for characterizing FIB abundance in the foreshore reservoir as a means of predicting its potential impact on nearshore surface water quality and public health risk. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  15. A line-scan hyperspectral Raman system for spatially offset Raman spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Conventional methods of spatially offset Raman spectroscopy (SORS) typically use single-fiber optical measurement probes to slowly and incrementally collect a series of spatially offset point measurements moving away from the laser excitation point on the sample surface, or arrays of multiple fiber ...

  16. A Finite Difference Method for Modeling Migration of Impurities in Multilayer Systems

    NASA Astrophysics Data System (ADS)

    Tosa, V.; Kovacs, Katalin; Mercea, P.; Piringer, O.

    2008-09-01

    A finite difference method to solve the one-dimensional diffusion of impurities in a multilayer system was developed for the special case in which a partition coefficient K imposes a ratio between the concentrations at the interface of two adjacent layers. The fictitious point method was applied to derive the algebraic equations for the mesh points at the interface, while a combined method was used for the non-uniform mesh points within the layers. The method was tested and then applied to calculate the migration of impurities from multilayer systems into liquid or solid samples, in migration experiments performed for quality-testing purposes. An application was developed in the field of impurity migration from multilayer plastic packaging into food, a problem of increasing importance in the food industry.
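
    As a rough illustration of how an interface partition condition can enter a finite difference scheme, the sketch below solves 1D diffusion across two layers with an explicit (FTCS) update. The node layout, the boundary conditions (impermeable outer face, perfect sink on the far side standing in for the food) and all parameter values are illustrative assumptions; this is much simpler than the authors' fictitious-point treatment and non-uniform mesh.

```python
import numpy as np

def two_layer_diffusion(D1=1e-12, D2=5e-13, L1=1e-4, L2=2e-4, K=3.0,
                        c0=1.0, n1=60, n2=120, t_end=3600.0):
    """Explicit finite-difference sketch of 1D diffusion through two layers.

    Layer 1 (e.g. the contaminated film) starts at concentration c0, layer 2 at 0.
    At the interface the partition coefficient imposes c_left = K * c_right and the
    diffusive flux is continuous; the outer left face is impermeable and the right
    face is held at zero (a perfect sink). All values are hypothetical.
    """
    h1, h2 = L1 / n1, L2 / n2
    c1 = np.full(n1, c0)                                 # layer-1 nodes (interface excluded)
    c2 = np.zeros(n2)                                    # layer-2 nodes (interface excluded)
    dt = 0.25 * min(h1 * h1 / D1, h2 * h2 / D2)          # FTCS stability limit
    for _ in range(int(t_end / dt)):
        # interface values from flux continuity plus the partition condition cL = K * cR
        cR = (D1 * c1[-1] / h1 + D2 * c2[0] / h2) / (D1 * K / h1 + D2 / h2)
        cL = K * cR
        left1 = np.concatenate([[c1[0]], c1[:-1]])       # mirror node: zero-flux outer face
        right1 = np.concatenate([c1[1:], [cL]])
        left2 = np.concatenate([[cR], c2[:-1]])
        right2 = np.concatenate([c2[1:], [0.0]])         # perfect sink on the food side
        c1 += D1 * dt / h1**2 * (left1 - 2 * c1 + right1)
        c2 += D2 * dt / h2**2 * (left2 - 2 * c2 + right2)
    return c1, c2

if __name__ == "__main__":
    c1, c2 = two_layer_diffusion()
    print("layer-1 profile tail:", np.round(c1[-3:], 3))
    print("layer-2 profile head:", np.round(c2[:3], 3))
```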

  17. A novel method of measuring the melting point of animal fats.

    PubMed

    Lloyd, S S; Dawkins, S T; Dawkins, R L

    2014-10-01

    The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.

  18. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    PubMed

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.

  19. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
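
    A minimal sketch of building a Nyström approximation from k-means landmarks, using ordinary k-means centres of the input data as a stand-in for the kernel k-means centres discussed in the abstract; the RBF kernel, the bandwidth and the landmark count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kmeans(X, m, gamma=0.5, random_state=0):
    """Nystrom approximation of an RBF kernel matrix using k-means landmarks.

    X : (n, d) data matrix; m : number of landmark points.
    Returns a rank-m approximation of K = rbf_kernel(X, X).
    """
    km = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(X)
    L = km.cluster_centers_                      # landmark points (k-means centres)
    C = rbf_kernel(X, L, gamma=gamma)            # n x m cross-kernel
    W = rbf_kernel(L, L, gamma=gamma)            # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T           # Nystrom reconstruction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    K = rbf_kernel(X, X, gamma=0.5)
    K_hat = nystrom_kmeans(X, m=50)
    err = np.linalg.norm(K - K_hat, "fro") / np.linalg.norm(K, "fro")
    print(f"relative Frobenius error: {err:.4f}")
```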

  20. [Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].

    PubMed

    He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo

    2010-01-01

    To determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen, and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse-phase HPLC with an ultraviolet detector at 300 nm. Optimized conditions for the pretreatment of the water samples and for the chromatographic separation were applied. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements for the simultaneous determination of multiple biphenyl ether herbicides in natural waters.

  1. Calculation of the ELISA's cut-off based on the change-point analysis method for detection of Trypanosoma cruzi infection in Bolivian dogs in the absence of controls.

    PubMed

    Lardeux, Frédéric; Torrico, Gino; Aliaga, Claudia

    2016-07-04

    In ELISAs, sera of individuals infected by Trypanosoma cruzi show absorbance values above a cut-off value. The cut-off is generally computed by means of formulas that require absorbance readings of negative (and sometimes positive) controls, which are included in the titer plates amongst the unknown samples. When no controls are available, other techniques such as change-point analysis should be employed. The method was applied to Bolivian dog sera processed by ELISA to diagnose T. cruzi infection. In each titer plate, the change-point analysis estimated a step point that correctly discriminated between known positive and known negative sera, unlike some of the six usual cut-off formulas tested. For analysing the ELISA results, the change-point method was as good as the usual cut-off formula of the form "mean + 3 standard deviations of the negative controls". Change-point analysis is therefore an efficient alternative for analysing ELISA absorbance values when no controls are available.
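
    One simple way to estimate a step point without controls is to split the sorted absorbances at the point that minimises the pooled within-group sum of squares. The sketch below shows this single change-point idea; it is only a stand-in for the authors' change-point analysis, and the simulated plate values are hypothetical.

```python
import numpy as np

def elisa_cutoff_changepoint(absorbances):
    """Estimate a plate cut-off by a single change-point split of sorted absorbances.

    The sorted optical densities are split into a 'negative' and a 'positive' group
    at the point minimising the pooled within-group sum of squares; the cut-off is
    placed midway between the two groups.
    """
    x = np.sort(np.asarray(absorbances, dtype=float))
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(1, n):                       # candidate split after index k-1
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_k = cost, k
    return 0.5 * (x[best_k - 1] + x[best_k])    # midpoint between the two groups

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    negatives = rng.normal(0.15, 0.03, size=80)   # hypothetical negative sera
    positives = rng.normal(0.80, 0.15, size=15)   # hypothetical positive sera
    plate = np.concatenate([negatives, positives])
    print(f"estimated cut-off: {elisa_cutoff_changepoint(plate):.3f}")
```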

  2. 40 CFR Table 3 to Subpart Yyyy of... - Requirements for Performance Tests and Initial Compliance Demonstrations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the Administrator formaldehyde concentration must be corrected to 15 percent O2, dry basis. Results of... 100 percent load. b. select the sampling port location and the number of traverse points AND Method 1... concentration at the sampling port location AND Method 3A or 3B of 40 CFR part 60, appendix A measurements to...

  3. 40 CFR 63.9914 - What test methods and other procedures must I use to demonstrate initial compliance with chlorine...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... appendix A to 40 CFR part 60: (i) Method 1 to select sampling port locations and the number of traverse points. Sampling ports must be located at the outlet of the control device and prior to any releases to... = Concentration of chlorine or hydrochloric acid in the gas stream, milligrams per dry standard cubic meter (mg...

  4. 40 CFR 63.9914 - What test methods and other procedures must I use to demonstrate initial compliance with chlorine...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... appendix A to 40 CFR part 60: (i) Method 1 to select sampling port locations and the number of traverse points. Sampling ports must be located at the outlet of the control device and prior to any releases to... = Concentration of chlorine or hydrochloric acid in the gas stream, milligrams per dry standard cubic meter (mg...

  5. Partially Identified Treatment Effects for Generalizability

    ERIC Educational Resources Information Center

    Chan, Wendy

    2017-01-01

    Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…

  6. Differential porosimetry and permeametry for random porous media.

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.

  7. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
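
    The plane-fitting step (step 3) can be illustrated with a plain RANSAC fit followed by conversion of the plane normal to dip direction and dip angle. The sketch below assumes coordinates with x = east, y = north, z = up; the iteration count and inlier tolerance are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, rng=None):
    """Fit a plane to a 3D point patch with RANSAC; return the unit normal and inlier mask."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                                    # degenerate (nearly collinear) sample
        normal /= norm
        inliers = np.abs((points - p0) @ normal) < tol  # point-to-plane distance test
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    centred = points[best_inliers] - points[best_inliers].mean(axis=0)
    normal = np.linalg.svd(centred)[2][-1]              # least-squares refinement on inliers
    return normal, best_inliers

def dip_direction_and_dip(normal):
    """Convert a plane normal (x = east, y = north, z = up) to dip direction / dip in degrees."""
    nx, ny, nz = normal if normal[2] >= 0 else -normal  # use the upward-pointing normal
    dip = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip_direction, dip

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(300, 2))
    z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.005, 300)   # noisy synthetic facet
    n, _ = ransac_plane(np.column_stack([xy, z]))
    print("dip direction / dip (deg):", dip_direction_and_dip(n))
```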

  8. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Hoskinson; R C. Rope; L G. Blackwood

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.

  9. [Statistical prediction methods in violence risk assessment and its application].

    PubMed

    Liu, Yuan-Yuan; Hu, Jun-Mei; Yang, Min; Li, Xiao-Song

    2013-06-01

    How to improve violence risk assessment is an urgent global problem. As a necessary part of risk assessment, statistical methods have remarkable impacts and effects. In this study, the prediction methods used in violence risk assessment are reviewed from a statistical point of view. The applications of logistic regression as an example of a multivariate statistical model, of the decision tree model as an example of a data mining technique, and of the neural network model as an example of artificial intelligence technology are all reviewed. This study provides a basis for further research on violence risk assessment.

  10. Internal scanning method as unique imaging method of optical vortex scanning microscope

    NASA Astrophysics Data System (ADS)

    Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2018-06-01

    The internal scanning method is specific to the optical vortex microscope. It allows the vortex point to be moved inside the focused vortex beam with nanometer resolution while the beam as a whole stays in place. Thus the sample illuminated by the focused vortex beam can be scanned by the vortex point alone. We show that this method enables high-resolution imaging. The paper presents preliminary experimental results obtained with a first, basic image recovery procedure. The prospect of developing more powerful tools for topography recovery with the optical vortex scanning microscope is discussed briefly.

  11. Can cloud point-based enrichment, preservation, and detection methods help to bridge gaps in aquatic nanometrology?

    PubMed

    Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan

    2016-11-01

    Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This delivers a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.

  12. Soil Sampling Techniques For Alabama Grain Fields

    NASA Technical Reports Server (NTRS)

    Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.

    2003-01-01

    Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and, 2) six composited cores collected randomly from a 3x3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using the zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.

  13. [Assessment comparison between area sampling and personal sampling noise measurement in new thermal power plant].

    PubMed

    Zhang, Hua; Chen, Qing-song; Li, Nan; Hua, Yan; Zeng, Lin; Xu, Guo-yang; Tao, Li-yuan; Zhao, Yi-ming

    2013-05-01

    To compare the results of noise hazard evaluations based on area sampling and personal sampling in a new thermal power plant and to analyze the similarities and differences between the two measurement methods. According to Measurement of Physical Agents in the Workplace, Part 8: Noise (GBZ/T 189.8-2007), area sampling was performed at various operating points for noise measurement, and meanwhile the workers in different types of work wore noise dosimeters for personal noise exposure measurement. The two measurement methods were used to evaluate the level of noise hazards in the enterprise according to the corresponding occupational health standards, and the evaluation results were compared. Area sampling was performed at 99 operating points; the mean noise level was 88.9 ± 11.1 dB(A) (range, 51.3-107.0 dB(A)), with an over-standard rate of 75.8%. Personal sampling was performed for 73 person-times, and the mean noise level was 79.3 ± 6.3 dB(A), with an over-standard rate of 6.6% (16/241). There was a statistically significant difference in the over-standard rate between the evaluation results of the two measurement methods (χ2 = 53.869, P < 0.001). Because of the characteristics of the work in new thermal power plants, noise hazard evaluation based on area sampling cannot be used instead of personal noise exposure measurement among workers. Personal sampling should be used for noise measurement in new thermal power plants.

  14. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source in an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are sampled discretely, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not readily apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate the tie point coordinate from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  15. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    NASA Astrophysics Data System (ADS)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
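
    A minimal sketch of fitting a least-squares cubic spline with equidistant interior knots and inspecting the residuals, illustrating how the number of spline sampling points changes the background/residual split; the test signal and knot counts are illustrative assumptions, and the repeating-spline refinement itself is not implemented here.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_background(t, y, n_knots, k=3):
    """Least-squares cubic spline with equidistant interior knots.

    Returns the fitted background and the residuals (y minus background),
    mirroring the split into background and superimposed fluctuations.
    """
    knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]   # interior knots only
    spline = LSQUnivariateSpline(t, y, knots, k=k)
    background = spline(t)
    return background, y - background

if __name__ == "__main__":
    t = np.linspace(0, 10, 500)
    y = 2.0 + 0.3 * t + 0.5 * np.sin(2 * np.pi * t / 1.5)   # trend plus a "wave"
    for n in (3, 8, 20):                                    # more knots is not always better
        bg, res = spline_background(t, y, n_knots=n)
        print(n, f"residual std = {res.std():.3f}")
```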

  16. Piecewise multivariate modelling of sequential metabolic profiling data.

    PubMed

    Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan

    2008-02-19

    Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.

  17. Subrandom methods for multidimensional nonuniform sampling.

    PubMed

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
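
    A minimal sketch of a seed-independent schedule: a Halton (subrandom) sequence is pushed through the inverse CDF of an exponentially weighted grid, so repeated runs give the same sampling schedule. The weighting envelope, grid size and function name are illustrative assumptions rather than any specific published scheme.

```python
import numpy as np
from scipy.stats import qmc

def subrandom_nus_schedule(grid_size, n_samples, decay=2.0):
    """Seed-independent nonuniform sampling schedule from a Halton (subrandom) sequence.

    Grid points are weighted by an exponentially decaying envelope and the
    low-discrepancy sequence is mapped through the weighted inverse CDF, so the
    same schedule is produced every time (no random seed needed).
    """
    weights = np.exp(-decay * np.arange(grid_size) / grid_size)
    cdf = np.cumsum(weights) / weights.sum()
    u = qmc.Halton(d=1, scramble=False).random(n_samples).ravel()
    idx = np.searchsorted(cdf, u)                 # weighted inverse-CDF mapping
    return np.unique(idx)                         # duplicates collapse onto one grid point

if __name__ == "__main__":
    schedule = subrandom_nus_schedule(grid_size=256, n_samples=64)
    print(len(schedule), schedule[:10])
```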

  18. On the improvement of blood sample collection at clinical laboratories

    PubMed Central

    2014-01-01

    Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). The algorithm was implemented in C; it runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters are changed substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
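
    As an illustration of the genetic-algorithm idea, the sketch below evolves a single vehicle's collection order with order crossover, swap mutation and tournament selection to minimise route length. It deliberately ignores the two-hour time windows and vehicle capacity handled by the authors' heuristic, and all coordinates, parameters and function names are hypothetical.

```python
import numpy as np

def route_length(order, dist):
    """Total travel distance of a route that starts and ends at the laboratory (index 0)."""
    path = [0] + list(order) + [0]
    return sum(dist[path[i], path[i + 1]] for i in range(len(path) - 1))

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice from one parent, fill the rest in the other parent's order."""
    n = len(p1)
    a, b = sorted(rng.choice(n, 2, replace=False))
    child = [-1] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i, g in zip([i for i in range(n) if child[i] == -1], fill):
        child[i] = g
    return child

def ga_route(dist, pop_size=80, n_gen=300, p_mut=0.2, seed=0):
    rng = np.random.default_rng(seed)
    points = list(range(1, dist.shape[0]))                   # collection points (0 = lab)
    pop = [list(rng.permutation(points)) for _ in range(pop_size)]
    for _ in range(n_gen):
        costs = np.array([route_length(ind, dist) for ind in pop])
        new_pop = [pop[int(np.argmin(costs))]]               # elitism: keep the best route
        while len(new_pop) < pop_size:
            i, j = rng.choice(pop_size, 2, replace=False)
            parent1 = pop[i] if costs[i] < costs[j] else pop[j]   # tournament selection
            k, l = rng.choice(pop_size, 2, replace=False)
            parent2 = pop[k] if costs[k] < costs[l] else pop[l]
            child = order_crossover(parent1, parent2, rng)
            if rng.random() < p_mut:                          # swap mutation
                a, b = rng.choice(len(child), 2, replace=False)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
    costs = [route_length(ind, dist) for ind in pop]
    return pop[int(np.argmin(costs))], min(costs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 50, size=(20, 2))                     # lab plus 19 hypothetical points
    dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    order, cost = ga_route(dist)
    print("best route cost:", round(cost, 1))
```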

  19. Electrical Nanocontact Between Bismuth Nanowire Edges and Electrodes

    NASA Astrophysics Data System (ADS)

    Murata, Masayuki; Nakamura, Daiki; Hasegawa, Yasuhiro; Komine, Takashi; Uematsu, Daisuke; Nakamura, Shinichiro; Taguchi, Takashi

    2010-09-01

    Three methods for attaching electrodes to a bismuth nanowire sample were investigated. In the first and second methods, thin layers of titanium and copper were deposited by ion plating under vacuum onto the edge surface of individual bismuth nanowire samples that were encapsulated in a quartz template. Good electrical contact between the electrodes and the nanowire was achieved using silver epoxy and conventional solder on the thin-film layers in the first and second methods, respectively. In the third method, a low-melting-point solder was utilized and was also successful in achieving good electrical contact in an air atmosphere. The connection methods showed no difference in terms of resistivity temperature dependence or Seebeck coefficient. The third method has an advantage in that nanocontact is easily achieved; however, diffusion of the solder into the nanowire leads to contamination near the melting point of the solder. In the first and second methods, the thin-film layer enabled electrical contact to be more safely achieved than the direct contact used in the third method, because the thin-film layer prevented diffusion of binder components.

  20. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  1. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  2. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  3. Distribution of volatile organic compounds in soil vapor in the vicinity of a defense fuel supply point, Hanahan, South Carolina

    USGS Publications Warehouse

    Robertson, J.F.; Aelion, C.M.; Vroblesky, D.A.

    1993-01-01

    Two passive soil-vapor sampling techniques were used in the vicinity of a defense fuel supply point in Hanahan, South Carolina, to identify areas of potential contamination of the shallow water table aquifer by volatile organic compounds (VOC's). Both techniques involved the burial of samplers in the vadose zone and the saturated bottom sediments of nearby streams. One method, the empty-tube technique, allowed vapors to pass through a permeable membrane and accumulate inside an inverted empty test tube. A sample was extracted and analyzed on site by using a portable gas chromatograph. As a comparison to this method, an activated-carbon technique was also used in certain areas. This method uses a vapor collector consisting of a test tube containing activated carbon as a sorbent for VOC's.

  4. Sensitivity and Calibration of Non-Destructive Evaluation Method That Uses Neural-Net Processing of Characteristic Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  5. Modeling abundance using hierarchical distance sampling

    USGS Publications Warehouse

    Royle, Andy; Kery, Marc

    2016-01-01

    In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.

  6. Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.

    PubMed

    Graham, B L; Mink, J T; Cotton, D J

    1983-01-01

    It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.

  7. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

    An automated method for tunnel deformation monitoring using high-density point cloud data is presented. First, the 3D point cloud is projected onto the XOY plane, and the projection of the central axis on that plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm. The projection of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the projection plane; Uxoy and Uyoz together form the 3D central axis. Second, the buffer of each cross section is computed with a K-nearest-neighbor algorithm, and the initial cross-sectional point set is quickly constructed by projection. Finally, the cross sections are denoised and the section lines are fitted by iterative ellipse fitting. To improve the accuracy of each cross section, a fine-adjustment step is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in directions from 0 to 360 degrees is calculated. The results show that the cross sections deform from regular circles into flattened circles because of the large pressure at the top of the tunnel.
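
    A small sketch of the cross-section analysis idea: an algebraic least-squares conic fit gives a section centre, and the radius of the section points is then summarised in azimuth bins from 0 to 360 degrees. This is a simplified stand-in for the iterative ellipse fitting and fine adjustment described above, and the synthetic ring data are hypothetical.

```python
import numpy as np

def fit_conic(xy):
    """Algebraic least-squares fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0."""
    x, y = xy[:, 0], xy[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]      # right-singular vector with smallest singular value

def conic_centre(coeffs):
    """Centre of the fitted conic (valid when the conic is an ellipse)."""
    a, b, c, d, e, _ = coeffs
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

def radial_profile(xy, n_bins=72):
    """Mean radius of section points in n_bins azimuth bins around the fitted centre."""
    cx, cy = conic_centre(fit_conic(xy))
    dx, dy = xy[:, 0] - cx, xy[:, 1] - cy
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0
    r = np.hypot(dx, dy)
    bins = (theta // (360.0 / n_bins)).astype(int)
    return np.array([r[bins == b].mean() if np.any(bins == b) else np.nan
                     for b in range(n_bins)])

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 720)
    ring = np.column_stack([2.75 * np.cos(t), 2.70 * np.sin(t)])   # slightly flattened ring
    noisy = ring + np.random.default_rng(0).normal(0, 0.005, ring.shape)
    profile = radial_profile(noisy)
    print(round(profile.min(), 3), round(profile.max(), 3))
```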

  8. Comparison of three methods of sampling trout blood for measurements of hematocrit

    USGS Publications Warehouse

    Steucke, Erwin W.; Schoettger, Richard A.

    1967-01-01

    Trout blood is frequently collected for hematocrit measurements by excising the caudal fin (Snieszko, 1960), but this technique is impractical if valuable fish are to be sampled or if repeated observations are desired. Schiffman (1959) and Snieszko (1960) collected blood from the dorsal aorta and the heart, but these methods are relatively slow and require the preparation of needles and syringes. The use of pointed capillary tubes for cardiac punctures increases the speed of sampling, but body fluids may dilute the blood (Perkins, 1957; Larsen and Snieszko, 1961; and Normandau, 1962). There is a need for sampling methods that are rapid and that neither influence hematological determinations nor harm the fish.

  9. Four-point probe measurements using current probes with voltage feedback to measure electric potentials

    NASA Astrophysics Data System (ADS)

    Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert

    2018-02-01

    We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for ...

  10. Comparative study of some commercial samples of naga bhasma.

    PubMed

    Wadekar, Mrudula; Gogte, Viswas; Khandagale, Prasad; Prabhune, Asmita

    2004-04-01

    Naga bhasma is one of those reputed ayurvedic bhasmas which are claimed to possess some extraordinary medicinal properties. However, identification of a genuine sample of naga bhasma is a challenging problem. Because naga bhasma is at present manufactured by different ayurvedic pharmacies following different methods, these products are not standardised from either a chemical or a structural point of view. Therefore, a comparative study of these samples using modern analytical techniques is important and necessary to understand their current status. In this communication, such a study of naga bhasma from a chemical and structural point of view is reported, using XRD, IR and UV spectroscopy and thermogravimetry.

  11. A classifying method analysis on the number of returns for given pulse of post-earthquake airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang

    2016-11-01

    Compared with remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy with which destroyed buildings are identified. After an earthquake, however, damaged buildings show so many different characteristics that tree points and damaged-building points cannot currently be separated by the most commonly used pre-processing methods. In this study, we analyse the number of returns for a given pulse in point clouds of trees and of damaged buildings and explore ways to distinguish between the two classes. We propose a new method that searches a neighbourhood of a fixed number of points around each point and computes the ratio (R) of neighbourhood points whose number of returns for the given pulse is greater than 1, using this ratio to separate trees from buildings. Point clouds of a typical undamaged building, a collapsed building and a tree were selected as samples, by human-computer interaction, from the airborne LiDAR data acquired after the 2010 MW 7.0 Haiti earthquake. An R value that distinguishes between trees and buildings was obtained from these samples and then applied to test areas. The experimental results show that the proposed method can effectively distinguish building points (undamaged and damaged) from tree points, but is limited in areas where buildings are varied, damage is complex and trees are dense, so the method still needs improvement.
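
    A minimal sketch of the neighbourhood ratio described above, assuming each LiDAR point carries a number-of-returns attribute: for every point, the fraction of its k nearest neighbours with more than one return is computed and thresholded. The neighbourhood size k, the threshold and the function names are assumptions to be calibrated from training samples.

```python
import numpy as np
from scipy.spatial import cKDTree

def multi_return_ratio(xyz, num_returns, k=30):
    """For each point, the fraction of its k nearest neighbours whose 'number of
    returns for the given pulse' exceeds 1 (vegetation tends to give high values)."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k)
    multi = (num_returns > 1).astype(float)
    return multi[idx].mean(axis=1)

def classify_tree_points(xyz, num_returns, r_threshold, k=30):
    """Label points whose local multi-return ratio exceeds a threshold learned from samples."""
    return multi_return_ratio(xyz, num_returns, k=k) > r_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xyz = rng.uniform(0, 10, size=(2000, 3))              # hypothetical point cloud
    num_returns = rng.integers(1, 4, size=2000)           # hypothetical return counts
    r = multi_return_ratio(xyz, num_returns, k=20)
    print("mean neighbourhood ratio:", round(float(r.mean()), 3))
```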

  12. Evaluating different methods used in ethnobotanical and ecological studies to record plant biodiversity

    PubMed Central

    2014-01-01

    Background This study compares the efficiency of identifying the plants in an area of semi-arid Northeast Brazil by methods that a) access the local knowledge used in ethnobotanical studies using semi-structured interviews conducted within the entire community, an inventory interview conducted with two participants using the previously collected vegetation inventory, and a participatory workshop presenting exsiccates and photographs to 32 people and b) inventory the vegetation (phytosociology) in locations with different histories of disturbance using rectangular plots and quadrant points. Methods The proportion of species identified using each method was then compared with Cochran’s Q test. We calculated the use value (UV) of each species using semi-structured interviews; this quantitative index was correlated against values of the vegetation’s structural importance obtained from the sample plot method and point-centered quarter method applied in two areas with different historical usage. The analysis sought to correlate the relative importance of plants to the local community (use value - UV) with the ecological importance of the plants in the vegetation structure (importance value - IV; relative density - RD) by using different sampling methods to analyze the two areas. Results With regard to the methods used for accessing the local knowledge, a difference was observed among the ethnobotanical methods of surveying species (Q = 13.37, df = 2, p = 0.0013): 44 species were identified in the inventory interview, 38 in the participatory workshop and 33 in the semi-structured interviews with the community. There was either no correlation between the UV, relative density (RD) and importance value (IV) of some species, or this correlation was negative. Conclusion It was concluded that the inventory interview was the most efficient method for recording species and their uses, as it allowed more plants to be identified in their original environment. To optimize researchers’ time in future studies, the use of the point-centered quarter method rather than the sample plot method is recommended. PMID:24916833

  13. Research on Rigid Body Motion Tracing in Space based on NX MCD

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang

    2018-03-01

    MCD (Mechatronics Concept Designer) is a module of the SIEMENS industrial design software UG (Unigraphics NX) in which a user can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, the user may wish to see, intuitively, the path of selected points on the moving object. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine and then fits the sampled points with a B-spline curve. In addition, combined with the actual constraints on the rigid bodies, the traditional equal-interval sampling strategy is optimized. The results show that this method satisfies the requirement and makes up for the deficiencies of the traditional sampling method; the user can still edit and model on the resulting 3D curve. The expected result has been achieved.
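
    A minimal sketch of the tracing idea: positions of a chosen point sampled over the simulated motion are fitted with a parametric cubic B-spline and resampled into a smooth 3D curve. This uses SciPy rather than NX MCD, and the sampled poses and function name are hypothetical.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_trace(points, smoothing=0.0, n_eval=200):
    """Fit a parametric cubic B-spline through sampled 3D positions and resample it."""
    pts = np.asarray(points, dtype=float)
    tck, _ = splprep(pts.T, s=smoothing, k=3)      # parametric cubic B-spline
    u = np.linspace(0.0, 1.0, n_eval)
    return np.column_stack(splev(u, tck))          # (n_eval, 3) smooth trace

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 40)              # hypothetical sampled poses (a helix)
    samples = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
    curve = fit_trace(samples)
    print(curve.shape)
```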

  14. Determination of trace inorganic mercury species in water samples by cloud point extraction and UV-vis spectrophotometry.

    PubMed

    Ulusoy, Halil Ibrahim

    2014-01-01

    A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 microg/L, and the RSD for five replicate measurements of 100 microg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.

  15. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.

  16. An evaluation of the bioaccessibility of arsenic in corn and rice samples based on cloud point extraction and hydride generation coupled to atomic fluorescence spectrometry.

    PubMed

    Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor

    2016-08-01

    A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) using o,o-diethyldithiophosphate (DDTP) complex, which was generated from an in vitro extract using polyethylene glycol tert-octylphenyl ether (Triton X-114) as a surfactant prior to its detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step of the organic species prior to HG-AFS detection was included for the accurate quantification of the total As. The limit of detection was 1.34μgkg(-1) and 1.90μgkg(-1) for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing certified reference material ERM BC-211 (rice powder). The corn and rice samples that were analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Hyperspectral microscopic imaging by multiplex coherent anti-Stokes Raman scattering (CARS)

    NASA Astrophysics Data System (ADS)

    Khmaladze, Alexander; Jasensky, Joshua; Zhang, Chi; Han, Xiaofeng; Ding, Jun; Seeley, Emily; Liu, Xinran; Smith, Gary D.; Chen, Zhan

    2011-10-01

    Coherent anti-Stokes Raman scattering (CARS) microscopy is a powerful technique to image the chemical composition of complex samples in biophysics, biology and materials science. CARS is a four-wave mixing process. The application of a spectrally narrow pump beam and a spectrally wide Stokes beam excites multiple Raman transitions, which are probed by a probe beam. This generates a coherent directional CARS signal with several orders of magnitude higher intensity relative to spontaneous Raman scattering. Recent advances in the development of ultrafast lasers, as well as photonic crystal fibers (PCF), enable multiplex CARS. In this study, we employed two scanning imaging methods. In one, the detection is performed by a photo-multiplier tube (PMT) attached to the spectrometer. The acquisition of a series of images, while tuning the wavelengths between images, allows for subsequent reconstruction of spectra at each image point. The second method detects the CARS spectrum at each point with a cooled charge-coupled device (CCD) camera. Coupled with point-by-point scanning, this allows for hyperspectral microscopic imaging. We applied this CARS imaging system to study biological samples such as oocytes.

  18. Complete elliptical ring geometry provides energy and instrument calibration for synchrotron-based two-dimensional X-ray diffraction

    PubMed Central

    Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas

    2013-01-01

    A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840

  19. A comparison of four porewater sampling methods for metal mixtures and dissolved organic carbon and the implications for sediment toxicity evaluations.

    PubMed

    Cleveland, Danielle; Brumbaugh, William G; MacDonald, Donald D

    2017-11-01

    Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater. Environ Toxicol Chem 2017;36:2906-2915. Published 2017 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.

  20. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008

  1. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
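    The resampling-simulation design described above can be illustrated with a short sketch. The following Python fragment is purely illustrative (the 5 m census spacing, the random impact pattern and all variable names are assumptions, not values from the study): it resamples a fine-grained census at increasing intervals and recomputes the two quantities the authors compare, lineal extent and frequency of occurrence.

```python
import numpy as np

def count_occurrences(flags):
    """Number of distinct runs of consecutive True values (separate impact occurrences)."""
    f = np.asarray(flags, dtype=int)
    return int(np.sum(np.diff(np.concatenate(([0], f))) == 1))

def point_sample_estimates(presence, census_spacing_m, interval_m, trail_length_m):
    """Estimate lineal extent and frequency of occurrence from a simulated point sample."""
    step = max(1, int(round(interval_m / census_spacing_m)))
    sampled = presence[::step]
    lineal_extent_m = sampled.mean() * trail_length_m   # length of trail affected
    frequency = count_occurrences(sampled)              # distinct occurrences detected
    return lineal_extent_m, frequency

# Hypothetical census: a 3 km trail recorded every 5 m, with impacts on ~12% of points.
rng = np.random.default_rng(0)
presence = rng.random(600) < 0.12
for interval in (20, 50, 100, 200, 500):
    extent, freq = point_sample_estimates(presence, 5.0, interval, 3000.0)
    print(f"interval {interval:>3} m: extent {extent:7.1f} m, occurrences {freq}")
```

    Coarser intervals barely change the lineal-extent estimate but quickly lose distinct occurrences, which mirrors the pattern reported above.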

  2. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during 2000-2008. Two univariate methods (inverse distance weighting and spline, in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively dense network of point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the spline. For days with extreme or high rainfall, both spatially and quantitatively, the correlation between observed and interpolated estimates was high (r² ≈ 0.6, RMSE ≈ 10 mm), although for low-rainfall days the correlations were poor (r² ≈ 0.1, RMSE ≈ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the amount and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
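    Since inverse distance weighting figures prominently in this comparison, a minimal IDW sketch may help. The gauge coordinates, the power parameter of 2 and the optional search radius below are generic choices for illustration, not settings taken from the study.

```python
import numpy as np

def idw_interpolate(xy_gauges, values, xy_targets, power=2.0, search_radius=None):
    """Inverse-distance-weighted interpolation of point rainfall onto target locations."""
    xy_gauges = np.asarray(xy_gauges, dtype=float)
    values = np.asarray(values, dtype=float)
    estimates = np.empty(len(xy_targets))
    for i, p in enumerate(np.asarray(xy_targets, dtype=float)):
        d = np.hypot(xy_gauges[:, 0] - p[0], xy_gauges[:, 1] - p[1])
        v = values
        if search_radius is not None:
            keep = d <= search_radius          # restrict to neighbours inside the search radius
            d, v = d[keep], v[keep]
        if d.size == 0:
            estimates[i] = np.nan              # no gauge within the search radius
        elif np.any(d == 0):
            estimates[i] = v[np.argmin(d)]     # target coincides with a gauge
        else:
            w = 1.0 / d**power
            estimates[i] = np.sum(w * v) / np.sum(w)
    return estimates

# Hypothetical gauges (km coordinates) and one target location:
gauges = [(0, 0), (10, 0), (0, 10), (12, 9)]
rain_mm = [5.0, 12.0, 0.0, 20.0]
print(idw_interpolate(gauges, rain_mm, [(4, 3)]))
```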

  3. Analysis of titanium content in titanium tetrachloride solution

    NASA Astrophysics Data System (ADS)

    Bi, Xiaoguo; Dong, Yingnan; Li, Shanshan; Guan, Duojiao; Wang, Jianyu; Tang, Meiling

    2018-03-01

Strontium titanate, barium titanate and lead titanate are new functional ceramic materials with good prospects; they exhibit excellent electrical performance, ferroelectric behaviour and temperature-coefficient effects, and titanium tetrachloride is commonly used in the production of such products. In this article, samples of titanium tetrachloride solution are calibrated by three methods: back titration, replacement titration and gravimetric analysis. The results show that the back titration method has several advantages, for example relatively simple operation, easy judgment of the titration end point, and better accuracy and precision of the analytical results, with a relative standard deviation of no more than 0.2%. It is therefore the preferred conventional analysis method for mass production.

  4. A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer

    NASA Astrophysics Data System (ADS)

    Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.

    2014-07-01

The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When a certain threshold carbon load is exceeded, multipoint correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated TC by 22, 36 and 12%, respectively, with the corresponding thresholds being ~ 0, 20 and 25 μgC. For sucrose, however, such a discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction was therefore to use multipoint-corrected data below the determined threshold and single-point results beyond it. The effectiveness of this correction was supported by correlation with optical data.
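    As a rough sketch of the hybrid correction rule described above (which baseline-corrected value triggers the threshold test is an assumption here, as are the function name and example numbers):

```python
def corrected_tc(tc_multipoint_ugc, tc_singlepoint_ugc, threshold_ugc):
    """Hybrid SCCA baseline correction: report the multipoint-corrected total carbon (TC)
    below the threshold carbon load and the single-point result at or above it.
    The multipoint value is used here as the proxy for the carbon load being tested."""
    return tc_multipoint_ugc if tc_multipoint_ugc < threshold_ugc else tc_singlepoint_ugc

# Using the ~20 ugC threshold reported for the IMPlong protocol:
print(corrected_tc(14.2, 15.0, threshold_ugc=20.0))   # below threshold -> multipoint value
print(corrected_tc(31.5, 38.7, threshold_ugc=20.0))   # above threshold -> single-point value
```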

  5. Sampling methods for microbiological analysis of red meat and poultry carcasses.

    PubMed

    Capita, Rosa; Prieto, Miguel; Alonso-Calleja, Carlos

    2004-06-01

Microbiological analysis of carcasses at slaughterhouses is required in the European Union for evaluating the hygienic performance of carcass production processes, as needed for effective hazard analysis and critical control point (HACCP) implementation. The European Union microbial performance standards refer exclusively to the excision method, even though swabbing using the wet/dry technique is also permitted when a correlation between the destructive and nondestructive methods can be established. For practical and economic reasons, the swab technique is the most extensively used carcass surface-sampling method. The main characteristics, advantages, and limitations of the common excision and swabbing methods are described here.

  6. Method and apparatus for analyzing the internal chemistry and compositional variations of materials and devices

    DOEpatents

    Kazmerski, Lawrence L.

    1989-01-01

A method and apparatus is disclosed for obtaining and mapping chemical compositional data for solid devices. It includes a SIMS mass analyzer or similar system capable of being rastered over a surface of the solid to sample the material at a pattern of selected points, as the surface is being eroded away by sputtering or a similar process. The data for each point sampled in a volume of the solid is digitally processed and indexed by element or molecule type, exact spatial location within the volume, and the concentration levels of the detected element or molecule types. This data can then be recalled and displayed for any desired planar view in the volume.

  7. Method and apparatus for analyzing the internal chemistry and compositional variations of materials and devices

    DOEpatents

    Kazmerski, L.L.

    1985-04-30

A method and apparatus is disclosed for obtaining and mapping chemical compositional data for solid devices. It includes a SIMS mass analyzer or similar system capable of being rastered over a surface of the solid to sample the material at a pattern of selected points, as the surface is being eroded away by sputtering or a similar process. The data for each point sampled in a volume of the solid is digitally processed and indexed by element or molecule type, exact spatial location within the volume, and the concentration levels of the detected element or molecule types. This data can then be recalled and displayed for any desired planar view in the volume.
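    The indexing scheme described in the two patent records above can be sketched as a small data structure; the class name, fields and example values are illustrative assumptions only, not part of the patent.

```python
from collections import defaultdict

class CompositionMap:
    """Store SIMS-style compositional data indexed by species and voxel location,
    and recall any planar (constant-depth) view. Names are illustrative only."""

    def __init__(self):
        # species -> {(x, y, z): concentration}
        self._data = defaultdict(dict)

    def add_point(self, species, x, y, z, concentration):
        self._data[species][(x, y, z)] = concentration

    def planar_view(self, species, z):
        """Return {(x, y): concentration} for one sputter-depth plane."""
        return {(x, y): c for (x, y, zz), c in self._data[species].items() if zz == z}

volume = CompositionMap()
volume.add_point("Cu", 0, 0, 0, 1.2e19)
volume.add_point("Cu", 1, 0, 0, 1.1e19)
print(volume.planar_view("Cu", z=0))
```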

  8. Landslide activity as a threat to infrastructure in river valleys - An example from outer Western Carpathians (Poland)

    NASA Astrophysics Data System (ADS)

    Łuszczyńska, Katarzyna; Wistuba, Małgorzata; Malik, Ireneusz

    2017-11-01

Intensive development of the Polish Carpathians increases the scale of landslide risk, so detecting landslide hazards and risks has become an important issue for spatial planning in the area. We applied dendrochronological methods and GIS analysis to better understand landslide activity and the related hazards in the test area (3.75 km²): the Salomonka valley and nearby slopes in the Beskid Żywiecki Mts., Outer Western Carpathians, southern Poland. We applied the eccentricity index of radial tree growth to date past landslide events. The dendrochronological results allowed us to determine the mean frequency of landsliding at each sampling point, which was then interpolated into a map of landslide hazard. In total we took samples at 46 points, sampling 3 coniferous trees at each point. The landslide hazard map shows a medium (23 sampling points) and low (20 sampling points) level of landslide activity for most of the area. The highest level of activity was recorded for the largest landslide. Results of the dendrochronological study suggest that all landslides reaching downslope to the Salomonka valley floor are active. LiDAR-based analysis of relief shows that there is an active coupling between those landslides and the river channel, so channel damming and formation of an episodic lake are probable. The hazard of flooding the valley floor upstream of active landslides should be included in the local spatial planning and crisis management systems.

  9. Characterization of zirconium carbides using electron microscopy, optical anisotropy, Auger depth profiles, X-ray diffraction, and electron density calculated by charge flipping method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthaka Silva, G.W., E-mail: chinthaka.silva@gmail.com; Kercher, Andrew A., E-mail: rokparent@comcast.net; Hunn, John D., E-mail: hunnjd@ornl.gov

    2012-10-15

Samples with five different zirconium carbide compositions (C/Zr molar ratio = 0.84, 0.89, 0.95, 1.05, and 1.17) have been fabricated and studied using a variety of experimental techniques. Each sample was zone refined to ensure that the end product was polycrystalline with a grain size of 10-100 μm. It was found that the lattice parameter was largest for the x=0.89 composition and smallest for the x=1.17 total C/Zr composition, but the variation was not linear; this nonlinearity is possibly explained using electron densities calculated by the charge flipping technique. Among the five samples, the unit cell of the ZrC0.89 sample showed the highest electron density, corresponding to the highest carbon incorporation and the largest lattice parameter. The ZrC0.84 sample showed the lowest carbon incorporation, resulting in a larger number of carbon vacancies and resultant strain. Samples with larger carbon ratios (x=0.95, 1.05, and 1.17) showed a slight decrease in lattice parameter, due to a decrease in electron density. Optical anisotropy measurements suggest that these three samples contained significant amounts of a graphitic carbon phase, not bonded to the Zr atoms. Graphical abstract: Characterization of zirconium carbides using electron microscopy, optical anisotropy, Auger depth profiles, X-ray diffraction, and electron density calculated by the charge flipping method. Highlights: Lattice parameter variation: ZrC0.89 > ZrC0.84 > ZrC0.95 > ZrC1.05 > ZrC1.17. Surface oxygen showed no correlation with the lattice parameter variation. ZrC0.89 had the highest electron density, corresponding to the highest carbon incorporation. The second highest lattice parameter, in ZrC0.84, is attributed to strain. Unit cell electron density order: ZrC0.95 > ZrC1.05 > ZrC1.17.

  10. Recruitment for Occupational Research: Using Injured Workers as the Point of Entry into Workplaces

    PubMed Central

    Koehoorn, Mieke; Trask, Catherine M.; Teschke, Kay

    2013-01-01

    Objective To investigate the feasibility, costs and sample representativeness of a recruitment method that used workers with back injuries as the point of entry into diverse working environments. Methods Workers' compensation claims were used to randomly sample workers from five heavy industries and to recruit their employers for ergonomic assessments of the injured worker and up to 2 co-workers. Results The final study sample included 54 workers from the workers’ compensation registry and 72 co-workers. This sample of 126 workers was based on an initial random sample of 822 workers with a compensation claim, or a ratio of 1 recruited worker to approximately 7 sampled workers. The average recruitment cost was CND$262/injured worker and CND$240/participating worksite including co-workers. The sample was representative of the heavy industry workforce, and was successful in recruiting the self-employed (8.2%), workers from small employers (<20 workers, 38.7%), and workers from diverse working environments (49 worksites, 29 worksite types, and 51 occupations). Conclusions The recruitment rate was low but the cost per participant reasonable and the sample representative of workers in small worksites. Small worksites represent a significant portion of the workforce but are typically underrepresented in occupational research despite having distinct working conditions, exposures and health risks worthy of investigation. PMID:23826387

  11. Assessment of ambient background concentrations of elements in soil using combined survey and open-source data.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2017-02-15

    Understanding ambient background concentrations in soil, at a local scale, is an essential part of environmental risk assessment. Where high resolution geochemical soil surveys have not been undertaken, soil data from alternative sources, such as environmental site assessment reports, can be used to support an understanding of ambient background conditions. Concentrations of metals/metalloids (As, Mn, Ni, Pb and Zn) were extracted from open-source environmental site assessment reports, for soils derived from the Newer Volcanics basalt, of Melbourne, Victoria, Australia. A manual screening method was applied to remove samples that were indicated to be contaminated by point sources and hence not representative of ambient background conditions. The manual screening approach was validated by comparison to data from a targeted background soil survey. Statistical methods for exclusion of contaminated samples from background soil datasets were compared to the manual screening method. The statistical methods tested included the Median plus Two Median Absolute Deviations, the upper whisker of a normal and log transformed Tukey boxplot, the point of inflection on a cumulative frequency plot and the 95th percentile. We have demonstrated that where anomalous sample results cannot be screened using site information, the Median plus Two Median Absolute Deviations is a conservative method for derivation of ambient background upper concentration limits (i.e. expected maximums). The upper whisker of a boxplot and the point of inflection on a cumulative frequency plot, were also considered adequate methods for deriving ambient background upper concentration limits, where the percentage of contaminated samples is <25%. Median ambient background concentrations of metals/metalloids in the Newer Volcanic soils of Melbourne were comparable to ambient background concentrations in Europe and the United States, except for Ni, which was naturally enriched in the basalt-derived soils of Melbourne. Copyright © 2016 Elsevier B.V. All rights reserved.
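    A minimal sketch of the Median plus Two Median Absolute Deviations screening rule compared above, assuming a plain array of soil concentrations (the lead values and variable names are invented for illustration):

```python
import numpy as np

def ambient_background_upper_limit(concentrations, k=2.0):
    """Upper concentration limit as median + k * MAD (k = 2 for the rule described above).
    Samples above this limit are treated as potentially affected by point sources."""
    x = np.asarray(concentrations, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))          # median absolute deviation
    return med + k * mad

# Hypothetical Pb results (mg/kg) with one anomalous, possibly contaminated sample:
pb_mg_kg = np.array([12, 15, 14, 18, 11, 16, 13, 220, 17, 14])
limit = ambient_background_upper_limit(pb_mg_kg)
background = pb_mg_kg[pb_mg_kg <= limit]
print(limit, background)
```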

  12. Viscoelastic Properties of Advanced Polymer Composites for Ballistic Protective Applications

    DTIC Science & Technology

    1994-09-01

[Report excerpt: list of figures and tables, including the fracture surface of the damaged area near the point of penetration, basic mechanical properties of the materials, initial DMA test results, and flexural three-point bend data.] Three-point bend testing was conducted using an Instron 1127 Universal Tester to verify the DMA test method and specimen clamping configuration.

  13. Alternative methods for CYP2D6 phenotyping: comparison of dextromethorphan metabolic ratios from AUC, single point plasma, and urine.

    PubMed

    Chen, Rui; Wang, Haotian; Shi, Jun; Hu, Pei

    2016-05-01

CYP2D6 is a highly polymorphic enzyme. Determining its phenotype before CYP2D6 substrate treatment can avoid dose-dependent adverse events or therapeutic failures. Alternative phenotyping methods of CYP2D6 were compared to evaluate the appropriate and precise time points for phenotyping after single-dose and multiple-dose administration of 30-mg controlled-release (CR) dextromethorphan (DM) and to explore the antimodes for potential sampling methods. This was an open-label, single- and multiple-dose study. 21 subjects were assigned to receive a single dose of CR DM 30 mg orally, followed by a 3-day washout period prior to oral administration of CR DM 30 mg every 12 hours for 6 days. Metabolic ratios (MRs) from AUC∞ after single dosing and from AUC0-12h at steady state were taken as the gold standard. The correlations of metabolic ratios of DM to dextrorphan (MRDM/DX) based on the different phenotyping methods were assessed. Linear regression formulas were derived to calculate the antimodes for potential sampling methods. In the single-dose part of the study, statistically significant correlations were found between MRDM/DX from AUC∞ and from serial plasma points from 1 to 30 hours or from urine (all p-values < 0.001). In the multiple-dose part, statistically significant correlations were found between MRDM/DX from AUC0-12h on day 6 and MRDM/DX from serial plasma points from 0 to 36 hours after the last dose (all p-values < 0.001). Based on the reported urinary antimode and linear regression analysis, the antimodes of AUC and plasma points were derived to profile the trend of antimodes as the drug concentrations changed. MRDM/DX from plasma points correlated well with MRDM/DX from AUC. Plasma points from 1 to 30 hours after a single dose of 30-mg CR DM and any plasma point at steady state after multiple doses of CR DM could potentially be used for phenotyping of CYP2D6.

  14. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  15. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  16. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  17. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  18. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  19. Local Sampling of the Wigner Function at Telecom Wavelength with Loss-Tolerant Detection of Photon Statistics.

    PubMed

    Harder, G; Silberhorn, Ch; Rehacek, J; Hradil, Z; Motka, L; Stoklasa, B; Sánchez-Soto, L L

    2016-04-01

    We report the experimental point-by-point sampling of the Wigner function for nonclassical states created in an ultrafast pulsed type-II parametric down-conversion source. We use a loss-tolerant time-multiplexed detector based on a fiber-optical setup and a pair of photon-number-resolving avalanche photodiodes. By capitalizing on an expedient data-pattern tomography, we assess the properties of the light states with outstanding accuracy. The method allows us to reliably infer the squeezing of genuine two-mode states without any phase reference.

  20. A fast and reliable readout method for quantitative analysis of surface-enhanced Raman scattering nanoprobes on chip surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Hyejin; Jeong, Sinyoung; Ko, Eunbyeol

    2015-05-15

Surface-enhanced Raman scattering techniques have been widely used for bioanalysis due to their high sensitivity and multiplex capacity. However, the point-scanning method using a micro-Raman system, which is the most common method in the literature, has the disadvantage of extremely long measurement times for on-chip immunoassays adopting a large chip area of approximately 1-mm scale and a confocal beam point of ca. 1-μm size. Alternative methods such as a sampled spot scan with high confocality and a large-area scan with enlarged field of view and low confocality have been utilized in order to minimize the measurement time in practice. In this study, we analyzed the two methods with respect to signal-to-noise ratio and sampling-led signal fluctuations to obtain insights into a fast and reliable readout strategy. On this basis, we proposed a methodology for fast and reliable quantitative measurement of the whole chip area. The proposed method adopted a raster scan covering a full area of a 100 μm × 100 μm region as a proof-of-concept experiment while accumulating signals in the CCD detector for a single spectrum per frame. One single scan of 10 s over the 100 μm × 100 μm area yielded much higher sensitivity compared to sampled spot scanning measurements and no signal fluctuations attributable to the sampled spot scan. This readout method is able to serve as one of the key technologies that will bring quantitative multiplexed detection and analysis into practice.

  1. Detection of Bordetella pertussis from Clinical Samples by Culture and End-Point PCR in Malaysian Patients.

    PubMed

    Ting, Tan Xue; Hashim, Rohaidah; Ahmad, Norazah; Abdullah, Khairul Hafizi

    2013-01-01

Pertussis or whooping cough is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents, and adults are the relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from patients ≤3 months old (77.1%) (P < 0.001). There was no significant association between the type of sample collected and end-point PCR results (P > 0.05). Our study showed that the end-point PCR technique was able to pick up more positive cases than the culture method.

  2. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    PubMed

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
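    As a rough illustration of how point counting yields a volume, the following sketch applies a Cavalieri-style estimator (volume ≈ points counted × area represented per point × slice spacing). The grid spacing, slice spacing and counts are invented for the example, not taken from the study.

```python
def stereology_volume_cm3(points_per_slice, grid_spacing_cm, slice_spacing_cm):
    """Cavalieri-style volume estimate from point counting on systematically sampled slices.

    points_per_slice : point counts hitting the structure on each sampled slice
    grid_spacing_cm  : spacing of the point grid, so each point represents grid_spacing**2 cm^2
    slice_spacing_cm : distance between consecutive sampled slices
    """
    area_per_point = grid_spacing_cm ** 2
    return sum(points_per_slice) * area_per_point * slice_spacing_cm

# Hypothetical visceral-fat counts on 8 systematically sampled CT slices:
print(stereology_volume_cm3([14, 18, 22, 25, 23, 19, 15, 9], grid_spacing_cm=1.0, slice_spacing_cm=2.0))
```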

  3. Quantifying the fate of agricultural nitrogen in an unconfined aquifer: Stream-based observations at three measurement scales

    NASA Astrophysics Data System (ADS)

    Gilmore, Troy E.; Genereux, David P.; Solomon, D. Kip; Solder, John E.; Kimball, Briant A.; Mitasova, Helena; Birgand, François

    2016-03-01

    We compared three stream-based sampling methods to study the fate of nitrate in groundwater in a coastal plain watershed: point measurements beneath the streambed, seepage blankets (novel seepage-meter design), and reach mass-balance. The methods gave similar mean groundwater seepage rates into the stream (0.3-0.6 m/d) during two 3-4 day field campaigns despite an order of magnitude difference in stream discharge between the campaigns. At low flow, estimates of flow-weighted mean nitrate concentrations in groundwater discharge ([NO3-]FWM) and nitrate flux from groundwater to the stream decreased with increasing degree of channel influence and measurement scale, i.e., [NO3-]FWM was 654, 561, and 451 µM for point, blanket, and reach mass-balance sampling, respectively. At high flow the trend was reversed, likely because reach mass-balance captured inputs from shallow transient high-nitrate flow paths while point and blanket measurements did not. Point sampling may be better suited to estimating aquifer discharge of nitrate, while reach mass-balance reflects full nitrate inputs into the channel (which at high flow may be more than aquifer discharge due to transient flow paths, and at low flow may be less than aquifer discharge due to channel-based nitrate removal). Modeling dissolved N2 from streambed samples suggested (1) about half of groundwater nitrate was denitrified prior to discharge from the aquifer, and (2) both extent of denitrification and initial nitrate concentration in groundwater (700-1300 µM) were related to land use, suggesting these forms of streambed sampling for groundwater can reveal watershed spatial relations relevant to nitrate contamination and fate in the aquifer.

  4. Options for Robust Airfoil Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Li, Wu

    2002-01-01

    A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.

  5. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Last, the unknown quantities are obtained by minimizing the objective functions, with the boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and similar data. Experimental results for the new surface are given.

  6. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of efficiency of an aged high purity germanium (HPGe) detector for gaseous sources have been presented in the paper. X-ray radiography of the detector has been performed to get detector dimensions for computational purposes. The dead layer thickness of HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken for obtaining energy dependant efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate activity of cover gas sample of a fast reactor.

  7. Rare cancer cell analyzer for whole blood applications: automated nucleic acid purification in a microfluidic disposable card.

    PubMed

    Kokoris, M; Nabavi, M; Lancaster, C; Clemmens, J; Maloney, P; Capadanno, J; Gerdes, J; Battrell, C F

    2005-09-01

    One current challenge facing point-of-care cancer detection is that existing methods make it difficult, time consuming and too costly to (1) collect relevant cell types directly from a patient sample, such as blood and (2) rapidly assay those cell types to determine the presence or absence of a particular type of cancer. We present a proof of principle method for an integrated, sample-to-result, point-of-care detection device that employs microfluidics technology, accepted assays, and a silica membrane for total RNA purification on a disposable, credit card sized laboratory-on-card ('lab card") device in which results are obtained in minutes. Both yield and quality of on-card purified total RNA, as determined by both LightCycler and standard reverse transcriptase amplification of G6PDH and BCR-ABL transcripts, were found to be better than or equal to accepted standard purification methods.

  8. Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.

    PubMed

    Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J

    2015-02-01

    In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS analysis. We developed a LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (1-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found stable at ambient temperature and 32 °C for 1 week and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis and showed the capability of the DBS method to be used as an alternative for creatinine plasma measurement.

  9. Impact of Different Creatinine Measurement Methods on Liver Transplant Allocation

    PubMed Central

    Kaiser, Thorsten; Kinny-Köster, Benedict; Bartels, Michael; Parthaune, Tanja; Schmidt, Michael; Thiery, Joachim

    2014-01-01

    Introduction The model for end-stage liver disease (MELD) score is used in many countries to prioritize organ allocation for the majority of patients who require orthotopic liver transplantation. This score is calculated based on the following laboratory parameters: creatinine, bilirubin and the international normalized ratio (INR). Consequently, high measurement accuracy is essential for equitable and fair organ allocation. For serum creatinine measurements, the Jaffé method and enzymatic detection are well-established routine diagnostic tests. Methods A total of 1,013 samples from 445 patients on the waiting list or in evaluation for liver transplantation were measured using both creatinine methods from November 2012 to September 2013 at the university hospital Leipzig, Germany. The measurements were performed in parallel according to the manufacturer’s instructions after the samples arrived at the institute of laboratory medicine. Patients who had required renal replacement therapy twice in the previous week were excluded from analyses. Results Despite the good correlation between the results of both creatinine quantification methods, relevant differences were observed, which led to different MELD scores. The Jaffé measurement led to greater MELD score in 163/1,013 (16.1%) samples with differences of up to 4 points in one patient, whereas differences of up to 2 points were identified in 15/1,013 (1.5%) samples using the enzymatic assay. Overall, 50/152 (32.9%) patients with MELD scores >20 had higher scores when the Jaffé method was used. Discussion Using the Jaffé method to measure creatinine levels in samples from patients who require liver transplantation may lead to a systematic preference in organ allocation. In this study, the differences were particularly pronounced in samples with MELD scores >20, which has clinical relevance in the context of urgency of transplantation. These data suggest that official recommendations are needed to determine which laboratory diagnostic methods should be used when calculating MELD scores. PMID:24587188
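    To make the sensitivity to the creatinine input concrete, here is a small sketch of the commonly cited UNOS MELD formulation (creatinine and bilirubin in mg/dL, values below 1.0 floored to 1.0, creatinine capped at 4.0 mg/dL). This formula is background knowledge rather than quoted from the study above, so treat the constants and the example values as assumptions.

```python
import math

def meld_score(creatinine_mg_dl, bilirubin_mg_dl, inr):
    """MELD score as commonly formulated (UNOS): inputs < 1.0 are set to 1.0,
    creatinine is capped at 4.0 mg/dL, and the result is rounded to an integer."""
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    score = 9.57 * math.log(cr) + 3.78 * math.log(bili) + 11.2 * math.log(inr) + 6.43
    return round(score)

# A modest creatinine difference between assays can shift the score by a couple of points:
print(meld_score(1.3, 4.5, 1.8))   # e.g. enzymatic creatinine result
print(meld_score(1.6, 4.5, 1.8))   # e.g. Jaffé result, about 2 points higher
```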

  10. Ultra-thin resin embedding method for scanning electron microscopy of individual cells on high and low aspect ratio 3D nanostructures.

    PubMed

    Belu, A; Schnitker, J; Bertazzo, S; Neumann, E; Mayer, D; Offenhäusser, A; Santoro, F

    2016-07-01

    The preparation of biological cells for either scanning or transmission electron microscopy requires a complex process of fixation, dehydration and drying. Critical point drying is commonly used for samples investigated with a scanning electron beam, whereas resin-infiltration is typically used for transmission electron microscopy. Critical point drying may cause cracks at the cellular surface and a sponge-like morphology of nondistinguishable intracellular compartments. Resin-infiltrated biological samples result in a solid block of resin, which can be further processed by mechanical sectioning, however that does not allow a top view examination of small cell-cell and cell-surface contacts. Here, we propose a method for removing resin excess on biological samples before effective polymerization. In this way the cells result to be embedded in an ultra-thin layer of epoxy resin. This novel method highlights in contrast to standard methods the imaging of individual cells not only on nanostructured planar surfaces but also on topologically challenging substrates with high aspect ratio three-dimensional features by scanning electron microscopy. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  11. Measurement of Density, Sound Velocity, Surface Tension, and Viscosity of Freely Suspended Supercooled Liquids

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.

    1995-01-01

Non-contact methods have been implemented in conjunction with levitation techniques to carry out the measurement of the macroscopic properties of liquids significantly cooled below their nominal melting point. Free suspension of the sample and remote methods allow the deep excursion into the metastable liquid state and the determination of its thermophysical properties. We used this approach to investigate common substances such as water, o-terphenyl and succinonitrile, as well as higher temperature melts such as molten indium, aluminum and other metals. Although these techniques have thus far involved ultrasonic, electromagnetic, and more recently electrostatic levitation, we restrict our attention to ultrasonic methods in this paper. The resulting magnitude of maximum thermal supercooling achieved has ranged between 10 and 15% of the absolute temperature of the melting point for the materials mentioned above. The physical property measurement methods have been mostly novel approaches, and the typical accuracy achieved has not yet matched that of standard equivalent techniques involving contained samples and invasive probing. They are currently being refined, however, as the levitation techniques become more widespread, and as we gain a better understanding of the physics of levitated liquid samples.

  12. Measurement of density, sound velocity, surface tension, and viscosity of freely suspended supercooled liquids

    NASA Astrophysics Data System (ADS)

    Trinh, E. H.; Ohsaka, K.

    1995-03-01

Noncontact methods have been implemented in conjunction with levitation techniques to carry out the measurement of the macroscopic properties of liquids significantly cooled below their nominal melting point. Free suspension of the sample and remote methods allow the deep excursion into the metastable liquid state and the determination of its thermophysical properties. We used this approach to investigate common substances such as water, o-terphenyl and succinonitrile, as well as higher temperature melts such as molten indium, aluminum, and other metals. Although these techniques have thus far involved ultrasonic, electromagnetic, and more recently electrostatic levitation, we restrict our attention to ultrasonic methods in this paper. The resulting magnitude of maximum thermal supercooling achieved has ranged between 10% and 15% of the absolute temperature of the melting point for the materials mentioned above. The methods for measuring the physical properties have been mostly novel approaches, and the typical accuracy achieved has not yet matched that of the standard equivalent techniques involving contained samples and invasive probing. They are currently being refined, however, as the levitation techniques become more widespread and as we gain a better understanding of the physics of levitated liquid samples.

  13. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  14. 40 CFR Appendix E to Subpart E of... - Interim Method of the Determination of Asbestos in Bulk Insulation Samples

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... characteristics of anisotropic particles. Quantitative analysis involves the use of point counting. Point counting... 0.004. • Refractive Index Liquids for Dispersion Staining: high-dispersion series, 1.550, 1.605, 1... hand. Repeat the series. Collect the dispersed solids by centrifugation at 1000 rpm for 5 minutes. Wash...

  15. 40 CFR Appendix E to Subpart E of... - Interim Method of the Determination of Asbestos in Bulk Insulation Samples

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... characteristics of anisotropic particles. Quantitative analysis involves the use of point counting. Point counting... 0.004. • Refractive Index Liquids for Dispersion Staining: high-dispersion series, 1.550, 1.605, 1... hand. Repeat the series. Collect the dispersed solids by centrifugation at 1000 rpm for 5 minutes. Wash...

  16. 40 CFR Appendix E to Subpart E of... - Interim Method of the Determination of Asbestos in Bulk Insulation Samples

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... characteristics of anisotropic particles. Quantitative analysis involves the use of point counting. Point counting... 0.004. • Refractive Index Liquids for Dispersion Staining: high-dispersion series, 1.550, 1.605, 1... hand. Repeat the series. Collect the dispersed solids by centrifugation at 1000 rpm for 5 minutes. Wash...

  17. 40 CFR Appendix E to Subpart E of... - Interim Method of the Determination of Asbestos in Bulk Insulation Samples

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... characteristics of anisotropic particles. Quantitative analysis involves the use of point counting. Point counting... 0.004. • Refractive Index Liquids for Dispersion Staining: high-dispersion series, 1.550, 1.605, 1... hand. Repeat the series. Collect the dispersed solids by centrifugation at 1000 rpm for 5 minutes. Wash...

  18. 40 CFR Appendix E to Subpart E of... - Interim Method of the Determination of Asbestos in Bulk Insulation Samples

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... characteristics of anisotropic particles. Quantitative analysis involves the use of point counting. Point counting... 0.004. • Refractive Index Liquids for Dispersion Staining: high-dispersion series, 1.550, 1.605, 1... hand. Repeat the series. Collect the dispersed solids by centrifugation at 1000 rpm for 5 minutes. Wash...

  19. Radar Image Simulation: Validation of the Point Scattering Method. Volume 2

    DTIC Science & Technology

    1977-09-01

the Engineer Topographic Laboratory (ETL), Fort Belvoir, Virginia. This Radar Simulation Study was performed to validate the point scattering radar... For radar, the number of independent samples N in a given resolution cell is given by equation (16) of the report, where θ is the radar incidence angle and w ...

  20. High resolution analysis of soil elements with laser-induced breakdown

    DOEpatents

    Ebinger, Michael H [Santa Fe, NM; Harris, Ronny D [Los Alamos, NM

    2010-04-06

    The invention is a system and method of detecting a concentration of an element in a soil sample wherein an opening or slot is formed in a container that supports a soil sample that was extracted from the ground whereupon at least a length of the soil sample is exposed via the opening. At each of a plurality of points along the exposed length thereof, the soil sample is ablated whereupon a plasma is formed that emits light characteristic of the elemental composition of the ablated soil sample. Each instance of emitted light is separated according to its wavelength and for at least one of the wavelengths a corresponding data value related to the intensity of the light is determined. As a function of each data value a concentration of an element at the corresponding point along the length of the soil core sample is determined.
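    A schematic sketch of the final calibration step described in this record, assuming a previously established linear relation between emission-line intensity and concentration; the line label, calibration constants and intensity values are hypothetical and only illustrate the mapping from data values to a concentration profile along the core.

```python
# Sketch: turn emission-line intensities measured along a soil core into a concentration profile.
# Assumes a previously established linear calibration, concentration = slope * intensity + intercept.

CALIBRATION = {"Pb_405.8nm": (0.042, 0.1)}   # hypothetical (slope in mg/kg per count, intercept in mg/kg)

def concentration_profile(line, intensities_along_core):
    slope, intercept = CALIBRATION[line]
    return [slope * i + intercept for i in intensities_along_core]

# Hypothetical background-corrected peak intensities at points spaced along the exposed core:
intensities = [210, 260, 540, 980, 450, 240]
print(concentration_profile("Pb_405.8nm", intensities))
```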

  1. Alternative Methods for Estimating Plane Parameters Based on a Point Cloud

    NASA Astrophysics Data System (ADS)

    Stryczek, Roman

    2017-12-01

Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements performed with industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, the presence of a number of points that differ from the reference model, and excessive errors that must be eliminated from the analysis. To obtain, from the points contained in the cloud, vector information that describes the reference model, the data obtained during a measurement must be subjected to appropriate processing. The present paper analyses the suitability of methods known as RANdom SAmple Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
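    A minimal RANSAC plane-fitting sketch in the spirit of the extraction step described above; the inlier tolerance, iteration count and the synthetic point cloud are arbitrary choices, not parameters from the paper.

```python
import numpy as np

def ransac_plane(points, n_iter=200, inlier_tol=0.5, rng=np.random.default_rng(0)):
    """Fit a plane to a noisy point cloud with RANSAC.
    Returns (unit normal, a point on the plane, boolean inlier mask)."""
    best_inliers = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, try again
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_normal, best_point = inliers, normal, sample[0]
    return best_normal, best_point, best_inliers

# Synthetic test: a tilted plane with measurement noise plus a few gross outliers.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, (500, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.2, 500)
cloud = np.column_stack([xy, z])
cloud[:20, 2] += 30                          # outliers far from the reference plane
normal, origin, mask = ransac_plane(cloud)
print(normal, mask.sum(), "inliers")
```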

  2. Microwave-assisted hydrothermal synthesis of marigold-like ZnIn{sub 2}S{sub 4} microspheres and their visible light photocatalytic activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Zhixin, E-mail: czx@fzu.edu.cn; Analysis and Test Center, Fuzhou University, Fuzhou 350002; Li Danzhen

Marigold-like ZnIn2S4 microspheres were synthesized by a microwave-assisted hydrothermal method at temperatures ranging from 80 to 195 °C. X-ray diffraction, X-ray photoelectron spectroscopy, nitrogen sorption analysis, UV-visible spectroscopy, scanning electron microscopy and transmission electron microscopy were used to characterize the products. It was found that the crystallographic structure and optical properties of the products synthesized at different temperatures were almost the same. The degradation of methyl orange (MO) under visible light irradiation was used as a probe reaction to investigate the photocatalytic activity of the as-prepared ZnIn2S4, which shows that the sample synthesized at 195 °C has the best photocatalytic activity for MO degradation. In addition, the photocatalytic activities of all the samples prepared by the microwave-assisted hydrothermal method are better than those prepared by a normal hydrothermal method, which could be attributed to the formation of more defect sites during the microwave-assisted hydrothermal treatment. Graphical abstract: Marigold-like ZnIn2S4 microspheres were synthesized by a fast microwave-assisted hydrothermal method at 80-195 °C with a very short reaction time of 10 min. The as-prepared ZnIn2S4 sample can be used as a visible-light photocatalyst for degradation of organic dyes. Highlights: ZnIn2S4 microspheres were synthesized by the microwave-assisted hydrothermal method. The crystal structure and optical properties of the products were almost the same. Increasing the temperature yields a higher surface area due to the bubbling effect. The ZnIn2S4 synthesized at 195 °C shows the best visible-light catalytic activity for MO.

  3. Digital ac monitor

    DOEpatents

    Hart, George W.; Kern, Jr., Edward C.

    1987-06-09

    An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples and using the stored digitized current and voltage samples to calculate a variety of electrical parameters; some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer.

  4. Digital ac monitor

    DOEpatents

    Hart, G.W.; Kern, E.C. Jr.

    1987-06-09

    An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples and using the stored digitized current and voltage samples to calculate a variety of electrical parameters; some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer. 24 figs.
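    A rough sketch of the frequency measurement described in these two records, timing three cycles of the voltage waveform between upward zero crossings; the sample rate, array names and synthetic waveform are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def frequency_from_zero_crossings(voltage, sample_rate_hz, n_cycles=3):
    """Estimate line frequency by timing n_cycles of the voltage waveform between
    upward zero crossings, in the spirit of the monitor described above."""
    v = np.asarray(voltage, dtype=float)
    upward = np.flatnonzero((v[:-1] < 0) & (v[1:] >= 0))   # samples just before upward crossings
    if len(upward) < n_cycles + 1:
        raise ValueError("not enough cycles captured")
    samples_for_n_cycles = upward[n_cycles] - upward[0]
    return n_cycles * sample_rate_hz / samples_for_n_cycles

# Synthetic 50 Hz waveform sampled at 10 kHz for 0.2 s:
t = np.arange(0, 0.2, 1e-4)
v = 325 * np.sin(2 * np.pi * 50 * t)
print(frequency_from_zero_crossings(v, sample_rate_hz=1e4))   # ~50.0
```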

  5. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection.

    PubMed

    Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz

    2015-01-01

In this paper, a simple and cost-effective method was developed for extraction and pre-concentration of carmine in food samples using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extracting solvent. The effects of the main parameters such as solution pH, surfactant and salt concentrations, incubation time and temperature were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Solutions to inverse plume in a crosswind problem using a predictor - corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions are in turn utilized by the predictor step to obtain the plume strength. Finally, the same interpolation functions, with the corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.

  7. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
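
    The D-optimality criterion described above can be illustrated with a short sketch: a candidate subset of time points is scored by det(JᵀJ), where J is the sensitivity (Jacobian) matrix of the decay model with respect to its parameters. The bi-exponential model, parameter values, and exhaustive search below are illustrative assumptions, not the authors' exact FLIM-FRET model or selection procedure.

    ```python
    # Hedged sketch of D-optimal selection of temporal sampling points.
    # The bi-exponential decay model and parameter values are assumptions.
    import numpy as np
    from itertools import combinations

    def decay(t, a_q, tau_q, tau_u):
        """Quenched + unquenched donor decay; a_q is the quenched fraction."""
        return a_q * np.exp(-t / tau_q) + (1.0 - a_q) * np.exp(-t / tau_u)

    def sensitivity_matrix(t, theta, eps=1e-6):
        """Numerical Jacobian of the model with respect to the parameters."""
        theta = np.asarray(theta, dtype=float)
        cols = []
        for i in range(theta.size):
            d = np.zeros_like(theta); d[i] = eps
            cols.append((decay(t, *(theta + d)) - decay(t, *(theta - d))) / (2 * eps))
        return np.column_stack(cols)

    def d_optimal_subset(t_all, theta, k):
        """Exhaustively pick the k time points maximizing det(J^T J)."""
        best, best_det = None, -np.inf
        for idx in combinations(range(len(t_all)), k):
            J = sensitivity_matrix(t_all[list(idx)], theta)
            det = np.linalg.det(J.T @ J)
            if det > best_det:
                best, best_det = idx, det
        return np.array(best)

    t_grid = np.linspace(0.1, 10.0, 30)     # candidate gate times (ns), assumed
    theta0 = (0.4, 0.5, 2.5)                # a_q, tau_q, tau_u (assumed values)
    print(t_grid[d_optimal_subset(t_grid, theta0, k=4)])
    ```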

  8. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
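
    The gridding step described above, inverse distance weighting with the local support of the five closest neighbours, can be sketched as follows; the power parameter p = 2 and the toy data are assumptions of this example.

    ```python
    # Minimal sketch of IDW interpolation with the five closest neighbours.
    # The power parameter p = 2 is an assumption; the paper does not fix it here.
    import numpy as np
    from scipy.spatial import cKDTree

    def idw_interpolate(xy_known, z_known, xy_query, k=5, p=2.0):
        tree = cKDTree(xy_known)
        dist, idx = tree.query(xy_query, k=k)
        dist = np.maximum(dist, 1e-12)            # avoid division by zero
        w = 1.0 / dist**p
        return np.sum(w * z_known[idx], axis=1) / np.sum(w, axis=1)

    # Toy example: interpolate elevations at two new points.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 200, size=(500, 2))       # LiDAR ground points (m)
    z = 100 + 0.05 * pts[:, 0] + rng.normal(0, 0.1, 500)
    print(idw_interpolate(pts, z, np.array([[50.0, 50.0], [150.0, 120.0]])))
    ```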

  9. Physiological and Pathological Impact of Blood Sampling by Retro-Bulbar Sinus Puncture and Facial Vein Phlebotomy in Laboratory Mice

    PubMed Central

    Holst, Birgitte; Hau, Jann; Rozell, Björn; Abelson, Klas Stig Peter

    2014-01-01

    Retro-bulbar sinus puncture and facial vein phlebotomy are two widely used methods for blood sampling in laboratory mice. However, the animal welfare implications associated with these techniques are currently debated, and the possible physiological and pathological implications of blood sampling using these methods have been sparsely investigated. Therefore, this study was conducted to assess and compare the impacts of blood sampling by retro-bulbar sinus puncture and facial vein phlebotomy. Blood was obtained from either the retro-bulbar sinus or the facial vein from male C57BL/6J mice at two time points, and the samples were analyzed for plasma corticosterone. Body weights were measured on the day of blood sampling and the day after blood sampling, and the food consumption was recorded automatically during the 24 hours post-procedure. At the end of the study, cheeks and orbital regions were collected for histopathological analysis to assess the degree of tissue trauma. Mice subjected to facial vein phlebotomy had significantly elevated plasma corticosterone levels at both time points, in contrast to mice subjected to retro-bulbar sinus puncture, which did not. Both groups of sampled mice lost weight following blood sampling, but the body weight loss was higher in mice subjected to facial vein phlebotomy. The food consumption was not significantly different between the two groups. At gross necropsy, subcutaneous hematomas were found in both groups, and the histopathological analyses revealed extensive tissue trauma after both facial vein phlebotomy and retro-bulbar sinus puncture. This study demonstrates that both blood sampling methods have a considerable impact on the animals' physiological condition, which should be considered whenever blood samples are obtained. PMID:25426941

  10. Evaluating a hybrid three-dimensional metrology system: merging data from optical and touch probe devices

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2011-08-01

    In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS), a hybrid metrology system comprising both optical and touch probe devices has been assembled. A unique requirement must be met: to identify the interface, typically obscured in samples of concern, between the "external surface area upper" (ESAU) and the sole without physically destroying the sample. The sample outer surface is determined by discrete point cloud coordinates obtained using laser scanner optical measurements. Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM). That surface is similarly defined by point cloud data. Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame. Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional area calculations to determine the percentage of materials present. With a draft method in place, and first-level method validation underway, we examine the transformation of the two dissimilar data sets into the single, common reference frame. We will also consider the six previously identified potential error factors versus the method process. This paper reports our ongoing work and discusses our findings to date.

  11. New method of paired thyrotropin assay as a screening test for neonatal hypothyroidism. [/sup 125/I tracer technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyai, K.; Oura, T.; Kawashima, M.

    1978-11-01

    A simple and reliable method of paired TSH assay was developed and used in screening for neonatal primary hypothyroidism. In this method, a paired assay is first done. Equal parts of the extracts of dried blood spots on filter paper (9 mm diameter) from two infants 4 to 7 days old are combined and assayed for TSH by double antibody RIA. If the value obtained is over the cut-off point, the extracts are assayed separately for TSH in a second assay to identify the abnormal sample. Two systems, A and B, with different cut-off points were tested. On the basis of reference blood samples (serum levels of TSH, 80 μU/ml in system A and 40 μU/ml in system B), the cut-off point was selected as follows: upper 5 (A) or 4 (B) percentile in the paired assay and values of reference blood samples in the second individual assay. Four cases (2 in A and 2 in B) of neonatal primary hypothyroidism were found among 25 infants (23 in A and 2 in B) who were recalled from a general population of 41,400 infants (24,200 in A and 17,200 in B) by 22,700 assays. This paired TSH assay system saves labor and expense for screening neonatal hypothyroidism.
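
    The two-stage decision logic of the paired assay can be sketched as below; the assay function and cut-off values are placeholders standing in for the RIA measurement and the system A/B cut-offs, not part of the published method.

    ```python
    # Hedged sketch of the two-stage paired (pooled) screening logic described
    # above. assay_tsh() is a placeholder for the double-antibody RIA result of
    # a pooled or individual extract; the cut-off values are illustrative only.
    def screen_pair(extract_a, extract_b, assay_tsh,
                    pool_cutoff, individual_cutoff):
        """Return the list of samples flagged as presumptive hypothyroid."""
        pooled_value = assay_tsh([extract_a, extract_b])   # first (paired) assay
        if pooled_value <= pool_cutoff:
            return []                                      # pair passes screening
        flagged = []
        for extract in (extract_a, extract_b):             # second, individual assay
            if assay_tsh([extract]) > individual_cutoff:
                flagged.append(extract)
        return flagged

    # Toy usage with a fake assay that just averages known TSH levels.
    fake_assay = lambda extracts: sum(e["tsh"] for e in extracts) / len(extracts)
    a = {"id": "infant-1", "tsh": 5.0}
    b = {"id": "infant-2", "tsh": 120.0}
    print([e["id"] for e in screen_pair(a, b, fake_assay, 40.0, 80.0)])
    ```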

  12. Application of dual-cloud point extraction for the trace levels of copper in serum of different viral hepatitis patients by flame atomic absorption spectrometry: A multivariate study

    NASA Astrophysics Data System (ADS)

    Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal

    2014-12-01

    An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to its coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the elemental ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage and finally determined by FAAS using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analyzing Cu2+ in a certified reference serum sample (CRM) by both d-CPE and the conventional CPE procedure. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.

  13. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.

    PubMed

    Li, Jilong; Cheng, Jianlin

    2016-05-10

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, each of which is represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions of the Cα atoms of residues whose positions are uncertain from this distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
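
    The simulated-annealing accept/reject step that MTMG uses to resample uncertain Cα positions can be sketched generically as follows; the clash-count energy, proposal covariance, and cooling schedule are illustrative assumptions rather than the published implementation.

    ```python
    # Hedged sketch of a simulated-annealing accept/reject loop of the kind
    # MTMG uses to resample uncertain C-alpha positions. The clash-count
    # "energy", the proposal distribution and the cooling schedule are all
    # illustrative assumptions.
    import numpy as np

    def clash_energy(coords, min_dist=3.5):
        """Count pairs of C-alpha atoms closer than min_dist angstroms."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return int(np.sum(np.triu(d < min_dist, k=1)))

    def anneal(coords, uncertain, cov, n_steps=2000, t0=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        coords = coords.copy()
        energy = clash_energy(coords)
        for step in range(n_steps):
            temperature = t0 * (1.0 - step / n_steps) + 1e-3
            i = rng.choice(uncertain)                 # pick an uncertain residue
            proposal = coords.copy()
            proposal[i] = rng.multivariate_normal(coords[i], cov)
            new_energy = clash_energy(proposal)
            # Metropolis criterion: accept improvements, sometimes accept worse moves.
            if new_energy <= energy or rng.random() < np.exp((energy - new_energy) / temperature):
                coords, energy = proposal, new_energy
        return coords, energy

    ca = np.cumsum(np.full((30, 3), 3.8 / np.sqrt(3)), axis=0)  # toy extended chain
    ca[10:15] += 0.5                                            # perturb a segment
    print(anneal(ca, uncertain=np.arange(10, 15), cov=0.25 * np.eye(3))[1])
    ```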

  14. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling

    PubMed Central

    Li, Jilong; Cheng, Jianlin

    2016-01-01

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, each of which is represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions of the Cα atoms of residues whose positions are uncertain from this distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96–6.37% and 2.42–5.19% on the three datasets over using single templates. MTMG’s performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html. PMID:27161489

  15. Reviving common standards in point-count surveys for broad inference across studies

    USGS Publications Warehouse

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.

  16. Cloud point extraction thermospray flame quartz furnace atomic absorption spectrometry for determination of ultratrace cadmium in water and urine

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Zhang, Yunchang; Lv, Yi; Hou, Xiandeng

    2006-12-01

    A simple, low cost and highly sensitive method based on cloud point extraction (CPE) for separation/preconcentration and thermospray flame quartz furnace atomic absorption spectrometry was proposed for the determination of ultratrace cadmium in water and urine samples. The analytical procedure involved the formation of analyte-entrapped surfactant micelles by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution and a Triton X-114 solution. When the temperature of the system was higher than the cloud point of Triton X-114, the complex of cadmium-PDC entered the surfactant-rich phase and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, the limit of detection was 0.04 μg/L for cadmium with a sample volume of 10 mL. The analytical results of cadmium in water and urine samples agreed well with those by ICP-MS.

  17. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples

    NASA Astrophysics Data System (ADS)

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-01

    A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on micro-cloud point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47, and 28.36 for Malachite green, Crystal violet, and Rhodamine B, respectively. Limits of detection were between 2.2 and 5.1 μg/L.

  18. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples.

    PubMed

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-05

    A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on micro-cloud point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47, and 28.36 for Malachite green, Crystal violet, and Rhodamine B, respectively. Limits of detection were between 2.2 and 5.1 μg/L. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained using principal component analysis. Then a feature descriptor is computed for each key point, consisting of 21 elements based on multiscale normal vectors and curvatures. Correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source and target point clouds. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better robustness to noise.
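
    The final step described above, recovering the rigid transformation from optimized correspondences by singular value decomposition, follows the standard Kabsch solution; the sketch below is a generic version of that step with made-up data, not the authors' code.

    ```python
    # Sketch of the SVD step: given matched key points from the source and
    # target clouds, recover the rigid rotation R and translation t
    # (the standard Kabsch solution).
    import numpy as np

    def rigid_transform_from_correspondences(src, dst):
        """Return R (3x3) and t (3,) such that R @ src_i + t ~= dst_i."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Toy check: recover a known rotation about z and a translation.
    rng = np.random.default_rng(1)
    src = rng.normal(size=(100, 3))
    theta = np.deg2rad(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
    R_est, t_est = rigid_transform_from_correspondences(src, dst)
    print(np.allclose(R_est, R_true), np.round(t_est, 3))
    ```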

  20. Application of the Optimized Summed Scored Attributes Method to Sex Estimation in Asian Crania.

    PubMed

    Tallman, Sean D; Go, Matthew C

    2018-05-01

    The optimized summed scored attributes (OSSA) method was recently introduced and validated for nonmetric ancestry estimation between American Black and White individuals. The method proceeds by scoring, dichotomizing, and subsequently summing ordinal morphoscopic trait scores to maximize between-group differences. This study tests the applicability of the OSSA method for sex estimation using five cranial traits, given the methodological similarities between classifying sex and ancestry. A large sample of documented crania from Japan and Thailand (n = 744 males, 320 females) is used to develop a heuristically selected OSSA sectioning point of ≤1 separating males and females. This sectioning point is validated using a holdout sample of Japanese, Thai, and Filipino (n = 178 males, 82 females) individuals. The results indicate a general correct classification rate of 82% using all five traits, and 81% when excluding the mental eminence. Designating an OSSA score of 2 as indeterminate is recommended. © 2017 American Academy of Forensic Sciences.
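
    A hedged sketch of the OSSA workflow as described: each ordinal trait score is dichotomized, the binary scores are summed, and the sum is compared with the sectioning point of ≤1. The per-trait dichotomization thresholds and the convention that sums of 1 or less indicate females are assumptions of this illustration, not the published trait-specific rules.

    ```python
    # Hedged OSSA-style sketch: dichotomize ordinal trait scores, sum the
    # binary scores, and classify against the sectioning point. Thresholds
    # and the female/male convention below are illustrative assumptions.
    OSSA_THRESHOLDS = {        # trait: minimum ordinal score counted as "male-like"
        "nuchal_crest": 3,
        "mastoid_process": 3,
        "supraorbital_margin": 3,
        "glabella": 3,
        "mental_eminence": 3,
    }

    def ossa_sex_estimate(trait_scores, sectioning_point=1):
        """trait_scores: dict of ordinal scores (1-5) keyed by trait name."""
        ossa = sum(
            1 if trait_scores[trait] >= threshold else 0
            for trait, threshold in OSSA_THRESHOLDS.items()
            if trait in trait_scores             # mental eminence may be excluded
        )
        return ("female" if ossa <= sectioning_point else "male"), ossa

    print(ossa_sex_estimate({"nuchal_crest": 2, "mastoid_process": 3,
                             "supraorbital_margin": 1, "glabella": 2,
                             "mental_eminence": 1}))
    ```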

  1. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
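
    The recommended fit to the cumulative distribution by maximum likelihood can be sketched as interval-censored likelihood maximization, in which the probability mass falling inside each sampling interval is taken from the candidate distribution's CDF. The lognormal family, interval bounds, and counts below are invented for illustration.

    ```python
    # Hedged sketch of maximum-likelihood fitting of a parametric distribution
    # (lognormal here) to interval-censored retention-time counts, i.e. fitting
    # through the cumulative distribution rather than single points per interval.
    import numpy as np
    from scipy import stats, optimize

    # Sampling intervals (hours) and number of propagules recovered in each (toy data).
    bounds = np.array([(0, 1), (1, 2), (2, 4), (4, 8), (8, 24)], dtype=float)
    counts = np.array([40, 180, 160, 90, 30])

    def neg_log_likelihood(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        cdf = lambda t: stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))
        # Probability mass that falls inside each sampling interval.
        p = np.clip(cdf(bounds[:, 1]) - cdf(bounds[:, 0]), 1e-12, 1.0)
        return -np.sum(counts * np.log(p))

    result = optimize.minimize(neg_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
    mu_hat, sigma_hat = result.x
    print(f"lognormal mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
    ```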

  2. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution but also on the topography of the area, the physical properties of the soil, and vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of surface-layer SM to be estimated better than point measurements, but they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, numbers, and distributions of measuring points at the scales of an arable field, a wetland, and a commune (areas of 0.01, 1 and 140 km2, respectively), under different SM conditions. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas parameters describing the dispersion responded more strongly. Spatial analysis showed autocorrelation of SM whose correlation lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed differing anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area were reflected in the parameters characterizing the SM distribution. This suggests the need to use at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each; the standard error and the range of spatial variability change little as the number of samples increases above this figure. The gravimetric method gives a more varied distribution of SM than that derived from TDR measurements. Reducing the number of samples in the measuring grid flattens the SM distribution obtained by both methods and increases the estimation error at the same time. A grid of sensors for permanent measurement points should include points that have similar SM distributions in their vicinity. The results of the analysis, including the number of points, the maximum correlation ranges, and the acceptable estimation error, should be taken into account when choosing the measurement points. The adopted or adjusted distribution of measurement points should be verified by additional measuring campaigns during dry and wet periods. The presented approach seems appropriate for creating regional-scale test (super) sites to validate products of satellites equipped with C-band SAR (Synthetic Aperture Radar) with spatial resolution suited to the single-field scale, for example ERS-1, ERS-2, Radarsat and Sentinel-1, which is to be launched in the next few months. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.

  3. Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis

    NASA Astrophysics Data System (ADS)

    Li, Y.

    2013-05-01

    Airborne Light Detection And Ranging (LIDAR) technology is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made in recent years, LIDAR data filtering is still a challenging task, especially for areas with high relief or mixed geographic features. To address bare-ground extraction from LIDAR point clouds of complex landscapes, this paper proposes a novel morphological filtering algorithm based on multi-gradient analysis of the characteristics of the LIDAR data distribution. First, the point cloud is organized by an index mesh. Then the multi-gradient of each point is calculated using morphological operators, and objects are removed gradually by iteratively applying an improved opening operation, constrained by the multi-gradient, to selected points. Fifteen sample datasets provided by ISPRS Working Group III/3 are used to test the proposed filtering algorithm; these datasets include environments that typically cause filtering difficulty. Experimental results show that the proposed algorithm adapts well to various scenes, including urban and rural areas, and that omission error, commission error and total error can be kept simultaneously within a relatively small interval. The algorithm efficiently removes object points while largely preserving ground points.
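
    The opening-and-threshold idea that underlies morphological ground filtering can be sketched on a gridded minimum-elevation surface as below; this generic sketch does not reproduce the paper's multi-gradient constraint or iterative point selection.

    ```python
    # Generic sketch of morphological ground filtering on a gridded
    # minimum-elevation surface: grey-scale opening followed by a residual
    # threshold. The multi-gradient constraint of the paper is not reproduced.
    import numpy as np
    from scipy.ndimage import grey_opening

    def morphological_ground_mask(z_grid, window=15, elev_threshold=0.5):
        """Return a boolean grid: True where the cell is kept as bare ground.
        The window must exceed the footprint of the largest above-ground object."""
        opened = grey_opening(z_grid, size=(window, window))   # erosion then dilation
        return (z_grid - opened) <= elev_threshold             # small residual = ground

    # Toy surface: gently sloping terrain with a 4 m high building block on top.
    x, y = np.meshgrid(np.arange(50), np.arange(50))
    z = 0.05 * x + 0.02 * y
    z[20:28, 20:30] += 4.0
    mask = morphological_ground_mask(z)
    print(mask[10, 10], mask[24, 25])   # True (ground kept), False (building removed)
    ```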

  4. 3D sensitivity encoded ellipsoidal MR spectroscopic imaging of gliomas at 3T

    PubMed Central

    Ozturk-Isik, Esin; Chen, Albert P.; Crane, Jason C.; Bian, Wei; Xu, Duan; Han, Eric T.; Chang, Susan M.; Vigneron, Daniel B.; Nelson, Sarah J.

    2010-01-01

    Purpose The goal of this study was to implement time efficient data acquisition and reconstruction methods for 3D magnetic resonance spectroscopic imaging (MRSI) of gliomas at a field strength of 3T using parallel imaging techniques. Methods The point spread functions, signal to noise ratio (SNR), spatial resolution, metabolite intensity distributions and Cho:NAA ratio of 3D ellipsoidal, 3D sensitivity encoding (SENSE) and 3D combined ellipsoidal and SENSE (e-SENSE) k-space sampling schemes were compared with conventional k-space data acquisition methods. Results The 3D SENSE and e-SENSE methods resulted in similar spectral patterns as the conventional MRSI methods. The Cho:NAA ratios were highly correlated (P<.05 for SENSE and P<.001 for e-SENSE) with the ellipsoidal method and all methods exhibited significantly different spectral patterns in tumor regions compared to normal appearing white matter. The geometry factors ranged between 1.2 and 1.3 for both the SENSE and e-SENSE spectra. When corrected for these factors and for differences in data acquisition times, the empirical SNRs were similar to values expected based upon theoretical grounds. The effective spatial resolution of the SENSE spectra was estimated to be same as the corresponding fully sampled k-space data, while the spectra acquired with ellipsoidal and e-SENSE k-space samplings were estimated to have a 2.36–2.47-fold loss in spatial resolution due to the differences in their point spread functions. Conclusion The 3D SENSE method retained the same spatial resolution as full k-space sampling but with a 4-fold reduction in scan time and an acquisition time of 9.28 min. The 3D e-SENSE method had a similar spatial resolution as the corresponding ellipsoidal sampling with a scan time of 4:36 min. Both parallel imaging methods provided clinically interpretable spectra with volumetric coverage and adequate SNR for evaluating Cho, Cr and NAA. PMID:19766422

  5. Method and apparatus for determination of material residual stress

    NASA Technical Reports Server (NTRS)

    Chern, Engmin J. (Inventor); Flom, Yury (Inventor)

    1993-01-01

    A device for the determination of residual stress in a material sample is presented. It consists of a sensor coil adjacent to the material sample, whose resistance varies according to the amount of stress within the sample; a mechanical push-pull machine for imparting a gradually increasing compressional and tensional force on the sample; and an impedance gain/phase analyzer and personal computer (PC) for sending an input signal to, and receiving a signal from, the sensor coil. The PC measures and records the change in resistance of the sensor coil and the corresponding strain of the sample. From these measurements of resistance change and corresponding strain, the PC then determines the point at which the resistance of the sensor coil is at a minimum and the corresponding value and type of strain at that minimum-resistance point, thereby enabling a calculation of the residual stress in the sample.

  6. 40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...

  7. 40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...

  8. 40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...

  9. 40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...

  10. 40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...

  11. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification, and water protection functions. In recent years, the factors that threaten forest health, such as global warming due to climate change and environmental pollution, have increased, as has interest in forests, and various countries are making efforts in forest management. However, the existing forest ecosystem survey is a monitoring method based on sampling points, and it is difficult to use it for forest management because Korea surveys only a small part of its forest area, which occupies 63.7 % of the country (Ministry of Land Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, the Kriging method and the IDW (Inverse Distance Weighted) method, were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were also used. As a result, the Kriging method was the most accurate.

  12. Analysis of intraosseous samples using point of care technology--an experimental study in the anaesthetised pig.

    PubMed

    Strandberg, Gunnar; Eriksson, Mats; Gustafsson, Mats G; Lipcsey, Miklós; Larsson, Anders

    2012-11-01

    Intraosseous access is an essential method in emergency medicine when other forms of vascular access are unavailable and there is an urgent need for fluid or drug therapy. A number of publications have discussed the suitability of using intraosseous access for laboratory testing. We aimed to further evaluate this issue and to study the accuracy and precision of intraosseous measurements. Five healthy, anaesthetised pigs were instrumented with bilateral tibial intraosseous cannulae and an arterial catheter. Samples were collected hourly for 6h and analysed for blood gases, acid base status, haemoglobin and electrolytes using an I-Stat point of care analyser. There was no clinically relevant difference between results from left and right intraosseous sites. The variability of the intraosseous sample values, measured as the coefficient of variance (CV), was maximally 11%, and smaller than for the arterial sample values for all variables except SO2. For most variables, there seems to be some degree of systematic difference between intraosseous and arterial results. However, the direction of this difference seems to be predictable. Based on our findings in this animal model, cartridge based point of care instruments appear suitable for the analysis of intraosseous samples. The agreement between intraosseous and arterial analysis seems to be good enough for the method to be clinically useful. The precision, quantified in terms of CV, is at least as good for intraosseous as for arterial analysis. There is no clinically important difference between samples from left and right tibia, indicating a good reproducibility. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Time-integrated passive sampling as a complement to conventional point-in-time sampling for investigating drinking-water quality, McKenzie River Basin, Oregon, 2007 and 2010-11

    USGS Publications Warehouse

    McCarthy, Kathleen A.; Alvarez, David A.

    2014-01-01

    The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that includes a combination of conventional point-in-time discrete water sampling and time‑integrated passive sampling with a combination of chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive‑sampling deployments at six sites in the basin, including the intake and outflow from the EWEB drinking‑water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicate that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.

  14. Mass discharge in a tracer plume: Evaluation of the Theissen Polygon Method

    PubMed Central

    Mackay, Douglas M.; Einarson, Murray D.; Kaiser, Phil M.; Nozawa-Inoue, Mamie; Goyal, Sham; Chakraborty, Irina; Rasa, Ehsan; Scow, Kate M.

    2013-01-01

    A tracer plume was created within a thin aquifer by injection for 299 days of two adjacent “sub-plumes” to represent one type of plume heterogeneity encountered in practice. The plume was monitored by snapshot sampling of transects of fully screened wells. The mass injection rate and total mass injected were known. Using all wells in each transect (0.77 m well spacing, 1.4 points/m2 sampling density), the Theissen Polygon Method (TPM) yielded apparently accurate mass discharge (Md) estimates at 3 transects for 12 snapshots. When applied to hypothetical sparser transects using subsets of the wells with average spacing and sampling density from 1.55 to 5.39 m and 0.70 to 0.20 points/m2, respectively, the TPM accuracy depended on well spacing and location of the wells in the hypothesized transect with respect to the sub-plumes. Potential error was relatively low when the well spacing was less than the widths of the sub-plumes (> 0.35 points/m2). Potential error increased for well spacing similar to or greater than the sub-plume widths, or when less than 1% of the plume area was sampled. For low density sampling of laterally heterogeneous plumes, small changes in groundwater flow direction can lead to wide fluctuations in Md estimates by the TPM. However, sampling conducted when flow is known or likely to be in a preferred direction can potentially allow more useful comparisons of Md over multiyear time frames, such as required for performance evaluation of natural attenuation or engineered remediation systems. PMID:22324777
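
    For fully screened wells in a thin aquifer, the Theissen (Thiessen) polygons reduce to vertical strips along the transect, and the mass discharge is the sum over wells of concentration times Darcy flux times strip area. The sketch below uses made-up concentrations and hydraulic values, not the study's data.

    ```python
    # Hedged sketch of a Thiessen/Theissen Polygon Method estimate of mass
    # discharge (Md) across a transect of fully screened wells. All values are
    # made-up illustrations.
    import numpy as np

    def mass_discharge(x_wells, conc_mg_L, darcy_flux_m_d, thickness_m):
        """Md in grams/day: sum over wells of C_i * q * (strip width * thickness)."""
        x = np.asarray(x_wells, dtype=float)
        # Strip boundaries are midpoints between adjacent wells (1-D Thiessen polygons).
        edges = np.concatenate(([x[0] - (x[1] - x[0]) / 2],
                                 (x[:-1] + x[1:]) / 2,
                                 [x[-1] + (x[-1] - x[-2]) / 2]))
        widths = np.diff(edges)                                   # m
        area_m2 = widths * thickness_m                            # m^2 per well strip
        conc_g_m3 = np.asarray(conc_mg_L, dtype=float)            # 1 mg/L == 1 g/m^3
        return float(np.sum(conc_g_m3 * darcy_flux_m_d * area_m2))

    x = 0.77 * np.arange(7)                     # seven wells, 0.77 m apart
    c = [0.0, 1.2, 8.5, 9.1, 2.3, 0.4, 0.0]     # tracer concentration, mg/L
    print(round(mass_discharge(x, c, darcy_flux_m_d=0.05, thickness_m=1.5), 3))  # g/day
    ```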

  15. Multi-point estimation of total energy expenditure: a comparison between zinc-reduction and platinum-equilibration methodologies.

    PubMed

    Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V

    2003-12-15

    Reducing water to hydrogen gas by zinc or uranium metal for determining D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "Multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained from use of the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to those of similar methods. The data demonstrated that the Zn-reduction method could be replaced by the Pt-equilibration method when TEE was estimated using the "Multi-Point" technique. Furthermore, D equilibration time was significantly reduced.

  16. Air-assisted liquid-liquid microextraction by solidifying the floating organic droplets for the rapid determination of seven fungicide residues in juice samples.

    PubMed

    You, Xiangwei; Xing, Zhuokan; Liu, Fengmao; Zhang, Xu

    2015-05-22

    A novel air assisted liquid-liquid microextraction using the solidification of a floating organic droplet method (AALLME-SFO) was developed for the rapid and simple determination of seven fungicide residues in juice samples, using the gas chromatography with electron capture detector (GC-ECD). This method combines the advantages of AALLME and dispersive liquid-liquid microextraction based on the solidification of floating organic droplets (DLLME-SFO) for the first time. In this method, a low-density solvent with a melting point near room temperature was used as the extraction solvent, and the emulsion was rapidly formed by pulling in and pushing out the mixture of aqueous sample solution and extraction solvent for ten times repeatedly using a 10-mL glass syringe. After centrifugation, the extractant droplet could be easily collected from the top of the aqueous samples by solidifying it at a temperature lower than the melting point. Under the optimized conditions, good linearities with the correlation coefficients (γ) higher than 0.9959 were obtained and the limits of detection (LOD) varied between 0.02 and 0.25 μgL(-1). The proposed method was applied to determine the target fungicides in juice samples and acceptable recoveries ranged from 72.6% to 114.0% with the relative standard deviations (RSDs) of 2.3-13.0% were achieved. Compared with the conventional DLLME method, the newly proposed method will neither require a highly toxic chlorinated solvent for extraction nor an organic dispersive solvent in the application process; hence, it is more environmentally friendly. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Quantification of Estrogen Receptor-Alpha Expression in Human Breast Carcinomas With a Miniaturized, Low-Cost Digital Microscope: A Comparison with a High-End Whole Slide-Scanner

    PubMed Central

    Holmström, Oscar; Linder, Nina; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Turkki, Riku; Joensuu, Heikki; Isola, Jorma; Diwan, Vinod; Lundin, Johan

    2015-01-01

    Introduction A significant barrier to medical diagnostics in low-resource environments is the lack of medical care and equipment. Here we present a low-cost, cloud-connected digital microscope for applications at the point-of-care. We evaluate the performance of the device in the digital assessment of estrogen receptor-alpha (ER) expression in breast cancer samples. Studies suggest computer-assisted analysis of tumor samples digitized with whole slide-scanners may be comparable to manual scoring; here we study whether similar results can be obtained with the device presented. Materials and Methods A total of 170 samples of human breast carcinoma, immunostained for ER expression, were digitized with a high-end slide-scanner and the point-of-care microscope. Corresponding regions from the samples were extracted, and ER status was determined visually and digitally. Samples were classified as ER negative (<1% ER positivity) or positive, and further into weakly (1–10% positivity) and strongly positive. Interobserver agreement (Cohen’s kappa) was measured and correlation coefficients (Pearson’s product-moment) were calculated for comparison of the methods. Results Correlation and interobserver agreement (r = 0.98, p < 0.001, kappa = 0.84, CI95% = 0.75–0.94) were strong in the results from both devices. Concordance of the point-of-care microscope and the manual scoring was good (r = 0.94, p < 0.001, kappa = 0.71, CI95% = 0.61–0.80), and comparable to the concordance between the slide scanner and manual scoring (r = 0.93, p < 0.001, kappa = 0.69, CI95% = 0.60–0.78). Fourteen (8%) discrepant cases between manual and device-based scoring were present with the slide scanner, and 16 (9%) with the point-of-care microscope, all representing samples of low ER expression. Conclusions Tumor ER status can be accurately quantified with a low-cost imaging device and digital image-analysis, with results comparable to conventional computer-assisted or manual scoring. This technology could potentially be expanded for other histopathological applications at the point-of-care. PMID:26659386

  18. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that a smaller number of sampling points resulted in a loss of information regarding the spatial correlation structure of DO. The minimum number of representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
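
    A hedged sketch of the KDE step: the kernel density of the station coordinates is evaluated at each station, and stations sitting in the densest clusters are flagged as the first candidates for removal. The simple "drop the k highest-density points" rule below is a simplification; the study additionally checks variograms, kriging error, and the conserved correlation structure before removing points.

    ```python
    # Hedged sketch of using a kernel density estimate over sampling locations
    # to suggest which stations could be dropped first (those in dense clusters).
    import numpy as np
    from scipy.stats import gaussian_kde

    def suggest_removals(xy, k):
        """Return indices of the k stations sitting in the densest areas."""
        density = gaussian_kde(xy.T)(xy.T)         # KDE evaluated at each station
        return np.argsort(density)[::-1][:k]

    rng = np.random.default_rng(3)
    # 65 hypothetical monitoring stations: a dense cluster plus scattered points.
    cluster = rng.normal(loc=[2.0, 1.0], scale=0.2, size=(30, 2))
    spread = rng.uniform(low=[0, 0], high=[10, 5], size=(35, 2))
    stations = np.vstack([cluster, spread])
    print(suggest_removals(stations, k=5))         # candidates for removal
    ```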

  19. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analyses methods, including analyses not specified at the time point of sampling, represent meaningful approaches to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimen from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.

  20. Lidar Based Emissions Measurement at the Whole Facility Scale: Method and Error Analysis

    USDA-ARS?s Scientific Manuscript database

    Particulate emissions from agricultural sources vary from dust created by operations and animal movement to the fine secondary particulates generated from ammonia and other emitted gases. The development of reliable facility emission data using point sampling methods designed to characterize regiona...

  1. Methods for measuring populations of small, diurnal forest birds.

    Treesearch

    D.A. Manuwal; A.B. Carey

    1991-01-01

    Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...

  2. Sex-Specific Associations between Umbilical Cord Blood Testosterone Levels and Language Delay in Early Childhood

    ERIC Educational Resources Information Center

    Whitehouse, Andrew J. O.; Mattes, Eugen; Maybery, Murray T.; Sawyer, Michael G.; Jacoby, Peter; Keelan, Jeffrey A.; Hickey, Martha

    2012-01-01

    Background: Preliminary evidence suggests that prenatal testosterone exposure may be associated with language delay. However, no study has examined a large sample of children at multiple time-points. Methods: Umbilical cord blood samples were obtained at 861 births and analysed for bioavailable testosterone (BioT) concentrations. When…

  3. 40 CFR 60.54 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sample CO2 concentrations at all traverse points. (ii) If sampling is conducted after a wet scrubber, an... volumetric flow rates at the inlet and outlet of the wet scrubber and the inlet CO2 concentration may be used... concentration measured before the scrubber, percent dry basis. Qdi=volumetric flow rate of effluent gas before...

  4. Further improvement of hydrostatic pressure sample injection for microchip electrophoresis.

    PubMed

    Luo, Yong; Zhang, Qingquan; Qin, Jianhua; Lin, Bingcheng

    2007-12-01

    The hydrostatic pressure sample injection method is able to minimize the number of electrodes needed for a microchip electrophoresis process; however, it can neither be applied to electrophoretic DNA sizing nor be implemented on the widely used single-cross microchip. This paper presents an injector design that makes the hydrostatic pressure sample injection method suitable for DNA sizing. By introducing an assistant channel into the normal double-cross injector, a rugged DNA sample plug suitable for sizing can be formed within the cross area during sample loading. This paper also demonstrates that hydrostatic pressure sample injection can be performed in the single-cross microchip by controlling the radial position of the detection point in the separation channel. Rhodamine 123 and its derivative, used as model samples, were successfully separated.

  5. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
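
    As a rough illustration of the importance-sampling idea (not the adaptive AIS scheme of the paper), the sketch below estimates a small failure probability for a hypothetical two-variable limit state by sampling from a density shifted toward the failure region and reweighting; the limit state g and the shift are assumptions.

```python
# Simple (non-adaptive) importance sampling sketch for a failure probability
# P[g(X) <= 0] with independent standard-normal inputs. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def g(x):                      # hypothetical limit state: failure when g <= 0
    return 5.0 - x.sum(axis=1)

dim, n = 2, 100_000
shift = np.array([2.0, 2.0])   # center the sampling density near the failure region

x = rng.normal(loc=shift, scale=1.0, size=(n, dim))
# importance weight = standard-normal pdf / shifted-normal pdf, per sample
log_w = -0.5 * (x**2).sum(axis=1) + 0.5 * ((x - shift)**2).sum(axis=1)
w = np.exp(log_w)

pf = np.mean((g(x) <= 0) * w)
print(f"Estimated failure probability: {pf:.2e}")
```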

  6. Physical properties of nanoparticles Nd added Bi1.7Pb0.3Sr2Ca2Cu3Oy superconductors

    NASA Astrophysics Data System (ADS)

    Abbas, Muna; Abdulridha, Ali; Jassim, Amal; Hashim, Fouad

    2018-05-01

    Bi1.7Pb0.3Sr2Ca2Cu3Oy bulks were synthesized, with the addition of Nd2O3 nanoparticles, by the solid state reaction method. The concentration of Nd was varied from 0.1 to 0.6. The superconducting properties of the samples were investigated to determine the influence of Nd2O3 addition on superconducting properties and microstructural development. Structural characterization of the synthesized superconductor samples was carried out by X-ray diffraction. The DC four-point probe method was used to study the electrical resistivity behavior and to evaluate the transition temperature (TC) of all samples. It was found that 0.2 weight percent of Nd2O3 yielded the highest TC of 123 K and the highest volume fraction of the 2223 phase, while excessive addition decreased both. The results point to compelling indications of correlations between charge carriers and superconductivity. Energy-dispersive X-ray spectroscopy (EDX) analysis of the Bi1.7Pb0.3Nd0.2Sr2Ca2Cu3Oy superconductor shows that Nd may be substituted at Ca sites, creating point defects that act as flux pinning centers. Scanning electron microscopy (SEM) was employed to examine the microstructure of some samples. The results showed precipitation of Nd nanoparticles on the surface as plate-like grains.

  7. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    PubMed

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
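
    A minimal sketch of the systematic random sampling (SRS) step, assuming the nuclei are already listed in measurement order; the nuclear areas below are simulated, not patient data.

```python
# Systematic random sampling of nuclei: fixed step, random start.
# Contrast with 'at convenience' selection, which tends to skip extreme nuclei.
import random

def systematic_random_sample(items, n):
    """Take n items with a fixed step and a uniformly random start."""
    step = len(items) / n
    start = random.uniform(0, step)
    return [items[int(start + i * step)] for i in range(n)]

nuclear_areas = [random.lognormvariate(4.0, 0.4) for _ in range(2000)]  # simulated, in µm²

srs_50 = systematic_random_sample(nuclear_areas, 50)
mna = sum(srs_50) / len(srs_50)
sdna = (sum((a - mna) ** 2 for a in srs_50) / (len(srs_50) - 1)) ** 0.5
print(f"MNA = {mna:.1f} µm², SDNA = {sdna:.1f} µm²")
```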

  8. Elongation measurement using 1-dimensional image correlation method

    NASA Astrophysics Data System (ADS)

    Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan

    2016-11-01

    The aim of this paper was to study, set up, and calibrate an elongation measurement using the 1-Dimensional Image Correlation method (1-DIC). To confirm the correctness of our method and setup, we calibrated it against another method. In this paper, we used a small spring as a sample and expressed the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the undeformed and deformed sample were compared to characterize the deformation. By comparing the pixel location of a reference point in both images, the spring's elongation was calculated. The results were then compared with the spring constants obtained from Hooke's law, and an error of about 5 percent was found. This DIC method would then be applied to measure the elongation of different kinds of small fiber samples.
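
    A hedged sketch of the underlying idea: locate the pixel shift of a reference feature between the undeformed and deformed 1-D profiles by cross-correlation, convert it to elongation, and derive a spring constant from an assumed load and pixel scale (all values are illustrative, not from the paper).

```python
# 1-D image correlation sketch: find the displacement of a marker between two
# line profiles via cross-correlation, then apply Hooke's law (k = F / x).
import numpy as np

def pixel_shift(profile_ref, profile_def):
    corr = np.correlate(profile_def - profile_def.mean(),
                        profile_ref - profile_ref.mean(), mode="full")
    return corr.argmax() - (len(profile_ref) - 1)   # positive = shift to the right

ref = np.zeros(200); ref[80:85] = 1.0               # synthetic reference marker
deformed = np.roll(ref, 12)                          # 12-pixel displacement
shift_px = pixel_shift(ref, deformed)

elongation_mm = shift_px * 0.05                      # assumed scale: 0.05 mm per pixel
force_n = 0.6                                        # assumed applied load in newtons
print(f"k = {force_n / (elongation_mm / 1000):.1f} N/m")
```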

  9. Spectrophotometric determination of low levels arsenic species in beverages after ion-pairing vortex-assisted cloud-point extraction with acridine red.

    PubMed

    Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk

    2016-01-01

    A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for the efficient pre-concentration of As(V) in the selected samples. The method is based on selective and sensitive ion-pairing of As(V) with acridine red (ARH(+)) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range of 0.8-280 µg l(-1) for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l(-1), respectively. The method was successfully applied to the determination of trace As in the pre-treated and digested samples under microwave and ultrasonic power. As(V) and total As levels in the samples were spectrophotometrically determined after pre-concentration with VA-CPE at 494 nm before and after oxidation with acidic KMnO4. The As(III) levels were calculated from the difference between As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs) where the measured values for As were statistically within the 95% confidence limit for the certified values.

  10. Liquid paraffin as new dilution medium for the analysis of high boiling point residual solvents with static headspace-gas chromatography.

    PubMed

    D'Autry, Ward; Zheng, Chao; Bugalama, John; Wolfs, Kris; Hoogmartens, Jos; Adams, Erwin; Wang, Bochu; Van Schepdael, Ann

    2011-07-15

    Residual solvents are volatile organic compounds which can be present in pharmaceutical substances. A generic static headspace-gas chromatography analysis method for the identification and control of residual solvents is described in the European Pharmacopoeia. Although this method has proved suitable for the majority of samples and residual solvents, it may lack sensitivity for high boiling point residual solvents such as N,N-dimethylformamide, N,N-dimethylacetamide, dimethyl sulfoxide and benzyl alcohol. In this study, liquid paraffin was investigated as a new dilution medium for the analysis of these residual solvents. The headspace-gas chromatography method was developed and optimized taking the official Pharmacopoeia method as a starting point. The optimized method was validated according to ICH criteria. It was found that the detection limits were below 1 μg/vial for each compound, indicating a drastically increased sensitivity compared to the Pharmacopoeia method, which failed to detect the compounds at their respective limit concentrations. Linearity was evaluated based on the R(2) values, which were above 0.997 for all compounds, and inspection of residual plots. Instrument and method precision were examined by calculating the relative standard deviations (RSD) of repeated analyses within the linearity and accuracy experiments, respectively. It was found that all RSD values were below 10%. Accuracy was checked by a recovery experiment at three different levels. Mean recovery values were all in the range 95-105%. Finally, the optimized method was applied to residual DMSO analysis in four different Kollicoat® sample batches. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. LiDAR point classification based on sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Nan; Pfeifer, Norbert; Liu, Chun

    2017-04-01

    In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4th-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. SRC needs only a few training samples from each class and can still achieve good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes; each index is called a mode: three spatial modes in the directions X, Y, Z and one feature mode. The sparsity algorithm seeks the best representation of the test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode; those matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied, along each mode, by a matrix selected from a dictionary. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary is built along each mode; the overall structured dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by the tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. It is expected that the original tensor should be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes should be nearly zero. Therefore, SRC uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is used and classified into six classes: ground, roofs, vegetation, covered ground, walls and other points. Only six training samples from each class are taken. For the final classification result, ground and covered ground are merged into a single class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls, and 20.39% for other objects.
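
    The tensor/Tucker machinery is not reproduced here, but the core SRC idea can be sketched in its simpler vector form, assuming scikit-learn is available: code the test vector over a dictionary of training samples with OMP and assign the class with the lowest class-wise reconstruction error.

```python
# Vector-form sparse representation classification (SRC) sketch; the tensor
# version described above would replace the dictionary with per-mode matrices.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(X_train, y_train, x_test, n_nonzero=5):
    """Classify x_test by minimal class-wise reconstruction error under a sparse code."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(X_train.T, x_test)           # dictionary columns = training samples
    coef = omp.coef_
    errors = {}
    for cls in np.unique(y_train):
        coef_cls = np.where(y_train == cls, coef, 0.0)   # keep only this class's atoms
        errors[cls] = np.linalg.norm(x_test - X_train.T @ coef_cls)
    return min(errors, key=errors.get)

# toy demo with two well-separated classes in a 10-D feature space
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (6, 10)); X1 = rng.normal(3.0, 1.0, (6, 10))
X_train = np.vstack([X0, X1]); y_train = np.array([0] * 6 + [1] * 6)
print(src_predict(X_train, y_train, rng.normal(3.0, 1.0, 10)))  # expect class 1
```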

  12. Determination of ultra-trace aluminum in human albumin by cloud point extraction and graphite furnace atomic absorption spectrometry.

    PubMed

    Sun, Mei; Wu, Qianghua

    2010-04-15

    A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed. The CPE method was based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), and Triton X-114 was used as the non-ionic surfactant. The main factors affecting cloud point extraction efficiency, such as pH of the solution, concentration and kind of complexing agent, concentration of non-ionic surfactant, and equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit of Al(III) was 0.06 ng mL(-1). The relative standard deviation (n=7) of the sample was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. This method is simple, accurate, and sensitive, and can be applied to the determination of ultra-trace aluminum in human albumin. 2009 Elsevier B.V. All rights reserved.

  13. Cloud point extraction and diffuse reflectance-Fourier transform infrared spectroscopic determination of chromium(VI): A probe to adulteration in food stuffs.

    PubMed

    Tiwari, Swapnil; Deb, Manas Kanti; Sen, Bhupendra K

    2017-04-15

    A new cloud point extraction (CPE) method for the determination of hexavalent chromium, i.e., Cr(VI), in food samples is established with subsequent diffuse reflectance-Fourier transform infrared (DRS-FTIR) analysis. The method demonstrates enrichment of Cr(VI) after its complexation with 1,5-diphenylcarbazide. The reddish-violet complex formed showed λmax at 540 nm. Micellar phase separation occurred at the cloud point temperature of the non-ionic surfactant Triton X-100, and the complex was entrapped in the surfactant and analyzed using DRS-FTIR. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 1.22 and 4.02 μg mL(-1), respectively. Excellent linearity with a correlation coefficient value of 0.94 was found for the concentration range of 1-100 μg mL(-1). At 10 μg mL(-1), the standard deviation for 7 replicate measurements was found to be 0.11 μg mL(-1). The method was successfully applied to commercially marketed food stuffs, and good recoveries (81-112%) were obtained by spiking the real samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
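
    For reference, the full-memory Grünwald-Letnikov approximation (the baseline that the adaptive memory method improves on) can be sketched as follows; the adaptive thinning of the history described above is not implemented here, and the test function is an assumption.

```python
# Full-memory Grünwald-Letnikov approximation of the fractional derivative of
# order alpha on a uniform grid with spacing h. All previous samples contribute.
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """D^alpha f at every grid point, using the complete history (full memory)."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                       # recursion for the GL weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h**alpha
    return out

t = np.linspace(0, 2, 401)
d_half = gl_fractional_derivative(t**2, alpha=0.5, h=t[1] - t[0])
# analytic half-derivative of t^2 is 2 t^1.5 / Gamma(2.5) ≈ 1.505 t^1.5
```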

  15. Using expired air carbon monoxide to determine smoking status during pregnancy: preliminary identification of an appropriately sensitive and specific cut-point.

    PubMed

    Bailey, Beth A

    2013-10-01

    Measurement of carbon monoxide in expired air samples (ECO) is a non-invasive, cost-effective biochemical marker for smoking. Cut-points of 6-10 ppm have been established, though appropriate cut-points for pregnant women have been debated due to metabolic changes. This study assessed whether an ECO cut-point identifying at least 90% of pregnant smokers, and misidentifying fewer than 10% of non-smokers, could be established. Pregnant women (N=167) completed a validated self-report smoking assessment, a urine drug screen for cotinine (UDS), and provided an expired air sample twice during pregnancy. Half of the women reported non-smoking status early (51%) and late (53%) in pregnancy, confirmed by UDS. Using a traditional cut-point of 8 ppm or higher for the early pregnancy reading, only 1% of non-smokers were incorrectly identified as smokers, but only 56% of all smokers, and 67% of those who smoked 5+ cigarettes in the previous 24 h, were identified. However, at a cut-point of 4 ppm or higher, only 8% of non-smokers were misclassified as smokers, and 90% of all smokers and 96% of those who smoked 5+ cigarettes in the previous 24 h were identified. False positives were explained by heavy second-hand smoke exposure and marijuana use. Results were similar for late pregnancy ECO, with ROC analysis revealing an area under the curve of .95 for early pregnancy and .94 for late pregnancy readings. A lower 4 ppm ECO cut-point may be necessary to identify pregnant smokers using expired air samples, and this cut-point appears valid throughout pregnancy. Work is ongoing to validate these findings in larger samples, but it appears that, if an appropriate cut-point is used, ECO is a valid method for determining smoking status in pregnancy. Copyright © 2013 Elsevier Ltd. All rights reserved.
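
    A small sketch of how a cut-point trades sensitivity against specificity; the ECO readings and smoking labels below are invented for illustration, and only the 4 ppm and 8 ppm cut-points echo the abstract.

```python
# Sensitivity/specificity of a binary classification at a given ECO cut-point.
def cutpoint_performance(eco_ppm, is_smoker, cut):
    tp = sum(1 for x, s in zip(eco_ppm, is_smoker) if s and x >= cut)
    fn = sum(1 for x, s in zip(eco_ppm, is_smoker) if s and x < cut)
    fp = sum(1 for x, s in zip(eco_ppm, is_smoker) if not s and x >= cut)
    tn = sum(1 for x, s in zip(eco_ppm, is_smoker) if not s and x < cut)
    return tp / (tp + fn), tn / (tn + fp)        # (sensitivity, specificity)

eco = [2, 3, 5, 6, 9, 12, 3, 2, 1, 4, 7, 15]     # hypothetical readings, ppm
smoker = [False, False, True, True, True, True,
          False, False, False, False, True, True]
for cut in (4, 8):
    sens, spec = cutpoint_performance(eco, smoker, cut)
    print(f"cut-point {cut} ppm: sensitivity {sens:.2f}, specificity {spec:.2f}")
```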

  16. Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.

    PubMed

    Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T

    2008-09-15

    Cadmium concentrations in human urine are typically at or below the 1 microgL(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed leaving the metal-containing surfactant layer intact. A 25 microL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by the inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ngL(-1) for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.

  17. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak is deteriorated by fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using adjacent-point replacement and a spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute-distance precision better than 45 μm over a range of 8 m.

  18. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
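
    A heavily simplified, hedged sketch of the general idea (not the paper's algorithm, which also handles strata and clusters): expand a weighted sample into synthetic populations with Bayesian-bootstrap-style draws so that each synthetic population can then be analyzed as if it were a simple random sample. All values and weights are invented.

```python
# Bayesian-bootstrap-style generation of synthetic populations from a sample
# with unequal selection weights. Strata/cluster handling is omitted.
import numpy as np

rng = np.random.default_rng(42)

def synthetic_population(values, survey_weights, n_pop):
    """Draw one synthetic population of size n_pop from a weighted sample."""
    k = len(values)
    dirichlet = rng.dirichlet(np.ones(k))        # Bayesian bootstrap weights
    p = dirichlet * survey_weights
    p /= p.sum()
    idx = rng.choice(k, size=n_pop, replace=True, p=p)
    return values[idx]

y = np.array([3.1, 2.4, 5.0, 4.2, 3.8, 2.9])
w = np.array([100, 250, 80, 120, 300, 150])      # unequal selection weights
pops = [synthetic_population(y, w, n_pop=1000) for _ in range(200)]
print("mean over synthetic populations:", np.mean([p.mean() for p in pops]))
```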

  19. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  20. Simultaneous determination of nickel and copper by H-point standard addition method-first-order derivative spectrophotometry in plant samples after separation and preconcentration on modified natural clinoptilolite as a new sorbent.

    PubMed

    Roohparvar, Rasool; Taher, Mohammad Ali; Mohadesi, Alireza

    2008-01-01

    For the simultaneous determination of nickel(II) and copper(II) in plant samples, a rapid and accurate method was developed. In this method, solid-phase extraction (SPE) and first-order derivative spectrophotometry (FDS) are combined, and the result is coupled with the H-point standard addition method (HPSAM). Compared with normal spectrophotometry, derivative spectrophotometry offers the advantages of increased selectivity and sensitivity. Because there is no need to carry out any pretreatment of the sample, the spectrophotometric method is easy, but because of its high detection limit it is not very practical. In order to decrease the detection limit, it is suggested to combine spectrophotometry with a preconcentration method such as SPE. In the present work, after separation and preconcentration of Ni(II) and Cu(II) on modified clinoptilolite zeolite loaded with 2-[1-(2-hydroxy-5-sulfophenyl)-3-phenyl-5-formazano]-benzoic acid monosodium salt (zincon) as a selective chromogenic reagent, FDS-HPSAM, which is a simple and selective spectrophotometric method, has been applied for the simultaneous determination of these ions. Under optimum conditions, the detection limits in the original solutions are 0.7 and 0.5 ng/mL for nickel and copper, respectively. The linear concentration ranges of the proposed method for nickel and copper ions in the original solutions are 1.1 to 3.0 x 10(3) and 0.9 to 2.0 x 10(3) ng/mL, respectively. The recommended procedure was applied to the successful determination of Cu(II) and Ni(II) in standard and real samples.

  1. Topochemical Analysis of Cell Wall Components by TOF-SIMS.

    PubMed

    Aoki, Dan; Fukushima, Kazuhiko

    2017-01-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a rapidly developing analytical tool and a type of imaging mass spectrometry. TOF-SIMS provides mass spectral information with a lateral resolution on the order of submicrons and has widespread applicability. It is sometimes described as a surface analysis method that requires no sample pretreatment; however, several points need to be taken into account to fully exploit the capabilities of TOF-SIMS. In this chapter, we introduce methods for TOF-SIMS sample treatment, as well as basic knowledge of TOF-SIMS spectral and image data analysis of wood samples.

  2. Cloud point extraction of vanadium in pharmaceutical formulations, dialysate and parenteral solutions using 8-hydroxyquinoline and nonionic surfactant.

    PubMed

    Khan, Sumaira; Kazi, Tasneem G; Baig, Jameel A; Kolachi, Nida F; Afridi, Hassan I; Wadhwa, Sham Kumar; Shah, Abdul Q; Kandhro, Ghulam A; Shah, Faheem

    2010-10-15

    A cloud point extraction (CPE) method has been developed for the determination of trace quantities of vanadium ions in pharmaceutical formulations (PF), dialysate (DS) and parenteral solutions (PS). The CPE of vanadium (V) using 8-hydroxyquinoline (oxine) as complexing reagent, mediated by the nonionic surfactant Triton X-114, was investigated. The parameters that affect the extraction efficiency of CPE, such as pH of the sample solution, concentrations of oxine and Triton X-114, equilibration temperature and shaking time, were investigated in detail. The validity of the CPE of V was checked by the standard addition method in real samples. The extracted surfactant-rich phase was diluted with nitric acid in ethanol prior to electrothermal atomic absorption spectrometry. Under these conditions, preconcentration of 50 mL sample solutions allowed an enrichment factor of 125-fold. The lower limit of detection obtained under the optimal conditions was 42 ng/L. The proposed method has been successfully applied to the determination of trace quantities of V in various pharmaceutical preparations with satisfactory results. The concentrations of V in PF, DS and PS samples were found in the ranges of 10.5-15.2, 0.65-1.32 and 1.76-6.93 microg/L, respectively. 2010 Elsevier B.V. All rights reserved.

  3. Impulse excitation scanning acoustic microscopy for local quantification of Rayleigh surface wave velocity using B-scan analysis

    NASA Astrophysics Data System (ADS)

    Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.

    2018-01-01

    A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on B-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the B-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the B-scan analysis technique, in which the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and was found to be much more reliable and to have higher contrast than previously possible with impulse excitation.

  4. Evaluation of point mutations in dystrophin gene in Iranian Duchenne and Becker muscular dystrophy patients: introducing three novel variants.

    PubMed

    Haghshenas, Maryam; Akbari, Mohammad Taghi; Karizi, Shohreh Zare; Deilamani, Faravareh Khordadpoor; Nafissi, Shahriar; Salehi, Zivar

    2016-06-01

    Duchenne and Becker muscular dystrophies (DMD and BMD) are X-linked neuromuscular diseases characterized by progressive muscular weakness and degeneration of skeletal muscles. Approximately two-thirds of the patients have large deletions or duplications in the dystrophin gene and the remaining one-third have point mutations. This study was performed to evaluate point mutations in Iranian DMD/BMD male patients. A total of 29 DNA samples from patients who did not show any large deletion/duplication mutations following multiplex polymerase chain reaction (PCR) and multiplex ligation-dependent probe amplification (MLPA) screening were sequenced for detection of point mutations in exons 50-79. Also exon 44 was sequenced in one sample in which a false positive deletion was detected by MLPA method. Cycle sequencing revealed four nonsense, one frameshift and two splice site mutations as well as two missense variants.

  5. Study on the stability of adrenaline and on the determination of its acidity constants

    NASA Astrophysics Data System (ADS)

    Corona-Avendaño, S.; Alarcón-Angeles, G.; Rojas-Hernández, A.; Romero-Romo, M. A.; Ramírez-Silva, M. T.

    2005-01-01

    In this work, results are presented concerning the influence of time on the spectral behaviour of adrenaline (C9H13NO3) (AD) and the determination of its acidity constants by means of spectrophotometric titrations and point-by-point analysis, using for the latter freshly prepared samples for each analysis at every single pH. As catecholamines are sensitive to light, all samples were protected from it during the course of the experiments. Each method yielded four acidity constants, corresponding to the four acidic protons of the functional groups present in the molecule; for the point-by-point analysis the values found were: log β1 = 38.25 ± 0.21, log β2 = 29.65 ± 0.17, log β3 = 21.01 ± 0.14, log β4 = 11.34 ± 0.071.

  6. Integration of Stable Droplet Formation on a CD Microfluidic Device for Extreme Point of Care Applications

    NASA Astrophysics Data System (ADS)

    Ganesh, Shruthi Vatsyayani

    With the advent of microfluidic technologies for molecular diagnostics, much emphasis has been placed on developing diagnostic tools for resource-poor regions in the form of extreme point-of-care devices. To ensure the commercial viability of such a device, there is a need to develop an accurate sample-to-answer system that is robust, portable, isolated, highly sensitive, and cost effective. This need has been a driving force for research involving the integration of different microsystems, such as droplet microfluidics and compact-disc (CD) microfluidics, along with sample preparation and detection modules on a single platform. This work attempts to develop a proof-of-concept prototype of one such device, using existing CD microfluidics tools to generate the stable droplets used in point-of-care (POC) diagnostics. Apart from using a fairly new technique for droplet generation and stabilization, the work aims to develop this method with a focus on diagnostics for rural healthcare. The motivation for this work is first described, with an emphasis on the current need for diagnostic testing in rural healthcare and the general guidelines prescribed by WHO for such a sample-to-answer system. Furthermore, a background on CD and droplet microfluidics is presented to understand the merits and demerits of each system and the need for integrating the two. This phase of the thesis also includes different methods employed or demonstrated to generate droplets on a spinning platform. An overview of detection platforms is also presented to understand the challenges involved in building an extreme point-of-care device. In the third phase of the thesis, the general manufacturing techniques and materials used to accomplish this work are presented. Lastly, design trials for droplet generation are presented. The shortcomings of these trials are addressed by investigating design modifications and the use of agarose-based droplet generation to ensure a more robust sample processing method. This method is further characterized and compared with a non-agarose-based system, and the results are analyzed. In conclusion, future prospects of this work are discussed in relation to extreme POC applications.

  7. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  8. Evaluation of the Effectiveness of Chemical Dependency Counseling Course Based on Patrick and Partners

    PubMed Central

    Keshavarz, Yousef; Ghaedi, Sina; Rahimi-Kashani, Mansure

    2012-01-01

    Background The twelve-step program is one of the programs administered to help people overcome drug abuse. In this study, the effectiveness of the chemical dependency counseling course was investigated using a hybrid model. Methods In a survey with a sample size of 243, participants were selected using a stratified random sampling method. A questionnaire was used for collecting data, and a one-sample t-test was employed for data analysis. Findings The chemical dependency counseling courses were effective from the point of view of graduates, chiefs of rehabilitation centers, rescuers and their families, and ultimately managers of the rebirth society, but not from the point of view of professors and lecturers. The last group judged the chemical dependency counseling courses effective only at the performance level. Conclusion It seems that the chemical dependency counseling courses had appropriate effectiveness and led to changes in attitudes, increased awareness, a combination of knowledge and experience, and ultimately increased the efficiency of counseling. PMID:24494132

  9. Updating a preoperative surface model with information from real-time tracked 2D ultrasound using a Poisson surface reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.

    2014-03-01

    In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered and merged with a preoperative, high resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vector of the point cloud from the preoperative left atrial model, which is required for the Poisson Equation Reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
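
    One plausible reading of the normal-estimation step, sketched under the assumption that a nearest-neighbor lookup against the preoperative mesh vertices is acceptable (the paper's exact procedure may differ); all arrays below are placeholders, not patient data.

```python
# Assign each intraoperative ultrasound point the surface normal of its nearest
# vertex on the preoperative model, producing the oriented point cloud that
# Poisson surface reconstruction requires.
import numpy as np
from scipy.spatial import cKDTree

def assign_normals(us_points, preop_vertices, preop_normals):
    tree = cKDTree(preop_vertices)
    _, nearest = tree.query(us_points, k=1)
    normals = preop_normals[nearest]
    # re-normalize in case the stored normals are not unit length
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

# tiny illustrative call with a three-vertex "mesh"
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
norms = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]], dtype=float)
print(assign_normals(np.array([[0.1, 0.1, 0.2]]), verts, norms))
```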

  10. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.

  11. Environmental monitoring of phenolic pollutants in water by cloud point extraction prior to micellar electrokinetic chromatography.

    PubMed

    Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F

    2009-05-01

    Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of 50-ml sample volume were 0.10 microg L(-1) for PNP, 0.20 microg L(-1) for PAP, and 0.16 microg L(-1) for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.

  12. Acid Rain Analysis by Standard Addition Titration.

    ERIC Educational Resources Information Center

    Ophardt, Charles E.

    1985-01-01

    The standard addition titration is a precise and rapid method for the determination of the acidity in rain or snow samples. The method requires use of a standard buret, a pH meter, and Gran's plot to determine the equivalence point. Experimental procedures used and typical results obtained are presented. (JN)

  13. Study on Raman spectral imaging method for simultaneous estimation of ingredients concentration in food powder

    USDA-ARS's Scientific Manuscript database

    This study investigated the potential of point scan Raman spectral imaging method for estimation of different ingredients and chemical contaminant concentration in food powder. Food powder sample was prepared by mixing sugar, vanillin, melamine and non-dairy cream at 5 different concentrations in a ...

  14. Validation of a quantitative Eimeria spp. PCR for fresh droppings of broiler chickens.

    PubMed

    Peek, H W; Ter Veen, C; Dijkman, R; Landman, W J M

    2017-12-01

    A quantitative Polymerase Chain Reaction (qPCR) for the seven chicken Eimeria spp. was modified and validated for direct use on fresh droppings. The analytical specificity of the qPCR on droppings was 100%. Its analytical sensitivity (non-sporulated oocysts/g droppings) was 41 for E. acervulina, ≤2900 for E. brunetti, 710 for E. praecox, 1500 for E. necatrix, 190 for E. tenella, 640 for E. maxima, and 1100 for E. mitis. Field validation of the qPCR was done using droppings with non-sporulated oocysts from 19 broiler flocks. To reduce the number of qPCR tests five grams of each pooled sample (consisting of ten fresh droppings) per time point were blended into one mixed sample. Comparison of the oocysts per gram (OPG)-counting method with the qPCR using pooled samples (n = 1180) yielded a Pearson's correlation coefficient of 0.78 (95% CI: 0.76-0.80) and a Pearson's correlation coefficient of 0.76 (95% CI: 0.70-0.81) using mixed samples (n = 236). Comparison of the average of the OPG-counts of the five pooled samples with the mixed sample per time point (n = 236) showed a Pearson's correlation coefficient (R) of 0.94 (95% CI: 0.92-0.95) for the OPG-counting method and 0.87 (95% CI: 0.84-0.90) for the qPCR. This indicates that mixed samples are practically equivalent to the mean of five pooled samples. The good correlation between the OPG-counting method and the qPCR was further confirmed by the visual agreement between the total oocyst/g shedding patterns measured with both techniques in the 19 broiler flocks using the mixed samples.

  15. Artificial testing targets with controllable blur for adaptive optics microscopes

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Tamada, Yosuke; Murata, Takashi; Oya, Shin; Hasebe, Mitsuyasu; Hayano, Yutaka; Kamei, Yasuhiro

    2017-08-01

    This letter proposes a method of configuring a testing target to evaluate the performance of adaptive optics microscopes. In this method, a testing slide with fluorescent beads is used to simultaneously determine the point spread function and the field of view. The point spread function is reproduced to simulate actual biological samples by etching a microstructure on the cover glass. The fabrication process is simplified to facilitate an onsite preparation. The artificial tissue consists of solid materials and silicone oil and is stable for use in repetitive experiments.

  16. The Development of an Officer Training School Board Score Prediction Method Using a Multi-Board Approach

    DTIC Science & Technology

    1991-03-01

    forms: "...application blanks, biographical inventories, interviews, work sample tests, and intelligence, aptitude, and personality tests" (1:11) ... the grouping method, 3) the task method, and 4) the knowledge, skills, abilities (KSA) method. The point method of measuring training/experience assigns ... knowledge, skills, abilities, and other characteristics which relate specifically to each job element (3:131). Interview. According to N. Schmitt

  17. Incorporating availability for detection in estimates of bird abundance

    USGS Publications Warehouse

    Diefenbach, D.R.; Marshall, M.R.; Mattice, J.A.; Brauning, D.W.

    2007-01-01

    Several bird-survey methods have been proposed that provide an estimated detection probability so that bird-count statistics can be used to estimate bird abundance. However, some of these estimators adjust counts of birds observed by the probability that a bird is detected and assume that all birds are available to be detected at the time of the survey. We marked male Henslow's Sparrows (Ammodramus henslowii) and Grasshopper Sparrows (A. savannarum) and monitored their behavior during May-July 2002 and 2003 to estimate the proportion of time they were available for detection. We found that the availability of Henslow's Sparrows declined in late June to <10% for 5- or 10-min point counts when a male had to sing and be visible to the observer; but during 20 May-19 June, males were available for detection 39.1% (SD = 27.3) of the time for 5-min point counts and 43.9% (SD = 28.9) of the time for 10-min point counts (n = 54). We detected no temporal changes in availability for Grasshopper Sparrows, but estimated availability to be much lower for 5-min point counts (10.3%, SD = 12.2) than for 10-min point counts (19.2%, SD = 22.3) when males had to be visible and sing during the sampling period (n = 80). For distance sampling, we estimated the availability of Henslow's Sparrows to be 44.2% (SD = 29.0) and the availability of Grasshopper Sparrows to be 20.6% (SD = 23.5). We show how our estimates of availability can be incorporated in the abundance and variance estimators for distance sampling and modify the abundance and variance estimators for the double-observer method. Methods that directly estimate availability from bird counts but also incorporate detection probabilities need further development and will be important for obtaining unbiased estimates of abundance for these species.
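
    A minimal sketch of how an availability estimate enters an abundance estimator: the raw count is inflated by both the detection probability (given availability) and the availability probability. The detection probability and areas below are made up; only the 0.206 availability value echoes the abstract.

```python
# Abundance adjusted for imperfect detection and imperfect availability.
def adjusted_abundance(n_counted, p_detect, p_available, area_surveyed, total_area):
    density = n_counted / (p_detect * p_available * area_surveyed)
    return density * total_area

# e.g. 12 Grasshopper Sparrows counted, assumed detection 0.7, availability 0.206
print(adjusted_abundance(12, p_detect=0.7, p_available=0.206,
                         area_surveyed=50.0, total_area=1000.0))
```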

  18. Performance of quantitative vegetation sampling methods across gradients of cover in Great Basin plant communities

    USGS Publications Warehouse

    Pilliod, David S.; Arkle, Robert S.

    2013-01-01

    Resource managers and scientists need efficient, reliable methods for quantifying vegetation to conduct basic research, evaluate land management actions, and monitor trends in habitat conditions. We examined three methods for quantifying vegetation in 1-ha plots among different plant communities in the northern Great Basin: photography-based grid-point intercept (GPI), line-point intercept (LPI), and point-quarter (PQ). We also evaluated each method for within-plot subsampling adequacy and effort requirements relative to information gain. We found that, for most functional groups, percent cover measurements collected with the use of LPI, GPI, and PQ methods were strongly correlated. These correlations were even stronger when we used data from the upper canopy only (i.e., top “hit” of pin flags) in LPI to estimate cover. PQ was best at quantifying cover of sparse plants such as shrubs in early successional habitats. As cover of a given functional group decreased within plots, the variance of the cover estimate increased substantially, which required more subsamples per plot (i.e., transect lines, quadrats) to achieve reliable precision. For GPI, we found that six to nine quadrats per hectare were sufficient to characterize the vegetation in most of the plant communities sampled. All three methods reasonably characterized the vegetation in our plots, and each has advantages depending on characteristics of the vegetation, such as cover or heterogeneity, study goals, precision of measurements required, and efficiency needed.

  19. Tests of a High Temperature Sample Conditioner for the Waste Treatment Plant LV-S2, LV-S3, HV-S3A and HV-S3B Exhaust Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaherty, Julia E.; Glissmeyer, John A.

    2015-03-18

    Tests were performed to evaluate a sample conditioning unit for stack monitoring at Hanford Tank Waste Treatment and Immobilization Plant (WTP) exhaust stacks with elevated air temperatures. The LV-S2, LV-S3, HV-S3A and HV-S3B exhaust stacks are expected to have elevated air temperature and dew point. At these emission points, exhaust temperatures are too high to deliver the air sample directly to the required stack monitoring equipment. As a result, a sample conditioning system is considered to cool and dry the air prior to its delivery to the stack monitoring system. The method proposed for the sample conditioning is a dilution system that will introduce cooler, dry air to the air sample stream. This method of sample conditioning is meant to reduce the sample temperature while avoiding condensation of moisture in the sample stream. An additional constraint is that the ANSI/HPS N13.1-1999 standard states that at least 50% of the 10 μm aerodynamic diameter (AD) particles present in the stack free stream must be delivered to the sample collector. In other words, depositional loss of particles should be limited to 50% in the sampling, transport, and conditioning systems. Based on estimates of particle penetration through the LV-S3 sampling system, the diluter should perform with about 80% penetration or better to ensure that the total sampling system passes the 50% or greater penetration criterion.
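
    A back-of-the-envelope sketch of the dilution principle (not the WTP design calculation): the blended temperature of sample and dilution air, assuming equal specific heats and simple flow weighting. All flow rates and temperatures are illustrative.

```python
# Flow-weighted temperature of a diluted sample stream (equal specific heats
# assumed; a rigorous treatment would use mass flows and humidity ratios).
def mixed_temperature(t_sample_c, flow_sample, t_dilution_c, flow_dilution):
    return (t_sample_c * flow_sample + t_dilution_c * flow_dilution) / (
        flow_sample + flow_dilution)

# 2 L/min of 90 °C sample blended with 6 L/min of 20 °C dry dilution air
print(f"{mixed_temperature(90, 2.0, 20, 6.0):.1f} °C")   # -> 37.5 °C
```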

  20. Preconcentration and Determination of Trace Vanadium(V) in Beverages by Combination of Ultrasound Assisted-cloud Point Extraction with Spectrophotometry.

    PubMed

    Kartal Temel, Nuket; Gürkan, Ramazan

    2018-03-01

    A novel ultrasound-assisted cloud point extraction method was developed for the preconcentration and determination of V(V) in beverage samples. After complexation by pyrogallol in the presence of safranin T at pH 6.0, V(V) ions as a ternary complex are extracted into the micellar phase of Triton X-114. The complex was monitored at 533 nm by spectrophotometry. The matrix effect on the recovery of V(V) from samples spiked at 50 μg L-1 was evaluated. Under optimized conditions, the limits of detection and quantification of the method were 0.58 and 1.93 μg L-1, respectively, in a linear range of 2-500 μg L-1, with sensitivity enhancement and preconcentration factors of 47.7 and 40 for preconcentration from 15 mL of sample solution. The recoveries from spiked samples were in the range of 93.8-103.2%, with a relative standard deviation ranging from 2.6% to 4.1% (25, 100 and 250 μg L-1, n: 5). The accuracy was verified by analysis of two certified samples, and the results were in good agreement with the certified values. The intra-day and inter-day precision were tested by reproducibility (3.3-3.4%) and repeatability (3.4-4.1%) analyses for five replicate measurements of V(V) in quality control samples spiked with 5, 10 and 15 μg L-1. Trace V(V) contents of the selected beverage samples were successfully determined by the developed method.

  1. Evaluation of Two Surface Sampling Methods for Microbiological and Chemical Analyses To Assess the Presence of Biofilms in Food Companies.

    PubMed

    Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De

    2017-12-01

    Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10² CFU/100 cm². Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.

  2. Selecting the most appropriate time points to profile in high-throughput studies

    PubMed Central

    Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv

    2017-01-01

    Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non-selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
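
    A hedged sketch of the underlying idea, not the TPS algorithm itself: greedy backward elimination of time points, keeping the subset whose interpolation best reconstructs a densely sampled pilot profile. The pilot profiles below are synthetic.

```python
# Greedy backward elimination of time points based on interpolation error.
import numpy as np
from scipy.interpolate import interp1d

def reconstruction_error(times, values, kept_idx):
    f = interp1d(times[kept_idx], values[:, kept_idx], kind="linear", axis=1)
    return np.mean((f(times) - values) ** 2)

def greedy_select(times, values, k):
    kept = list(range(len(times)))              # start with all sampled points ...
    while len(kept) > k:                        # ... and drop the least useful one
        errs = []
        for i in range(1, len(kept) - 1):       # always keep the two end points
            trial = kept[:i] + kept[i + 1:]
            errs.append((reconstruction_error(times, values, trial), i))
        _, drop = min(errs)
        kept.pop(drop)
    return kept

t = np.linspace(0, 10, 21)
profiles = np.vstack([np.sin(t), np.exp(-t / 3.0)])   # two synthetic pilot profiles
print(greedy_select(t, profiles, k=6))                 # indices of 6 retained time points
```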

  3. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
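
    As a hedged illustration of the stump-regression step only (the interval-wise p-value construction, the finite sample bias correction and the bootstrap confidence intervals from the paper are not reproduced), the sketch below scans candidate change points and fits a two-level step function to a sequence of p-values; all data and names are made up.

    ```python
    import numpy as np

    def fit_stump(t, p):
        """t: interval midpoints, p: p-values per interval; return (change point, sse)
        for the best two-level step function fitted by least squares."""
        best = (None, np.inf)
        for k in range(1, len(t) - 1):
            left, right = p[:k], p[k:]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[1]:
                best = (t[k], sse)
        return best

    # hypothetical p-values: small while the hazard is still decreasing, then large
    rng = np.random.default_rng(0)
    t = np.arange(1, 61)                                  # days since admission
    p = np.concatenate([rng.uniform(0, 0.05, 20), rng.uniform(0, 1, 40)])
    print(fit_stump(t, p))                                # change point near day 21
    ```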

  4. Standard error of estimated average timber volume per acre under point sampling when trees are measured for volume on a subsample of all points.

    Treesearch

    Floyd A. Johnson

    1961-01-01

    This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...

  5. [Research on fast classification based on LIBS technology and principal component analysis].

    PubMed

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were performed on thirteen standard aluminum alloy samples belonging to 4 different types, and the results suggested that the LIBS-PCA method can be used for fast classification of aluminum alloys. PCA was used to analyze the spectral data from the LIBS experiments: the three principal components contributing the most were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. The spectral sample points showed clear clustering according to the type of aluminum alloy they belong to. This result established the three principal components and a preliminary zoning of aluminum alloy types. To verify its accuracy, 20 different aluminum alloy samples were subjected to the same experiments. The spectral sample points all fell within the zones corresponding to their aluminum alloy types, confirming the correctness of the zoning established from the standard samples. On this basis, unknown types of aluminum alloy can be identified. All the experimental results showed that the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy is more than 97.14%, and that it can classify the different types effectively. Compared with commonly used chemical methods, laser-induced breakdown spectroscopy can detect samples in situ and rapidly with little sample preparation; therefore, combining LIBS and PCA in areas such as quality testing and on-line industrial control can save considerable time and cost and greatly improve detection efficiency.
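
    A minimal scikit-learn sketch of the PCA scoring step described above, with synthetic spectra standing in for the LIBS measurements; it is not the authors' pipeline, and the alloy classes, channel count and noise levels are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_lines = 200                                   # spectral channels
    centers = rng.normal(size=(4, n_lines))         # 4 hypothetical alloy types
    spectra, labels = [], []
    for cls, c in enumerate(centers):
        for _ in range(30):
            spectra.append(c + 0.1 * rng.normal(size=n_lines))   # noisy replicates
            labels.append(cls)
    spectra = np.array(spectra)

    pca = PCA(n_components=3)
    scores = pca.fit_transform(spectra)             # (120, 3) principal component scores
    print("explained variance ratio:", pca.explained_variance_ratio_)
    # 'scores' can be plotted in 3D; replicates of the same alloy type cluster together.
    ```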

  6. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  7. A novel image registration approach via combining local features and geometric invariants

    PubMed Central

    Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa

    2018-01-01

    Image registration is widely used in many fields, but the adaptability of the existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified oriented FAST and rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse image details. We therefore propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
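
    The sketch below shows only the generic part of such a pipeline with OpenCV: ORB detection, Hamming-distance matching and RANSAC-based outlier rejection. The authors' geometric-invariant descriptors and cost function are not reproduced, and the image file names are placeholders.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards outlier correspondences while estimating the transform
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    print("inliers:", int(inlier_mask.sum()), "of", len(matches))
    ```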

  8. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the limit of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.

  9. Effect of black point on accuracy of LCD displays colorimetric characterization

    NASA Astrophysics Data System (ADS)

    Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan

    2018-03-01

    The black point is the point at which the digital drive value of each single RGB channel is 0. Owing to light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, with low-luminance drive values affected most strongly. This paper describes the characterization accuracy of the polynomial model method and the effect of the black point on that accuracy, and reports the resulting color differences. When the black point is included in the characterization equations, the maximum color difference is 3.246, which is 2.36 lower than the maximum color difference obtained without considering the black point. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.

  10. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.

  11. Can the Afinion HbA1c Point-of-Care instrument be an alternative method for the Tosoh G8 in the case of Hb-Tacoma?

    PubMed

    Lenters-Westra, Erna; Strunk, Annuska; Campbell, Paul; Slingerland, Robbert J

    2017-02-01

    Hb-variant interference when reporting HbA1c has been an ongoing challenge since HbA1c was introduced to monitor patients with diabetes mellitus. Most Hb-variants show an abnormal chromatogram when cation-exchange HPLC is used for the determination of HbA1c. Unfortunately, the Tosoh G8 generates what appears to be a normal chromatogram in the presence of Hb-Tacoma, yielding a falsely high HbA1c value. The primary aim of the study was to investigate whether the Afinion HbA1c point-of-care (POC) instrument could be used as an alternative method for the Tosoh G8 when testing for HbA1c in the presence of Hb-Tacoma. Whole blood samples were collected in K₂EDTA tubes from individuals homozygous for HbA (n = 40) and heterozygous for Hb-Tacoma (n = 20). Samples were then immediately analyzed with the Afinion POC instrument. After analysis, aliquots of each sample were frozen at -80 °C. The frozen samples were shipped on dry ice to the European Reference Laboratory for Glycohemoglobin (ERL) and analyzed with three International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and National Glycohemoglobin Standardization Program (NGSP) Secondary Reference Measurement Procedures (SRMPs). The Premier Hb9210 was used as the reference method. When compared to the reference method, samples with Hb-Tacoma yielded mean relative differences of 31.8% on the Tosoh G8, 21.5% on the Roche Tina-quant Gen. 2 and 16.8% on the Afinion. The Afinion cannot be used as an alternative method for the Tosoh G8 when testing for HbA1c in the presence of Hb-Tacoma.

  12. Pulse-Echo Ultrasonic Imaging Method for Eliminating Sample Thickness Variation Effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1997-01-01

    A pulse-echo immersion method for ultrasonic evaluation of a material, which accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects, employs a single transducer together with automatic scanning and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped proportionally to a color or grey scale and displayed on a video screen.

  13. Application of dual-cloud point extraction for the trace levels of copper in serum of different viral hepatitis patients by flame atomic absorption spectrometry: a multivariate study.

    PubMed

    Arain, Salma Aslam; Kazi, Tasneem G; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal

    2014-12-10

    An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu(2+)) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the elemental ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN), and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu(2+) using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L⁻¹ and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu(2+) in a certified reference material (CRM) of serum by both the d-CPE and the conventional CPE procedure on the same CRM. The proposed method was successfully applied to the determination of Cu(2+) in serum samples of different viral hepatitis patients and healthy controls. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. High temperature pressurized high frequency testing rig and test method

    DOEpatents

    De La Cruz, Jose; Lacey, Paul

    2003-04-15

    An apparatus is described which permits the lubricity of fuel compositions to be evaluated at or near the temperatures and pressures experienced by compression ignition fuel injector components during operation in a running engine. The apparatus consists of means to apply a measured force between two surfaces and oscillate them at high frequency while they are wetted with a sample of the fuel composition heated to an operator-selected temperature. Provision is made to permit operation at or near the flash point of the fuel compositions. Additionally, a method of using the subject apparatus to simulate ASTM Testing Method D6079 is disclosed, said method involving using the disclosed apparatus to contact the faces of prepared workpieces under a measured load, sealing the workface contact point into the disclosed apparatus while immersing said contact point between said workfaces in a lubricating media to be tested, pressurizing and heating the chamber and thereby the fluid and workfaces therewithin, using the disclosed apparatus to impart a differential linear motion between the workpieces at their contact point until a measurable scar is imparted to at least one workpiece workface, and then evaluating the workface scar.

  15. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
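
    As a small worked example of the variance-reduction idea mentioned above (sampling preferentially the important configurations), the sketch below compares plain Monte Carlo integration with importance sampling for a sharply decaying integrand; the integrand and the proposal density are arbitrary choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    f = lambda x: np.cos(x) * np.exp(-10 * x)     # integrand on [0, 1]

    # plain Monte Carlo with a uniform proposal on [0, 1]
    x = rng.uniform(size=n)
    w_plain = f(x)

    # importance sampling from p(x) = 10 e^{-10x} / (1 - e^{-10}) on [0, 1]
    u = rng.uniform(size=n)
    y = -np.log(1.0 - u * (1.0 - np.exp(-10.0))) / 10.0   # inverse-CDF sampling
    p = 10.0 * np.exp(-10.0 * y) / (1.0 - np.exp(-10.0))
    w_is = f(y) / p                                       # importance weights

    print("plain MC:            %.6f +/- %.6f" % (w_plain.mean(), w_plain.std() / np.sqrt(n)))
    print("importance sampling: %.6f +/- %.6f" % (w_is.mean(), w_is.std() / np.sqrt(n)))
    ```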

  16. Optimal model-based sensorless adaptive optics for epifluorescence microscopy.

    PubMed

    Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel

    2018-01-01

    We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.

  17. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
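
    A hedged, single-fidelity illustration of the sequential-sampling idea only, not the authors' improved hierarchical kriging or the low-/high-fidelity coupling: a Gaussian-process surrogate (scikit-learn) is refitted after each new sample and the next point is placed where the predictive uncertainty is largest. The test function and kernel settings are assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive(x):                       # stand-in for the high-fidelity model
        return np.sin(8 * x) + 0.3 * x

    X = np.array([[0.0], [0.5], [1.0]])     # initial design
    y = expensive(X).ravel()
    candidates = np.linspace(0, 1, 201).reshape(-1, 1)

    for _ in range(10):                     # sequential infill loop
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
        gp.fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        x_new = candidates[np.argmax(std)]  # point of largest predictive uncertainty
        X = np.vstack([X, x_new])
        y = np.append(y, expensive(x_new))

    print("final design points:", np.sort(X.ravel()))
    ```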

  18. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.

  19. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508

  20. A new dispersive liquid-liquid microextraction using ionic liquid based microemulsion coupled with cloud point extraction for determination of copper in serum and water samples.

    PubMed

    Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem

    2016-04-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid assisted microemulsion (IL-µE-DLLME) combined with cloud point extraction has been developed for the preconcentration of copper (Cu(2+)) in drinking water and in serum samples of adolescent female hepatitis C (HCV) patients. In this method, a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu(2+) in real water and acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. The phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters such as pH, oxine concentration, and centrifugation time and rate was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with a relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 Riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between the obtained and certified values of Cu(2+). The developed procedure was successfully applied to the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. A novel flow injection chemiluminescence method for automated and miniaturized determination of phenols in smoked food samples.

    PubMed

    Vakh, Christina; Evdokimova, Ekaterina; Pochivalov, Aleksei; Moskvin, Leonid; Bulatov, Andrey

    2017-12-15

    An easily performed, fully automated and miniaturized flow injection chemiluminescence (CL) method for the determination of phenols in smoked food samples has been proposed. The method includes ultrasound-assisted solid-liquid extraction coupled with gas-diffusion separation of phenols from the smoked food sample and absorption of the analytes into a NaOH solution in a specially designed gas-diffusion cell. The flow system was designed with a focus on automation and miniaturization, with minimal sample and reagent consumption and inexpensive instrumentation. The luminol - N-bromosuccinimide system in an alkaline medium was used for the CL determination of phenols. The limit of detection of the proposed procedure was 3·10⁻⁸ mol L⁻¹ (0.01 mg kg⁻¹) in terms of phenol. The presented method proved to be a good tool for easy, rapid and cost-effective point-of-need screening of phenols in smoked food samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it realistic to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
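
    A minimal sketch of simultaneous Gibbs updates on coding sets for a first-order GMRF on a torus; it does not reproduce the authors' convolution-based implementation or the truncated (acceptance/rejection) case, and the lattice size and conditional precision are arbitrary. Black and white sites of a checkerboard share no neighbours, so each half of the lattice can be updated at once.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, kappa, n_sweeps = 64, 4.1, 200        # lattice size, conditional precision (> 4)
    x = np.zeros((n, n))

    ii, jj = np.indices((n, n))
    colors = (ii + jj) % 2                    # checkerboard coding sets

    for _ in range(n_sweeps):
        for c in (0, 1):
            # sum of the 4 nearest neighbours (periodic boundary)
            nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                  np.roll(x, 1, 1) + np.roll(x, -1, 1))
            mean = nb / kappa                 # full-conditional mean of the GMRF
            std = 1.0 / np.sqrt(kappa)        # full-conditional standard deviation
            mask = colors == c
            x[mask] = mean[mask] + std * rng.normal(size=mask.sum())

    print("sample mean %.3f, sample std %.3f" % (x.mean(), x.std()))
    ```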

  3. A New Method for Calculating Counts in Cells

    NASA Astrophysics Data System (ADS)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.

  4. Recovery of intrinsic fluorescence from single-point interstitial measurements for quantification of doxorubicin concentration

    PubMed Central

    Baran, Timothy M.; Foster, Thomas H.

    2014-01-01

    Background and Objective We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Materials and Methods Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. Results We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. Conclusion This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. PMID:24037853

  5. Development of a simple, sensitive and inexpensive ion-pairing cloud point extraction approach for the determination of trace inorganic arsenic species in spring water, beverage and rice samples by UV-Vis spectrophotometry.

    PubMed

    Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail

    2015-08-01

    The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years because arsenic species are considered carcinogenic and are found at high concentrations in such samples. This communication describes a new cloud-point extraction (CPE) method for the determination, by UV-Visible spectrophotometry (UV-Vis), of low quantities of arsenic species in samples purchased from the local market. The method is based on the selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S blank/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1) with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. CUTOFF POINT OF THE PHASE ANGLE IN PRE-RADIOTHERAPY CANCER PATIENTS.

    PubMed

    Souza Thompson Motta, Rachel; Alves Castanho, Ivany; Guillermo Coca Velarde, Luis

    2015-11-01

    Malnutrition is a common complication for cancer patients. The phase angle (PA), a direct measurement from bioelectrical impedance analysis (BIA), has been considered a predictor of body cell mass and a prognostic indicator. Cutoff points for the PA associated with nutritional risk in cancer patients have not yet been determined. The aim was to assess the possibility of determining a cutoff point for the PA to identify nutritional risk in pre-radiotherapy cancer patients. The sample group comprised patients of both genders diagnosed with cancer and referred for ambulatory radiotherapy. The variables assessed were body mass index (BMI), percentage of weight loss (% WL), mid-arm circumference (MAC), triceps skinfold thickness (TST), mid-arm muscle circumference (MAMC), mid-arm muscle area (MAMA), the score and categorical assessment obtained using the Patient-Generated Subjective Global Assessment (PG-SGA) form, PA and standardized phase angle (SPA). The kappa coefficient was used to test the degree of agreement between the diagnoses of nutritional risk obtained from the different methods of nutritional assessment. Cutoff points for the PA against the anthropometric indicators and the PG-SGA were determined using receiver operating characteristic (ROC) curves, and patient survival was analyzed with the Cox regression method. The cutoff points with the greatest discriminatory power were those obtained from BMI (5.2) and the categorical assessment of the PG-SGA (5.4). The diagnoses obtained using these cutoff points showed a significant association with risk of death for the patients in the sample group. We recommend using the cutoff point of 5.2 for the PA as a criterion for identifying nutritional risk in pre-radiotherapy cancer patients. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  7. Assessment of the trace element distribution in soils in the parks of the city of Zagreb (Croatia).

    PubMed

    Roje, Vibor; Orešković, Marko; Rončević, Juraj; Bakšić, Darko; Pernar, Nikola; Perković, Ivan

    2018-02-07

    This paper presents the results of preliminary testing of selected trace elements in the soils of several parks in the city of Zagreb, Republic of Croatia. In each park, samples were taken from several points at various distances from the roads. The samples were taken at two different depths, 0-5 and 30-45 cm, and a composite sample was prepared for each sampling point. Microwave-assisted wet digestion of the soil samples was performed, and the elements were determined by the ICP-AES technique. The results obtained for Al, As, Ba, Mn, Ti, V, and K are in good agreement with the results published in the scientific literature so far. The mass fraction values of Cd, Cr, Cu, Ni, Pb, and Zn are somewhat higher than the maximum values given in the Croatian Directive on agricultural land protection against pollution. Be, Mo, Sb, Se, and Tl were present in the samples at concentrations lower than their method detection limit values.

  8. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used to calculate the holograms of points at different depth positions, instead of using layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
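
    For orientation, the sketch below shows the naive point-source accumulation that such methods accelerate: every object point contributes a zone-plate-like spherical wave to the hologram plane. The LUT, symmetry and depth-compensation ideas of the paper are not implemented, and all optical parameters and object points are placeholders.

    ```python
    import numpy as np

    wavelength = 532e-9                          # m
    pitch = 8e-6                                 # hologram pixel pitch, m
    N = 512
    x = (np.arange(N) - N / 2) * pitch
    X, Y = np.meshgrid(x, x)

    # hypothetical object points: (x, y, z, amplitude)
    points = [(0.0, 0.0, 0.10, 1.0),
              (3e-4, -2e-4, 0.12, 0.8)]

    k = 2 * np.pi / wavelength
    field = np.zeros((N, N), dtype=complex)
    for px, py, pz, amp in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r    # spherical wave from one object point

    fringes = np.real(field)                     # interference with an on-axis plane reference
    print(fringes.shape, float(fringes.min()), float(fringes.max()))
    ```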

  9. Artificial odor discrimination system using electronic nose and neural networks for the identification of urinary tract infection.

    PubMed

    Kodogiannis, Vassilis S; Lygouras, John N; Tarczynski, Andrzej; Chowdrey, Hardial S

    2008-11-01

    Current clinical diagnostics are based on biochemical, immunological, or microbiological methods. However, these methods are operator dependent, time-consuming, expensive, and require special skills, and are therefore, not suitable for point-of-care testing. Recent developments in gas-sensing technology and pattern recognition methods make electronic nose technology an interesting alternative for medical point-of-care devices. An electronic nose has been used to detect urinary tract infection from 45 suspected cases that were sent for analysis in a U.K. Public Health Registry. These samples were analyzed by incubation in a volatile generation test tube system for 4-5 h. Two issues are being addressed, including the implementation of an advanced neural network, based on a modified expectation maximization scheme that incorporates a dynamic structure methodology and the concept of a fusion of multiple classifiers dedicated to specific feature parameters. This study has shown the potential for early detection of microbial contaminants in urine samples using electronic nose technology.

  10. Quadtree of TIN: a new algorithm of dynamic LOD

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs the regular GRID structure based on quadtrees or triangle simplification methods based on triangulated irregular networks (TIN). Compared with GRID, TIN is a more refined means of expressing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize terrain LOD but produces a higher triangle count. A new algorithm that takes full advantage of the merits of both methods is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it controls the level of detail through the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution rendering of large-scale terrain in real time.

  11. New Cost-Effective Method for Long-Term Groundwater Monitoring Programs

    DTIC Science & Technology

    2013-05-01

    with a small-volume, gas-tight syringe (< 1 mL) and injected directly into the field-portable GC. Alternatively, the well headspace sample can be ... according to manufacturers’ protocols. Isobutylene was used as the calibration standard for the PID. The standard gas mixtures were used for 3-point ... monitoring wells are being evaluated: 1) direct headspace sampling, 2) sampling tube with gas-permeable membrane, and 3) gas-filled passive vapor

  12. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
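
    A minimal sketch of one ingredient of the approach: least-squares fitting of a local paraboloid to a point's neighbourhood and reading approximate principal curvatures off the quadratic part (assuming the neighbourhood is expressed in a local frame with small slope). The MLS weighting, the projection onto valley-ridge lines and the polyline growing are not reproduced, and the toy neighbourhood is synthetic.

    ```python
    import numpy as np

    def principal_curvatures(neigh):
        """neigh: (n, 3) array of neighbour coordinates in a local frame centred
        on the query point, with z roughly along the surface normal."""
        x, y, z = neigh[:, 0], neigh[:, 1], neigh[:, 2]
        # fit z = a x^2 + b x y + c y^2 + d x + e y + f by least squares
        A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
        H = np.array([[2*a, b], [b, 2*c]])       # Hessian of the fitted surface
        return np.linalg.eigvalsh(H)             # approximate principal curvatures

    # toy neighbourhood sampled from a ridge-like surface z = 2x^2 + 0.1y^2
    rng = np.random.default_rng(0)
    xy = rng.uniform(-0.05, 0.05, size=(50, 2))
    z = 2 * xy[:, 0]**2 + 0.1 * xy[:, 1]**2 + 1e-4 * rng.normal(size=50)
    print(principal_curvatures(np.column_stack([xy, z])))   # roughly [0.2, 4.0]
    ```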

  13. [Spatial variation of soil properties and quality evaluation for arable Ustic Cambosols in central Henan Province].

    PubMed

    Zhang, Xue-Lei; Feng, Wan-Wan; Zhong, Guo-Min

    2011-01-01

    A GIS-based 500 m x 500 m soil sampling grid comprising 248 points was set up at Wenshu Town of Yuzhou County in central Henan Province, where typical Ustic Cambosols are located. Using digital soil data, a spatial database was established, from which the latitude and longitude of all sampling points were produced for GPS guidance in the field. Soil samples (0-20 cm) were collected from 202 points; bulk density measurements were conducted on 34 randomly selected points, and the ten soil property items used as factors for soil quality assessment, including organic matter, available K, available P, pH, total N, total P, soil texture, cation exchange capacity (CEC), slowly available K, and bulk density, were analyzed for the other points. The soil property data were checked with statistical tools and then classified with standard criteria from home and abroad. Factor weights were assigned by the analytic hierarchy process (AHP) method, and the spatial variation of the 10 major soil properties, as well as the soil quality classes and the areas they occupy, were worked out from Kriging interpolation maps. The results showed that the arable Ustic Cambosols in the study area are of good quality: over 95% ranked in the good and medium classes and less than 5% in the poor class.

  14. Using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    USDA-ARS?s Scientific Manuscript database

    Methods to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminant. While informative, many of these methods suffer from poor recovery rates and only provide a snapshot of the microbial load at the time of collectio...

  15. Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization

    PubMed Central

    Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin

    2017-01-01

    In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method improves RANSAC in three aspects: first, the hypotheses are preferentially generated by sampling the input feature points in order of the ages and similarities of the features; second, the evaluation of hypotheses is performed with the SPRT (Sequential Probability Ratio Test), which discards bad hypotheses very quickly without verifying all the data points; third, we aggregate the three best hypotheses to obtain the final estimate instead of selecting only the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses early, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and New Tsukuba datasets. Experimental results show that the proposed method achieves better results for both speed and accuracy than RANSAC. PMID:29027935

  16. Simultaneous spectrophotometric determination of valsartan and hydrochlorothiazide by H-point standard addition method and partial least squares regression.

    PubMed

    Lakshmi, Karunanidhi Santhana; Lakshmi, Sivasubramanian

    2011-03-01

    Simultaneous determination of valsartan and hydrochlorothiazide by the H-point standard additions method (HPSAM) and partial least squares (PLS) calibration is described. Absorbances at a pair of wavelengths, 216 and 228 nm, were monitored with the addition of standard solutions of valsartan. Results of applying HPSAM showed that valsartan and hydrochlorothiazide can be determined simultaneously at concentration ratios varying from 20:1 to 1:15 in a mixed sample. The proposed PLS method does not require chemical separation and spectral graphical procedures for quantitative resolution of mixtures containing the titled compounds. The calibration model was based on absorption spectra in the 200-350 nm range for 25 different mixtures of valsartan and hydrochlorothiazide. Calibration matrices contained 0.5-3 μg mL⁻¹ of both valsartan and hydrochlorothiazide. The standard error of prediction (SEP) for valsartan and hydrochlorothiazide was 0.020 and 0.038 μg mL⁻¹, respectively. Both proposed methods were successfully applied to the determination of valsartan and hydrochlorothiazide in several synthetic and real matrix samples.
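
    A minimal scikit-learn sketch of the PLS calibration part only (HPSAM is not reproduced), using synthetic two-component spectra in place of the real UV absorbance data; the band positions, noise level and concentration ranges are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(200, 350, 150)
    # hypothetical pure-component spectra (Gaussian absorption bands)
    s1 = np.exp(-0.5 * ((wavelengths - 216) / 12) ** 2)
    s2 = np.exp(-0.5 * ((wavelengths - 270) / 20) ** 2)

    C_train = rng.uniform(0.5, 3.0, size=(25, 2))             # concentrations of each analyte
    X_train = C_train @ np.vstack([s1, s2]) + 0.002 * rng.normal(size=(25, 150))

    pls = PLSRegression(n_components=2)
    pls.fit(X_train, C_train)

    C_test = np.array([[1.0, 2.0], [2.5, 0.8]])
    X_test = C_test @ np.vstack([s1, s2]) + 0.002 * rng.normal(size=(2, 150))
    print(pls.predict(X_test))                                # close to C_test
    ```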

  17. Estimating population trends with a linear model

    USGS Publications Warehouse

    Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.

    2003-01-01

    We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
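
    As a hedged illustration in the same design-based spirit, not necessarily the authors' exact estimator or their software, the sketch below fits an ordinary least-squares slope at each survey location, averages the slopes, and attaches a t-based confidence interval; the counts are simulated.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2011)
    n_sites = 40
    true_trend = -1.5                               # birds per year
    counts = (50 + true_trend * (years - years[0])
              + rng.normal(scale=8.0, size=(n_sites, years.size)))

    slopes = np.array([np.polyfit(years, c, 1)[0] for c in counts])   # per-site OLS slopes
    mean_slope = slopes.mean()
    se = slopes.std(ddof=1) / np.sqrt(n_sites)
    t_crit = stats.t.ppf(0.975, df=n_sites - 1)
    print(f"trend = {mean_slope:.2f} +/- {t_crit * se:.2f} birds/year")
    ```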

  18. The influence of point defects on the thermal conductivity of AlN crystals

    NASA Astrophysics Data System (ADS)

    Rounds, Robert; Sarkar, Biplab; Alden, Dorian; Guo, Qiang; Klump, Andrew; Hartmann, Carsten; Nagashima, Toru; Kirste, Ronny; Franke, Alexander; Bickermann, Matthias; Kumagai, Yoshinao; Sitar, Zlatko; Collazo, Ramón

    2018-05-01

    The average bulk thermal conductivity of free-standing physical vapor transport and hydride vapor phase epitaxy single crystal AlN samples with different impurity concentrations is analyzed using the 3ω method in the temperature range of 30-325 K. AlN wafers grown by physical vapor transport show significant variation in thermal conductivity at room temperature with values ranging between 268 W/m K and 339 W/m K. AlN crystals grown by hydride vapor phase epitaxy yield values between 298 W/m K and 341 W/m K at room temperature, suggesting that the same fundamental mechanisms limit the thermal conductivity of AlN grown by both techniques. All samples in this work show phonon resonance behavior resulting from incorporated point defects. Samples shown by optical analysis to contain carbon-silicon complexes exhibit higher thermal conductivity above 100 K. Phonon scattering by point defects is determined to be the main limiting factor for thermal conductivity of AlN within the investigated temperature range.

  19. Comparison of polyacrylamide and agarose gel thin-layer isoelectric focusing for the characterization of beta-lactamases.

    PubMed

    Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P

    1983-08-01

    Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases.

  20. Comparison of polyacrylamide and agarose gel thin-layer isoelectric focusing for the characterization of beta-lactamases.

    PubMed Central

    Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P

    1983-01-01

    Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases. Images PMID:6605714

  1. Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.

    PubMed

    Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M

    2015-10-05

    The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix-multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT-based methods while offering better behavior with respect to aliasing and transparent boundary conditions, along with greater flexibility, since the numbers of sampling points and the computational window sizes of the input and output planes are independent. This flexibility makes the method significantly faster than FFT-based propagators when only a single point, as in Strehl metrics, or a limited number of points, as in power-in-the-bucket metrics, are needed in the output observation plane.
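
    A one-dimensional sketch of casting the Fresnel integral as a matrix transform, illustrating that the input and output grids are independent (the output can be a single point or a small window); it does not include the fast matrix-multiply machinery of the paper, and all optical parameters are placeholders.

    ```python
    import numpy as np

    lam, z = 1e-6, 1.0                        # wavelength and propagation distance (m)
    dx = 10e-6
    x_in = (np.arange(1024) - 512) * dx       # input plane samples
    x_out = np.linspace(-2e-3, 2e-3, 41)      # arbitrary, much smaller output grid

    # discretized Fresnel kernel: M[j, k] maps the field at x_in[k] to x_out[j]
    M = (np.exp(1j * np.pi * (x_out[:, None] - x_in[None, :]) ** 2 / (lam * z))
         * dx / np.sqrt(1j * lam * z))

    E_in = np.exp(-(x_in / 0.5e-3) ** 2)      # Gaussian beam at the input plane
    E_out = M @ E_in                          # propagated field on the output grid
    print(np.abs(E_out).max())
    ```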

  2. [Validation of the nutritional index in Mexican pre-teens with the sensitivity and specificity method].

    PubMed

    Saucedo-Molina, T J; Gómez-Peresmitré, G

    1998-01-01

    To determine the diagnostic validity of the nutritional index (NI) in a sample of Mexican preadolescents. A total of 256 preadolescents, between 10 and 12 years old, male and female, students from Mexico City, were used to establish the diagnostic validity of the NI using the sensitivity and specificity method. The findings show that the conventional NI cut-off points had good sensitivity and specificity for the diagnosis of low weight, normality and obesity, but not for overweight. When the cut-off points of the NI were normalized, the sensitivity, specificity and predictive power values were more suitable in all categories. When working with preadolescents, it is better to use the new cut-off points of the NI in order to obtain a more reliable diagnosis.

  3. Phase aberration compensation of digital holographic microscopy based on least squares surface fitting

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo

    2009-10-01

    Digital holographic microscopy allows numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration is also introduced into the object wavefront, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least-squares surface fitting, using fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double exposure method.
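
    A minimal sketch of the aberration-compensation step alone: fit a low-order two-dimensional polynomial to a subsampled grid of the unwrapped phase and subtract it. The holographic reconstruction itself is not shown, and the subsampling step, polynomial order and toy phase map are assumptions.

    ```python
    import numpy as np

    def compensate_phase(phase, step=8, order=2):
        """phase: 2-D unwrapped phase map; fit a 2-D polynomial of the given order
        on a subsampled grid (fewer points than the full hologram) and subtract it."""
        ny, nx = phase.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        xs = xx[::step, ::step].ravel()
        ys = yy[::step, ::step].ravel()
        ps = phase[::step, ::step].ravel()

        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([xs**i * ys**j for i, j in terms])
        coeffs = np.linalg.lstsq(A, ps, rcond=None)[0]

        background = sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))
        return phase - background

    # toy example: a spherical-like aberration plus a small object phase
    yy, xx = np.mgrid[0:256, 0:256]
    aberration = 1e-4 * ((xx - 128) ** 2 + (yy - 128) ** 2)
    obj = 0.5 * np.exp(-((xx - 100) ** 2 + (yy - 150) ** 2) / 200.0)
    flat = compensate_phase(aberration + obj)
    # the dynamic range collapses once the curvature is removed
    print(np.ptp(aberration + obj), np.ptp(flat))
    ```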

  4. Fuzzy support vector machine for microarray imbalanced data classification

    NASA Astrophysics Data System (ADS)

    Ladayya, Faroh; Purnami, Santi Wulan; Irhamah

    2017-11-01

    DNA microarrays are data containing gene expression with small sample sizes and high numbers of features. Furthermore, class imbalance is a common problem in microarray data. It occurs when a dataset is dominated by a class that has significantly more instances than the other, minority classes. A classification method is therefore needed that solves the problems of high dimensionality and imbalanced data. Support Vector Machine (SVM) is one of the classification methods capable of handling large or small samples, nonlinearity, high dimensionality, overfitting and local minima. SVM has been widely applied to DNA microarray data classification, and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data remain a problem because SVM treats all samples with the same importance, so the results are biased toward the majority class. To overcome the imbalance, Fuzzy SVM (FSVM) is proposed. This method assigns a fuzzy membership to each input point and reformulates the SVM so that different input points provide different contributions to the classifier. The minority classes receive large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Because DNA microarray data are high dimensional with a very large number of features, feature selection is first performed using the Fast Correlation-Based Filter (FCBF). In this study, the data are analyzed by SVM and FSVM, both with and without FCBF feature selection, and the classification performance of each is obtained. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
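
    A hedged scikit-learn approximation of the weighting idea, using per-sample weights as a stand-in for fuzzy memberships and simple univariate filtering instead of FCBF; the synthetic data, class sizes and weighting rule are assumptions, not the authors' formulation.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_major, n_minor, n_genes = 180, 20, 2000
    X = rng.normal(size=(n_major + n_minor, n_genes))
    X[n_major:, :10] += 1.5                      # minority class differs in 10 genes
    y = np.array([0] * n_major + [1] * n_minor)

    X_sel = SelectKBest(f_classif, k=50).fit_transform(X, y)   # filter-style selection

    # larger "membership" weights for the minority class
    weights = np.where(y == 1, n_major / n_minor, 1.0)

    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X_sel, y, sample_weight=weights)
    print("training accuracy on minority class:",
          (clf.predict(X_sel[y == 1]) == 1).mean())
    ```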

  5. Spatially explicit models for inference about density in unmarked or partially marked populations

    USGS Publications Warehouse

    Chandler, Richard B.; Royle, J. Andrew

    2013-01-01

    Recently developed spatial capture–recapture (SCR) models represent a major advance over traditional capture–recapture (CR) models because they yield explicit estimates of animal density instead of population size within an unknown area. Furthermore, unlike nonspatial CR methods, SCR models account for heterogeneity in capture probability arising from the juxtaposition of animal activity centers and sample locations. Although the utility of SCR methods is gaining recognition, the requirement that all individuals can be uniquely identified excludes their use in many contexts. In this paper, we develop models for situations in which individual recognition is not possible, thereby allowing SCR concepts to be applied in studies of unmarked or partially marked populations. The data required for our model are spatially referenced counts made on one or more sample occasions at a collection of closely spaced sample units such that individuals can be encountered at multiple locations. Our approach includes a spatial point process for the animal activity centers and uses the spatial correlation in counts as information about the number and location of the activity centers. Camera-traps, hair snares, track plates, sound recordings, and even point counts can yield spatially correlated count data, and thus our model is widely applicable. A simulation study demonstrated that while the posterior mean exhibits frequentist bias on the order of 5–10% in small samples, the posterior mode is an accurate point estimator as long as adequate spatial correlation is present. Marking a subset of the population substantially increases posterior precision and is recommended whenever possible. We applied our model to avian point count data collected on an unmarked population of the northern parula (Parula americana) and obtained a density estimate (posterior mode) of 0.38 (95% CI: 0.19–1.64) birds/ha. Our paper challenges sampling and analytical conventions in ecology by demonstrating that neither spatial independence nor individual recognition is needed to estimate population density—rather, spatial dependence can be informative about individual distribution and density.

  6. Active AirCore Sampling: Constraining Point Sources of Methane and Other Gases with Fixed Wing Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Bent, J. D.; Sweeney, C.; Tans, P. P.; Newberger, T.; Higgs, J. A.; Wolter, S.

    2017-12-01

    Accurate estimates of point source gas emissions are essential for reconciling top-down and bottom-up greenhouse gas measurements, but sampling such sources is challenging. Remote sensing methods are limited by resolution and cloud cover; aircraft methods are limited by air traffic control clearances and the need to properly determine boundary layer height. A new sampling approach leverages the ability of unmanned aerial systems (UAS) to measure all the way to the surface near the source of emissions, improving sample resolution and reducing the need to characterize a wide downstream swath or to measure to the full height of the planetary boundary layer (PBL). The "Active-AirCore" sampler, currently under development, will fly on a fixed wing UAS in Class G airspace, spiraling from the surface to 1200 ft AGL around point sources such as leaking oil wells to measure methane, carbon dioxide and carbon monoxide. The sampler collects a 100-meter long sample "core" of air in a 1/8" passivated stainless steel tube. This "core" is run on a high-precision instrument shortly after the UAS is recovered. Sample values are mapped to a specific geographic location by cross-referencing GPS and flow/pressure metadata, and fluxes are quantified by applying Gauss's theorem to the data, mapped onto the spatial "cylinder" circumscribed by the UAS. The Active-AirCore builds on the sampling ability and analytical approach of the related AirCore sampler, which profiles the atmosphere passively using a balloon launch platform, but will add an active pumping capability needed for near-surface horizontal sampling applications. Here, we show design elements and laboratory and field test results for methane, describe the overall goals of the mission, and discuss how the platform can be adapted, with minimal effort, to measure other gas species.

  7. LoCoH: Non-parametric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: a "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from the k − 1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
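
    A minimal sketch of the k-LoCoH construction is given below, using SciPy for the local hulls and Shapely for their union. The isopleth rule (adding hulls from smallest to largest until a fixed fraction of points is covered) is simplified, and the parameter values are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree, ConvexHull
from shapely.geometry import Polygon, Point
from shapely.ops import unary_union

def k_locoh(points, k=10, isopleth=0.95):
    """k-LoCoH sketch: build a convex hull around each point and its k-1
    nearest neighbours, then union the smallest hulls until the requested
    fraction of points is covered (a simplified isopleth rule)."""
    tree = cKDTree(points)
    hulls = []
    for p in points:
        _, idx = tree.query(p, k=k)
        hull = ConvexHull(points[idx])
        hulls.append(Polygon(points[idx][hull.vertices]))
    hulls.sort(key=lambda h: h.area)                  # smallest hulls first
    pts = [Point(xy) for xy in points]
    target = int(np.ceil(isopleth * len(points)))
    merged = None
    for h in hulls:
        merged = h if merged is None else unary_union([merged, h])
        covered = sum(merged.intersects(pt) for pt in pts)
        if covered >= target:
            break
    return merged                                     # home-range polygon

# Usage with synthetic relocations:
# hr = k_locoh(np.random.default_rng(0).random((200, 2)) * 100, k=12)
# print(hr.area)
```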

  8. Efficient exploration of chemical space by fragment-based screening.

    PubMed

    Hall, Richard J; Mortenson, Paul N; Murray, Christopher W

    2014-01-01

    Screening methods seek to sample a vast chemical space in order to identify starting points for further chemical optimisation. Fragment based drug discovery exploits the superior sampling of chemical space that can be achieved when the molecular weight is restricted. Here we show that commercially available fragment space is still relatively poorly sampled and argue for highly sensitive screening methods to allow the detection of smaller fragments. We analyse the properties of our fragment library versus the properties of X-ray hits derived from the library. We particularly consider properties related to the degree of planarity of the fragments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Total 25-Hydroxyvitamin D Determination by an Entry Level Triple Quadrupole Instrument: Comparison between Two Commercial Kits

    PubMed Central

    Cocci, Andrea; Zuppi, Cecilia; Persichilli, Silvia

    2013-01-01

    Objective. 25-hydroxyvitamin D2/D3 (25-OHD2/D3) determination is a reliable biomarker for vitamin D status. Liquid chromatography-tandem mass spectrometry was recently proposed as a reference method for vitamin D status evaluation. The aim of this work is to compare two commercial kits (Chromsystems and PerkinElmer) for 25-OHD2/D3 determination on our entry level LC-MS/MS. Design and Methods. The Chromsystems kit adds an online trap column to an HPLC column and provides atmospheric pressure chemical ionization, an isotopically labeled internal standard, and 4 calibrator points. The PerkinElmer kit uses a solvent extraction and protein precipitation method. This kit can be used with or without derivatization, with electrospray and atmospheric pressure chemical ionization, respectively. For each analyte, there are isotopically labeled internal standards and 7 deuterated calibrator points. Results. Performance characteristics are acceptable for both methods. The mean bias between methods, calculated on 70 samples, was 1.9 ng/mL. Linear regression analysis gave an R² of 0.94. 25-OHD2 is detectable only with the PerkinElmer kit in the derivatized assay option. Conclusion. Both methods are suitable for routine use. The Chromsystems kit minimizes manual sample preparation, requiring only protein precipitation, but, with our system, 25-OHD2 is not detectable. The PerkinElmer kit without derivatization does not guarantee acceptable performance with our LC-MS/MS system, as the sample is not purified online. Derivatization provides sufficient sensitivity for 25-OHD2 detection. PMID:23555079

  10. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    NASA Astrophysics Data System (ADS)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been developed for some time now; the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and variability in the range of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced as a result of a Digital Terrain Analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created in LHS ranged from 50 to 200 depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of covariates included within the work ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the Latin Hypercube Sampling approach. Initial results of the work include using 1000 iterations within the LHS model; 1000 iterations was consistently a reasonable value for producing sampling points that gave a good spatial representation of the environmental attributes. When working within the same spatial resolution for covariates but modifying only the desired number of sampling points produced, the change of point location portrayed a strong geospatial relationship when using continuous data. Access to agricultural fields and adjacent land uses is often "pinned" as the greatest deterrent to performing soil sampling for both soil survey and soil attribute validation work. The lack of access can be a result of poor road access and/or geographical conditions that are difficult for field work individuals to navigate. This is a simple yet persistent issue for the scientific community and, in particular, soils professionals to overcome. The ability to ease access to sampling points will in the future be a contribution to the Latin Hypercube Sampling (LHS) approach. By removing inaccessible locations from the DEM in the first instance, the LHS model can be restricted to locations with access from an adjacent road or trail. To further the approach, a road network geospatial dataset can be included within Geographic Information Systems (GIS) applications to reach already produced points using a shortest-distance network method.
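
    A bare-bones version of the core sampling step can be written as follows. The sketch assumes the covariates (e.g. TWI, LS factor, slope) have already been extracted into a table with one row per raster cell; it draws Latin hypercube targets in quantile space and matches each target to the nearest available cell. Function and variable names are illustrative, and access restrictions or categorical covariates would be handled by pre-masking or stratifying the candidate cells.

```python
import numpy as np

def lhs_targets(n, d, rng):
    """Standard Latin hypercube: one sample in each of n strata per dimension."""
    cut = (np.arange(n) + rng.random((d, n))) / n   # stratified U(0,1) values
    for row in cut:
        rng.shuffle(row)                            # randomize stratum order per dim
    return cut.T                                    # shape (n, d)

def lhs_sample_points(covariates, n_points, rng=None):
    """covariates: (n_cells, d) table of e.g. TWI, LS factor, slope per cell.
    Returns indices of the cells closest (in quantile space) to the LHS targets."""
    rng = rng or np.random.default_rng(0)
    ranks = np.argsort(np.argsort(covariates, axis=0), axis=0)
    quantiles = (ranks + 0.5) / len(covariates)     # empirical CDF per covariate
    targets = lhs_targets(n_points, covariates.shape[1], rng)
    chosen = []
    for t in targets:
        d2 = ((quantiles - t) ** 2).sum(axis=1)
        d2[chosen] = np.inf                         # avoid picking a cell twice
        chosen.append(int(np.argmin(d2)))
    return np.array(chosen)

# e.g. 100 sampling points from a covariate table derived from a 5 m DEM:
# idx = lhs_sample_points(cov_table, n_points=100)
```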

  11. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates the multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions and the potential value of the prediction point were built from borehole experiments; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimations are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  12. [Environmental geochemical baseline of heavy metals in soils of the Ili river basin and pollution evaluation].

    PubMed

    Zhao, Xin-Ru; Nasier, Telajin; Cheng, Yong-Yi; Zhan, Jiang-Yu; Yang, Jian-Hong

    2014-06-01

    Environmental geochemical baseline models of Cu, Zn, Pb, As and Hg were established by a standardized method in the chernozem, chestnut soil, sierozem and saline soil from the Ili river valley region. The theoretical baseline values were calculated. The baseline factor pollution index evaluation method, the environmental background value evaluation method and the heavy metal cleanliness evaluation method were used to compare soil pollution degrees. The baseline factor pollution index evaluation showed that As pollution was the most prominent among the four typical types of soils within the river basin, with 7.14%, 9.76% and 7.50% of sampling points in chernozem, chestnut soil and sierozem, respectively, reaching heavy pollution. In addition, 7.32% of sampling points in chestnut soil exceeded the permitted heavy metal Pb pollution index. The variation extent of As and Pb was the largest, indicating large human disturbance. The environmental background value evaluation showed that As was the main pollution element, followed by Cu, Zn and Pb. The heavy metal cleanliness evaluation showed that Cu, Zn and Pb were better than cleanliness level 2 and Hg was of cleanliness level 1 in all four types of soils. As showed moderate pollution in sierozem, and it was of cleanliness level 2 or better in chernozem, chestnut soil and saline-alkali soil. Comparing the three evaluation systems, the baseline factor pollution index evaluation more comprehensively reflected the geochemical migration characteristics of the elements and the soil formation processes, and its pollution assessment could be specific to the sampling points. The environmental background value evaluation neglected the natural migration of heavy metals and the deposition process in the soil, since it was established on regional background values. The main purpose of the heavy metal cleanliness evaluation was to evaluate the safety degree of the soil environment.

  13. The usefulness of Skylab/EREP S-190 and S-192 imagery in multistage forest surveys

    NASA Technical Reports Server (NTRS)

    Langley, P. G.; Vanroessel, J. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. The RMSE of point location achieved with the annotation system on S190A imagery was 100 m and 90 m in the x and y direction, respectively. Potential gains in sampling precision attributable to space derived imagery ranged from 4.9 to 43.3 percent depending on the image type, interpretation method, time of year, and sampling method applied. Seasonal variation was significant. S190A products obtained in September yielded higher gains than those obtained in June. Using 100 primary sample units as a base under simple random sampling, the revenue made available for incorporating space acquired data into the sample design to estimate timber volume was as high as $39,400.00.

  14. Continuity of Functional-Somatic Symptoms from Late Childhood to Young Adulthood in a Community Sample

    ERIC Educational Resources Information Center

    Steinhausen, Hans-Christoph; Metzke, Christa Winkler

    2007-01-01

    Background: The goal of this study was to assess the course of functional-somatic symptoms from late childhood to young adulthood and the associations of these symptoms with young adult psychopathology. Methods: Data were collected in a large community sample at three different points in time (1994, 1997, and 2001). Functional-somatic symptoms…

  15. Quantitative multiplex quantum dot in-situ hybridisation based gene expression profiling in tissue microarrays identifies prognostic genes in acute myeloid leukaemia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tholouli, Eleni; MacDermott, Sarah; Hoyland, Judith

    2012-08-24

    Highlights: ▶ Development of a quantitative high throughput in situ expression profiling method. ▶ Application to a tissue microarray of 242 AML bone marrow samples. ▶ Identification of HOXA4, HOXA9, Meis1 and DNMT3A as prognostic markers in AML. -- Abstract: Measurement and validation of microarray gene signatures in routine clinical samples is problematic and a rate limiting step in translational research. In order to facilitate measurement of microarray identified gene signatures in routine clinical tissue a novel method combining quantum dot based oligonucleotide in situ hybridisation (QD-ISH) and post-hybridisation spectral image analysis was used for multiplex in-situ transcript detection in archival bone marrow trephine samples from patients with acute myeloid leukaemia (AML). Tissue-microarrays were prepared into which white cell pellets were spiked as a standard. Tissue microarrays were made using routinely processed bone marrow trephines from 242 patients with AML. QD-ISH was performed for six candidate prognostic genes using triplex QD-ISH for DNMT1, DNMT3A, DNMT3B, and for HOXA4, HOXA9, Meis1. Scrambled oligonucleotides were used to correct for background staining followed by normalisation of expression against the expression values for the white cell pellet standard. Survival analysis demonstrated that low expression of HOXA4 was associated with poorer overall survival (p = 0.009), whilst high expression of HOXA9 (p < 0.0001), Meis1 (p = 0.005) and DNMT3A (p = 0.04) were associated with early treatment failure. These results demonstrate application of a standardised, quantitative multiplex QD-ISH method for identification of prognostic markers in formalin-fixed paraffin-embedded clinical samples, facilitating measurement of gene expression signatures in routine clinical samples.

  16. A new method for estimating the demographic history from DNA sequences: an importance sampling approach

    PubMed Central

    Ait Kaci Azzou, Sadoune; Larribe, Fabrice; Froda, Sorana

    2015-01-01

    The effective population size over time (demographic history) can be retraced from a sample of contemporary DNA sequences. In this paper, we propose a novel methodology based on importance sampling (IS) for exploring such demographic histories. Our starting point is the generalized skyline plot, with the main difference being that our procedure, the skywis plot, uses a large number of genealogies. The information provided by these genealogies is combined according to the IS weights. Thus, we compute a weighted average of the effective population sizes on specific time intervals (epochs), where the genealogies that agree more with the data are given more weight. We illustrate by a simulation study that the skywis plot correctly reconstructs the recent demographic history under the scenarios most commonly considered in the literature. In particular, our method can capture a change point in the effective population size, and its overall performance is comparable with that of the Bayesian skyline plot. We also introduce the case of serially sampled sequences and illustrate that it is possible to improve the performance of the skywis plot in the case of an exponential expansion of the effective population size. PMID:26300910
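
    The aggregation step, combining per-genealogy estimates with importance-sampling weights, is simple to sketch. The snippet below assumes the per-epoch effective-size estimates implied by each sampled genealogy and the corresponding IS log-weights are already available; the array names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def skywis_epoch_estimates(ne_per_genealogy, log_is_weights):
    """ne_per_genealogy: (n_genealogies, n_epochs) effective sizes implied by
    each sampled genealogy; log_is_weights: (n_genealogies,) IS log-weights.
    Returns the weighted average effective size for each epoch."""
    w = np.exp(log_is_weights - log_is_weights.max())  # stabilise before normalising
    w /= w.sum()
    return w @ ne_per_genealogy
```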

  17. A statistical model investigating the prevalence of tuberculosis in New York City using counting processes with two change-points

    PubMed Central

    ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.

    2008-01-01

    SUMMARY We considered a Bayesian analysis for the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during this period. We modelled the dataset using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data is carried out using Markov chain Monte Carlo methods. Simulated Gibbs samples for the parameters of interest were obtained using the WinBUGS software. PMID:18346287
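
    The paper fits a Bayesian non-homogeneous Poisson process with Gibbs sampling in WinBUGS. As a much simpler stand-in that only conveys the structure, the sketch below locates two change-points in a series of yearly counts by a profile-likelihood grid search with constant Poisson rates on the three segments; the data array is hypothetical.

```python
import numpy as np

def two_changepoint_mle(counts):
    """Return the pair (tau1, tau2) maximising the Poisson log-likelihood of the
    count series with constant rates on [0, tau1), [tau1, tau2), [tau2, n)."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)

    def seg_loglik(seg):
        lam = seg.mean()
        # log-likelihood up to the constant sum(log(y!)); all-zero segments give 0
        return (seg * np.log(lam) - lam).sum() if lam > 0 else 0.0

    best, best_ll = None, -np.inf
    for t1 in range(1, n - 1):
        for t2 in range(t1 + 1, n):
            ll = (seg_loglik(counts[:t1]) + seg_loglik(counts[t1:t2])
                  + seg_loglik(counts[t2:]))
            if ll > best_ll:
                best, best_ll = (t1, t2), ll
    return best

# e.g. counts = yearly TB case counts for 1970-2000 (hypothetical array):
# tau1, tau2 = two_changepoint_mle(counts)
```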

  18. Kinetic titration with differential thermometric determination of the end-point.

    PubMed

    Sajó, I

    1968-06-01

    A method has been described for the determination of concentrations below 10(-4)M by applying catalytic reactions and using thermometric end-point determination. A reference solution, identical with the sample solution except for catalyst, is titrated with catalyst solution until the rates of reaction become the same, as shown by a null deflection on a galvanometer connected via bridge circuits to two opposed thermistors placed in the solutions.

  19. Establishment of a nested-ASP-PCR method to determine the clarithromycin resistance of Helicobacter pylori

    PubMed Central

    Luo, Xiao-Feng; Jiao, Jian-Hua; Zhang, Wen-Yue; Pu, Han-Ming; Qu, Bao-Jin; Yang, Bing-Ya; Hou, Min; Ji, Min-Jun

    2016-01-01

    AIM: To investigate clarithromycin resistance at positions 2142, 2143 and 2144 of the 23SrRNA gene in Helicobacter pylori (H. pylori) by nested-allele specific primer-polymerase chain reaction (nested-ASP-PCR). METHODS: Gastric tissue and saliva samples from 99 patients with positive rapid urease test (RUT) results were collected. The nested-ASP-PCR method was carried out with external primers and inner allele-specific primers corresponding to the reference strain and clinical strains. Thirty gastric tissue and saliva samples were tested to determine the sensitivity of the nested-ASP-PCR and ASP-PCR methods. Then, clarithromycin resistance was detected in the 99 clinical samples by using different methods, including nested-ASP-PCR, bacterial culture and disk diffusion. RESULTS: The nested-ASP-PCR method was successfully established to test the resistance mutation points 2142, 2143 and 2144 of the 23SrRNA gene of H. pylori. Among the 30 samples of gastric tissue and saliva, the H. pylori detection rate of nested-ASP-PCR was 90% and 83.33%, while the detection rate of ASP-PCR was just 63% and 56.67%. Especially in the saliva samples, nested-ASP-PCR showed much higher sensitivity in H. pylori detection and resistance mutation rates than ASP-PCR. In the 99 RUT-positive gastric tissue and saliva samples, the H. pylori-positive detection rate by nested-ASP-PCR was 87 (87.88%) and 67 (67.68%), with 30 wild-type and 57 mutated strains in gastric tissue and 22 wild-type and 45 mutated strains in saliva. Genotype analysis showed that three-point mixed mutations were quite common, but different resistant strains were present in gastric mucosa and saliva. Compared to the high sensitivity shown by nested-ASP-PCR, bacterial culture of the gastric tissue samples was positive in only 50 cases, in which 26 drug-resistant strains were found by analyzing the minimum inhibitory zone of clarithromycin. CONCLUSION: The nested-ASP-PCR assay showed higher detection sensitivity than ASP-PCR and drug sensitivity testing, and could be used to evaluate the clarithromycin resistance of H. pylori. PMID:27433095

  20. Multiscale study on stochastic reconstructions of shale samples

    NASA Astrophysics Data System (ADS)

    Lili, J.; Lin, M.; Jiang, W. B.

    2016-12-01

    Shales are known to have multiscale pore systems, composed of macroscale fractures, micropores, and nanoscale pores within gas or oil-producing organic material. Also, shales are fissile and laminated, and their horizontal heterogeneity is quite different from their vertical heterogeneity. Stochastic reconstructions are extremely useful in situations where acquiring three-dimensional information is costly and time consuming. Thus the purpose of our paper is to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi are obtained by X-ray microtomography and nanoscale images are obtained by scanning electron microscopy. Each image is representative of its given scale and phases. Specifically, the macroscale image is four times coarser than the microscale image, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on the multiple-point connectivity function in the sampling process, and thus the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images are brought to the same resolution through downscaling and upscaling by interpolation, and then we merge the multiscale categorical spatial data into a single 3D image with a predefined resolution (that of the microscale image). Thirty realizations are generated using the given images and the proposed method. The result reveals that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distributions of both the original 3D sample and the generated 3D realizations are compared. The result indicates that the agreement between the original 3D sample and the generated stochastic realizations is excellent. This work is supported by "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).

  1. Comparison of sampling procedures and microbiological and non-microbiological parameters to evaluate cleaning and disinfection in broiler houses.

    PubMed

    Luyckx, K; Dewulf, J; Van Weyenberg, S; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K

    2015-04-01

    Cleaning and disinfection of the broiler stable environment is an essential part of farm hygiene management. Adequate cleaning and disinfection is essential for prevention and control of animal diseases and zoonoses. The goal of this study was to shed light on the dynamics of microbiological and non-microbiological parameters during the successive steps of cleaning and disinfection and to select the most suitable sampling methods and parameters to evaluate cleaning and disinfection in broiler houses. The effectiveness of cleaning and disinfection protocols was measured in six broiler houses on two farms through visual inspection, adenosine triphosphate hygiene monitoring and microbiological analyses. Samples were taken at three time points: 1) before cleaning, 2) after cleaning, and 3) after disinfection. Before cleaning and after disinfection, air samples were taken in addition to agar contact plates and swab samples taken from various sampling points for enumeration of total aerobic flora, Enterococcus spp., and Escherichia coli and the detection of E. coli and Salmonella. After cleaning, air samples, swab samples, and adenosine triphosphate swabs were taken and a visual score was also assigned for each sampling point. The mean total aerobic flora determined by swab samples decreased from 7.7±1.4 to 5.7±1.2 log CFU/625 cm2 after cleaning and to 4.2±1.6 log CFU/625 cm2 after disinfection. Agar contact plates were used as the standard for evaluating cleaning and disinfection, but in this study they were found to be less suitable than swabs for enumeration. In addition to measuring total aerobic flora, Enterococcus spp. seemed to be a better hygiene indicator to evaluate cleaning and disinfection protocols than E. coli. All stables were Salmonella negative, but the detection of its indicator organism E. coli provided additional information for evaluating cleaning and disinfection protocols. Adenosine triphosphate analyses gave additional information about the hygiene level of the different sampling points. © 2015 Poultry Science Association Inc.

  2. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation.

    PubMed

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑R²) rather than 12N/(π ∑R²), of PCQM2 is 4(8N − 1)/(π ∑R²) rather than 28N/(π ∑R²), and of PCQM3 is 4(12N − 1)/(π ∑R²) rather than 44N/(π ∑R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 would be applied. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' but not 'the summation of inverse squared distances' as erroneously published.
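
    The corrected estimators differ only in a constant, so they can be wrapped in one small function. The sketch below assumes the quadrant distances have been pooled into a single array of 4N values; it simply evaluates the corrected formulas quoted above.

```python
import numpy as np

def pcqm_density(distances, order=1):
    """Corrected PCQM density estimator (plants per unit area).

    distances: distances from each of N sample points to the `order`-th nearest
    plant in each of the four quadrants (4*N values in total)."""
    r = np.asarray(distances, dtype=float)
    n = r.size // 4                                  # number of sample points
    return 4 * (4 * order * n - 1) / (np.pi * np.sum(r ** 2))

# PCQM1: pcqm_density(r, order=1)  ->  4(4N-1)  / (pi * sum R^2)
# PCQM2: pcqm_density(r, order=2)  ->  4(8N-1)  / (pi * sum R^2)
# PCQM3: pcqm_density(r, order=3)  ->  4(12N-1) / (pi * sum R^2)
```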

  3. Succinonitrile Purification Facility

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Succinonitrile (SCN) Purification Facility provides succinonitrile and succinonitrile alloys to several NRA selected investigations for flight and ground research at various levels of purity. The purification process employed includes both distillation and zone refining. Once the appropriate purification process is completed, samples are characterized to determine the liquidus and/or solidus temperature, which is then related to sample purity. The lab has various methods for measuring these temperatures with accuracies in the milliKelvin to tenths of milliKelvin range. The ultra-pure SCN produced in our facility is indistinguishable from the standard material provided by NIST to well within the stated ± 1.5 mK of the NIST triple point cells. In addition to delivering material to various investigations, our current activities include process improvement, characterization of impurities and triple point cell design and development. The purification process is being evaluated for each of the four vendors to determine the efficacy of each purification step. We are also collecting samples of the remainder from distillation and zone refining for analysis of the constituent impurities. The large triple point cells developed will contain SCN with a melting point of 58.0642 °C ± 1.5 mK for use as a calibration standard for Standard Platinum Resistance Thermometers (SPRTs).

  4. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 − r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
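
    The divergence is easy to reproduce numerically. The following sketch assumes i.i.d. Gaussian returns with identity covariance (so the true minimum-variance portfolio is simply equal weights with variance 1/N) and compares the true variance of the in-sample optimal portfolio with that optimum; the 1/(1 − r) factor quoted in the abstract is an asymptotic prediction, so finite-size simulations only approximate it.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 50, 200
ones = np.ones(N)

for T in (60, 75, 100, 200, 500):
    ratios = []
    for _ in range(trials):
        R = rng.standard_normal((T, N))        # true covariance = identity
        S = np.cov(R, rowvar=False)            # noisy sample covariance
        w = np.linalg.solve(S, ones)
        w /= ones @ w                          # in-sample min-variance weights
        ratios.append((w @ w) * N)             # true variance / true optimum (1/N)
    r = N / T
    print(f"r={r:.2f}  simulated={np.mean(ratios):.2f}  theory 1/(1-r)={1/(1-r):.2f}")
```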

  5. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When we have singular points of the action or multiple critical points near the original integral surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One of the ways to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.

  6. Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zhu, Tao; Zheng, Hua; Kuang, Yang; Liu, Min; Huang, Wei

    2017-04-01

    The round trip time of the light pulse limits the maximum detectable frequency response range of vibration in phase-sensitive optical time domain reflectometry (φ-OTDR). We propose a method to break the frequency response range restriction of the φ-OTDR system by modulating the light pulse interval randomly, which enables random sampling for every vibration point along a long sensing fiber. This sub-Nyquist randomized sampling method is suited to detecting sparse, wideband-frequency vibration signals. A resonance vibration signal of up to MHz with dozens of frequency components and a 1.153 MHz single-frequency vibration signal are clearly identified over a sensing range of 9.6 km with a 10 kHz maximum sampling rate.
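
    The recovery idea can be illustrated with a Lomb-Scargle periodogram over randomized sample times, which is one generic way to handle non-uniform sampling; the record does not describe the actual φ-OTDR processing chain, so the snippet below is only a conceptual sketch with illustrative parameters (a 1.153 MHz tone observed at a 10 kHz average rate, searched over a narrow candidate band).

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

f_true = 1.153e6                  # 1.153 MHz vibration at one sensing point
mean_rate = 10e3                  # 10 kHz average (sub-Nyquist) sampling rate
intervals = rng.uniform(0.5, 1.5, 200) / mean_rate   # randomized pulse intervals
t = np.cumsum(intervals)
x = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(t.size)

freqs = np.linspace(1.0e6, 1.3e6, 30000)             # candidate band (Hz)
power = lombscargle(t, x, 2 * np.pi * freqs)          # angular frequencies
print(f"estimated frequency: {freqs[np.argmax(power)] / 1e6:.4f} MHz")
```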

  7. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visual method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forward by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a Grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolating method called the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.

  8. Gaussian windows: A tool for exploring multivariate data

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1990-01-01

    Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
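
    The basic computation is a weighted mean and covariance followed by an eigen-decomposition, as sketched below. The window center and window covariance would be chosen interactively by the user; the synthetic data and names here are illustrative only.

```python
import numpy as np

def gaussian_window_view(data, center, window_cov):
    """Weight each observation with a multivariate Gaussian window, then return
    the weighted mean, weighted covariance and its principal axes."""
    diff = data - center
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff,
                                np.linalg.inv(window_cov), diff))
    w /= w.sum()                                     # normalised window weights
    mean = w @ data
    centered = data - mean
    cov = (w[:, None] * centered).T @ centered       # weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # local principal components
    return mean, cov, eigvals[::-1], eigvecs[:, ::-1]

# Example: explore a 5-D point cloud around an arbitrary window center
rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 5))
mean, cov, vals, vecs = gaussian_window_view(data, center=np.zeros(5),
                                             window_cov=np.eye(5))
```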

  9. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
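
    A compact illustration of the idea, using scikit-learn's Gaussian process regressor on a toy limit-state function, is given below. The acquisition rule shown (favoring candidate points with small predicted |g| and large predictive standard deviation) is only a stand-in for the paper's Bayesian experimental design criterion, and the toy model, names and parameters are all illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):
    """Toy 'expensive' model: failure when g(x) < 0."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(30, 2))           # initial design points
y_train = limit_state(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# One adaptive step: pick the candidate whose limit-state prediction is most
# ambiguous; in a full loop one would evaluate limit_state there, refit, repeat.
X_cand = rng.uniform(-3, 3, size=(2000, 2))
mu, std = gp.predict(X_cand, return_std=True)
x_new = X_cand[np.argmax(std / (np.abs(mu) + 1e-9))]

# Failure probability estimated on the surrogate by plain Monte Carlo
X_mc = rng.normal(0.0, 1.0, size=(100_000, 2))
p_fail = float((gp.predict(X_mc) < 0).mean())
print("next design point:", x_new, " estimated failure probability:", p_fail)
```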

  10. Pulse-echo ultrasonic imaging method for eliminating sample thickness variation effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1995-01-01

    A pulse-echo, immersion method for ultrasonic evaluation of a material is discussed. It accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects; it employs a single transducer, automatic scanning and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then proportionalized to a color or grey scale and displayed on a video screen.
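
    One plausible reading of the cross-correlation step is a time-of-flight measurement between gated front-surface and back-surface echoes at each scan point, sketched below. The gating, sampling rate and thickness values are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def echo_velocity(front_echo, back_echo, fs, thickness):
    """Estimate ultrasonic velocity at one scan point from the time shift between
    the gated front-surface and back-surface echoes (pulse-echo, so the round
    trip covers twice the sample thickness). Assumes the back echo arrives later."""
    xc = np.correlate(back_echo - back_echo.mean(),
                      front_echo - front_echo.mean(), mode='full')
    lag = np.argmax(xc) - (len(front_echo) - 1)      # delay in samples
    dt = lag / fs                                    # delay in seconds
    return 2.0 * thickness / dt

# Hypothetical usage over a scan grid of gated echo traces:
# velocity_map[i, j] = echo_velocity(gated_front[i, j], gated_back[i, j],
#                                    fs=100e6, thickness=0.01)   # 10 mm sample
```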

  11. Method and apparatus for maintaining multi-component sample gas constituents in vapor phase during sample extraction and cooling

    DOEpatents

    Farthing, William Earl [Pinson, AL; Felix, Larry Gordon [Pelham, AL; Snyder, Todd Robert [Birmingham, AL

    2008-02-12

    An apparatus and method for diluting and cooling sample gas that is extracted from high temperature and/or high pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed along with real-time estimations of the point at which condensation will occur within the dilution cooler to define a level of dilution and a diluted gas temperature that result in a gas that can be conveyed to standard gas analyzers and that contains no condensed hydrocarbon compounds or condensed moisture.

  12. Method and apparatus maintaining multi-component sample gas constituents in vapor phase during sample extraction and cooling

    DOEpatents

    Farthing, William Earl; Felix, Larry Gordon; Snyder, Todd Robert

    2009-12-15

    An apparatus and method for diluting and cooling sample gas that is extracted from high temperature and/or high pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed along with real-time estimations of the point at which condensation will occur within the dilution cooler to define a level of dilution and a diluted gas temperature that result in a gas that can be conveyed to standard gas analyzers and that contains no condensed hydrocarbon compounds or condensed moisture.

  13. Trace-metal contamination in the glacierized Rio Santa watershed, Peru.

    PubMed

    Guittard, Alexandre; Baraer, Michel; McKenzie, Jeffrey M; Mark, Bryan G; Wigmore, Oliver; Fernandez, Alfonso; Rapre, Alejo C; Walsh, Elizabeth; Bury, Jeffrey; Carey, Mark; French, Adam; Young, Kenneth R

    2017-11-25

    The objective of this research is to characterize the variability of trace metals in the Rio Santa watershed based on synoptic sampling applied at a large scale. To that end, we propose a combination of methods based on the collection of water, suspended sediments, and riverbed sediments at different points of the watershed within a very limited period. Forty points within the Rio Santa watershed were sampled between June 21 and July 8, 2013. Forty water samples, 36 suspended sediments, and 34 riverbed sediments were analyzed for seven trace metals. The results, which were normalized using the USEPA guideline for water and sediments, show that the Rio Santa water exhibits Mn concentrations higher than the guideline at more than 50% of the sampling points. As is the second highest contaminating element in the water, with approximately 10% of the samples containing concentrations above the guideline. Sediments collected in the Rio Santa riverbed were heavily contaminated by at least four of the tested elements at nearly 85% of the sample points, with As presenting the highest normalized concentration, at more than ten times the guideline. As, Cd, Fe, Pb, and Zn present similar concentration trends in the sediment all along the Rio Santa. The findings indicate that care should be taken in using the Rio Santa water and sediments for purposes that could affect the health of humans or the ecosystem. The situation is worse in some tributaries in the southern part of the watershed that host both active and abandoned mines and ore-processing plants.

  14. Shear wave speed estimation by adaptive random sample consensus method.

    PubMed

    Lin, Haoming; Wang, Tianfu; Chen, Siping

    2014-01-01

    This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the metric used here is finding a certain percentage of inliers according to the closest distance criterion. To evaluate the method, the simulation and phantom experiment results were compared using linear regression with all points (LRWAP) and the Radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimation are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, for the simulation, and 23.53%, 4.08% and 1.08% for the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
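
    A minimal sketch of the consensus idea applied to shear wave speed (arrival time versus lateral position, speed = 1/slope) is given below. Scoring each candidate line by the summed residual of its closest fraction of points stands in for the paper's closest-distance criterion; the inlier fraction and the other parameters are illustrative assumptions.

```python
import numpy as np

def arandsac_shear_speed(lateral_pos, arrival_time, inlier_frac=0.6,
                         n_iter=500, rng=None):
    """ARANDSAC-style sketch: each candidate line (through two random points) is
    scored by the summed residual of its closest `inlier_frac` of points, so no
    absolute residual threshold has to be preset. Returns speed = 1/slope."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(lateral_pos, dtype=float)
    t = np.asarray(arrival_time, dtype=float)
    n_in = max(2, int(inlier_frac * len(x)))
    best_idx, best_score = None, np.inf
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (t[j] - t[i]) / (x[j] - x[i])
        intercept = t[i] - slope * x[i]
        resid = np.abs(t - (slope * x + intercept))
        idx = np.argsort(resid)[:n_in]                # closest-distance criterion
        score = resid[idx].sum()
        if score < best_score:
            best_idx, best_score = idx, score
    slope, _ = np.polyfit(x[best_idx], t[best_idx], 1)  # refit on consensus set
    return 1.0 / slope

# e.g. positions in mm and times in ms give speed in mm/ms, i.e. m/s:
# speed = arandsac_shear_speed(positions_mm, times_ms)
```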

  15. Apportioning riverine DIN load to export coefficients of land uses in an urbanized watershed.

    PubMed

    Shih, Yu-Ting; Lee, Tsung-Yu; Huang, Jr-Chuan; Kao, Shuh-Ji; Chang

    2016-08-01

    The apportionment of riverine dissolved inorganic nitrogen (DIN) load to individual land uses on a watershed scale demands the support of accurate DIN load estimation and differentiation of point and non-point sources, but both are rarely quantitatively determined in small montane watersheds. We used the Danshui River watershed of Taiwan, a mountainous urbanized watershed, to determine the export coefficients from the riverine DIN load via a reverse Monte Carlo approach. The results showed that the dynamics of N fluctuation determines the load estimation method and sampling frequency. On a monthly sampling frequency basis, the average load estimate of the methods (GM, FW, and LI) outperformed that of any individual method. Export coefficient analysis showed that the forest DIN yield of 521.5 kg-N km(-2) yr(-1) was ~2.7-fold higher than the global riverine DIN yield (mainly from large temperate rivers with various land use compositions). Such a high yield was attributable to high rainfall and atmospheric N deposition. The export coefficient of agriculture was disproportionately larger than that of forest, suggesting that a small replacement of forest by agriculture could lead to a considerable change in DIN load. The analysis differentiating point and non-point sources showed that untreated wastewater (a non-point source), accounting for ~93% of the total human-associated wastewater, resulted in a high export coefficient for urban land. The inclusion of the treated and untreated wastewater completes the N budget of wastewater. The export coefficient approach serves well to assess the riverine DIN load and to improve the understanding of the N cascade. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.

    PubMed

    Kappel, S; Boulyga, S F; Dorta, L; Günther, D; Hattendorf, B; Koffler, D; Laaha, G; Leisch, F; Prohaska, T

    2013-03-01

    Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e. 'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated. The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U isotope ratios of particles deposited on the NUSIMEP-7 test samples.

  17. The development and implementation of a method using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    EPA Science Inventory

    It is estimated that protozoan parasites still account for greater than one third of waterborne disease outbreaks reported. Methods used to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminan...

  18. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  19. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.

  20. Calculation of a fluctuating entropic force by phase space sampling.

    PubMed

    Waters, James T; Kim, Harold D

    2015-07-01

    A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.

  1. Application of mixed cloud point extraction for the analysis of six flavonoids in Apocynum venetum leaf samples by high performance liquid chromatography.

    PubMed

    Zhou, Jun; Sun, Jiang Bing; Xu, Xin Yu; Cheng, Zhao Hui; Zeng, Ping; Wang, Feng Qiao; Zhang, Qiong

    2015-03-25

    A simple, inexpensive and efficient method based on the mixed cloud point extraction (MCPE) combined with high performance liquid chromatography was developed for the simultaneous separation and determination of six flavonoids (rutin, hyperoside, quercetin-3-O-sophoroside, isoquercitrin, astragalin and quercetin) in Apocynum venetum leaf samples. The non-ionic surfactant Genapol X-080 and cetyl-trimethyl ammonium bromide (CTAB) were chosen as the mixed extracting solvent. Parameters that affect the MCPE processes, such as the content of Genapol X-080 and CTAB, pH, salt content, extraction temperature and time, were investigated and optimized. Under the optimized conditions, the calibration curves for the six flavonoids were all linear, with correlation coefficients greater than 0.9994. The intra-day and inter-day precision (RSD) were below 8.1% and the limits of detection (LOD) for the six flavonoids were 1.2-5.0 ng mL(-1) (S/N=3). The proposed method was successfully used to separate and determine the six flavonoids in A. venetum leaf samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Black-pigmented anaerobic rods in closed periapical lesions.

    PubMed

    Bogen, G; Slots, J

    1999-05-01

    This study determined the frequency of Porphyromonas endodontalis, Porphyromonas gingivalis, Prevotella intermedia and Prevotella nigrescens in 20 closed periapical lesions associated with symptomatic and asymptomatic refractory endodontic disease. To delineate possible oral sources of P. endodontalis, the presence of the organism was assessed in selected subgingival sites and saliva in the same study patients. Periapical samples were obtained by paper points during surgical endodontic procedures using methods designed to minimize contamination by non-endodontic microorganisms. Subgingival plaque samples were obtained by paper points from three periodontal pockets and from the pocket of the tooth associated with the closed periapical lesion. Unstimulated saliva was collected from the surface of the soft palate. Bacterial identification was performed using a species-specific polymerase chain reaction (PCR) detection method. P. endodontalis was not identified in any periapical lesion, even though subgingival samples from eight patients (40%) revealed the P. endodontalis-specific amplicon. P. gingivalis occurred in one periapical lesion that was associated with moderate pain. P. nigrescens, P. endodontalis and P. intermedia were not detected in any periapical lesion studied. Black-pigmented anaerobic rods appear to be infrequent inhabitants of the closed periapical lesion.

  3. Recovery of intrinsic fluorescence from single-point interstitial measurements for quantification of doxorubicin concentration.

    PubMed

    Baran, Timothy M; Foster, Thomas H

    2013-10-01

    We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. © 2013 Wiley Periodicals, Inc.

  4. Investigation of cloud point extraction for the analysis of metallic nanoparticles in a soil matrix

    PubMed Central

    Hadri, Hind El; Hackley, Vincent A.

    2017-01-01

    The characterization of manufactured nanoparticles (MNPs) in environmental samples is necessary to assess their behavior, fate and potential toxicity. Several techniques are available, but the limit of detection (LOD) is often too high for environmentally relevant concentrations. Therefore, pre-concentration of MNPs is an important component in the sample preparation step, in order to apply analytical tools with a LOD higher than the ng kg−1 level. The objective of this study was to explore cloud point extraction (CPE) as a viable method to pre-concentrate gold nanoparticles (AuNPs), as a model MNP, spiked into a soil extract matrix. To that end, different extraction conditions and surface coatings were evaluated in a simple matrix. The CPE method was then applied to soil extract samples spiked with AuNPs. Total gold, determined by inductively coupled plasma mass spectrometry (ICP-MS) following acid digestion, yielded a recovery greater than 90 %. The first known application of single particle ICP-MS and asymmetric flow field-flow fractionation to evaluate the preservation of the AuNP physical state following CPE extraction is demonstrated. PMID:28507763

  5. A stochastic convolution/superposition method with isocenter sampling to evaluate intrafraction motion effects in IMRT.

    PubMed

    Naqvi, Shahid A; D'Souza, Warren D

    2005-04-01

    Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulation with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
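
    The interplay-averaging argument can be illustrated with a toy calculation (not the authors' convolution/superposition code): a rigid 1D dose profile is shifted along a sinusoidal trajectory whose starting phase is drawn at random for each fraction, mirroring the film experiment. The profile, amplitude, period and fraction length below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static 1D dose profile (arbitrary units) on a 0.1 cm grid
x = np.arange(-10, 10, 0.1)
static_dose = np.where(np.abs(x) < 3, 1.0, 0.0)           # idealized 6 cm field

amplitude_cm, period_s, fraction_s = 2.0, 4.0, 120.0       # hypothetical motion parameters

def fraction_dose(phase):
    """Fraction-averaged dose for a sinusoidal rigid shift starting at a given phase."""
    t = np.linspace(0.0, fraction_s, 2000)
    shifts = amplitude_cm * np.sin(2 * np.pi * t / period_s + phase)
    acc = np.zeros_like(static_dose)
    for s in shifts:                                        # accumulate the shifted profile
        acc += np.interp(x, x + s, static_dose)
    return acc / len(shifts)

# Ten fractions, each starting at a random phase, as in the film measurements
doses = [fraction_dose(rng.uniform(0, 2 * np.pi)) for _ in range(10)]
print("largest point-wise spread across fractions:", np.ptp(doses, axis=0).max())
```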

  6. A scalable self-priming fractal branching microchannel net chip for digital PCR.

    PubMed

    Zhu, Qiangyuan; Xu, Yanan; Qiu, Lin; Ma, Congcong; Yu, Bingwen; Song, Qi; Jin, Wei; Jin, Qinhan; Liu, Jinyu; Mu, Ying

    2017-05-02

    As an absolute quantification method at the single-molecule level, digital PCR has been widely used in many bioresearch fields, such as next generation sequencing, single cell analysis, gene editing detection and so on. However, existing digital PCR methods still have some disadvantages, including high cost, sample loss, and complicated operation. In this work, we develop an exquisite scalable self-priming fractal branching microchannel net digital PCR chip. This chip with a special design inspired by natural fractal-tree systems has an even distribution and 100% compartmentalization of the sample without any sample loss, which is not available in existing chip-based digital PCR methods. A special 10 nm nano-waterproof layer was created to prevent the solution from evaporating. A vacuum pre-packaging method called self-priming reagent introduction is used to passively drive the reagent flow into the microchannel nets, so that this chip can realize sequential reagent loading and isolation within a couple of minutes, which is very suitable for point-of-care detection. When the number of positive microwells stays in the range of 100 to 4000, the relative uncertainty is below 5%, which means that one panel can detect an average of 101 to 15 374 molecules by the Poisson distribution. This chip is proved to have an excellent ability for single molecule detection and quantification of low expression of hHF-MSC stem cell markers. Due to its potential for high throughput, high density, low cost, lack of sample and reagent loss, self-priming even compartmentalization and simple operation, we envision that this device will significantly expand and extend the application range of digital PCR involving rare samples, liquid biopsy detection and point-of-care detection with higher sensitivity and accuracy.
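
    The Poisson relation behind the quoted detection range can be sketched as follows; assuming a hypothetical panel of 4096 partitions, the standard correction reproduces the 101 and 15 374 molecule figures given above.

```python
import math

def dpcr_molecules(positives, partitions):
    """Total template molecules on a panel from the positive-partition count,
    using the Poisson correction lambda = -ln(1 - k/n) copies per partition."""
    lam = -math.log(1.0 - positives / partitions)
    return lam * partitions

for k in (100, 4000):
    print(k, "positive wells ->", round(dpcr_molecules(k, 4096)), "molecules")
```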

  7. Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.

    PubMed

    Piastra, Marco

    2013-05-01

    Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
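
    The Competitive Hebbian Learning rule the network builds on can be sketched in a few lines: for every input sample, an edge is created between the two closest reference units. The circle data and the fixed set of 30 units below are hypothetical, and the growth, phase-transition and focusing mechanisms of SOAM are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical point samples from a circle (a 1-D manifold in the plane)
theta = rng.uniform(0, 2 * np.pi, 2000)
samples = np.c_[np.cos(theta), np.sin(theta)]

# Fixed reference units; SOAM would also adapt their number and positions
units = rng.uniform(-1.2, 1.2, size=(30, 2))
edges = set()

# Competitive Hebbian Learning: connect the two units closest to each sample
for x in samples:
    d = np.linalg.norm(units - x, axis=1)
    i, j = np.argsort(d)[:2]
    edges.add((min(i, j), max(i, j)))

print(len(edges), "edges induced by CHL")
```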

  8. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    PubMed

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for an automatic detection of reference points in 3D cephalometry to overcome the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on the anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for the automatic 3D cephalometric identification was able to detect 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be implemented in the future to assess the method validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  9. A novel gamma-fitting statistical method for anti-drug antibody assays to establish assay cut points for data with non-normal distribution.

    PubMed

    Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena

    2010-01-31

    In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%) with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3% and absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
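
    A minimal sketch of the gamma-based cut point idea, assuming simulated drug-naive data rather than the authors' datasets: a 3-parameter gamma distribution is fitted to the screening signals and its 95th percentile is taken as the cut point targeting a 5% false positive rate, alongside the normal-theory mean + 1.645 SD cut point for comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated drug-naive screening signals: unimodal and positively skewed
naive = rng.gamma(shape=3.0, scale=0.02, size=200) + 0.05

# 3-parameter gamma fit (shape, location, scale); 95th percentile as screening cut point
shape, loc, scale = stats.gamma.fit(naive)
cut_gamma = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)

# Normal-theory cut point for comparison
cut_normal = naive.mean() + 1.645 * naive.std(ddof=1)
print(f"gamma cut point: {cut_gamma:.3f}   normal cut point: {cut_normal:.3f}")
```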

  10. Effect of the substrate temperature on the physical properties of molybdenum tri-oxide thin films obtained through the spray pyrolysis technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez, H.M.; Torres, J., E-mail: njtorress@unal.edu.co; Lopez Carreno, L.D.

    2013-01-15

    Polycrystalline molybdenum tri-oxide thin films were prepared using the spray pyrolysis technique; a 0.1 M solution of ammonium molybdate tetrahydrate was used as a precursor. The samples were prepared on Corning glass substrates maintained at temperatures ranging between 423 and 673 K. The samples were characterized through micro Raman, X-ray diffraction, optical transmittance and DC electrical conductivity. The species MoO₃(H₂O)₂ was found in the sample prepared at a substrate temperature of 423 K. As the substrate temperature rises, the water disappears and the samples crystallize into α-MoO₃. The optical gap diminishes as the substrate temperature rises. Two electrical transport mechanisms were found: hopping under 200 K and intrinsic conduction over 200 K. The MoO₃ films' sensitivity was analyzed for CO and H₂O in the temperature range 160 to 360 K; the results indicate that CO and H₂O have a reduction character. In all cases, it was found that the sensitivity to CO is lower than that to H₂O. Highlights: • A low cost technique is used which produces good material. • Thin films are prepared using ammonium molybdate tetrahydrate. • The control of the physical properties of the samples could be done. • A calculation method is proposed to determine the material optical properties. • The MoO₃ thin films prepared by spray pyrolysis could be used as a gas sensor.

  11. An improved initialization center k-means clustering algorithm based on distance and density

    NASA Astrophysics Data System (ADS)

    Duan, Yanling; Liu, Qun; Xia, Shuyin

    2018-04-01

    The k-means algorithm selects its initial clustering centers at random, so the clustering results are influenced by outlier samples and are unstable across repeated runs. To address this, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent the sample density, and the data samples with larger distance and higher density are selected as the initial clustering centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm is stable and practical.
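
    A minimal sketch of this distance-and-density initialization (illustrative, not the authors' exact formulas): density is taken as the reciprocal of a point's average distance to all other points, the densest point becomes the first center, and each further center maximizes the product of density and distance to the centers already chosen.

```python
import numpy as np

def init_centers(X, k):
    """Pick k initial centers with high local density and large mutual distance."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    density = 1.0 / (d.mean(axis=1) + 1e-12)                    # reciprocal of average distance
    centers = [int(np.argmax(density))]                         # densest point first
    for _ in range(k - 1):
        dist_to_centers = d[:, centers].min(axis=1)
        score = dist_to_centers * density                       # far from chosen centers and dense
        centers.append(int(np.argmax(score)))
    return X[centers]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ((0, 0), (3, 0), (0, 3))])
print(init_centers(X, 3))
```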

  12. A one-way shooting algorithm for transition path sampling of asymmetric barriers

    NASA Astrophysics Data System (ADS)

    Brotzakis, Z. Faidon; Bolhuis, Peter G.

    2016-10-01

    We present a novel transition path sampling shooting algorithm for the efficient sampling of complex (biomolecular) activated processes with asymmetric free energy barriers. The method employs a fictitious potential that biases the shooting point toward the transition state. The method is similar in spirit to the aimless shooting technique by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)], but is targeted for use with the one-way shooting approach, which has been shown to be more effective than two-way shooting algorithms in systems dominated by diffusive dynamics. We illustrate the method on a 2D Langevin toy model, the association of two peptides and the initial step in dissociation of a β-lactoglobulin dimer. In all cases we show a significant increase in efficiency.

  13. Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface

    PubMed Central

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieve two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction with the representation of a point cloud. Mathematical morphology is extended and applied to suppress the effect of measurement defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and visual-guided robot grinding localization. PMID:25551467

  14. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    PubMed

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieve two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction with the representation of a point cloud. Mathematical morphology is extended and applied to suppress the effect of measurement defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and visual-guided robot grinding localization.

  15. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.

  16. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES-hat) and a 95% CI (ES-hat_L, ES-hat_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES-hat_U), n_U(ES-hat_L)] were obtained on a post hoc sample size, reflecting the uncertainty in ES-hat. Sample size calculations were based on a one-sample t-test as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 against the alternative hypotheses H1: ES = ES-hat, ES = ES-hat_L and ES = ES-hat_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ES-hat estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
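
    The underlying sample size calculation can be sketched with a normal approximation to the one-sample t-test (the exact computation would use the noncentral t distribution); the effect size estimate and CI bounds below are hypothetical placeholders, not the study values.

```python
import math
from scipy.stats import norm

def n_one_sample(es, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sided one-sample t-test at effect size es."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(((z_a + z_b) / es) ** 2)

# Hypothetical effect-size estimate and 95% CI bounds (ES_hat, ES_L, ES_U)
for label, es in (("ES_hat", 0.60), ("ES_L", 0.18), ("ES_U", 0.95)):
    print(label, es, "->", n_one_sample(es), "patients")
```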

  17. Standards of Research.

    ERIC Educational Resources Information Center

    Crain, Robert L.; Hawley, Willis D.

    1982-01-01

    Criticizes James Coleman's study, "Public and Private Schools," and points out methodological weaknesses in sampling, testing, data reliability, and statistical methods. Questions assumptions which have led to conclusions justifying federal support, especially tuition tax credits, to private schools. Raises the issue of ethical standards…

  18. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.

  19. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
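
    A compact sketch of such a simulation, assuming a net-benefit formulation with a hypothetical willingness-to-pay threshold and illustrative inputs (effect and cost differences, standard deviations, correlation); it estimates the power achieved by a candidate sample size rather than solving for n directly.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def power_for_n(n, d_eff, d_cost, sd_eff, sd_cost, rho, wtp, alpha=0.05, sims=2000):
    """Monte Carlo power for a one-sided test of incremental net benefit wtp*effect - cost > 0."""
    cov = [[sd_eff**2, rho * sd_eff * sd_cost],
           [rho * sd_eff * sd_cost, sd_cost**2]]
    crit = norm.ppf(1 - alpha)
    hits = 0
    for _ in range(sims):
        draws = rng.multivariate_normal([d_eff, d_cost], cov, size=n)
        nb = wtp * draws[:, 0] - draws[:, 1]              # per-patient net benefit
        t = nb.mean() / (nb.std(ddof=1) / np.sqrt(n))
        hits += t > crit
    return hits / sims

# Hypothetical inputs: 0.05 QALY gain, 500 extra cost, WTP of 20 000 per QALY
print(power_for_n(200, d_eff=0.05, d_cost=500.0, sd_eff=0.2,
                  sd_cost=2000.0, rho=0.1, wtp=20000.0))
```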

  20. Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.

    PubMed

    Li, Yuan-Xin; Yang, Guang-Hong

    2018-04-01

    This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using the event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are aperiodically updated only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event sampling condition. In comparing with the traditional adaptive backstepping design with a fixed sample period, the event-triggered method samples the state and updates the NN weights only when it is necessary. Therefore, the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.

  1. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources in mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously and reduces the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.

  2. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for aircraft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) developing an improved rapid key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method for calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This key point detection approach requires little computation and offers high positioning accuracy and strong noise resistance; (2) PCA-SIFT is utilized to describe each key point: 128-dimensional vectors are formed based on the SIFT method for the extracted key points. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional vector. The key points were thus re-described by dimension-reduced descriptors. After PCA reduction, the descriptor was reduced from the original 128 dimensions to 20. This lowers the dimensionality of the approximate nearest-neighbour search and thereby increases overall speed; (3) the distance ratio between the nearest neighbour and the second nearest neighbour is used as the criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs; and (4) an affine transformation model is introduced to correct the coordinate difference between the real-time image and the reference image, completing the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and robustness to rotation. Results show the effectiveness of the approach.
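
    A rough sketch of the detection-plus-ratio-test stages using OpenCV, with ORB standing in for the paper's self-adapting FAST detector and PCA-SIFT descriptor, and OpenCV's RANSAC-based affine estimation standing in for the heuristic geometric restriction; the image file names are hypothetical.

```python
import cv2

# Hypothetical input images: a real-time image and a reference image
img1 = cv2.imread("realtime.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# ORB stands in for the paper's FAST detector + PCA-SIFT descriptor
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Nearest / second-nearest distance-ratio test to form the initial matched pairs
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# RANSAC-estimated affine model corrects the coordinate difference between the two images
src = cv2.KeyPoint_convert([kp1[m.queryIdx] for m in good])
dst = cv2.KeyPoint_convert([kp2[m.trainIdx] for m in good])
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
print(len(good), "ratio-test matches,", int(inliers.sum()), "RANSAC inliers")
```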

  3. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.

  4. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†

    PubMed Central

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-01-01

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279

  5. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.

  6. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    PubMed

    Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E

    2017-01-01

    Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which is often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially-explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs are now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT and boosted regression trees, BRT), to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. Spatial Bayesian SDM method was the most consistently accurate method, being in the top 2 most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods and when samples were clumped, the spatial Bayesian SDM method had a 4%-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account for spatial autocorrelation in an SDM context and, by taking account of random effects, produce outputs that can better elucidate the role of covariates in predicting species occurrence. Given that it is often unclear what the drivers are behind data clumping in an empirical occurrence dataset, or indeed how geographically restricted these data are, spatially-explicit Bayesian SDMs may be the better choice when modelling the spatial distribution of target species.

  7. Bird biodiversity assessments in temperate forest: the value of point count versus acoustic monitoring protocols.

    PubMed

    Klingbeil, Brian T; Willig, Michael R

    2015-01-01

    Effective monitoring programs for biodiversity are needed to assess trends in biodiversity and evaluate the consequences of management. This is particularly true for birds and faunas that occupy interior forest and other areas of low human population density, as these are frequently under-sampled compared to other habitats. For birds, Autonomous Recording Units (ARUs) have been proposed as a supplement or alternative to point counts made by human observers to enhance monitoring efforts. We employed two strategies (i.e., simultaneous-collection and same-season) to compare point count and ARU methods for quantifying species richness and composition of birds in temperate interior forests. The simultaneous-collection strategy compares surveys by ARUs and point counts, with methods matched in time, location, and survey duration such that the person and machine simultaneously collect data. The same-season strategy compares surveys from ARUs and point counts conducted at the same locations throughout the breeding season, but methods differ in the number, duration, and frequency of surveys. This second strategy more closely follows the ways in which monitoring programs are likely to be implemented. Site-specific estimates of richness (but not species composition) differed between methods; however, the nature of the relationship was dependent on the assessment strategy. Estimates of richness from point counts were greater than estimates from ARUs in the simultaneous-collection strategy. Woodpeckers in particular, were less frequently identified from ARUs than point counts with this strategy. Conversely, estimates of richness were lower from point counts than ARUs in the same-season strategy. Moreover, in the same-season strategy, ARUs detected the occurrence of passerines at a higher frequency than did point counts. Differences between ARU and point count methods were only detected in site-level comparisons. Importantly, both methods provide similar estimates of species richness and composition for the region. Consequently, if single visits to sites or short-term monitoring are the goal, point counts will likely perform better than ARUs, especially if species are rare or vocalize infrequently. However, if seasonal or annual monitoring of sites is the goal, ARUs offer a viable alternative to standard point-count methods, especially in the context of large-scale or long-term monitoring of temperate forest birds.

  8. Nanoliter hemolymph sampling and analysis of individual adult Drosophila melanogaster.

    PubMed

    Piyankarage, Sujeewa C; Featherstone, David E; Shippy, Scott A

    2012-05-15

    The fruit fly (Drosophila melanogaster) is an extensively used and powerful, genetic model organism. However, chemical studies using individual flies have been limited by the animal's small size. Introduced here is a method to sample nanoliter hemolymph volumes from individual adult fruit-flies for chemical analysis. The technique results in an ability to distinguish hemolymph chemical variations with developmental stage, fly sex, and sampling conditions. Also presented is the means for two-point monitoring of hemolymph composition for individual flies.

  9. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    NASA Astrophysics Data System (ADS)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
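
    For orientation, a minimal Lomb-Scargle periodogram of an irregularly sampled series can be computed with SciPy as below; the WOSA segment averaging, CARMA noise model, significance testing and amplitude weighting described above (implemented in the WAVEPAL package) are not reproduced in this sketch.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)

# Irregularly sampled record: a 10-unit period plus white noise
t = np.sort(rng.uniform(0, 200, 300))
y = np.sin(2 * np.pi * t / 10.0) + 0.5 * rng.normal(size=t.size)
y -= y.mean()                                  # lombscargle expects a zero-mean signal

freqs = np.linspace(0.01, 1.0, 2000)           # angular frequencies to scan
power = lombscargle(t, y, freqs)
print("peak at period", 2 * np.pi / freqs[np.argmax(power)])
```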

  10. Classification and identification of molecules through factor analysis method based on terahertz spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Jianglou; Liu, Jinsong; Wang, Kejia; Yang, Zhengang; Liu, Xiaming

    2018-06-01

    By means of a factor analysis approach, a method of molecule classification is built based on the measured terahertz absorption spectra of the molecules. A data matrix is obtained by sampling the absorption spectra at different frequency points. The data matrix is then decomposed into the product of two matrices: a weight matrix and a characteristic matrix. By applying K-means clustering to the weight matrix, the molecules can be classified. A group of samples (spirobenzopyran, indole, styrene derivatives and inorganic salts) was prepared and measured with a terahertz time-domain spectrometer. These samples are classified with 75% accuracy relative to the reference classification given by their molecular formulas.
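
    A small sketch of the decompose-then-cluster idea on simulated spectra, with scikit-learn's PCA standing in for the factor analysis step and K-means applied to the resulting weight matrix; the spectral signatures and sample counts are made up.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)

# Simulated absorption spectra sampled at 200 frequency points:
# two molecule classes built from two different spectral signatures plus noise
freq = np.linspace(0.2, 2.5, 200)
sig_a = np.exp(-((freq - 1.0) ** 2) / 0.02)
sig_b = np.exp(-((freq - 1.8) ** 2) / 0.05)
spectra = np.vstack([c * sig + 0.05 * rng.normal(size=freq.size)
                     for sig in (sig_a, sig_b) for c in rng.uniform(0.5, 1.5, 20)])

# Factor the data matrix into weights x characteristic spectra, then cluster the weights
weights = PCA(n_components=2).fit_transform(spectra)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(weights)
print(labels[:20], labels[20:])
```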

  11. Distribution majorization of corner points by reinforcement learning for moving object detection

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang

    2018-04-01

    Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works only use intensity information to locate the corner points; however, the information provided by earlier and later frames can also be used. We utilize this information to focus on the more valuable areas and ignore the less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be processed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks that are separated from the original whole image. Experimentally, we select a conventional method, which uses matching and the Random Sample Consensus (RANSAC) algorithm to obtain objects, as the main framework and utilize our algorithm to improve the result. A comparison between the conventional method and the same method augmented with our algorithm shows that our algorithm reduces false detections by 70%.

  12. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    PubMed

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the conditions of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose a single-period pulse waveform into a fixed number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform. The estimation of human vascular conditions depends on the positions of the key points in the pulse wave. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method and the optimized weight values corresponding to different sampling points are selected by using the Multi-Criteria Decision Making (MCDM) method. Performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
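
    A minimal sketch of multi-Gaussian fitting by weighted least squares using SciPy's curve_fit (its sigma argument carries the per-sample weights); the adaptive choice between 4 and 5 waves and the MCDM selection of the weights are not reproduced, and the waveform and weights below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(t, *p):
    """Sum of Gaussians; p = (a1, mu1, s1, a2, mu2, s2, ...)."""
    y = np.zeros_like(t)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return y

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 200)
true_p = (1.0, 0.20, 0.05, 0.45, 0.45, 0.08, 0.25, 0.70, 0.10)   # 3 illustrative waves
pulse = multi_gauss(t, *true_p) + 0.01 * rng.normal(size=t.size)

# Weighted least squares: smaller sigma (larger weight) near the key-point region
sigma = np.full_like(t, 0.02)
sigma[(t > 0.15) & (t < 0.30)] = 0.005
fit_p, _ = curve_fit(multi_gauss, t, pulse, p0=true_p, sigma=sigma)

rmse = np.sqrt(np.mean((multi_gauss(t, *fit_p) - pulse) ** 2))
print("NRMSE %.2f%%" % (100 * rmse / (pulse.max() - pulse.min())))
```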

  13. A step towards standardization: A method for end-point titer determination by fluorescence index of an automated microscope. End-point titer determination by fluorescence index.

    PubMed

    Carbone, Teresa; Gilio, Michele; Padula, Maria Carmela; Tramontano, Giuseppina; D'Angelo, Salvatore; Pafundi, Vito

    2018-05-01

    Indirect Immunofluorescence (IIF) is widely considered the Gold Standard for Antinuclear Antibody (ANA) screening. However, the high inter-reader variability remains the major disadvantage associated with ANA testing and the main reason for the increasing demand of the computer-aided immunofluorescence microscope. Previous studies proposed the quantification of the fluorescence intensity as an alternative for the classical end-point titer evaluation. However, the different distribution of bright/dark light linked to the nature of the self-antigen and its location in the cells result in different mean fluorescence intensities. The aim of the present study was to correlate Fluorescence Index (F.I.) with end-point titers for each well-defined ANA pattern. Routine serum samples were screened for ANA testing on HEp-2000 cells using Immuno Concepts Image Navigator System, and positive samples were serially diluted to assign the end-point titer. A comparison between F.I. and end-point titers related to 10 different staining patterns was made. According to our analysis, good technical performance of F.I. (97% sensitivity and 94% specificity) was found. A significant correlation between quantitative reading of F.I. and end-point titer groups was observed using Spearman's test and regression analysis. A conversion scale of F.I. in end-point titers for each recognized ANA-pattern was obtained. The Image Navigator offers the opportunity to improve worldwide harmonization of ANA test results. In particular, digital F.I. allows quantifying ANA titers by using just one sample dilution. It could represent a valuable support for the routine laboratory and an effective tool to reduce inter- and intra-laboratory variability. Copyright © 2018. Published by Elsevier B.V.

  14. 3D Semantic Labeling of ALS Data Based on Domain Adaption by Transferring and Fusing Random Forest Models

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yao, W.; Zhang, J.; Li, Y.

    2018-04-01

    Labeling 3D point cloud data with traditional supervised learning methods requires considerable labelled samples, the collection of which is costly and time-consuming. This work adopts the domain adaptation concept to transfer existing trained random forest classifiers (based on a source domain) to new data scenes (target domain), aiming to reduce the dependence of accurate 3D semantic labeling of point clouds on training samples from the new data scene. Firstly, two random forest classifiers were trained on existing samples previously collected for other data; they differed in the decision tree construction algorithm used: C4.5 with the information gain ratio and CART with the Gini index. Secondly, four random forest classifiers adapted to the target domain were derived by transferring each tree in the source random forest models with two types of operations: structure expansion and reduction (SER) and structure transfer (STRUT). Finally, points in the target domain were labelled by fusing the four newly derived random forest classifiers using a weights-of-evidence based fusion model. To validate the method, an experimental analysis was conducted using three datasets: one served as the source domain (Vaihingen data for 3D Semantic Labelling), and the other two served as the target domain and came from two cities in China (Jinmen city and Dunhuang city). Overall 3D labelling accuracies of 85.5% and 83.3% were achieved for the Jinmen city and Dunhuang city data respectively, using only one third as many newly labelled samples as the cases without domain adaptation.
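    As a rough illustration of the two source classifiers and a fused prediction, the sketch below trains random forests with the entropy criterion (a stand-in for C4.5's information gain ratio, which scikit-learn does not offer) and the Gini criterion, then averages their class probabilities with simple accuracy-derived weights. The SER/STRUT tree-transfer step and the weights-of-evidence fusion of the paper are not reproduced; the data and weights are synthetic assumptions.

```python
# Hedged sketch: two random forests with different split criteria and a
# weighted fusion of their class probabilities on a held-out "target" split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
X_src, X_tgt, y_src, y_tgt = train_test_split(X, y, test_size=0.5, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_src, y_src, test_size=0.3, random_state=0)

rf_entropy = RandomForestClassifier(n_estimators=100, criterion="entropy",
                                    random_state=0).fit(X_fit, y_fit)
rf_gini = RandomForestClassifier(n_estimators=100, criterion="gini",
                                 random_state=0).fit(X_fit, y_fit)

# Fusion weights from validation accuracy (the paper uses weights of evidence).
w = np.array([accuracy_score(y_val, rf_entropy.predict(X_val)),
              accuracy_score(y_val, rf_gini.predict(X_val))])
w = w / w.sum()

proba = w[0] * rf_entropy.predict_proba(X_tgt) + w[1] * rf_gini.predict_proba(X_tgt)
print("fused accuracy on the 'target' split:", accuracy_score(y_tgt, proba.argmax(axis=1)))
```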

  15. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Treesearch

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.

  16. An Unconditional Test for Change Point Detection in Binary Sequences with Applications to Clinical Registries.

    PubMed

    Ellenberger, David; Friede, Tim

    2016-08-05

    Methods for change point (also sometimes referred to as threshold or breakpoint) detection in binary sequences are not new and were introduced as early as 1955. Much of the research in this area has focussed on asymptotic and exact conditional methods. Here we develop an exact unconditional test, which treats the total number of events as random instead of conditioning on the number of observed events. The new test is shown to be uniformly more powerful than Worsley's exact conditional test, and means for its efficient numerical calculation are given. Adaptations of methods by Berger and Boos are made to deal with the unknown event probability, which acts as a nuisance parameter. The methods are compared in a Monte Carlo simulation study and applied to a cohort of patients undergoing traumatic orthopaedic surgery involving external fixators, in which a change in pin site infections is investigated. The unconditional test controls the type I error rate at the nominal level and is uniformly more powerful than (or, more precisely, uniformly at least as powerful as) Worsley's exact conditional test, which is very conservative for small sample sizes. In the application, a beneficial effect associated with the introduction of a new treatment procedure for pin site care could be revealed. We consider the new test an effective and easy-to-use exact test recommended for small-sample change point problems in binary sequences.
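    The general idea of change point detection in a binary sequence can be sketched as below: locate the split that maximises the binomial log-likelihood ratio and attach a parametric-bootstrap p-value. This is only an illustrative approximation, not the exact unconditional test of the paper (which maximises over the nuisance event probability in the spirit of Berger and Boos).

```python
# Hedged sketch: maximum likelihood-ratio change point in a binary sequence
# with a parametric-bootstrap p-value; data and effect sizes are synthetic.
import numpy as np

def loglik(k, n):
    """Binomial log-likelihood at the MLE p = k/n (0*log0 handled as 0)."""
    p = k / n if n else 0.0
    return k * np.log(p) + (n - k) * np.log(1 - p) if 0 < p < 1 else 0.0

def max_lr(x):
    n, k = len(x), int(x.sum())
    base = loglik(k, n)
    best, cp = -np.inf, None
    for t in range(1, n):
        k1 = int(x[:t].sum())
        stat = loglik(k1, t) + loglik(k - k1, n - t) - base
        if stat > best:
            best, cp = stat, t
    return best, cp

rng = np.random.default_rng(1)
x = np.concatenate([rng.binomial(1, 0.05, 150), rng.binomial(1, 0.20, 80)])
obs, cp = max_lr(x)

p_hat = x.mean()                                      # nuisance probability estimate
sims = [max_lr(rng.binomial(1, p_hat, len(x)))[0] for _ in range(500)]
p_value = np.mean([s >= obs for s in sims])
print(f"change point near index {cp}, bootstrap p = {p_value:.3f}")
```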

  17. Establishment of a nested-ASP-PCR method to determine the clarithromycin resistance of Helicobacter pylori.

    PubMed

    Luo, Xiao-Feng; Jiao, Jian-Hua; Zhang, Wen-Yue; Pu, Han-Ming; Qu, Bao-Jin; Yang, Bing-Ya; Hou, Min; Ji, Min-Jun

    2016-07-07

    To investigate clarithromycin resistance mutations at positions 2142, 2143 and 2144 of the 23S rRNA gene in Helicobacter pylori (H. pylori) by nested allele-specific primer polymerase chain reaction (nested-ASP-PCR). Gastric tissue and saliva samples from 99 patients with positive results on the rapid urease test (RUT) were collected. The nested-ASP-PCR method was carried out with external primers and inner allele-specific primers corresponding to the reference strain and clinical strains. Thirty gastric tissue and saliva samples were tested to determine the sensitivity of the nested-ASP-PCR and ASP-PCR methods. Clarithromycin resistance was then detected in the 99 clinical samples by different methods, including nested-ASP-PCR, bacterial culture and disk diffusion. The nested-ASP-PCR method was successfully established to test the resistance mutation points 2142, 2143 and 2144 of the 23S rRNA gene of H. pylori. Among the 30 samples of gastric tissue and saliva, the H. pylori detection rate of nested-ASP-PCR was 90% and 83.33%, while the detection rate of ASP-PCR was just 63% and 56.67%. Especially in the saliva samples, nested-ASP-PCR showed much higher sensitivity in H. pylori detection and resistance mutation rates than ASP-PCR. In the 99 RUT-positive gastric tissue and saliva samples, the H. pylori-positive detection rate by nested-ASP-PCR was 87 (87.88%) and 67 (67.68%), comprising 30 wild-type and 57 mutated strains in gastric tissue and 22 wild-type and 45 mutated strains in saliva. Genotype analysis showed that three-point mixed mutations were quite common, but different resistant strains were present in gastric mucosa and saliva. Compared with the high sensitivity of nested-ASP-PCR, bacterial culture of the gastric tissue samples was positive in only 50 cases, among which only 26 drug-resistant strains were identified by analysis of the clarithromycin inhibition zone. The nested-ASP-PCR assay showed higher detection sensitivity than ASP-PCR and drug sensitivity testing and could be performed to evaluate clarithromycin resistance of H. pylori.

  18. Characterization of long-term elution of platinum from carboplatin-impregnated calcium sulfate hemihydrate beads in vitro by two distinct sample collection methods.

    PubMed

    Tulipan, Rachel J; Phillips, Heidi; Garrett, Laura D; Dirikolu, Levent; Mitchell, Mark A

    2017-05-01

    OBJECTIVE To characterize long-term elution of platinum from carboplatin-impregnated calcium sulfate hemihydrate (CI-CSH) beads in vitro by comparing 2 distinct sample collection methods designed to mimic 2 in vivo environments. SAMPLES 162 CI-CSH beads containing 4.6 mg of carboplatin (2.4 mg of platinum/bead). PROCEDURES For method 1, which mimicked an in vivo environment with rapid and complete fluid exchange, each of 3 plastic 10-mL conical tubes contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained by evacuation of all fluid at 1, 2, 3, 6, 9, and 12 hours and 1, 2, 3, 6, 9, 12, 15, 18, 22, 26, and 30 days. Five milliliters of fresh PBS solution was then added to each tube. For method 2, which mimicked an in vivo environment with no fluid exchange, each of 51 tubes (ie, 3 tubes/17 sample collection times) contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained from the assigned tubes for each time point. All samples were analyzed for platinum content by inductively coupled plasma-mass spectrometry. RESULTS Platinum was released from CI-CSH beads for 22 to 30 days. Significant differences were found in platinum concentration and percentage of platinum eluted from CI-CSH beads over time for each method. Platinum concentrations and elution percentages in method 2 samples were significantly higher than those of method 1 samples, except for the first hour measurements. CONCLUSIONS AND CLINICAL RELEVANCE Sample collection methods 1 and 2 may provide estimates of the minimum and maximum platinum release, respectively, from CI-CSH beads in vivo.

  19. Evaluation of the Brix refractometer to estimate immunoglobulin G concentration in bovine colostrum.

    PubMed

    Quigley, J D; Lago, A; Chapman, C; Erickson, P; Polo, J

    2013-02-01

    Refractometry using a Brix refractometer has been proposed as a means to estimate the IgG concentration in bovine maternal colostrum (MC). The refractometer has advantages over other methods of estimating IgG concentration in that it is inexpensive, readily available, less fragile, and less sensitive to variation in colostral temperature, season of the year, and other factors. Samples of first-milking MC were collected from 7 dairy farms in Maine, New Hampshire, Vermont, and Connecticut (n=84) and 1 dairy farm in California (n=99). The MC was milked from the cow at 6.1 ± 5.6 h postparturition and a sample was evaluated for Brix percentage by using an optical refractometer. Two additional samples (30 mL) were collected from the milk bucket, placed in vials, and frozen before analysis of total IgG by radial immunodiffusion (RID) using commercially available plates and by turbidimetric immunoassay (TIA). The second sample was analyzed for total bacterial counts and coliform counts at laboratories in New York (Northeast samples) and California (California samples). The Brix percentage (mean ± SD) was 23.8 ± 3.5, IgG concentration measured by RID was 73.4 ± 26.2 g/L, and IgG concentration measured by TIA was 67.5 ± 25.0 g/L. The Brix percentage was highly correlated (r=0.75) with IgG analyzed by RID. The Brix cut point that correctly classified the most samples as high- or low-quality colostrum (50 g of IgG/L measured by RID), given the proportions of high- (86%) and low-quality (14%) samples in this study, was 21%, which is slightly lower than other recent estimates of Brix cut points. At this cut point, the test sensitivity, specificity, positive and negative predictive values, and accuracy were 92.9, 65.5, 93.5, 63.3, and 88.5%, respectively. Measurement of IgG by TIA correlated with Brix (r=0.63) and RID (r=0.87); however, the TIA and RID methods of IgG measurement were not consistent throughout the range of samples tested. We conclude that Brix measurement of total solids in fresh MC is an inexpensive, rapid, and satisfactorily accurate method of estimating IgG concentration. A cut point of 21% Brix to identify samples of MC >50 g/L was most appropriate for our data. Measurement of IgG in MC by TIA differed from measurement by RID. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
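    The diagnostic summary at a chosen Brix cut point can be reproduced in a few lines; the sketch below uses synthetic IgG and Brix values (not the study's samples) and an assumed linear Brix-IgG relation to compute sensitivity, specificity, predictive values, and accuracy against the 50 g/L RID reference.

```python
# Hedged sketch: classification performance of a 21% Brix cut point against
# a 50 g/L IgG reference; all values below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
igg = rng.normal(73, 26, 200).clip(5, 150)            # "true" IgG by RID, g/L
brix = 8 + 0.21 * igg + rng.normal(0, 2.5, 200)        # assumed linear Brix-IgG relation

cut = 21.0
high_true = igg >= 50
high_pred = brix >= cut

tp = np.sum(high_pred & high_true)
tn = np.sum(~high_pred & ~high_true)
fp = np.sum(high_pred & ~high_true)
fn = np.sum(~high_pred & high_true)

print("sensitivity :", tp / (tp + fn))
print("specificity :", tn / (tn + fp))
print("PPV         :", tp / (tp + fp))
print("NPV         :", tn / (tn + fn))
print("accuracy    :", (tp + tn) / len(igg))
```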

  20. Pacific Northwest National Laboratory Potential Impact Categories for Radiological Air Emission Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, Marcel Y.; Gervais, Todd L.; Barnett, J. Matthew

    2012-06-05

    In 2002, the EPA amended 40 CFR 61 Subpart H and 40 CFR 61 Appendix B Method 114 to include requirements from ANSI/HPS N13.1-1999, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stack and Ducts of Nuclear Facilities, for major emission points. Additionally, the WDOH amended the Washington Administrative Code (WAC) 246-247, Radiation protection-air emissions, to include ANSI/HPS N13.1-1999 requirements for major and minor emission points when new permitting actions are approved. A result of the amended regulations is the requirement to prepare a written technical basis for the radiological air emission sampling and monitoring program. A key component of the technical basis is the Potential Impact Category (PIC) assigned to an emission point. This paper discusses the PIC assignments for the Pacific Northwest National Laboratory (PNNL) Integrated Laboratory emission units; this revision includes five PIC categories.

  1. Design of permanent magnet synchronous motor speed control system based on SVPWM

    NASA Astrophysics Data System (ADS)

    Wu, Haibo

    2017-04-01

    A speed control system for a permanent magnet synchronous motor (PMSM) was designed around the TMS320F28335 and applied to an all-electric injection molding machine. The control method is based on SVPWM; by sampling the motor currents and the rotor position from a resolver, double closed-loop control of speed and current is realized. The hardware floating-point core of the TMS320F28335 allows the PMSM control algorithms to run in floating-point arithmetic, replacing the previous fixed-point implementation and improving the efficiency of the code.

  2. Detection of bacteraemias during non-surgical root canal treatment.

    PubMed

    Savarrio, L; Mackenzie, D; Riggio, M; Saunders, W P; Bagg, J

    2005-04-01

    Some dental procedures initiate a bacteraemia. In certain compromised patients, this bacteraemia may lead to distant site infections, most notably infective endocarditis. To investigate whether a detectable bacteraemia was produced during non-surgical root canal therapy. Thirty patients receiving non-surgical root canal therapy were studied. Three blood samples were taken per patient: pre-operatively, peri-operatively and post-operatively. In addition, a paper point sample was collected from the root canal. The blood samples were cultured by pour plate and blood bottle methods. The isolated organisms were identified by standard techniques. Blood samples were analysed for the presence of bacterial DNA by the polymerase chain reaction (PCR). In two cases where the same species of organism was identified in the root canal and the bloodstream, the isolates were typed by pulsed field gel electrophoresis (PFGE). By conventional culturing, a detectable bacteraemia was present in 9 (30%) of the 30 patients who had no positive pre-operative control blood sample. In 7 (23.3%) patients, the same species of organism was identified in both the bloodstream and in the paper point sample from the root canal system. Overall, PCR gave lower detection rates compared with conventional culture, with 10 of 90 (11%) of the blood samples displaying bacterial DNA. PFGE typing was undertaken for two pairs of culture isolates from blood and paper points; these were found to be genetically identical. Non-surgical root canal treatment may invoke a detectable bacteraemia.

  3. 40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...

  4. 40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...

  5. 40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...

  6. 40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...

  7. Advances in locally constrained k-space-based parallel MRI.

    PubMed

    Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S

    2006-02-01

    In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radials and spirals. As a result, the time requirements are greatly reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.

  8. Devices, systems, and methods for microscale isoelectric fractionation

    DOEpatents

    Sommer, Gregory J.; Hatch, Anson V.; Wang, Ying-Chih; Singh, Anup K.

    2016-08-09

    Embodiments of the present invention provide devices, systems, and methods for microscale isoelectric fractionation. Analytes in a sample may be isolated according to their isoelectric point within a fractionation microchannel. A microfluidic device according to an embodiment of the invention includes a substrate at least partially defining a fractionation microchannel. The fractionation microchannel has at least one cross-sectional dimension equal to or less than 1 mm. A plurality of membranes of different pHs are disposed in the microchannel. Analytes having an isoelectric point between the pH of the membranes may be collected in a region of the fractionation channel between the first and second membranes through isoelectric fractionation.
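    The core idea of trapping analytes between membranes according to their isoelectric points can be illustrated with a simple binning sketch; the membrane pHs and analyte pI values below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: assigning analytes to inter-membrane regions of a
# fractionation channel by isoelectric point (pI); all values are illustrative.
membrane_ph = [3.0, 4.5, 6.0, 8.0, 10.0]              # membrane pHs ordered along the channel
analytes = {"peptide A": 4.1, "protein B": 5.4, "protein C": 7.2, "protein D": 9.1}

def region_for(pi, boundaries):
    """Return the index of the inter-membrane region that traps an analyte."""
    for i in range(len(boundaries) - 1):
        if boundaries[i] <= pi < boundaries[i + 1]:
            return i
    return None                                       # pI outside the membrane range

for name, pi in analytes.items():
    r = region_for(pi, membrane_ph)
    span = f"pH {membrane_ph[r]}-{membrane_ph[r + 1]}" if r is not None else "not trapped"
    print(f"{name} (pI {pi}): region {r} ({span})")
```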

  9. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified by several points of view: the kind of labeled antibodies or enzymes, detection methods, the number of tests per unit time, analytical time and speed per run. In practice, it is important for us consider the several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I will describe the recent advance of automated analyzers reviewing their labeling antibodies and enzymes, the detection methods, the number of test per unit time and analytical time and speed per test.

  10. Devices, systems, and methods for microscale isoelectric fractionation

    DOEpatents

    Sommer, Gregory J; Hatch, Anson V; Wang, Ying-Chih; Singh, Anup K

    2015-04-14

    Embodiments of the present invention provide devices, systems, and methods for microscale isoelectric fractionation. Analytes in a sample may be isolated according to their isoelectric point within a fractionation microchannel. A microfluidic device according to an embodiment of the invention includes a substrate at least partially defining a fractionation microchannel. The fractionation microchannel has at least one cross-sectional dimension equal to or less than 1 mm. A plurality of membranes of different pHs are disposed in the microchannel. Analytes having an isoelectric point between the pH of the membranes may be collected in a region of the fractionation channel between the first and second membranes through isoelectric fractionation.

  11. United States Air Force 611th Air Support Group Civil Engineering Squadron, Elmendorf AFB, Alaska. Remedial investigation and feasibility study Point Lay Radar Installation, Alaska. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karmi, S.

    1996-03-04

    The United States Air Force (Air Force) has prepared this Remedial Investigation/Feasibility Study (RI/FS) report to present the results of RI/FS activities at four sites located at the Point Lay radar installation. The remedial investigation (RI) field activities were conducted at the Point Lay radar installation during the summer of 1993. The four sites at Point Lay were investigated because they were suspected of being contaminated with hazardous substances. RI activities were conducted using methods and procedures specified in the RI/FS Work Plan, Sampling and Analysis Plan (SAP), and Health and Safety Plan.

  12. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
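    A rough sketch of the two kinds of weighting described above (observation weights for robustness and adaptive penalty weights for sparse selection) is given below. It is not the authors' estimator: it combines a Huber pilot fit, Huber-type observation weights, and the familiar adaptive-lasso rescaling trick on synthetic data with gross outliers, purely to illustrate the mechanism.

```python
# Hedged sketch: robust pilot fit + observation weights + adaptive penalty
# weights, as a stand-in illustration of doubly adaptive regularized regression.
import numpy as np
from sklearn.linear_model import HuberRegressor, Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, 0.0, -2.0, 0.0, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(0.0, 1.0, n)
y[:10] += 15.0                                        # gross outliers in the response

pilot = HuberRegressor().fit(X, y)                    # robust pilot estimate
resid = y - pilot.predict(X)
scale = np.median(np.abs(resid)) / 0.6745             # robust residual scale (MAD)
obs_w = np.minimum(1.0, 1.345 * scale / np.abs(resid))  # Huber-type observation weights

pen_w = 1.0 / (np.abs(pilot.coef_) + 1e-3)            # adaptive penalty weights
Xw = (X / pen_w) * np.sqrt(obs_w)[:, None]            # rescale columns, reweight rows
yw = y * np.sqrt(obs_w)

fit = Lasso(alpha=0.05, fit_intercept=False).fit(Xw, yw)
coef = fit.coef_ / pen_w                              # undo the column rescaling
print("estimated coefficients:", np.round(coef, 2))
```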

  13. Longitudinal Effects on Early Adolescent Language: A Twin Study

    PubMed Central

    DeThorne, Laura Segebart; Smith, Jamie Mahurin; Betancourt, Mariana Aparicio; Petrill, Stephen A.

    2016-01-01

    Purpose We evaluated genetic and environmental contributions to individual differences in language skills during early adolescence, measured by both language sampling and standardized tests, and examined the extent to which these genetic and environmental effects are stable across time. Method We used structural equation modeling on latent factors to estimate additive genetic, shared environmental, and nonshared environmental effects on variance in standardized language skills (i.e., Formal Language) and productive language-sample measures (i.e., Productive Language) in a sample of 527 twins across 3 time points (mean ages 10–12 years). Results Individual differences in the Formal Language factor were influenced primarily by genetic factors at each age, whereas individual differences in the Productive Language factor were primarily due to nonshared environmental influences. For the Formal Language factor, the stability of genetic effects was high across all 3 time points. For the Productive Language factor, nonshared environmental effects showed low but statistically significant stability across adjacent time points. Conclusions The etiology of language outcomes may differ substantially depending on assessment context. In addition, the potential mechanisms for nonshared environmental influences on language development warrant further investigation. PMID:27732720

  14. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate how the image reconstruction algorithm and the interval between measurement points affect the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are arranged one-dimensionally on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals between measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends on the interval between measurement points and the spatial sensitivity profile for the source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval between measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may also improve the spatial resolution of a two-dimensional topographic image reconstructed with a larger interval between measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even when the region of absorption change is smaller than 10 mm.
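    The contrast between a mapping estimate and a regularized reconstruction can be illustrated in one dimension with assumed Gaussian sensitivity profiles standing in for the Monte Carlo profiles of the study; the geometry, noise level, and regularization below are arbitrary choices.

```python
# Hedged sketch: 1-D mapping (sensitivity-weighted back-projection) versus a
# Tikhonov-regularized least-squares reconstruction of an absorption change.
import numpy as np

x = np.linspace(0, 60, 121)                           # lateral position (mm)
centers = np.arange(6, 60, 12)                        # measurement points every 12 mm
A = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 8.0) ** 2)   # sensitivity matrix

true = np.exp(-0.5 * ((x - 27.0) / 4.0) ** 2)         # "activated" absorption change
meas = A @ true + np.random.default_rng(0).normal(0, 0.05, len(centers))

# Mapping: distribute each measurement back along its own sensitivity profile.
mapping = (A.T @ meas) / (A.sum(axis=0) + 1e-9)

# Reconstruction: min ||A v - meas||^2 + lam ||v||^2
lam = 1.0
recon = np.linalg.solve(A.T @ A + lam * np.eye(len(x)), A.T @ meas)

print("peak position, mapping        :", x[np.argmax(mapping)], "mm")
print("peak position, reconstruction :", x[np.argmax(recon)], "mm")
```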

  15. Proactive therapeutic drug monitoring of infliximab: a comparative study of a new point-of-care quantitative test with two established ELISA assays.

    PubMed

    Afonso, J; Lopes, S; Gonçalves, R; Caldeira, P; Lago, P; Tavares de Sousa, H; Ramos, J; Gonçalves, A R; Ministro, P; Rosa, I; Vieira, A I; Dias, C C; Magro, F

    2016-10-01

    Therapeutic drug monitoring is a powerful strategy known to improve clinical outcomes and to optimise healthcare resources in the treatment of autoimmune diseases. Currently, most of the methods commercially available for the quantification of infliximab (IFX) are ELISA-based, with a turnaround time of approximately 8 h, delaying target dosage adjustment to the following infusion. To validate the first point-of-care IFX quantification device available on the market - the Quantum Blue Infliximab assay (Buhlmann, Schonenbuch, Switzerland) - by comparing it with two well-established methods. The three methods were used to assay the IFX concentration of spiked samples and of the serum of 299 inflammatory bowel disease (IBD) patients undergoing IFX therapy. The point-of-care assay had an average IFX recovery of 92% and was the most precise among the tested methods. The Intraclass Correlation Coefficients of the point-of-care IFX assay vs. the two established ELISA-based methods were 0.889 and 0.939. Moreover, the accuracy of the point-of-care IFX assay compared with each of the two reference methods was 77% and 83%, and the kappa statistics revealed substantial agreement (0.648 and 0.738). The Quantum Blue IFX assay can successfully replace the commonly used ELISA-based IFX quantification kits. Its ability to deliver results within 15 min makes it ideal for immediate target-concentration-adjusted dosing. Moreover, it is a user-friendly desktop device that does not require specific laboratory facilities or highly specialised personnel. © 2016 John Wiley & Sons Ltd.

  16. Biogas production from pineapple core - A preliminary study

    NASA Astrophysics Data System (ADS)

    Jehan, O. S.; Sanusi, S. N. A.; Sukor, M. Z.; Noraini, M.; Buddin, M. M. H. S.; Hamid, K. H. K.

    2017-09-01

    Anaerobic digestion of pineapple waste was investigated using pineapple core as the sole substrate. Pineapple core was chosen for its high total sugar content, indicating a large amount of fermentable sugar. Because the digestion process requires microorganisms, wastewater from the same industry was added at a ratio of 1:1 by weight. Two different sources of wastewater (Point 1 and Point 2) were used to distinguish the performance of the microorganism consortia in the two samples. The experiment was conducted in a lab-scale batch anaerobic digester made from a 5 L container with a separate gas collecting system. The biogas produced was collected by the water displacement method. The experiment ran for 30 days, and the volume of biogas produced was recorded at 3-day intervals. Wastewater from Point 1 yielded the higher volume of biogas, with a total accumulated volume of 216.1 mL, whereas the Point 2 sample produced a total of 140.5 mL. The data show that the origin and type of microorganism play a significant role in biogas production; other factors, such as wastewater pH and temperature, are also known to affect it. Anaerobic digestion is seen as a promising and sustainable alternative to current disposal methods.

  17. Determination of rhodium in metallic alloy and water samples using cloud point extraction coupled with spectrophotometric technique

    NASA Astrophysics Data System (ADS)

    Kassem, Mohammed A.; Amin, Alaa S.

    2015-02-01

    A new method to determine rhodium at trace levels in different samples has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by cloud point extraction with the nonionic surfactant Triton X-114, which extracts the complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was decanted, heated at 100 °C to remove water, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range 0.5-75 ng mL⁻¹ and the detection limit was 0.15 ng mL⁻¹ of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and it was safely applied to rhodium determination in complex materials such as synthetic alloy mixtures and environmental water samples.

  18. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, yet little attention has been given to the optimal selection of sampling times for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually unevenly distributed and based on heuristics. In this paper, we investigate an approach to guide the selection of time points so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
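    As a simplified illustration of choosing sampling times to minimise the variance of parameter estimates, the sketch below greedily maximises the determinant of the Fisher information matrix (a D-optimal design) for an assumed exponential-decay model. The paper's maximum-likelihood formulation and quantum-inspired evolutionary algorithm are not reproduced.

```python
# Hedged sketch: greedy D-optimal selection of sampling times for the model
# y(t) = A * exp(-k t); the model, noise level and grid are illustrative.
import numpy as np

A, k, sigma = 2.0, 0.4, 0.05
candidates = np.linspace(0.1, 10.0, 100)

def sensitivities(t):
    """Partial derivatives of y(t) with respect to (A, k)."""
    return np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])

def fim(times):
    J = np.array([sensitivities(t) for t in times])   # n_times x 2 Jacobian
    return (J.T @ J) / sigma**2

chosen = []
for _ in range(4):                                    # pick 4 sampling times
    best_t, best_det = None, -np.inf
    for t in candidates:
        det = np.linalg.det(fim(chosen + [t]) + 1e-12 * np.eye(2))
        if det > best_det:
            best_t, best_det = t, det
    chosen.append(best_t)

print("greedy D-optimal sampling times:", np.round(sorted(chosen), 2))
```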

  19. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  20. Determination of Chlorinity of Water without the Use of Chromate Indicator

    PubMed Central

    Hong, Tae-Kee; Kim, Myung-Hoon; Czae, Myung-Zoon

    2010-01-01

    A new method for determining chlorinity of water was developed in order to improve the old method by alleviating the environmental problems associated with the toxic chromate. The method utilizes a mediator, a weak acid that can form an insoluble salt with the titrant. The mediator triggers a sudden change in pH at an equivalence point in a titration. Thus, the equivalence point can be determined either potentiometrically (using a pH meter) or simply with an acid-base indicator. Three nontoxic mediators (phosphate, EDTA, and sulfite) were tested, and optimal conditions for the sharpest pH changes were sought. A combination of phosphate (a mediator) and phenolphthalein (an indicator) was found to be the most successful. The choices of the initial pH and the concentration of the mediator are critical in this approach. The optimum concentration of the mediator is ca. 1~2 mM, and the optimum value of the initial pH is ca. 9 for phosphate/phenolphthalein system. The method was applied to a sample of sea water, and the results are compared with those from the conventional Mohr-Knudsen method. The new method yielded chlorinity of a sample of sea water of (17.58 ± 0.22) g/kg, which is about 2.5% higher than the value (17.12 ± 0.22) g/kg from the old method. PMID:21461358

  1. A digital image-based method for determining of total acidity in red wines using acid-base titration without indicator.

    PubMed

    Tôrres, Adamastor Rodrigues; Lyra, Wellington da Silva; de Andrade, Stéfani Iury Evangelista; Andrade, Renato Allan Navarro; da Silva, Edvan Cirino; Araújo, Mário César Ugulino; Gaião, Edvaldo da Nóbrega

    2011-05-15

    This work proposes the use of digital image-based method for determination of total acidity in red wines by means of acid-base titration without using an external indicator or any pre-treatment of the sample. Digital images present the colour of the emergent radiation which is complementary to the radiation absorbed by anthocyanines present in wines. Anthocyanines change colour depending on the pH of the medium, and from the variation of colour in the images obtained during titration, the end point can be localized with accuracy and precision. RGB-based values were employed to build titration curves, and end points were localized by second derivative curves. The official method recommends potentiometric titration with a NaOH standard solution, and sample dilution until the pH reaches 8.2-8.4. In order to illustrate the feasibility of the proposed method, titrations of ten red wines were carried out. Results were compared with the reference method, and no statistically significant difference was observed between the results by applying the paired t-test at the 95% confidence level. The proposed method yielded more precise results than the official method. This is due to the trivariate nature of the measurements (RGB), associated with digital images. Copyright © 2011 Elsevier B.V. All rights reserved.
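    Locating the end point from the second derivative of a colour-channel curve can be sketched as below; the sigmoidal "R-channel" signal is synthetic rather than taken from wine images, and the smoothing-free derivative estimate is an illustrative simplification.

```python
# Hedged sketch: end-point localization from the second derivative of a
# synthetic RGB-channel titration curve.
import numpy as np

rng = np.random.default_rng(2)
vol = np.linspace(0.0, 10.0, 101)                     # titrant volume (mL)
true_ep = 6.3
signal = 100.0 / (1.0 + np.exp(-4.0 * (vol - true_ep)))  # synthetic R-channel values
signal += rng.normal(0.0, 0.5, vol.size)

d1 = np.gradient(signal, vol)                         # first derivative
d2 = np.gradient(d1, vol)                             # second derivative

# End point: the inflection of the curve, i.e. where the second derivative
# changes sign; among the sign changes, keep the one with the steepest slope.
crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
end_idx = crossings[np.argmax(d1[crossings])]
print(f"estimated end point: {vol[end_idx]:.2f} mL (true {true_ep} mL)")
```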

  2. Diffusion and drive-point sampling to detect ordnance-related compounds in shallow ground water beneath Snake Pond, Cape Cod, Massachusetts, 2001-02

    USGS Publications Warehouse

    LeBlanc, Denis R.

    2003-01-01

    Diffusion samplers and temporary drive points were used to test for ordnance-related compounds in ground water discharging to Snake Pond near Camp Edwards at the Massachusetts Military Reservation, Cape Cod, MA. The contamination resulted from artillery use and weapons testing at various ranges upgradient of the pond. The diffusion samplers were constructed with a high-grade cellulose membrane that allowed diffusion of explosive compounds, such as RDX (Hexahydro-1,3,5-trinitro-1,3,5-triazine) and HMX (Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine), into deionized water inside the samplers. Laboratory tests confirmed that the cellulose membrane was permeable to RDX and HMX. One transect of 22 diffusion samplers was installed and retrieved in August-September 2001, and 12 transects with a total of 108 samplers were installed and retrieved in September-October 2001. The diffusion samplers were buried about 0.5 feet into the pond-bottom sediments by scuba divers and allowed to equilibrate with the ground water beneath the pond bottom for 13 to 27 days before retrieval. Water samples were collected from temporary well points driven about 2-4 feet into the pond bottom at 21 sites in December 2001 and March 2002 for analysis of explosives and perchlorate to confirm the diffusion-sampling results. The water samples from the diffusion samplers exhibited numerous chromatographic peaks, but evaluation of the photo-diode-array spectra indicated that most of the peaks did not represent the target compounds. The peaks probably are associated with natural organic compounds present in the soft, organically enriched pond-bottom sediments. The presence of four explosive compounds at five widely spaced sites was confirmed by the photo-diode-array analysis, but the compounds are not generally found in contaminated ground water near the ranges. No explosives were detected in water samples obtained from the drive points. Perchlorate was detected at less than 1 microgram per liter in two drive-point samples collected at the same site on two dates about 3 months apart. The source of the perchlorate in the samples could not be related directly to other contamination from Camp Edwards with the available information. The results from the diffusion and drive-point sampling do not indicate an area of ground-water discharge with concentrations of the ordnance-related compounds that are sufficiently elevated to be detected by these sampling methods. The diffusion and drive-point sampling data cannot be interpreted further without additional information concerning the pattern of ground-water flow at Snake Pond and the distributions of RDX, HMX, and perchlorate in ground water in the aquifer near the pond.

  3. Application of Acoustic and Optic Methods for Estimating Suspended-Solids Concentrations in the St. Lucie River Estuary, Florida

    USGS Publications Warehouse

    Patino, Eduardo; Byrne, Michael J.

    2004-01-01

    Acoustic and optic methods were applied to estimate suspended-solids concentrations in the St. Lucie River Estuary, southeastern Florida. Acoustic Doppler velocity meters were installed at the North Fork, Speedy Point, and Steele Point sites within the estuary. These sites provide varying flow, salinity, water-quality, and channel cross-sectional characteristics. The monitoring site at Steele Point was not used in the analyses because repeated instrument relocations (due to bridge construction) prevented a sufficient number of samples from being collected at the various locations. Acoustic and optic instruments were installed to collect water velocity, acoustic backscatter strength (ABS), and turbidity data that were used to assess the feasibility of estimating suspended-solids concentrations in the estuary. Other data collected at the monitoring sites include tidal stage, salinity, temperature, and periodic discharge measurements. Regression analyses were used to determine the relations of suspended-solids concentration to ABS and suspended-solids concentration to turbidity at the North Fork and Speedy Point sites. For samples used in regression analyses, measured suspended-solids concentrations at the North Fork and Speedy Point sites ranged from 3 to 37 milligrams per liter, and organic content ranged from 50 to 83 percent. Corresponding salinity for these samples ranged from 0.12 to 22.7 parts per thousand, and corresponding temperature ranged from 19.4 to 31.8 °C. Relations determined using this technique are site specific and only describe suspended-solids concentrations at locations where data were collected. The suspended-solids concentration to ABS relation resulted in correlation coefficients of 0.78 and 0.63 at the North Fork and Speedy Point sites, respectively. The suspended-solids concentration to turbidity relation resulted in correlation coefficients of 0.73 and 0.89 at the North Fork and Speedy Point sites, respectively. The adequacy of the empirical equations seems to be limited by the number and distribution of suspended-solids samples collected throughout the expected concentration range at the North Fork and Speedy Point sites. Additionally, the ABS relations for both sites seem to overestimate at the low end and underestimate at the high end of the concentration range. Based on the sensitivity analysis, temperature had a greater effect than salinity on estimated suspended-solids concentrations. Temperature also appeared to affect ABS data, perhaps by changing the absorptive and reflective characteristics of the suspended material. Salinity and temperature had no observed effects on the turbidity relation at the North Fork and Speedy Point sites. Estimates of suspended-solids concentrations using ABS data were less 'erratic' than estimates using turbidity data. Combining ABS and turbidity data into one equation did not improve the accuracy of results, and therefore, was not considered.
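    Site-specific linear regressions of suspended-solids concentration on ABS and on turbidity of the kind described above can be sketched as follows; the data are synthetic placeholders, not the St. Lucie measurements.

```python
# Hedged sketch: separate linear regressions of suspended-solids concentration
# (SSC) on acoustic backscatter strength and on turbidity, with correlation
# coefficients; all numbers are synthetic.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
abs_db = rng.uniform(60, 90, 40)                      # acoustic backscatter (dB)
turb = rng.uniform(1, 20, 40)                         # turbidity (FNU)
ssc = 0.6 * abs_db + 0.8 * turb + rng.normal(0, 4, 40) - 30   # SSC (mg/L)

fit_abs = linregress(abs_db, ssc)
fit_turb = linregress(turb, ssc)
print(f"SSC ~ ABS      : r = {fit_abs.rvalue:.2f}, slope = {fit_abs.slope:.2f}")
print(f"SSC ~ turbidity: r = {fit_turb.rvalue:.2f}, slope = {fit_turb.slope:.2f}")
```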

  4. [Study on the experimental application of floating-reference method to noninvasive blood glucose sensing].

    PubMed

    Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie

    2012-03-01

    Weak signals, a low instrument signal-to-noise ratio, continuous variation of the human physiological environment and interference from other blood components make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at the reference point, where the light-intensity changes due to absorption and scattering cancel each other, and at the measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce interference from variations in the physiological environment and the experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method can reduce the influence of changes in the samples' state, instrument noise and drift, and effectively improve the models' prediction precision and stability.

  5. MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Wu, D; Rutel, I

    2015-06-15

    Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs relative to manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow occupancy factors, design goals, numbers of patients, primary beam directions, source-to-patient distances and workload distributions to be specified for regions and equipment. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrate that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements at many points outside the barriers, stores the information and selects the largest value needed to comply with the NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopy rooms.
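    A minimal sketch of the final shielding step, converting an unshielded air-kerma rate and a design goal into a required transmission and a barrier thickness via a tenth-value-layer model, is given below. The numbers (kerma rate, goal, occupancy, TVL) are illustrative assumptions; RadShield itself applies the full NCRP Report 147 formalism with workload distributions and preshielding.

```python
# Hedged sketch: required transmission and barrier thickness from a simple
# tenth-value-layer (TVL) attenuation model; all inputs are assumed values.
import math

kerma_unshielded = 12.0    # mGy/year at the sample point (assumed)
design_goal = 1.0          # mGy/year design goal for the occupied region (assumed)
occupancy = 1.0            # occupancy factor of the region behind the barrier
tvl_mm = 0.9               # assumed TVL of lead at diagnostic energies (mm), illustrative

transmission_required = design_goal / (kerma_unshielded * occupancy)
n_tvl = max(0.0, -math.log10(transmission_required))  # number of tenth-value layers
thickness_mm = n_tvl * tvl_mm

print(f"required transmission B = {transmission_required:.3f}")
print(f"barrier thickness = {thickness_mm:.2f} mm")
```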

  6. Integral imaging based light field display with enhanced viewing resolution using holographic diffuser

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun

    2017-11-01

    An integral imaging based light field display method is proposed by use of holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and independent of the limitation imposed by Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using holographic diffuser are demonstrated, verifying the feasibility of the method.

  7. For geological investigations with airborne thermal infrared multispectral images: Transfer of calibration from laboratory spectrometer to TIMS as alternative for removing atmospheric effects

    NASA Technical Reports Server (NTRS)

    Edgett, Kenneth S.; Anderson, Donald L.

    1995-01-01

    This paper describes an empirical method to correct TIMS (Thermal Infrared Multispectral Scanner) data for atmospheric effects by transferring calibration from a laboratory thermal emission spectrometer to the TIMS multispectral image. The method does so by comparing the laboratory spectra of samples gathered in the field with TIMS 6-point spectra for pixels at the location of field sampling sites. The transference of calibration also makes it possible to use spectra from the laboratory as endmembers in unmixing studies of TIMS data.

  8. Method for determining surface coverage by materials exhibiting different fluorescent properties

    NASA Technical Reports Server (NTRS)

    Chappelle, Emmett W. (Inventor); Daughtry, Craig S. T. (Inventor); Mcmurtrey, James E., III (Inventor)

    1995-01-01

    An improved method for detecting, measuring, and distinguishing crop residue, live vegetation, and mineral soil is presented. By measuring fluorescence in multiple bands, live and dead vegetation are distinguished. The surface of the ground is illuminated with ultraviolet radiation, inducing fluorescence in certain molecules. The emitted fluorescent emission induced by the ultraviolet radiation is measured by means of a fluorescence detector, consisting of a photodetector or video camera and filters. The spectral content of the emitted fluorescent emission is characterized at each point sampled, and the proportion of the sampled area covered by residue or vegetation is calculated.

  9. Static versus dynamic sampling for data mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John, G.H.; Langley, P.

    1996-12-31

    As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
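    The dynamic scheme described above, growing the training sample until the mining tool's held-out performance stops improving, can be sketched as follows; the dataset, the choice of a decision tree as the mining tool, and the stopping tolerance are illustrative assumptions.

```python
# Hedged sketch: dynamic sampling that doubles the training sample until the
# held-out accuracy improves by less than a tolerance.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=50000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

tol = 0.002
prev_acc, size, selected = 0.0, 500, None
while size <= len(X_pool):
    model = DecisionTreeClassifier(random_state=0).fit(X_pool[:size], y_pool[:size])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"n = {size:6d}  accuracy = {acc:.4f}")
    if acc - prev_acc < tol:
        selected = size                               # improvement below tolerance: stop
        break
    prev_acc, size = acc, size * 2

print("selected sample size:", selected if selected else len(X_pool))
```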

  10. Multivariate survivorship analysis using two cross-sectional samples.

    PubMed

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.

  11. Improving the collection of knowledge, attitude and practice data with community surveys: a comparison of two second-stage sampling methods.

    PubMed

    Davis, Rosemary H; Valadez, Joseph J

    2014-12-01

    Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant (κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.

  12. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance predictions and optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of the study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling—the geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noises. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of the outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970

  13. Pure phase encode magnetic field gradient monitor.

    PubMed

    Han, Hui; MacGregor, Rodney P; Balcom, Bruce J

    2009-12-01

    Numerous methods have been developed to measure MRI gradient waveforms and k-space trajectories. The most promising new strategy appears to be magnetic field monitoring with RF microprobes. Multiple RF microprobes may record the magnetic field evolution associated with a wide variety of imaging pulse sequences. The method involves exciting one or more test samples and measuring the time evolution of magnetization through the FIDs. Two critical problems remain: the gradient waveform duration is limited by the sample T2*, while the k-space maxima are limited by gradient dephasing. The method presented is based on pure phase encode FIDs and solves the above two problems, in addition to permitting high-strength gradient measurement. A small doped water phantom (1-3 mm droplet; T1, T2, T2* < 100 μs) within a microprobe is excited by a series of closely spaced broadband RF pulses, each followed by single-point FID acquisition. Two trial gradient waveforms have been chosen to illustrate the technique, neither of which could be measured by the conventional RF microprobe measurement. The first is an extended-duration gradient waveform, while the other illustrates the new method's ability to measure gradient waveforms with large net area and/or high amplitude. The new method is a point monitor with simple implementation and low-cost hardware requirements.
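
    As a rough illustration of the pure phase encode idea, the phase of each single-point acquisition at a known probe offset is proportional to the gradient amplitude at that excitation time, so the waveform can be read off point by point. The sketch below uses hypothetical probe and timing parameters and a synthetic trapezoidal waveform; it is not the published processing chain.

    ```python
    import numpy as np

    # Hypothetical parameters for illustration (not the hardware values used in the paper).
    GAMMA = 2.675e8          # proton gyromagnetic ratio, rad s^-1 T^-1
    X_PROBE = 5e-3           # probe offset from the gradient isocentre, m
    T_P = 50e-6              # phase-encode time between RF pulse and single-point acquisition, s

    def gradient_from_fid_points(complex_points):
        """Estimate G(t_k) from the phase of single-point FID acquisitions at a known offset.

        Each excitation is followed by one complex sample after a fixed delay T_P, so the
        accumulated phase is approximately gamma * x * G(t_k) * T_P.
        """
        phase = np.unwrap(np.angle(np.asarray(complex_points)))
        return phase / (GAMMA * X_PROBE * T_P)

    # Synthetic example: a trapezoidal gradient waveform sampled by 200 excitations.
    t = np.linspace(0, 10e-3, 200)
    g_true = np.clip(t / 2e-3, 0, 1) * np.clip((10e-3 - t) / 2e-3, 0, 1) * 0.05   # peak 50 mT/m
    fid_points = np.exp(1j * GAMMA * X_PROBE * g_true * T_P)
    print(np.allclose(gradient_from_fid_points(fid_points), g_true, atol=1e-4))
    ```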

  14. Uncertainties Associated with Flux Measurements Due to Heterogeneous Contaminant Distributions

    EPA Science Inventory

    Mass flux and mass discharge measurements at contaminated sites have been applied to assist with remedial management, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Extrapolation across un-sampled space is necessary when usi...

  15. Method And Apparatus For High Resolution Ex-Situ Nmr Spectroscopy

    DOEpatents

    Pines, Alexander; Meriles, Carlos A.; Heise, Henrike; Sakellariou, Dimitrios; Moule, Adam

    2004-01-06

    A method and apparatus for ex-situ nuclear magnetic resonance spectroscopy, for use on samples outside the physical limits of the magnets, in inhomogeneous static and radio-frequency fields. Chemical shift spectra can be resolved with the method using sequences of correlated, composite z-rotation pulses in the presence of spatially matched static and radio-frequency field gradients, producing nutation echoes. The amplitude of the echoes is modulated by the chemical shift interaction, and an inhomogeneity-free FID may be recovered by stroboscopically sampling the maxima of the echoes. In an alternative embodiment, full-passage adiabatic pulses are applied consecutively. One embodiment of the apparatus generates a static magnetic field that has a variable saddle point.
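
    A heavily simplified numerical illustration of the stroboscopic idea: in a toy echo train whose maxima are modulated only by the chemical shift, while the signal between maxima decays rapidly in the inhomogeneous field, sampling at the echo centres recovers a clean FID. The pulse sequence itself (composite z-rotation pulses, matched gradients) is not modelled, and all values are hypothetical.

    ```python
    import numpy as np

    OMEGA_CS = 2 * np.pi * 50.0       # hypothetical chemical shift, rad/s
    T2_STAR = 2e-4                    # fast inhomogeneous decay, s
    TE = 2e-3                         # echo spacing, s
    dt = 1e-5
    t = np.arange(0, 0.1, dt)

    # Toy echo train: the signal refocuses at the centre of each echo period, and the
    # chemical-shift modulation survives while the inhomogeneous decay does not.
    offset_in_echo = (t % TE) - TE / 2
    signal = np.exp(1j * OMEGA_CS * t) * np.exp(-np.abs(offset_in_echo) / T2_STAR)

    # Stroboscopic sampling at the echo maxima recovers an inhomogeneity-free FID.
    maxima = np.isclose(t % TE, TE / 2, atol=dt / 2)
    fid = signal[maxima]
    print(len(fid), np.allclose(np.abs(fid), 1.0, atol=1e-6))
    ```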

  16. New method for stock-tank oil compositional analysis.

    PubMed

    McAndrews, Kristine; Nighswander, John; Kotzakoulakis, Konstantin; Ross, Paul; Schroeder, Helmut

    2009-01-01

    A new method for accurately determining stock-tank oil composition to normal pentatriacontane using gas chromatography is developed and validated. The new method addresses the potential errors associated with the traditional equipment and technique employed for extended hydrocarbon gas chromatography outside a controlled laboratory environment, such as on an offshore oil platform. In particular, the experimental measurement of stock-tank oil molecular weight with the freezing point depression technique and the use of an internal standard to find the unrecovered sample fraction are replaced with correlations for estimating these properties. The use of correlations reduces the number of necessary experimental steps in completing the required sample preparation and analysis, resulting in reduced uncertainty in the analysis.

  17. Rapid and effective processing of blood specimens for diagnostic PCR using filter paper and Chelex-100.

    PubMed Central

    Polski, J M; Kimzey, S; Percival, R W; Grosso, L E

    1998-01-01

    AIM: To provide a more efficient method for isolating DNA from peripheral blood for use in diagnostic DNA mutation analysis. METHODS: The use of blood impregnated filter paper and Chelex-100 in DNA isolation was evaluated and compared with standard DNA isolation techniques. RESULTS: In polymerase chain reaction (PCR) based assays of five point mutations, identical results were obtained with DNA isolated routinely from peripheral blood and isolated using the filter paper and Chelex-100 method. CONCLUSION: In the clinical setting, this method provides a useful alternative to conventional DNA isolation. It is easily implemented and inexpensive, and provides sufficient, stable DNA for multiple assays. The potential for specimen contamination is reduced because most of the steps are performed in a single microcentrifuge tube. In addition, this method provides for easy storage and transport of samples from the point of acquisition. PMID:9893748

  18. Direct sampling from N dimensions to N dimensions applied to porous media

    NASA Astrophysics Data System (ADS)

    Adler, Pierre; Nguyen, Thang; Coelho, Daniel; Robinet, Jean Charles; Wendling, Jacques

    2014-05-01

    The reconstruction of porous media starting from some experimental data is still a very challenging problem in terms of random geometry and a very attractive one because of its innumerable industrial applications. The development of Computed Microtomography (CMT) has not diminished the need for reconstruction methods, and the availability of three-dimensional data has considerably facilitated the reconstruction of porous media. In the past, several techniques were used, such as thresholded Gaussian fields [1], simulated annealing [2] and Boolean models where polydisperse and penetrable spheres are generated randomly (see [3] for a combination with correlation functions). Recently, [4] developed the Direct Sampling method (DSM) as an alternative to multiple-point simulations. The purpose of the present work is to develop DSM and to apply it to the reconstruction of porous media made of one or several minerals [5]. Application of this method requires only a sample of the medium to be reproduced, called the Training Image (TI). The main feature of DSM can be summarized as follows. Suppose that n points (x1,…,xn) are already known in the Simulated Medium (SM) and that one wants to determine the value of an extra point x; the TI is searched in order to find a configuration (y1,…,yn) where these points have the same colors and relative positions as (x1,…,xn) in the SM; then, the value of the point y in the TI which is in the same relative position with respect to (y1,…,yn) as x with respect to (x1,…,xn) is given to x in the SM. The algorithm and its main features are briefly described. Important advantages of DSM are that it can easily generate media with several phases, whether spatially periodic or not. The searching process - i.e. the selected points y in the TI and the corresponding determined points x in the SM - will be illustrated by some short movies. The properties of the resulting SMs (such as the phase probabilities and the correlation functions) will be qualitatively and quantitatively compared to those of the TI. The major numerical parameters which influence the results and the calculation time are the size of the TI, the radius of the selection window and the acceptance threshold. They are studied and recommendations are made for their choice. For instance, the size of the TI should be at least twice the largest correlation length found in it. Some features necessitate a special analysis, such as the number of isolated points of one phase in another phase, the influence of the choice of the initial points, the influence of a modified voxel in the course of the simulation and the generation of phases with a small probability in the TI. For the real TIs that were analysed, the number of isolated points was always smaller than 0.5%; they can be suppressed with a very small influence on the statistical characteristics of the SM. The choice of the initial points has no consequences in a statistical sense. Finally, some initial tests show that the permeabilities of simulated samples and of the TI are close. REFERENCES: [1] Adler P.M., Jacquin C.G. & Quiblier J.A., Int. J. Multiphase Flow, 16 (1990), 691. [2] Hazlett R.D., Math. Geol., 29 (1997), 801. [3] Thovert J.-F., Adler P.M., Phys. Rev. E, 83 (2011), 031104. [4] Mariethoz G., Renard P. and Straubhaar J., Water Resour. Res., 46, 10.1029/2008WR007621 (2010). [5] Nguyen Kim T., Direct sampling applied to porous media, Ph.D. Thesis, University P. and M. Curie, Paris (2013).
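
    The point-by-point step described above (match the already-simulated neighbours of x against configurations in the TI, then copy the corresponding central value) can be sketched in a few lines. The following is a toy two-phase, 2-D illustration with hypothetical parameters, not the implementation used in the study; in particular, the neighbour selection, selection-window radius and acceptance threshold are handled far more carefully in DSM proper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def direct_sampling_step(ti, sm, x, neighbours, threshold=0.0, max_scan=500):
        """Assign a value to the unknown point x of the simulated medium (SM).

        neighbours: already-simulated points (i, j) near x.  Random locations y in the
        training image (TI) are scanned; the first y whose pattern, shifted by the same
        offsets, matches the neighbour values within `threshold` is accepted, otherwise
        the best candidate seen during the scan is used.
        """
        offsets = [(i - x[0], j - x[1]) for i, j in neighbours]
        values = np.array([sm[p] for p in neighbours])
        h, w = ti.shape
        best_y, best_dist = None, np.inf
        for _ in range(max_scan):
            y = (int(rng.integers(h)), int(rng.integers(w)))
            pts = [(y[0] + di, y[1] + dj) for di, dj in offsets]
            if not all(0 <= i < h and 0 <= j < w for i, j in pts):
                continue                                  # pattern falls outside the TI
            pattern = np.array([ti[p] for p in pts])
            dist = np.mean(pattern != values)             # fraction of mismatching neighbours
            if dist < best_dist:
                best_y, best_dist = y, dist
            if dist <= threshold:
                break
        return ti[best_y] if best_y is not None else ti[rng.integers(h), rng.integers(w)]

    # Toy binary training image (two phases) and a partially simulated medium.
    ti = (rng.random((64, 64)) < 0.3).astype(int)
    sm = np.full((32, 32), -1)
    sm[10, 10], sm[10, 12], sm[12, 11] = ti[5, 5], ti[5, 7], ti[7, 6]
    sm[11, 11] = direct_sampling_step(ti, sm, (11, 11), [(10, 10), (10, 12), (12, 11)])
    print(sm[11, 11])
    ```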

  19. Sampling challenges in a study examining refugee resettlement

    PubMed Central

    2011-01-01

    Background As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. Methods A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data were used to assess the representativeness of the sample. Results A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% of Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Conclusions Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation. PMID:21406104
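
    As a schematic of the recruitment logic only, the sketch below grows a sample from several independent entry points and limits referrals per participant; the contact network, names and parameters are entirely hypothetical, and the actual study of course relied on field work rather than a graph traversal.

    ```python
    import random

    def snowball_sample(contacts, seeds, target_size, referrals_per_person=3, seed=0):
        """Snowball recruitment from multiple independent entry points.

        contacts: dict mapping each person to the people they can refer.  Starting from
        several seeds (entry points into different community networks) reduces the
        selection bias of growing the whole sample from a single chain.
        """
        rng = random.Random(seed)
        sample, frontier = [], list(seeds)
        recruited = set(seeds)
        while frontier and len(sample) < target_size:
            person = frontier.pop(0)
            sample.append(person)
            referrals = [p for p in contacts.get(person, []) if p not in recruited]
            rng.shuffle(referrals)
            for p in referrals[:referrals_per_person]:
                recruited.add(p)
                frontier.append(p)
        return sample

    # Hypothetical contact network with two entry points.
    network = {"a1": ["a2", "a3"], "a2": ["a4"], "b1": ["b2", "b3"], "b3": ["b4", "a4"]}
    print(snowball_sample(network, seeds=["a1", "b1"], target_size=6))
    ```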

  20. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
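
    For orientation, a minimal sketch of the well-known indirect method of moments mentioned above: fit a Pearson type 3 distribution to the logarithms of the flows via their mean, standard deviation and skew, and read off a quantile for a given return period. The generalized and adaptive mixed-moment estimators developed in the paper are not reproduced here, and the flood series below is synthetic.

    ```python
    import numpy as np
    from scipy import stats

    def lp3_quantile_indirect_mom(sample, return_period):
        """Indirect method of moments for the log Pearson type 3 distribution.

        Fits a Pearson type 3 (shifted gamma) to the logarithms of the data via their
        mean, standard deviation and skew, then returns the quantile for the given
        return period.  Assumes a nonzero sample skew of the log-transformed data.
        """
        y = np.log(np.asarray(sample, dtype=float))
        mu, sigma = y.mean(), y.std(ddof=1)
        g = stats.skew(y, bias=False)
        alpha = 4.0 / g**2                      # gamma shape
        beta = sigma * g / 2.0                  # signed scale
        tau = mu - 2.0 * sigma / g              # location
        p = 1.0 - 1.0 / return_period           # non-exceedance probability
        if beta > 0:
            yq = tau + beta * stats.gamma.ppf(p, alpha)
        else:                                   # negative skew: reflected gamma
            yq = tau + beta * stats.gamma.ppf(1.0 - p, alpha)
        return float(np.exp(yq))

    # Hypothetical annual flood series (m^3/s); estimate the 100-year event.
    rng = np.random.default_rng(2)
    flows = np.exp(rng.gamma(shape=5.0, scale=0.3, size=60) + 4.0)
    print(round(lp3_quantile_indirect_mom(flows, 100), 1))
    ```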
