Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
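The equivalence tested here has a simple physical basis: when equal aliquots of well-mixed water are combined, the composite's concentration equals the arithmetic mean of the point concentrations, before any mixing or analytical error. A minimal sketch with invented CFU values (not data from the study):

```python
# Hypothetical E. coli concentrations (CFU/100 mL) at three points on one beach.
point_concentrations = [120.0, 450.0, 90.0]

# Arithmetic average of the multiple-point samples.
arithmetic_mean = sum(point_concentrations) / len(point_concentrations)

# Composite sample: an equal aliquot volume v from each point, total volume n*v.
v = 0.1  # liters per aliquot (arbitrary)
total_cfu = sum(c * v for c in point_concentrations)
composite_concentration = total_cfu / (v * len(point_concentrations))

print(arithmetic_mean, composite_concentration)  # identical in the ideal case
```

Any real difference between the two estimators therefore comes from imperfect mixing and analytical variability, which is what the study's t-test probes.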
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
The chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed using scan statistics that are computationally intensive and designed for either common or rare change-point detection. In this paper, we propose a novel multiple sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while its ability to detect CNVs is comparable or better. PMID:28090239
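The core of SaRa is a local diagnostic that compares sample means in two flanking windows; change-point candidates are the local maxima of its absolute value that exceed a threshold. A single-sample sketch (the window size and threshold below are illustrative choices, not the paper's tuning):

```python
def local_diagnostic(y, x, h):
    """D(x) = mean of the right window minus mean of the left window."""
    left = y[x - h:x]
    right = y[x:x + h]
    return sum(right) / h - sum(left) / h

def screen_and_rank(y, h, threshold):
    """Return positions where |D| is an h-local maximum above the threshold."""
    n = len(y)
    d = [abs(local_diagnostic(y, x, h)) for x in range(h, n - h + 1)]
    candidates = []
    for i, val in enumerate(d):
        lo, hi = max(0, i - h), min(len(d), i + h + 1)
        if val >= threshold and val == max(d[lo:hi]):
            candidates.append(i + h)  # convert back to a position in y
    return candidates

# Toy intensity sequence with one mean shift at position 20.
y = [0.0] * 20 + [3.0] * 20
print(screen_and_rank(y, h=5, threshold=1.0))  # → [20]
```

The multiple-sample extension in the paper combines such statistics across samples; this sketch only shows the single-sequence screening step.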
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize a sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
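The random-versus-systematic contrast can be illustrated on a two-variable integral. The systematic point set below is a midpoint product grid, used only as a simple stand-in for Conroy's closed, symmetric pattern (which uses a number-theoretic construction); the test integrand and sizes are invented:

```python
import random

def f(x, y):
    return x * y  # exact integral over the unit square is 0.25

# Monte Carlo: sample points distributed randomly over the integration region.
random.seed(0)
n = 10_000
mc_estimate = sum(f(random.random(), random.random()) for _ in range(n)) / n

# Systematic sampling: a midpoint product grid that fills the region evenly
# (an illustrative substitute for Conroy's symmetric point pattern).
m = 100
grid_estimate = sum(
    f((i + 0.5) / m, (j + 0.5) / m) for i in range(m) for j in range(m)
) / (m * m)

print(mc_estimate, grid_estimate)
```

For smooth integrands, well-placed systematic points typically converge faster than random points at the same sample count, which is the practical motivation for the Conroy scheme.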
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can handle multiple multiscale scatterers, even when the different components are close to each other.
Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E
2017-09-01
Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Point detection of bacterial and viral pathogens using oral samples
NASA Astrophysics Data System (ADS)
Malamud, Daniel
2008-04-01
Oral samples, including saliva, offer an attractive alternative to serum or urine for diagnostic testing. This is particularly true for point-of-use detection systems. The various types of oral samples that have been reported in the literature are presented here along with the wide variety of analytes that have been measured in saliva and other oral samples. The paper focuses on utilizing point-detection of infectious disease agents, and presents work from our group on a rapid test for multiple bacterial and viral pathogens by monitoring a series of targets. It is thus possible in a single oral sample to identify multiple pathogens based on specific antigens, nucleic acids, and host antibodies to those pathogens. The value of such a technology for detecting agents of bioterrorism at remote sites is discussed.
CMOS imager for pointing and tracking applications
NASA Technical Reports Server (NTRS)
Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)
2006-01-01
Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.
Morrill, K M; Robertson, K E; Spring, M M; Robinson, A L; Tyler, H D
2015-01-01
The objectives of this study were to (1) validate a method using refractometry to rapidly and accurately determine immunoglobulin (IgG) concentration in Jersey colostrum, (2) determine whether there should be different refractive index (nD) and %Brix cut points for Jersey colostrum, and (3) evaluate the effect of multiple freeze-thaw (FT) cycles on radial immunodiffusion (RID) and a digital refractometer to determine IgG concentration in Jersey colostrum. Samples (n=58; 3 L) of colostrum were collected from a dairy in northwestern Iowa. Samples were analyzed within 2 h of collection for IgG concentration by RID, %Brix, and nD by refractometer and an estimate of IgG by colostrometer. Samples were frozen, placed on dry ice, and transported to the laboratory at Iowa State University (Ames). Samples arrived frozen and were placed in a -20°C manual-defrost freezer until further analysis. On d 7 (1FT), d 14 (2FT), and 1 yr (3FT) all samples were thawed, analyzed for IgG by RID, %Brix, nD by refractometer, and IgG estimate by colostrometer, and frozen until reanalysis at the next time point. Fresh colostrum had a mean (±SD) IgG concentration of 72.91 (±33.53) mg/mL, 21.24% (±4.43) Brix, and nD 1.3669 (±0.0074). Multiple FT cycles did affect IgG as determined by RID and colostrometer reading. The IgG concentrations were greater in fresh and 1FT samples as compared with 2FT and 3FT samples (72.91, 75.38, 67.20, and 67.31 mg of IgG/mL, respectively). The colostrometer reading was lower in 1FT samples compared with fresh and 2FT samples. Multiple FT cycles had no effect on nD or %Brix reading. In fresh samples, IgG concentration was moderately correlated with nD (r=0.79), %Brix (r=0.79), and colostrometer reading (r=0.79). Diagnostic test characteristics using the recommended cut point of 1.35966 nD resulted in similar sensitivities for 1FT and 2FT samples (94.87 and 94.74%, respectively).
Cut points of 18 and 19% Brix resulted in the greatest sensitivities (92.31 and 84.62%) and specificity (94.74 and 94.74%, respectively). The 18% Brix cut point resulted in 94.83% of the samples being correctly classified based on IgG concentration. These data support the use of digital refractometer to accurately and rapidly determine IgG concentration in fresh Jersey colostrum. Additionally, these data suggest that IgG concentration determined by RID is affected by multiple FT cycles, whereas estimates obtained by refractometer are not affected by multiple FT cycles. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
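Cut-point evaluations of this kind reduce to a 2x2 confusion table against the RID gold standard. A sketch with invented paired readings, treating IgG >= 50 mg/mL as adequate colostrum (the threshold and data are illustrative, not the study's):

```python
def cut_point_stats(brix, igg, brix_cut=18.0, igg_cut=50.0):
    """Sensitivity and specificity of a %Brix cut point against RID IgG."""
    tp = tn = fp = fn = 0
    for b, g in zip(brix, igg):
        adequate = g >= igg_cut   # gold standard (RID)
        called = b >= brix_cut    # refractometer call
        if adequate and called:
            tp += 1
        elif adequate and not called:
            fn += 1
        elif not adequate and called:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Invented paired readings: (%Brix, RID IgG in mg/mL).
brix = [22.0, 19.5, 17.0, 15.0, 20.0, 16.5]
igg  = [80.0, 60.0, 55.0, 30.0, 70.0, 40.0]
print(cut_point_stats(brix, igg))  # → (0.75, 1.0)
```

Scanning candidate cut points with a function like this is how the 18% and 19% Brix comparisons in the abstract are made.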
USDA-ARS?s Scientific Manuscript database
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
Gene genealogies for genetic association mapping, with application to Crohn's disease
Burkett, Kelly M.; Greenwood, Celia M. T.; McNeney, Brad; Graham, Jinko
2013-01-01
A gene genealogy describes relationships among haplotypes sampled from a population. Knowledge of the gene genealogy for a set of haplotypes is useful for estimation of population genetic parameters and it also has potential application in finding disease-predisposing genetic variants. As the true gene genealogy is unknown, Markov chain Monte Carlo (MCMC) approaches have been used to sample genealogies conditional on data at multiple genetic markers. We previously implemented an MCMC algorithm to sample from an approximation to the distribution of the gene genealogy conditional on haplotype data. Our approach samples ancestral trees, recombination and mutation rates at a genomic focal point. In this work, we describe how our sampler can be used to find disease-predisposing genetic variants in samples of cases and controls. We use a tree-based association statistic that quantifies the degree to which case haplotypes are more closely related to each other around the focal point than control haplotypes, without relying on a disease model. As the ancestral tree is a latent variable, so is the tree-based association statistic. We show how the sampler can be used to estimate the posterior distribution of the latent test statistic and corresponding latent p-values, which together comprise a fuzzy p-value. We illustrate the approach on a publicly-available dataset from a study of Crohn's disease that consists of genotypes at multiple SNP markers in a small genomic region. We estimate the posterior distribution of the tree-based association statistic and the recombination rate at multiple focal points in the region. Reassuringly, the posterior mean recombination rates estimated at the different focal points are consistent with previously published estimates. The tree-based association approach finds multiple sub-regions where the case haplotypes are more genetically related than the control haplotypes, and that there may be one or multiple disease-predisposing loci. 
PMID:24348515
Oligoclonal bands in a sample of cerebrospinal fluid may point to a diagnosis of multiple sclerosis (MS). The test helps support the diagnosis of MS; however, it does not confirm the diagnosis.
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen et al. Recently bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: a time-course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes in the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements of our methods.
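A standard bifurcation-based early-warning signal of the kind examined here is rising lag-1 autocorrelation ("critical slowing down") as a system approaches a critical time point. A minimal sketch of the statistic (the series below are toy illustrations, not the RNA-Seq data):

```python
def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation of a time series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var

# A slowly relaxing series (high autocorrelation, as near a bifurcation)
# versus rapidly alternating noise (strongly negative autocorrelation).
slow = [0.9 ** i for i in range(50)]
alternating = [(-1) ** i for i in range(50)]
print(lag1_autocorrelation(slow), lag1_autocorrelation(alternating))
```

Tracking how this statistic changes across time points is one way to flag a candidate critical time point when replicate samples are unavailable.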
Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.
ERIC Educational Resources Information Center
Kratochvil, Byron
1980-01-01
Presents an undergraduate experiment demonstrating sampling error. The sampling system selected is a mixture of potassium hydrogen phthalate and sucrose; a self-zeroing, automatically refillable buret minimizes the titration time of multiple samples, and a dilute back-titrant yields high end-point precision. (CS)
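The sampling error being demonstrated follows binomial statistics: if a fraction p of the particles in a well-mixed two-component mixture is the analyte, the relative standard deviation of the analyte proportion in an n-particle sample is sqrt(p(1-p)/n)/p, falling as 1/sqrt(n). A sketch with invented numbers:

```python
import math

def relative_sampling_std(p, n):
    """Relative standard deviation of the analyte proportion when n particles
    are drawn from a well-mixed two-component particulate mixture."""
    return math.sqrt(p * (1 - p) / n) / p

# Illustrative: 10% KHP particles by count, at two sample sizes.
for n in (100, 10_000):
    print(n, relative_sampling_std(0.10, n))  # 0.30, then 0.03
```

A hundredfold increase in the number of particles sampled cuts the relative sampling error tenfold, which is the quantitative point of the experiment.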
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis
ERIC Educational Resources Information Center
Logan, Jessica
2017-01-01
The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…
NASA Astrophysics Data System (ADS)
Oriani, Fabio
2017-04-01
The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
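The Direct Sampling idea for a time series can be sketched in a few lines: to simulate the next value, scan the training series at random for a recent-history window sufficiently similar to the simulated history, and copy the datum that follows it. This is a bare-bones illustration of the principle, not the full parameterization of the published algorithm:

```python
import random

def direct_sampling_step(training, history, threshold, rng):
    """Return a simulated next value: visit scan positions in random order and
    copy the value following the first training window similar to `history`."""
    m = len(history)
    positions = list(range(m, len(training)))
    rng.shuffle(positions)
    best_pos, best_dist = None, float("inf")
    for pos in positions:
        window = training[pos - m:pos]
        dist = sum(abs(a - b) for a, b in zip(window, history)) / m
        if dist <= threshold:
            return training[pos]      # accept the first sufficiently similar match
        if dist < best_dist:
            best_pos, best_dist = pos, dist
    return training[best_pos]         # fall back to the closest window found

# Periodic "rainfall-like" training series, so an exact match always exists.
training = [0.0, 1.0, 2.0, 1.0] * 10
rng = random.Random(42)
print(direct_sampling_step(training, [1.0, 2.0], threshold=0.0, rng=rng))  # → 1.0
```

Because no explicit probability model is fitted, the simulated series inherits the training data's multi-scale statistics directly, which is the technique's main selling point.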
A line-scan hyperspectral Raman system for spatially offset Raman spectroscopy
USDA-ARS?s Scientific Manuscript database
Conventional methods of spatially offset Raman spectroscopy (SORS) typically use single-fiber optical measurement probes to slowly and incrementally collect a series of spatially offset point measurements moving away from the laser excitation point on the sample surface, or arrays of multiple fiber ...
[Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].
He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo
2010-01-01
The aim was to determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen, and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined by reverse-phase HPLC with an ultraviolet detector at 300 nm. Conditions for the pretreatment of the water samples and the parameters of the chromatographic separation were optimized. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid, and sensitive, and can meet the requirements for simultaneous determination of multiple biphenyl ether herbicides in natural waters.
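The quantitation behind figures like these is an external calibration line (peak area versus concentration) plus spiked-recovery checks. A stdlib sketch with invented areas and a perfectly linear response (the slope, intercept, and spike level are illustrative only):

```python
def fit_line(x, y):
    """Ordinary least squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Invented calibration standards across the 0.05-2.00 mg/L working range.
conc = [0.05, 0.10, 0.50, 1.00, 2.00]
area = [c * 1500.0 + 10.0 for c in conc]  # perfectly linear, for illustration

slope, intercept = fit_line(conc, area)

def area_to_conc(a):
    return (a - intercept) / slope

# Spiked recovery: (measured in spiked sample - background) / amount spiked.
background, spike = 0.20, 1.00  # mg/L
measured = area_to_conc((background + 0.95 * spike) * 1500.0 + 10.0)
recovery_pct = (measured - background) / spike * 100.0
print(round(recovery_pct, 1))  # → 95.0
```

Real calibrations report r values slightly below 1 (here 0.9991-0.9998) because the detector response is only approximately linear.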
Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.
Lee, Sunbok; Lei, Man-Kit; Brody, Gene H
2015-06-01
Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
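The quantity whose confidence interval is at issue is simple to compute: two simple regression lines y = a1 + b1*x and y = a2 + b2*x cross at x* = (a2 - a1)/(b1 - b2), and the interaction is disordinal when x* falls inside the observed range of x. A sketch with invented coefficients (the interval-construction methods compared in the study are not reproduced here):

```python
def crossover(a1, b1, a2, b2):
    """x-coordinate where y = a1 + b1*x and y = a2 + b2*x intersect."""
    return (a2 - a1) / (b1 - b2)

# Invented simple-regression lines for two groups.
x_star = crossover(a1=2.0, b1=1.5, a2=5.0, b2=0.5)
print(x_star)  # → 3.0

# Disordinal if the crossover lies inside the observed range of x.
x_range = (0.0, 10.0)
disordinal = x_range[0] <= x_star <= x_range[1]
print(disordinal)  # → True
```

Because x* is a ratio of estimated coefficients, its sampling distribution can be heavy-tailed, which is why naive intervals can become abnormally wide and why Fieller- and bootstrap-based intervals are among the methods compared.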
NASA Astrophysics Data System (ADS)
Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.
2017-09-01
This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
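For an in-line four-point probe on a thin, laterally infinite film, the ideal sheet resistance is R_s = (pi/ln 2) * V/I; the correction factors tabulated in work like this multiply that ideal value to account for finite size, thickness, and probe placement. A sketch (the correction value below is a made-up illustration, not one of the paper's tabulated factors):

```python
import math

def sheet_resistance(voltage, current, correction=1.0):
    """Four-point probe sheet resistance (ohms/square) with an optional
    geometric correction factor for non-ideal sample geometry."""
    return correction * (math.pi / math.log(2)) * voltage / current

# Ideal case: the geometric prefactor pi/ln 2 ≈ 4.5324.
ideal = sheet_resistance(voltage=1.0e-3, current=1.0e-3)
print(round(ideal, 4))

# Hypothetical finite-size correction (illustrative value only).
corrected = sheet_resistance(1.0e-3, 1.0e-3, correction=0.92)
```

The paper's contribution is precisely the library of such correction factors for combined non-idealities (narrow samples, asymmetric probe placement, multilayer conduction), where the ideal prefactor alone can be badly wrong.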
Interaction of pulsating and spinning waves in condensed phase combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booty, M.R.; Margolis, S.B.; Matkowsky, B.J.
1986-10-01
The authors employ a nonlinear stability analysis in the neighborhood of a multiple bifurcation point to describe the interaction of pulsating and spinning modes of condensed phase combustion. Such phenomena occur in the synthesis of refractory materials. In particular, they consider the propagation of combustion waves in a long thermally insulated cylindrical sample and show that steady, planar combustion is stable for a modified activation energy/melting parameter less than a critical value. Above this critical value primary bifurcation states, corresponding to time-periodic pulsating and spinning modes of combustion, emanate from the steadily propagating solution. By varying the sample radius, the authors split a multiple bifurcation point to obtain bifurcation diagrams which exhibit secondary, tertiary, and quaternary branching to various types of quasi-periodic combustion waves.
Apparently abnormal Wechsler Memory Scale index score patterns in the normal population.
Carrasco, Roman Marcus; Grups, Josefine; Evans, Brittney; Simco, Edward; Mittenberg, Wiley
2015-01-01
Interpretation of the Wechsler Memory Scale-Fourth Edition may involve examination of multiple memory index score contrasts and similar comparisons with Wechsler Adult Intelligence Scale-Fourth Edition ability indexes. Standardization sample data suggest that 15-point differences between any specific pair of index scores are relatively uncommon in normal individuals, but these base rates refer to a comparison between a single pair of indexes rather than multiple simultaneous comparisons among indexes. This study provides normative data for the occurrence of multiple index score differences calculated by using Monte Carlo simulations and validated against standardization data. Differences of 15 points between any two memory indexes or between memory and ability indexes occurred in 60% and 48% of the normative sample, respectively. Wechsler index score discrepancies are normally common and therefore not clinically meaningful when numerous such comparisons are made. Explicit prior interpretive hypotheses are necessary to reduce the number of index comparisons and associated false-positive conclusions. Monte Carlo simulation accurately predicts these false-positive rates.
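The multiple-comparison inflation reported here can be reproduced with a small Monte Carlo: draw correlated index scores (mean 100, SD 15) and compare the rate of a >= 15-point gap for one pre-specified pair against the rate for any pair. The equicorrelated construction and the r value below are illustrative assumptions, not the actual WMS-IV correlation structure:

```python
import math
import random

def simulate(n_indexes=5, r=0.6, n_trials=20_000, seed=1):
    """Rates of a >= 15-point gap for one fixed pair vs. any pair of indexes."""
    rng = random.Random(seed)
    single_pair_hits = any_pair_hits = 0
    for _ in range(n_trials):
        shared = rng.gauss(0.0, 1.0)
        # Equicorrelated scores: corr(i, j) = r for all pairs.
        scores = [100.0 + 15.0 * (math.sqrt(r) * shared +
                                  math.sqrt(1.0 - r) * rng.gauss(0.0, 1.0))
                  for _ in range(n_indexes)]
        diffs = [abs(scores[i] - scores[j])
                 for i in range(n_indexes) for j in range(i + 1, n_indexes)]
        single_pair_hits += diffs[0] >= 15.0   # one pre-specified comparison
        any_pair_hits += max(diffs) >= 15.0    # any of the pairwise comparisons
    return single_pair_hits / n_trials, any_pair_hits / n_trials

single, any_pair = simulate()
print(single, any_pair)  # the any-pair rate is necessarily at least as large
```

The gap between the two rates is the false-positive inflation the abstract quantifies, and it grows with the number of simultaneous index comparisons.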
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
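The SC objective can be checked directly for any candidate scheme: in dMRI, where a direction u and its antipode -u are equivalent, the separation between two sampling directions is arccos(|u·v|), and a scheme's quality is its minimal pairwise separation. A sketch of the evaluation step only (not the paper's MILP or gradient-descent solvers):

```python
import math

def min_angular_separation(points):
    """Smallest pairwise angle (degrees) between unit sampling directions,
    identifying antipodal points as in diffusion MRI."""
    best = 180.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dot = abs(sum(a * b for a, b in zip(points[i], points[j])))
            best = min(best, math.degrees(math.acos(min(1.0, dot))))
    return best

# The three coordinate axes achieve a 90-degree minimal separation.
axes = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(min_angular_separation(axes))  # 90 degrees

# A scheme with two nearly parallel directions scores poorly.
bad = [(1.0, 0.0, 0.0), (0.99995, 0.01, 0.0), (0.0, 0.0, 1.0)]
print(min_angular_separation(bad))  # well under 1 degree
```

The SC formulation maximizes this quantity over candidate point sets, whereas EEM minimizes an electrostatic surrogate that is only indirectly related to it.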
More practical critical height sampling.
Thomas B. Lynch; Jeffrey H. Gove
2015-01-01
Critical Height Sampling (CHS) (Kitamura 1964) can be used to predict cubic volumes per acre without using volume tables or equations. The critical height is defined as the height at which the tree stem appears to be in borderline condition using the point-sampling angle gauge (e.g. prism). An estimate of cubic volume per acre can be obtained from multiplication of the...
The relevance of time series in molecular ecology and conservation biology.
Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E
2014-05-01
The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
Quadrupole ion traps and trap arrays: geometry, material, scale, performance.
Ouyang, Z; Gao, L; Fico, M; Chappell, W J; Noll, R J; Cooks, R G
2007-01-01
Quadrupole ion traps are reviewed, emphasizing recent developments, especially the investigation of new geometries, guided by multiple particle simulations such as the ITSIM program. These geometries include linear ion traps (LITs) and the simplified rectilinear ion trap (RIT). Various methods of fabrication are described, including the use of rapid prototyping apparatus (RPA), in which 3D objects are generated through point-by-point laser polymerization. Fabrication in silicon using multilayer semi-conductor fabrication techniques has been used to construct arrays of micro-traps. The performance of instruments containing individual traps as well as arrays of traps of various sizes and geometries is reviewed. Two types of array are differentiated. In the first type, trap arrays constitute fully multiplexed mass spectrometers in which multiple samples are examined using multiple sources, analyzers and detectors, to achieve high throughput analysis. In the second, an array of individual traps acts collectively as a composite trap to increase trapping capacity and performance for a single sample. Much progress has been made in building miniaturized mass spectrometers; a specific example is a 10 kg hand-held tandem mass spectrometer based on the RIT mass analyzer. The performance of this instrument in air and water analysis, using membrane sampling, is described.
ERIC Educational Resources Information Center
Whitehouse, Andrew J. O.; Mattes, Eugen; Maybery, Murray T.; Sawyer, Michael G.; Jacoby, Peter; Keelan, Jeffrey A.; Hickey, Martha
2012-01-01
Background: Preliminary evidence suggests that prenatal testosterone exposure may be associated with language delay. However, no study has examined a large sample of children at multiple time-points. Methods: Umbilical cord blood samples were obtained at 861 births and analysed for bioavailable testosterone (BioT) concentrations. When…
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r² = 0.84), the two-sampling point model (0.5 and 8 h; r² = 0.93), the three-sampling point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90% confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict midazolam AUC accurately.
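The bias and precision metrics named above (MPE, MAE, RMSE) reduce to simple formulas. A minimal sketch with invented predicted and observed AUC values; expressing each error as a percentage of the observed value is a common convention in the limited-sampling literature, assumed here since the abstract does not give the exact formulas:

```python
import math

def prediction_errors(predicted, observed):
    """Bias and precision for a limited-sampling model: mean prediction
    error (MPE, bias), mean absolute error (MAE), and root mean squared
    error (RMSE, precision), each as a percentage of the observed value."""
    pe = [(p - o) / o * 100 for p, o in zip(predicted, observed)]
    mpe = sum(pe) / len(pe)
    mae = sum(abs(e) for e in pe) / len(pe)
    rmse = math.sqrt(sum(e * e for e in pe) / len(pe))
    return mpe, mae, rmse

# Hypothetical predicted vs. observed AUC values (arbitrary units).
mpe, mae, rmse = prediction_errors([95, 110, 102], [100, 100, 100])
```

A near-zero MPE with a large RMSE would indicate an unbiased but imprecise model, which is the pattern that disqualified the one- and two-sampling point models here.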
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Xuefei; Zhou, S. Kevin; Rasselkorde, El Mahjoub
The study presents a data processing methodology for weld build-up using multiple scan patterns. To achieve an overall high probability of detection for flaws with different orientations, an inspection procedure with three different scan patterns is proposed. The three scan patterns are radial-tangential longitude wave pattern, axial-radial longitude wave pattern, and tangential shear wave pattern. Scientific fusion of the inspection data is implemented using volume reconstruction techniques. The idea is to perform spatial domain forward data mapping for all sampling points. A conservative scheme is employed to handle the case that multiple sampling points are mapped to one grid location. The scheme assigns the maximum value for the grid location to retain the largest equivalent reflector size for the location. The methodology is demonstrated and validated using a realistic ring of weld build-up. Tungsten balls and bars are embedded to the weld build-up during manufacturing process to represent natural flaws. Flat bottomed holes and side drilled holes are installed as artificial flaws. Automatic flaw identification and extraction are demonstrated. Results indicate the inspection procedure with multiple scan patterns can identify all the artificial and natural flaws.
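The conservative max-value fusion scheme described above can be sketched directly: each sampling point is forward-mapped to a grid cell, and when several points land in the same cell the largest amplitude wins, preserving the largest equivalent reflector size. The 2D grid shape and sample format are illustrative assumptions (the actual reconstruction is volumetric):

```python
def fuse_to_grid(samples, shape):
    """Forward-map sampling points onto a grid, keeping the maximum
    amplitude when several points map to one cell, so the largest
    equivalent reflector size at that location is retained."""
    grid = [[0.0] * shape[1] for _ in range(shape[0])]
    for (i, j), amp in samples:
        grid[i][j] = max(grid[i][j], amp)
    return grid

# Three sampling points; two of them map to the same grid cell (1, 1).
scan = [((0, 0), 0.2), ((1, 1), 0.7), ((1, 1), 0.4)]
fused = fuse_to_grid(scan, (2, 2))
```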
How Much Is Too Little to Detect Impacts? A Case Study of a Nuclear Power Plant
Mayer-Pinto, Mariana; Ignacio, Barbara L.; Széchy, Maria T. M.; Viana, Mariana S.; Curbelo-Fernandez, Maria P.; Lavrado, Helena P.; Junqueira, Andrea O. R.; Vilanova, Eduardo; Silva, Sérgio H. G.
2012-01-01
Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, and the extent, of a nuclear power plant effluent on sessile epibiota assemblages using a suitable and feasible sampling design with no ‘before’ data and budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of the discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of the discharge). However, depending on the time of the year, the impact reaches up to 600 m. We found a significantly lower richness of taxa in the Effluent site when compared to other sites. Furthermore, at all times, the variability of assemblages near the discharge was also smaller than in other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact, depending on the time of sampling. Our findings greatly contribute to increasing knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology, in general. PMID:23110117
NASA Astrophysics Data System (ADS)
Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia
2017-04-01
Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping. Sampsa Hamalainen, Xiaoyuan Geng, and Juanxia He. AAFC - Agriculture and Agri-Food Canada, Ottawa, Canada. The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been developed for some time now; however, the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and variability in the range of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced as a result of a Digital Terrain Analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The range of specific points created in LHS was 50-200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of covariates included within the work ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external Surficial Geology data, were also included in the Latin Hypercube Sampling approach. Some initial results of the work include using a 1000-iteration variable within the LHS model.
A total of 1000 iterations consistently proved a reasonable value for producing sampling points that gave a good spatial representation of the environmental attributes. When working within the same spatial resolution for covariates but modifying only the desired number of sampling points produced, the change of point location portrayed a strong geospatial relationship when using continuous data. Access to agricultural fields and adjacent land uses is often "pinned" as the greatest deterrent to performing soil sampling for both soil survey and soil attribute validation work. The lack of access can be a result of poor road access and/or geographical conditions that are difficult for field workers to navigate. This is a simple yet persistent issue for the scientific community, and for soils professionals in particular, to overcome. The ability to assist with ease of access to sampling points will in the future be a contribution to the Latin Hypercube Sampling (LHS) approach. By first removing inaccessible locations from the DEM, the LHS model can be restricted to locations with access from an adjacent road or trail. To further the approach, a road network geospatial dataset can be included within Geographic Information Systems (GIS) applications to reach already-produced points using a shortest-distance network method.
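The core stratification idea behind LHS can be sketched compactly. This is generic Latin Hypercube Sampling over a unit hypercube, not the authors' covariate-conditioned implementation: each dimension (standing in for a rescaled covariate such as TWI, LS factor, or slope) is split into n equal strata, and each stratum is sampled exactly once:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Draw n points in [0, 1)^dims so that each dimension is split
    into n equal strata and every stratum is sampled exactly once."""
    rng = random.Random(seed)
    strata = []
    for _ in range(dims):
        order = list(range(n))
        rng.shuffle(order)          # random stratum assignment per dimension
        strata.append(order)
    return [tuple((strata[d][i] + rng.random()) / n for d in range(dims))
            for i in range(n)]

# 50 sampling points over three hypothetical rescaled covariates.
pts = latin_hypercube(50, 3)
```

By construction, projecting the 50 points onto any single covariate axis covers all 50 strata, which is what gives LHS its good marginal coverage with few samples.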
NASA Astrophysics Data System (ADS)
Niknia, I.; Trevizoli, P. V.; Govindappa, P.; Christiaanse, T. V.; Teyber, R.; Rowe, A.
2018-05-01
First order transition materials (FOMs) usually exhibit magnetocaloric effects in a narrow temperature range, which complicates their use in an active magnetic regenerator (AMR) refrigerator. In addition, the magnetocaloric effect in first order materials can vary with the field and temperature history of the material. This study examines the behavior of a MnFe(P,Si) FOM sample in an AMR cycle using a numerical model and experimental measurements. For certain operating conditions, multiple points of equilibrium (MPE) exist for a fixed hot rejection temperature. Stable and unstable points of equilibrium (PEs) are identified, and the impacts of heat loads, operating conditions, and configuration losses on the number of PEs are discussed. It is shown that the existence of multiple PEs can affect the performance of an AMR significantly for certain operating conditions. In addition, the points where MPEs exist appear to be linked to the device itself, not just the material, suggesting the need to layer a regenerator in a way that avoids MPE conditions and to layer with a specific device in mind.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
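The core of temporal stability analysis is the relative difference of each point from the grid mean at each sampling time: a representative point has a mean relative difference (MRD) near zero with a small spread. A sketch under the usual TSA definitions; the observation matrix is invented:

```python
def temporal_stability(theta):
    """TSA statistics: theta[i][j] is soil moisture at point i, time j.
    Returns (mean relative difference, its standard deviation) per point."""
    n_t = len(theta[0])
    grid_mean = [sum(col) / len(theta) for col in zip(*theta)]
    stats = []
    for row in theta:
        rd = [(row[j] - grid_mean[j]) / grid_mean[j] for j in range(n_t)]
        mrd = sum(rd) / n_t
        sd = (sum((d - mrd) ** 2 for d in rd) / n_t) ** 0.5
        stats.append((mrd, sd))
    return stats

obs = [[0.20, 0.30, 0.25],   # tracks the grid mean closely
       [0.30, 0.40, 0.35],   # consistently wetter than the mean
       [0.10, 0.20, 0.15]]   # consistently drier than the mean
stats = temporal_stability(obs)
rep = min(range(len(obs)), key=lambda i: abs(stats[i][0]))
```

Point 0 sits on the grid mean at every time, so it is selected as the representative sampling point; the stratified variant (STSA) proposed in the paper would apply this analysis within strata.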
Automated soil gas monitoring chamber
Edwards, Nelson T.; Riggs, Jeffery S.
2003-07-29
A chamber for trapping soil gases as they evolve from the soil without disturbance to the soil and to the natural microclimate within the chamber has been invented. The chamber opens between measurements and therefore does not alter the metabolic processes that influence soil gas efflux rates. A multiple chamber system provides for repetitive multi-point sampling, undisturbed metabolic soil processes between sampling, and an essentially airtight sampling chamber operating at ambient pressure.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
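Once a cheap surrogate of the limit-state function is available, the failure probability can be estimated by plain Monte Carlo. This sketch uses a trivial one-dimensional stand-in for the surrogate; the Gaussian process construction and the Bayesian experimental design step are the paper's contribution and are omitted here:

```python
import random

def failure_probability(surrogate, sampler, n, seed=1):
    """Monte Carlo estimate of P(g(x) < 0), the failure probability,
    using a cheap surrogate g in place of the expensive model."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if surrogate(sampler(rng)) < 0.0)
    return fails / n

# Toy limit state: g(x) = x - 0.9 fails when x < 0.9 for x ~ U(0, 1),
# so the true failure probability is 0.9.
p = failure_probability(lambda x: x - 0.9, lambda r: r.random(), 100_000)
```

The point of the surrogate is precisely that these hundreds of thousands of evaluations would be infeasible on the expensive computer model itself.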
Application of Handheld Laser-Induced Breakdown Spectroscopy (LIBS) to Geochemical Analysis.
Connors, Brendan; Somers, Andrew; Day, David
2016-05-01
While laser-induced breakdown spectroscopy (LIBS) has been in use for decades, only within the last two years has technology progressed to the point of enabling true handheld, self-contained instruments. Several instruments are now commercially available with a range of capabilities and features. In this paper, the SciAps Z-500 handheld LIBS instrument functionality and sub-systems are reviewed. Several assayed geochemical sample sets, including igneous rocks and soils, are investigated. Calibration data are presented for multiple elements of interest along with examples of elemental mapping in heterogeneous samples. Sample preparation and the data collection method from multiple locations and data analysis are discussed. © The Author(s) 2016.
Xiong, Nana; Fritzsche, Kurt; Wei, Jing; Hong, Xia; Leonhart, Rainer; Zhao, Xudong; Zhang, Lan; Zhu, Liming; Tian, Guoqing; Nolte, Sandra; Fischer, Felix
2015-03-15
Despite the high co-morbidity of depressive symptoms in patients with multiple somatic symptoms, the validity of the 9-item Patient Health Questionnaire (PHQ-9) has not yet been investigated in Chinese patients with multiple somatic symptoms. The multicenter cross-sectional study was conducted in ten outpatient departments located in four cities in China. The psychometric properties of the PHQ-9 were examined by confirmatory factor analysis (CFA). Criterion validation was undertaken by comparing results with depression diagnoses obtained from the Mini International Neuropsychiatric Interview (MINI) as the gold standard. Overall, 491 patients were recruited, of whom 237 had multiple somatic symptoms (SOM+ group, PHQ-15 ≥ 10). Cronbach's α of the PHQ-9 was 0.87, 0.87, and 0.90 for SOM+ patients, SOM- patients, and the total sample, respectively. All items and the total score were moderately correlated. The factor models of the PHQ-9 tested by CFA yielded similar diagnostic performance when compared to sum score estimation. Multi-group confirmatory factor analysis based on a unidimensional model showed similar psychometric properties over the groups with low and high somatic symptom burden. The optimal cut-off point to detect depression in Chinese outpatients was 10 for the PHQ-9 (sensitivity = 0.77, specificity = 0.76) and 3 for the PHQ-2 (sensitivity = 0.77, specificity = 0.74). Potential limitations include selection bias and nonresponse bias associated with the applied sampling method. The PHQ-9 (cut-off point = 10) and PHQ-2 (cut-off point = 3) were reliable and valid for detecting major depression in Chinese patients with multiple somatic symptoms. Copyright © 2014 Elsevier B.V. All rights reserved.
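The sensitivity and specificity reported for a cutoff reduce to simple counts against the gold-standard diagnosis. A sketch with invented PHQ-9 scores and MINI diagnoses (a score at or above the cutoff flags a positive screen):

```python
def screen_accuracy(scores, diagnosed, cutoff):
    """Sensitivity and specificity of a questionnaire cutoff against a
    gold-standard diagnosis (score >= cutoff flags a positive screen)."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, diagnosed))
    fn = sum(s < cutoff and d for s, d in zip(scores, diagnosed))
    tn = sum(s < cutoff and not d for s, d in zip(scores, diagnosed))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, diagnosed))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical PHQ-9 totals paired with MINI depression diagnoses.
phq9 = [12, 4, 15, 9, 11, 3, 10, 6]
mini = [True, False, True, False, True, False, False, False]
sens, spec = screen_accuracy(phq9, mini, cutoff=10)
```

Choosing the "optimal" cutoff, as in the study, means repeating this calculation across candidate cutoffs and balancing the two rates.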
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
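The variance test described above can be sketched serially on a CPU; the GPU parallelism, the shift lookup table, and the actual elemental-image geometry are omitted, and the intensity values and threshold are arbitrary assumptions:

```python
def classify_focus(samples_per_point, threshold):
    """Label each reconstructed 3D point as in-focus (low intensity
    variance across its corresponding elemental-image samples) or
    off-focus (high variance)."""
    labels = []
    for samples in samples_per_point:
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        labels.append("focus" if var <= threshold else "off-focus")
    return labels

# Two points: consistent intensities (object surface) vs. scattered
# intensities (free-space point that belongs to no surface).
labels = classify_focus([[100, 101, 99, 100], [40, 180, 90, 220]],
                        threshold=25.0)
```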
Modeling abundance using hierarchical distance sampling
Royle, Andy; Kery, Marc
2016-01-01
In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
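A concrete piece of the machinery above is the half-normal detection function used for point-transect data. This is generic distance-sampling math, not the unmarked or BUGS code the chapter develops; the plot radius and scale parameter are illustrative:

```python
import math

def halfnormal_g(d, sigma):
    """Half-normal detection function g(d) = exp(-d^2 / (2 sigma^2))."""
    return math.exp(-d * d / (2 * sigma * sigma))

def avg_detection_prob(radius, sigma, n=10_000):
    """Average detection probability within a circular point-count plot,
    integrating g(d) against the distance density 2d / radius^2 (the
    density of distances to uniformly distributed animals) by midpoints."""
    h = radius / n
    total = 0.0
    for i in range(n):
        d = (i + 0.5) * h
        total += halfnormal_g(d, sigma) * (2 * d / radius ** 2) * h
    return total

pbar = avg_detection_prob(radius=100.0, sigma=50.0)
```

The analytic value here is (2σ²/R²)(1 − e^(−R²/2σ²)) ≈ 0.432; this average detection probability is what links the observed count to the true abundance in an HDS model.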
Nursing Reference Center: a point-of-care resource.
Vardell, Emily; Paulaitis, Gediminas Geddy
2012-01-01
Nursing Reference Center is a point-of-care resource designed for the practicing nurse, as well as nursing administrators, nursing faculty, and librarians. Users can search across multiple resources, including topical Quick Lessons, evidence-based care sheets, patient education materials, practice guidelines, and more. Additional features include continuing education modules, e-books, and a new iPhone application. A sample search and comparison with similar databases were conducted.
Geomorphological and Geoelectric Techniques for Kwoi's Multiple Tremor Assessment
NASA Astrophysics Data System (ADS)
Dikedi, P. N.
2017-12-01
This work centres on geomorphological and geoelectric techniques for multiple tremor assessment in Kwoi, Nigeria. Earth tremor occurrences have been noted by Akpan and Yakubu (2010) within the last 70 years in nine regions of Nigeria; on September 11, 12, 20, 22, 23 and 24, 2016, additional earth tremors rocked the village of Kwoi eleven times. Houses cracked and collapsed, a rock split and slid, and smoke evolved at N9°27'5.909", E8°0'44.951", at an altitude of 798 m. By employing the Ohmega Meter and Schlumberger configuration, four VES points are sounded for subsurface structure characterisation. Thereafter, a cylindrical steel ring is hammered into the ground at the first point (VES 1) and earth samples are scooped from this location; this procedure is repeated for the other points (VES 2, 3 and 4). Winresist, Geo-earth, and Surfer version 12.0.626 software are employed to generate geo-sections, lithology, resistivity profiles, and iso-resistivity and isopach maps of the region. The results reveal lithological formations of lateritic topsoil, fractured basement and fresh basement; additionally, they reveal fractured basement thicknesses of 206.6 m, 90.7 m, 73.2 m and 99.4 m for the four points. Scooped samples are transferred to the specimen stage of a Scanning Electron Microscope (SEM). SEM images show rounded inter-granular boundaries; the granular structures act like micro-wheels, making the upper crustal mass susceptible to movement at the slightest vibration. Collapsed buildings are sited around the VES 1 location; samples from VES 1 are the most fragmented owing to multiple microfractures, which explains why VES 1 has the thickest fractured basement. Abrupt frictional sliding occurs between networks of fault lines; there is a likelihood that friction is most intense at the rock slide site at N9°27'21.516" and E8°0'44.9993", at VES 1 at N9°27'5.819" and E8°05'3.1120", and at the smoke sites; hypocentres are suspected below these locations.
The presence of borehole facilities and quarry activities around the region serve as artificial causal factors of these tremors.
Le Pichon, Céline; Tales, Évelyne; Belliard, Jérôme; Torgersen, Christian E.
2017-01-01
Spatially intensive sampling by electrofishing is proposed as a method for quantifying spatial variation in fish assemblages at multiple scales along extensive stream sections in headwater catchments. We used this method to sample fish species at 10-m2 points spaced every 20 m throughout 5 km of a headwater stream in France. The spatially intensive sampling design provided information at a spatial resolution and extent that enabled exploration of spatial heterogeneity in fish assemblage structure and aquatic habitat at multiple scales with empirical variograms and wavelet analysis. These analyses were effective for detecting scales of periodicity, trends, and discontinuities in the distribution of species in relation to tributary junctions and obstacles to fish movement. This approach to sampling riverine fishes may be useful in fisheries research and management for evaluating stream fish responses to natural and altered habitats and for identifying sites for potential restoration.
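The empirical variograms mentioned above are straightforward to compute for a regularly spaced transect like this one (points every 20 m). The richness values below are invented for illustration:

```python
def empirical_variogram(values, spacing, lags):
    """Empirical semivariogram for a regularly spaced transect:
    gamma(h) = mean of (z[i+k] - z[i])^2 / 2 over all pairs k steps apart,
    keyed by the physical separation h = k * spacing."""
    gamma = {}
    for k in lags:
        diffs = [(values[i + k] - values[i]) ** 2
                 for i in range(len(values) - k)]
        gamma[k * spacing] = sum(diffs) / (2 * len(diffs))
    return gamma

# Hypothetical species richness at points spaced every 20 m upstream.
richness = [3, 4, 4, 6, 7, 6, 5, 5, 4, 3]
gam = empirical_variogram(richness, spacing=20, lags=[1, 2, 3])
```

A rising then flattening gamma(h) indicates spatial autocorrelation up to a range; periodic dips, as the authors exploit, point to repeating habitat structure along the stream.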
Chen, Rui; Wang, Haotian; Shi, Jun; Hu, Pei
2016-05-01
CYP2D6 is a highly polymorphic enzyme. Determining its phenotype before CYP2D6 substrate treatment can avoid dose-dependent adverse events or therapeutic failures. Alternative phenotyping methods of CYP2D6 were compared to evaluate the appropriate and precise time points for phenotyping after single-dose and multiple-dose administration of 30-mg controlled-release (CR) dextromethorphan (DM) and to explore the antimodes for potential sampling methods. This was an open-label, single- and multiple-dose study. 21 subjects were assigned to receive a single dose of CR DM 30 mg orally, followed by a 3-day washout period prior to oral administration of CR DM 30 mg every 12 hours for 6 days. Metabolic ratios (MRs) from AUC∞ after single dosing and from AUC0-12h at steady state were taken as the gold standard. The correlations of metabolic ratios of DM to dextrorphan (MRDM/DX) values based on different phenotyping methods were assessed. Linear regression formulas were derived to calculate the antimodes for potential sampling methods. In the single-dose part of the study, statistically significant correlations were found between MRDM/DX from AUC∞ and from serial plasma points from 1 to 30 hours or from urine (all p-values < 0.001). In the multiple-dose part, statistically significant correlations were found between MRDM/DX from AUC0-12h on day 6 and MRDM/DX from serial plasma points from 0 to 36 hours after the last dosing (all p-values < 0.001). Based on the reported urinary antimode and linear regression analysis, the antimodes of AUC and plasma points were derived to profile the trend of antimodes as the drug concentrations changed. MRDM/DX from plasma points had good correlations with MRDM/DX from AUC. Plasma points from 1 to 30 hours after a single dose of 30-mg CR DM and any plasma point at steady state after multiple doses of CR DM could potentially be used for phenotyping of CYP2D6.
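The gold-standard metric above, the metabolic ratio from AUC, is simple to compute once the concentration-time profiles are in hand. A sketch using the linear trapezoidal rule with invented DM and dextrorphan (DX) profiles:

```python
def auc_trapezoid(times, conc):
    """Area under the concentration-time curve, linear trapezoidal rule."""
    return sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def metabolic_ratio(times, parent, metabolite):
    """Phenotyping metric: MR = AUC(parent) / AUC(metabolite)."""
    return auc_trapezoid(times, parent) / auc_trapezoid(times, metabolite)

# Hypothetical plasma profiles over a 12 h dosing interval (arbitrary units).
t = [0, 1, 2, 4, 8, 12]
dm = [0.0, 2.0, 3.0, 2.0, 1.0, 0.5]
dx = [0.0, 4.0, 6.0, 5.0, 3.0, 2.0]
mr = metabolic_ratio(t, dm, dx)
```

A limited-sampling alternative, as the study investigates, replaces the full AUC-based MR with the concentration ratio at one well-chosen plasma point and checks that the two correlate strongly.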
The USEPA Beaches Environmental Assessment and Coastal Health Act (BEACH Act) requires states to develop monitoring and notification programs for recreational waters using approved bacterial indicators. Implementation of an appropriate monitoring program can, under some circumsta...
Monitoring heavy metal Cr in soil based on hyperspectral data using regression analysis
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Xu, Fuyun; Zhuang, Shidong; He, Changwei
2016-10-01
Heavy metal pollution of soils is one of the most critical problems for global ecology and environmental safety today. Hyperspectral remote sensing offers high speed, low cost, and low risk of sample damage, and thus provides a good method for detecting heavy metals in soil. This paper proposes monitoring the heavy metal Cr content of soil sample points, for environmental protection, by stepwise multiple regression between the spectral data and the measured Cr concentrations. In the measurement, a FieldSpec HandHeld spectroradiometer is used to collect reflectance spectra of sample points over the wavelength range of 325-1075 nm. The spectral data measured by the spectroradiometer are then preprocessed to reduce the influence of external factors; the preprocessing methods include first-order differentiation, second-order differentiation and the continuum removal method. Stepwise multiple regression models are established accordingly, and the accuracy of each model is tested. The results showed that first-order differentiation works best, which makes it feasible to predict the Cr content of soil by stepwise multiple regression.
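The preprocessing-plus-regression pipeline can be sketched as follows. The spectra, band index, and noise level are invented, and the greedy R²-based forward selection is a simplified stand-in for full stepwise regression (which would also apply entry/removal F-tests):

```python
import numpy as np

def first_derivative(spectra, step=1.0):
    """First-order differential preprocessing of reflectance spectra."""
    return np.diff(spectra, axis=-1) / step

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection: repeatedly add the band whose inclusion
    most improves the R^2 of an ordinary least-squares fit."""
    n = len(y)
    def r_squared(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_terms:
        best = max(remaining, key=lambda c: r_squared(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected, r_squared(selected)

# toy data: 20 'soil samples' with 9 reflectance bands; Cr content is
# made to depend mainly on derivative band 2 (an assumption for the demo)
rng = np.random.default_rng(0)
spectra = rng.random((20, 9))
bands = first_derivative(spectra)
cr = 3.0 * bands[:, 2] + 0.05 * rng.standard_normal(20)
chosen, r2 = forward_stepwise(bands, cr, max_terms=2)
```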
Chapman, Kent; Favaloro, Emmanuel J
2018-05-01
The Multiplate is a popular instrument that measures platelet function using whole blood. Potentially considered a point-of-care instrument, it is also used by hemostasis laboratories, usually to assess antiplatelet medication or as a screen of platelet function. According to the manufacturer, testing should be performed within 0.5-3 hours of blood collection, preferably using manufacturer-provided hirudin tubes. We report a time-associated reduction in platelet aggregation using the Multiplate and hirudin blood collection tubes for all of the major agonists employed. Blood for Multiplate analysis was collected into manufacturer-supplied hirudin tubes, and 21 consecutive samples were assessed using manufacturer-supplied agonists (ADP, arachidonic acid, TRAP, collagen and ristocetin) at several time points post-sample collection within the recommended test time period. Blood was also collected into EDTA as a reference method for platelet counts, with samples collected into sodium citrate and hirudin used for comparative counts. All platelet agonists showed a diminution of response with time. Depending on the agonist, the reduction caused 5-20% and 22-47% of responses initially in the normal reference range to fall below the reference range at 120 min and 180 min, respectively. Considering any agonist, 35% and 67% of initially "normal" responses became "abnormal" at 120 min and 180 min, respectively. Platelet counts showed generally minimal changes in EDTA blood, but were markedly reduced over time in both citrate and hirudin blood, with up to 40% and 60% reduction, respectively, at 240 min. The presence of platelet clumping (micro-aggregate formation) was also observed in a time-dependent manner, especially for hirudin.
In conclusion, considering any platelet agonist, around two-thirds of samples can, within the recommended 0.5-3 hour testing window post-blood collection, yield a reduction in platelet aggregation that may lead to a change in interpretation (i.e., normal to reduced). Thus, the stability of Multiplate testing can more realistically be considered as being between 30-120 min of blood collection for samples collected into hirudin.
Automatic initialization for 3D bone registration
NASA Astrophysics Data System (ADS)
Foroughi, Pezhman; Taylor, Russell H.; Fichtinger, Gabor
2008-03-01
In image-guided bone surgery, sample points collected from the surface of the bone are registered to the preoperative CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques are generally very sensitive to the initial alignment of the datasets. Poor initialization significantly increases the chance of getting trapped in local minima. To reduce the risk of local minima, the registration is typically initialized manually by locating the sample points close to the corresponding points on the CT model. In this paper, we present an automatic initialization method that aligns sample points collected from the surface of the pelvis with the CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a large number of CT scans, as prior knowledge to guide the initial alignment. The mean shape is constant across all registrations and facilitates the inclusion of application-specific information into the registration process. The CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape. This, in turn, leads to an initial alignment of the sample points with the CT model. Experiments using a dry pelvis and two cadavers show that the method can align randomly dislocated datasets closely enough for successful registration. Standard ICP was used for the final registration of the datasets.
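A simplified stand-in for automatic initialization, using centroid and principal-axis (PCA) matching rather than the paper's mean-shape approach. For scoring the candidate axis flips, the demo uses point-to-point residuals, which assumes the two clouds are stored in corresponding order; a real pipeline would score with nearest-neighbour distances instead:

```python
import itertools
import numpy as np

def coarse_align(src, dst):
    """Coarse rigid alignment before ICP: match centroids and principal
    axes. All eight sign combinations of the axes are tried and the best
    proper rotation (det = +1) is kept. Returns (R, t) with
    x_aligned = R @ x + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    Us, _, _ = np.linalg.svd((src - cs).T @ (src - cs))
    Ud, _, _ = np.linalg.svd((dst - cd).T @ (dst - cd))
    best = None
    for signs in itertools.product((1.0, -1.0), repeat=3):
        R = Ud @ np.diag(signs) @ Us.T
        if np.linalg.det(R) < 0:       # keep rotations, not reflections
            continue
        # demo-only scoring: assumes corresponding point order
        err = np.linalg.norm((src - cs) @ R.T + cd - dst, axis=1).mean()
        if best is None or err < best[0]:
            best = (err, R, cd - R @ cs)
    return best[1], best[2]

# synthetic check: a cloud with distinct spreads, rotated and shifted
rng = np.random.default_rng(1)
cloud = rng.standard_normal((200, 3)) * np.array([5.0, 2.0, 1.0])
theta = 0.8
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([10.0, -4.0, 2.0])
R, t = coarse_align(cloud, moved)
```

After this coarse step, standard ICP can refine the pose without falling into a distant local minimum.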
MAGENCO: A map generalization controller for Arc/Info
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.; Cashwell, J.W.
The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line "character" while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, then apply various tolerances to the sample. The results are shown in multiple display windows that are arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. The authors analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.
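The Douglas-Peucker algorithm itself is short enough to sketch; this is a generic textbook implementation for 2-D polylines, not Arc/Info's:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * (x - x1) - dx * (y - y1)) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Keep the point farthest from the anchor-floater chord and recurse;
    once every intermediate point is within tolerance, keep only the
    endpoints. Larger tolerances yield fewer retained points."""
    if len(points) < 3:
        return list(points)
    dmax, index = max(
        (perpendicular_distance(points[i], points[0], points[-1]), i)
        for i in range(1, len(points) - 1))
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right
```

MAGENCO's side-by-side display windows amount to running this with several tolerance values over the same sampled arc and comparing the outputs visually.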
A non-viscous-featured fractograph in metallic glasses
NASA Astrophysics Data System (ADS)
Yang, G. N.; Shao, Y.; Yao, K. F.
2016-02-01
A fractograph showing no viscous features but only pure shear offsets was found in three-point bending samples of a ductile Pd-Cu-Si metallic glass. Sustained shear band multiplication with large plasticity during notch propagation was observed. This non-viscous fractograph was formed by a crack propagation mode in which multiple shear bands continually form in front of the crack tip, instead of the conventional rapid fracture along shear bands. With a 2D model of crack propagation by multiple shear bands, we showed that this fracture process is achieved when stress relaxation in the sample outpaces the shear-softening effect. This study confirms that viscous fracture along shear bands may not be a necessary process in the fracture of ductile metallic glasses, and it may provide new ways to understand plasticity in shear-softened metallic glasses.
Xin, Li-Ping; Chai, Xin-Sheng; Hu, Hui-Chao; Barnes, Donald G
2014-09-05
This work demonstrates a novel method for rapid determination of total solid content in viscous liquid (polymer-enriched) samples. The method is based on multiple headspace extraction gas chromatography (MHE-GC) performed on a headspace vial at a temperature above the boiling point of water, so the trend of water loss from the tested liquid due to evaporation can be followed. With limited MHE-GC testing (e.g., 5 extractions) and a one-point calibration procedure (i.e., recording the weight difference before and after analysis), the total amount of water in the sample can be determined, from which the total solid content in the liquid can be calculated. A number of black liquors were analyzed by the new method, which yielded results that closely matched those of the reference method; i.e., the results of the two methods differed by no more than 2.3%. Compared with the reference method, the MHE-GC method is much simpler and more practical. It is therefore suitable for the rapid determination of the solid content in many polymer-containing liquid samples. Copyright © 2014 Elsevier B.V. All rights reserved.
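The extrapolation behind MHE can be sketched as follows: peak areas fall geometrically across successive extractions, so a log-linear fit on a handful of extractions gives the infinite-extraction total as a geometric series. The peak areas and the weighed water loss below are invented for illustration:

```python
import math

def mhe_total_area(areas):
    """In MHE the peak areas decline geometrically, A_i = A_1 * q**(i-1),
    so ln(A_i) is linear in the extraction index. Fit that line, then sum
    the geometric series to the infinite-extraction total A_1 / (1 - q)."""
    n = len(areas)
    xs = list(range(n))
    ys = [math.log(a) for a in areas]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    q = math.exp(slope)
    a1 = math.exp(y_bar - slope * x_bar)
    return a1 / (1.0 - q)

# invented peak areas from 5 extractions (decay ratio q = 0.6) and an
# invented one-point calibration: the weighed water loss over those runs
areas = [100.0, 60.0, 36.0, 21.6, 12.96]
total_area = mhe_total_area(areas)
water_lost_g = 0.115
total_water_g = water_lost_g * total_area / sum(areas)
```

The solid content then follows by difference: sample mass minus total water.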
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable fidelity approximation models, the selection of sample points, called a nested design, is essential. In this article, a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
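A toy nested design: a random Latin hypercube supplies the low-fidelity points, and a greedy maximin rule (a simplified stand-in for the successive local enumeration and harmony search used in the article) selects the nested high-fidelity subset:

```python
import math
import random

def latin_hypercube(n, dims, rng):
    """Random Latin hypercube sample of n points in [0, 1)^dims:
    each axis is split into n bins with exactly one point per bin."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(cell + rng.random()) / n for cell in perm])
    return [tuple(p) for p in zip(*cols)]

def maximin_subset(points, k):
    """Greedy maximin selection: repeatedly add the candidate whose
    minimum distance to the already chosen points is largest."""
    chosen = [points[0]]
    while len(chosen) < k:
        best = max((p for p in points if p not in chosen),
                   key=lambda p: min(math.dist(p, c) for c in chosen))
        chosen.append(best)
    return chosen

# nested design sketch: 20 low-fidelity points, of which 5 (a subset,
# hence 'nested') are promoted to high-fidelity simulation runs
rng = random.Random(42)
low_fi = latin_hypercube(20, 2, rng)
high_fi = maximin_subset(low_fi, 5)
```

Nesting matters because every high-fidelity run can then reuse an existing low-fidelity evaluation when building the correction between the two models.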
CePt2In7: Shubnikov-de Haas measurements on micro-structured samples under high pressures
NASA Astrophysics Data System (ADS)
Kanter, J.; Moll, P.; Friedemann, S.; Alireza, P.; Sutherland, M.; Goh, S.; Ronning, F.; Bauer, E. D.; Batlogg, B.
2014-03-01
CePt2In7 belongs to the Ce_mM_nIn_(3m+2n) heavy fermion family but, compared to the CeMIn5 members of this group, exhibits a more two-dimensional electronic structure. At zero pressure the ground state is antiferromagnetically ordered. Under pressure the antiferromagnetic order is suppressed and a superconducting phase is induced, with a maximum Tc above a quantum critical point around 31 kbar. To investigate the changes in the Fermi surface and effective electron masses around the quantum critical point, Shubnikov-de Haas measurements were conducted under high pressures in an anvil cell. The samples were micro-structured and contacted using a Focused Ion Beam (FIB). The FIB enables sample contacting and structuring down to a sub-micrometer scale, making the measurement of several samples with complex shapes and multiple contacts on a single anvil feasible.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), as validated both in silico and in vivo. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire set of time sampling points were used.
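A sketch of D-optimal time-point selection for a bi-exponential decay, greedily maximizing det(J'J) of the sensitivity matrix; the decay constants and the 90-point candidate grid are assumptions for the demo, not values from the study:

```python
from itertools import combinations
import numpy as np

def d_optimal_times(candidates, k, tau1=0.5, tau2=2.0):
    """Greedy D-optimal subset selection: maximize log det(J^T J), where
    J holds the sensitivities of F(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    with respect to the fractional amplitudes (a1, a2). The best pair is
    found exhaustively, then points are added one at a time."""
    t = np.asarray(candidates, dtype=float)
    J_all = np.column_stack([np.exp(-t / tau1), np.exp(-t / tau2)])
    def logdet(idx):
        J = J_all[idx]
        sign, value = np.linalg.slogdet(J.T @ J)
        return value if sign > 0 else -np.inf
    chosen = list(max(combinations(range(len(t)), 2),
                      key=lambda pair: logdet(list(pair))))
    while len(chosen) < k:
        rest = [i for i in range(len(t)) if i not in chosen]
        chosen.append(max(rest, key=lambda i: logdet(chosen + [i])))
    return sorted(t[chosen])

# reduce 90 candidate time gates to 10
times = d_optimal_times(np.linspace(0.0, 9.0, 90), 10)
```

The greedy search is a cheap surrogate for exhaustive D-optimal design, which is combinatorially infeasible at 90-choose-10.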
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Using dBASE II for Bibliographic Files.
ERIC Educational Resources Information Center
Sullivan, Jeanette
1985-01-01
Describes use of a database management system (dBASE II, produced by Ashton-Tate), noting best features and disadvantages. Highlights include data entry, multiple access points available, training requirements, use of dBASE for a bibliographic application, auxiliary software, and dBASE updates. Sample searches, auxiliary programs, and requirements…
Dan, Haruka; Azuma, Teruaki; Hayakawa, Fumiyo; Kohyama, Kaoru
2005-05-01
This study was designed to examine human subjects' ability to discriminate between spatially different bite pressures. We measured actual bite pressure distribution when subjects simultaneously bit two silicone rubber samples with different hardnesses using their right and left incisors. They were instructed to compare the hardness of these two rubber samples and indicate which was harder (right or left). The correct-answer rates were statistically significant at P < 0.05 for all pairs of different right and left silicone rubber hardnesses. Simultaneous bite measurements using a multiple-point sheet sensor demonstrated that the bite force, active pressure and maximum pressure point were greater for the harder silicone rubber sample. The difference between the left and right was statistically significant (P < 0.05) for all pairs with different silicone rubber hardnesses. We demonstrated for the first time that subjects could perceive and discriminate between spatially different bite pressures during a single bite with incisors. Differences of the bite force, pressure and the maximum pressure point between the right and left silicone samples should be sensory cues for spatial hardness discrimination.
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
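The core of SRS point placement, a single uniformly random offset followed by an equidistant grid, can be sketched as follows (the region size and spacing values are hypothetical):

```python
import random

def srs_points(x0, y0, width, height, dx, dy, seed=None):
    """Systematic random sampling: draw one uniformly random offset,
    then lay down an equidistant grid of sample points across the
    region of interest. The random offset keeps the estimator unbiased;
    the regular spacing keeps the coverage even."""
    rng = random.Random(seed)
    off_x, off_y = rng.uniform(0, dx), rng.uniform(0, dy)
    points = []
    y = y0 + off_y
    while y < y0 + height:
        x = x0 + off_x
        while x < x0 + width:
            points.append((x, y))
            x += dx
        y += dy
    return points

# hypothetical 100 x 100 region of interest with 10-unit spacing
grid = srs_points(0.0, 0.0, 100.0, 100.0, 10.0, 10.0, seed=7)
```

Each generated point is then inspected and classified (e.g., tumor vs. stroma), and class ratios follow directly from the point counts.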
On the interpolation of volumetric water content in research catchments
NASA Astrophysics Data System (ADS)
Dlamini, Phesheya; Chaplot, Vincent
Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation of data points affect prediction quality. While interpolation is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points with subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa, was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for estimation of θg and ρb. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE, of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). Approach 1 was found to underestimate θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% while improving the prediction accuracy by only 1.3% compared to approach 1. Such a benefit from approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected, considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches.
In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. This approach 2 seems promising and can be further tested for DSM of other secondary variables.
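A minimal inverse-distance-weighting interpolator of the kind compared in the study (IDW3 corresponds to k=3, IDW12 to k=12); the sample triples below are illustrative, not catchment data:

```python
import math

def idw(x, y, samples, k=3, power=2.0):
    """Inverse distance weighting from the k nearest data points.
    Each sample is an (x, y, value) triple; weights are 1/d**power."""
    ranked = sorted((math.dist((x, y), (sx, sy)), v) for sx, sy, v in samples)
    if ranked[0][0] == 0.0:
        return ranked[0][1]          # query sits exactly on a data point
    nearest = ranked[:k]
    weights = [1.0 / d ** power for d, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)

# illustrative theta_v observations: (x, y, volumetric water content, %)
obs = [(0.0, 0.0, 12.0), (2.0, 0.0, 16.0), (0.0, 2.0, 20.0), (5.0, 5.0, 9.0)]
estimate = idw(1.0, 0.0, obs, k=3)
```

Approach 1 would apply this directly to θv observations; approach 2 would run it separately on ρb and θg and multiply the resulting maps cell by cell.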
Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.
Jung, Sin-Ho
2017-07-01
In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
Paper SERS chromatography for detection of trace analytes in complex samples
NASA Astrophysics Data System (ADS)
Yu, Wei W.; White, Ian M.
2013-05-01
We report the application of paper SERS substrates for the detection of trace quantities of multiple analytes in a complex sample in the form of paper chromatography. Paper chromatography facilitates the separation of different analytes from a complex sample into distinct sections in the chromatogram, which can then be uniquely identified using SERS. As an example, the separation and quantitative detection of heroin in a highly fluorescent mixture is demonstrated. Paper SERS chromatography has obvious applications, including law enforcement, food safety, and border protection, and facilitates the rapid detection of chemical and biological threats at the point of sample.
Higher moments of net-proton multiplicity distributions in a heavy-ion event pile-up scenario
NASA Astrophysics Data System (ADS)
Garg, P.; Mishra, D. K.
2017-10-01
High-luminosity modern accelerators, like the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), inherently have event pile-up, which contributes significantly to physics events as a background. While state-of-the-art tracking algorithms and detector concepts handle most pile-up, several offline analytical techniques are also used to remove such events from the physics analysis. It remains difficult to identify the residual pile-up events in an event sample used for physics analysis. Since the fraction of these events is small, it may not be as serious an issue for other analyses as it is for an event-by-event analysis; one needs to be particularly careful when characteristics of the multiplicity distribution are the observables. In the present work, we demonstrate how a small fraction of residual pile-up events can change the moments, and ratios of moments, of an event-by-event net-proton multiplicity distribution, which are sensitive to the dynamical fluctuations due to the QCD critical point. For this study, we assume that the individual event-by-event proton and antiproton multiplicity distributions follow Poisson, negative binomial, or binomial distributions. We observe a significant effect of pile-up events on the cumulants, and cumulant ratios, of net-proton multiplicity distributions, particularly at lower energies. It may therefore be crucial to estimate the fraction of pile-up events in the data sample when interpreting the experimental observables near the critical point.
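The effect of residual pile-up on the cumulants can be illustrated with a small Monte Carlo, assuming (as one of the paper's scenarios) Poisson-distributed proton and antiproton multiplicities, with the means doubled in piled-up events; the means and pile-up fraction below are invented:

```python
import numpy as np

def net_proton_cumulants(mu_p, mu_pbar, pileup_frac,
                         n_events=1_000_000, seed=0):
    """Monte Carlo sketch of event-by-event net-proton cumulants C1-C4.
    Protons and antiprotons are independent Poisson variables (so the
    clean net distribution is Skellam); a fraction of recorded 'events'
    are two piled-up collisions, i.e. the Poisson means are doubled."""
    rng = np.random.default_rng(seed)
    doubled = rng.random(n_events) < pileup_frac
    scale = np.where(doubled, 2.0, 1.0)
    net = rng.poisson(mu_p * scale) - rng.poisson(mu_pbar * scale)
    d = net - net.mean()
    C1 = net.mean()
    C2 = np.mean(d ** 2)
    C3 = np.mean(d ** 3)
    C4 = np.mean(d ** 4) - 3.0 * C2 ** 2
    return C1, C2, C3, C4

# pure Skellam expectations: C1 = C3 = mu_p - mu_pbar, C2 = C4 = mu_p + mu_pbar
clean = net_proton_cumulants(5.0, 2.0, 0.0)
piled = net_proton_cumulants(5.0, 2.0, 0.05)
```

Even a 5% pile-up fraction visibly inflates C2 and the higher cumulants, mimicking the kind of spurious fluctuation signal the paper warns about.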
On-chip wavelength multiplexed detection of cancer DNA biomarkers in blood
Cai, H.; Stott, M. A.; Ozcelik, D.; Parks, J. W.; Hawkins, A. R.; Schmidt, H.
2016-01-01
We have developed an optofluidic analysis system that processes biomolecular samples starting from whole blood and then analyzes and identifies multiple targets on a silicon-based molecular detection platform. We demonstrate blood filtration, sample extraction, target enrichment, and fluorescent labeling using programmable microfluidic circuits. We detect and identify multiple targets using a spectral multiplexing technique based on wavelength-dependent multi-spot excitation on an antiresonant reflecting optical waveguide chip. Specifically, we extract two types of melanoma biomarkers, the mutated cell-free nucleic acids BRAF V600E and NRAS, from whole blood. We detect and identify these two targets simultaneously using the spectral multiplexing approach with up to a 96% success rate. These results point the way toward a full front-to-back chip-based optofluidic compact system for high-performance analysis of complex biological samples. PMID:28058082
Clinical Importance of Steps Taken per Day among Persons with Multiple Sclerosis
Motl, Robert W.; Pilutti, Lara A.; Learmonth, Yvonne C.; Goldman, Myla D.; Brown, Ted
2013-01-01
Background The number of steps taken per day (steps/day) provides a reliable and valid outcome of free-living walking behavior in persons with multiple sclerosis (MS). Objective This study examined the clinical meaningfulness of steps/day using the minimal clinically important difference (MCID) value across stages representing the developing impact of MS. Methods This study was a secondary analysis of de-identified data from 15 investigations totaling 786 persons with MS and 157 healthy controls. All participants provided demographic information and wore an accelerometer or pedometer during the waking hours of a 7-day period. Those with MS further provided real-life, health, and clinical information and completed the Multiple Sclerosis Walking Scale-12 (MSWS-12) and Patient Determined Disease Steps (PDDS) scale. MCID estimates were based on regression analyses and analysis of variance for between-group differences. Results The mean MCID from self-report scales that capture subtle changes in ambulation (1-point change in PDDS scores and 10-point change in MSWS-12 scores) was 779 steps/day (14% of mean score for MS sample); the mean MCID for clinical/health outcomes (MS type, duration, weight status) was 1,455 steps/day (26% of mean score for MS sample); real-life anchors (unemployment, divorce, assistive device use) resulted in a mean MCID of 2,580 steps/day (45% of mean score for MS sample); and the MCID for the cumulative impact of MS (MS vs. control) was 2,747 steps/day (48% of mean score for MS sample). Conclusion A change in motion sensor output of ∼800 steps/day appears to represent a lower-bound estimate of clinically meaningful change in free-living walking behavior in MS interventions. PMID:24023843
NASA Astrophysics Data System (ADS)
Rolfe, S. M.; Patel, M. R.; Gilmour, I.; Olsson-Francis, K.; Ringrose, T. J.
2016-06-01
Biomarker molecules, such as amino acids, are key to discovering whether life exists elsewhere in the Solar System. Raman spectroscopy, a technique capable of detecting biomarkers, will be on board future planetary missions including the ExoMars rover. Generally, the position of the strongest band in the spectra of amino acids is reported as the identifying band. However, for an unknown sample, it is desirable to define multiple characteristic bands for molecules to avoid any ambiguous identification. To date, there has been no definition of multiple characteristic bands for amino acids of interest to astrobiology. This study examined l-alanine, l-aspartic acid, l-cysteine, l-glutamine and glycine and defined several Raman bands per molecule for reference as characteristic identifiers. Per amino acid, 240 spectra were recorded and compared using established statistical tests including ANOVA. The number of characteristic bands defined were 10, 12, 12, 14 and 19 for l-alanine (strongest intensity band: 832 cm-1), l-aspartic acid (938 cm-1), l-cysteine (679 cm-1), l-glutamine (1090 cm-1) and glycine (875 cm-1), respectively. The intensity of bands differed by up to six times when several points on the crystal sample were rotated through 360 °; to reduce this effect when defining characteristic bands for other molecules, we find that spectra should be recorded at a statistically significant number of points per sample to remove the effect of sample rotation. It is crucial that sets of characteristic Raman bands are defined for biomarkers that are targets for future planetary missions to ensure a positive identification can be made.
Donovan, John E.; Chung, Tammy
2015-01-01
Objective: Most studies of adolescent drinking focus on single alcohol use behaviors (e.g., high-volume drinking, drunkenness) and ignore the patterning of adolescents’ involvement across multiple alcohol behaviors. The present latent class analyses (LCAs) examined a procedure for empirically determining multiple cut points on the alcohol use behaviors in order to establish a typology of adolescent alcohol involvement. Method: LCA was carried out on six alcohol use behavior indicators collected from 6,504 7th through 12th graders who participated in Wave I of the National Longitudinal Study of Adolescent Health (AddHealth). To move beyond dichotomous indicators, a “progressive elaboration” strategy was used, starting with six dichotomous indicators and then evaluating a series of models testing additional cut points on the ordinal indicators at progressively higher points for one indicator at a time. Analyses were performed on one random half-sample, and confirmatory LCAs were performed on the second random half-sample and in the Wave II data. Results: The final model consisted of four latent classes (never or non–current drinkers, low-intake drinkers, non–problem drinkers, and problem drinkers). Confirmatory LCAs in the second random half-sample from Wave I and in Wave II support this four-class solution. The means on the four latent classes were also generally ordered on an array of measures reflecting psychosocial risk for problem behavior. Conclusions: These analyses suggest that there may be four different classes or types of alcohol involvement among adolescents, and, more importantly, they illustrate the utility of the progressive elaboration strategy for moving beyond dichotomous indicators in latent class models. PMID:25978828
ERIC Educational Resources Information Center
Romero, Andrea J.; Ruiz, Myrna
2007-01-01
We examined coping with risky behaviors (cigarettes, alcohol/drugs, yelling/hitting, and anger), familism (family proximity and parental closeness) and parental monitoring (knowledge and discipline) in a sample of 56 adolescents (11-15 years old) predominantly of Mexican descent at two time points. Multiple linear regression analysis indicated…
An O-"fish"-ial Research Project
ERIC Educational Resources Information Center
Newman, James; Krustchinsky, Rick; Vanek, Karen; Nguyen, Kim-Thoa
2009-01-01
In this "O-"fish"-ial" research project, third-grade students use multiple resources to research several fish species, write a research paper and develop a PowerPoint presentation to communicate their findings. In addition, students actually examine these species up close with samples from the local market, and then conclude the project with a…
Gender/racial Differences in Jock Identity, Dating, and Adolescent Sexual Risk.
ERIC Educational Resources Information Center
Miller, Kathleen E.; Farrell, Michael P.; Barnes, Grace M.; Melnick, Merrill J.; Sabo, Don
2005-01-01
Despite recent declines in overall sexual activity, sexual risk-taking remains a substantial danger to US youth. Existing research points to athletic participation as a promising venue for reducing these risks. Linear regressions and multiple analyses of covariance were performed on a longitudinal sample of nearly 600 Western New York adolescents…
Challenges in early clinical development of adjuvanted vaccines.
Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David
2015-06-08
A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as the primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis-generating plans, including systems biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24 h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for dose and schedule finding, "bed-side mixing" of the various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.
A new paper-based platform technology for point-of-care diagnostics.
Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad
2014-10-21
Currently, lateral flow immunoassays (LFIAs) are not able to perform complex multi-step immunodetection tests because of their inability to introduce multiple reagents to the detection area autonomously and in a controlled manner. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure forming the fluidic network. Unlike existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and the timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device which autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase based Enzyme-Linked ImmunoSorbent Assay (ELISA) protocol with Rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fM was obtained.
Long-term care planning and preparation among persons with multiple sclerosis.
Putnam, Michelle; Tang, Fengyan
2008-01-01
Individuals with multiple sclerosis (MS) primarily rely on informal supports such as family members and assistive technology to meet their daily needs. As they age, formal supports may become important to complement these supports and sustain community-based living. No previous research exists exploring the plans and preparations of persons with MS for future independent living and long-term care needs. We analyzed data from a random sample survey (N = 580) to assess knowledge and perceptions of future service needs using ANOVA, chi-square, correlations, and MANOVA procedures. Results indicate that overall, most respondents are not well informed and have not planned or prepared for future care needs. Persons reporting severe MS were more likely to plan and prepare. Key "entry points" for making preparations include receiving specific education and planning information, discussions with family and professional service providers, and increased age, education, and income. We recommend greater infusion of long-term care planning into these existing entry points and the creation of new entry points including healthcare providers and insurers.
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in terms of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
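The block-then-voxel partitioning described in the first step can be sketched as follows. This is a minimal illustration assuming points arrive as (x, y, z) tuples; the function name and grid sizes are purely hypothetical, not the authors' implementation.

```python
from collections import defaultdict

def partition_points(points, block_size, voxel_size):
    """Group (x, y, z) points into 2-D xy-blocks, then into 3-D voxels.

    Returns {block_key: {voxel_key: [points]}}; keys are integer grid indices.
    """
    blocks = defaultdict(lambda: defaultdict(list))
    for x, y, z in points:
        # 2-D block index in the xy-plane (coarse partition for efficiency)
        bkey = (int(x // block_size), int(y // block_size))
        # 3-D voxel index (fine partition used by the upward-growing step)
        vkey = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        blocks[bkey][vkey].append((x, y, z))
    return blocks
```

The lowest occupied voxel in each vertical column would then seed the upward-growing terrain separation.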
Rodrigues, Valdemir; Estrany, Joan; Ranzini, Mauricio; de Cicco, Valdir; Martín-Benito, José Mª Tarjuelo; Hedo, Javier; Lucas-Borja, Manuel E
2018-05-01
Stream water quality is controlled by the interaction of natural and anthropogenic factors over a range of temporal and spatial scales. Among the anthropogenic factors, land cover changes at the catchment scale can affect stream water quality. This work aims to evaluate the influence of land use and seasonality on stream water quality in a representative tropical headwater catchment, Córrego Água Limpa (São Paulo, Brazil), which is highly influenced by intensive agricultural activities and urban areas. Two systematic sampling campaigns, with six sampling points along the stream of the headwater catchment, were implemented to evaluate water quality during the rainy and dry seasons. Three replicates were collected at each sampling point in 2011. Electrical conductivity, nitrates, nitrites, sodium superoxide, chemical oxygen demand (COD), colour, turbidity, suspended solids, soluble solids and total solids were measured. Water quality parameters differed among sampling points, being lowest at the headwater sampling point (0 m) and progressively higher toward the last downstream sampling point (2,500 m). For the dry season (April to September), the mean discharge was 39.5 l s(-1), whereas 113.0 l s(-1) was averaged during the rainy season (October to March). In addition, significant temporal and spatial differences were observed (P<0.05) for the fourteen parameters during the rainy and dry periods. The study highlights significant relationships between land use and water quality and their seasonal variation, underscoring the importance of multiple spatial and temporal scales for understanding the impacts of human activities on catchment ecosystem services. Copyright © 2017 Elsevier B.V. All rights reserved.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular over the last decade in the Earth Sciences because these methods can generate random fields that reproduce the highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are caught at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image; iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest resolution level, and then each level, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at every scale of the training image and thus generates more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
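The smooth-then-downsample pyramid construction can be sketched in one dimension as follows; this is a minimal illustration assuming a three-tap Gaussian-like kernel with clamped edges, while the actual method operates on 2-D or 3-D training images.

```python
def smooth(signal, kernel=(0.25, 0.5, 0.25)):
    """Convolve a 1-D signal with a small Gaussian-like kernel (edges clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - 1, 0), n - 1)  # clamp out-of-range indices
            acc += w * signal[j]
        out.append(acc)
    return out

def pyramid(signal, levels):
    """Build a multi-resolution pyramid: smooth, then keep every other sample."""
    levels_out = [list(signal)]
    for _ in range(levels - 1):
        levels_out.append(smooth(levels_out[-1])[::2])
    return levels_out
```

Simulation would then proceed coarse-to-fine, each level conditioned on the level one rank coarser.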
Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M
2018-05-16
Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, the observed-cases, confirmed-cases-only, and known-confirmation-rate approaches may inflate the type I error, yield biased point estimates, and reduce statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
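The multiple imputation idea can be sketched as follows: draw a plausible confirmation rate from its posterior given the validation sample, then impute which observed events are true cases, repeating this several times so that downstream estimates reflect the rate's uncertainty. This is a generic illustration with a Beta posterior, not the authors' exact algorithm; all names are hypothetical.

```python
import random

def impute_true_counts(n_observed, n_validated, n_confirmed, m=20, seed=0):
    """Multiply impute the number of true adverse events among n_observed cases,
    propagating uncertainty in the confirmation rate estimated from an internal
    validation sample (n_confirmed of n_validated were chart-confirmed)."""
    rng = random.Random(seed)
    imputations = []
    for _ in range(m):
        # Draw a plausible confirmation rate from a Beta(c + 1, v - c + 1) posterior
        rate = rng.betavariate(n_confirmed + 1, n_validated - n_confirmed + 1)
        # Impute confirmed status for each observed case at that rate
        true_count = sum(rng.random() < rate for _ in range(n_observed))
        imputations.append(true_count)
    return imputations
```

Each imputed count would feed one SCCS analysis, with the m results combined by Rubin's rules.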
NASA Astrophysics Data System (ADS)
Duadi, Hamootal; Fixler, Dror
2015-05-01
Light reflectance and transmission from soft tissue have been utilized in noninvasive clinical measurement devices such as the photoplethysmograph (PPG) and the reflectance pulse oximeter. Incident light on the skin travels into the underlying layers and is in part reflected back to the surface, in part transmitted and in part absorbed. Most methods of near-infrared (NIR) spectroscopy focus on the volume reflectance from a semi-infinite sample, while very few measure transmission. We have previously shown that examining the full scattering profile (the angular distribution of exiting photons) provides more comprehensive information when measuring a cylindrical tissue. Furthermore, an isobaric point was found that does not depend on changes in the reduced scattering coefficient. The angle corresponding to this isobaric point depends on the tissue diameter. We investigated the role of multiple scattering and absorption in the full scattering profile of a cylindrical tissue. First, we define the range in which multiple scattering occurs for different tissue diameters. Next, we examine the role of the absorption coefficient in the attenuation of the full scattering profile. We demonstrate that absorption linearly influences the intensity at each angle of the full scattering profile and, more importantly, does not change the position of the isobaric point. The findings of this work provide a realistic model for optical tissue measurements such as NIR spectroscopy, PPG, and pulse oximetry.
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion were measured, with the single-sample and multi-sample methods applied respectively to determine the yield stresses at a specified offset strain. The rules and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strains, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concavity observed in measured yield surfaces, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are as follows: (1) for either the single- or multi-sample method, the measured subsequent yield surfaces differ remarkably from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the results of the two methods: the multi-sample method is not influenced by the number of test points, the test order or the cumulative effect of residual plastic strain from the other test points, while these strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method it can be rendered convex by changing the test sequence, whereas for the multi-sample method the concavity disappears when a larger offset strain is specified.
Investigation of Near Critical Point States of Molybdenum by Pulse Heating under Launching
NASA Astrophysics Data System (ADS)
Nikolaev, Dmitriy
2005-07-01
The near critical point states (NCPS) of the liquid-vapour phase transition of molybdenum were investigated. Molybdenum foil samples were heated in 1-D geometry by multiply-shocked He from the back side of the sample under dynamically created isobaric conditions [1]. The temperature of the sample was measured by a fast 4-channel optical pyrometer. The pressure was obtained from the shock velocity in He, measured by a streak camera via the step on a transparent window. Two sets of experiments with different heating histories were carried out, allowing us to evaluate the spinodal and binodal lines and the position of the critical point in the P-T plane: Tc = 12500±1000 K, Pc = 1±0.1 GPa. Work was supported by ISTC grant 2107 and RFBR grant 04-02-16790. [1] V. Ya. Ternovoi, V. E. Fortov et al., High Temp.-High Pres., 2002, v. 34, pp. 73-79. [2] D. N. Nikolaev, A. N. Emelyanov et al., in: SCCM-2003, AIP Conf. Proc. 706, ed. by M. D. Furnish, Y. M. Gupta et al., pp. 1231-1234.
Zidaric, Valerija; Pardon, Bart; dos Vultos, Tiago; Deprez, Piet; Brouwer, Michael Sebastiaan Maria; Roberts, Adam P.; Henriques, Adriano O.
2012-01-01
Clostridium difficile strains were sampled periodically from 50 animals at a single veal calf farm over a period of 6 months. At arrival, 10% of animals were C. difficile positive, and the peak incidence was determined to occur at the age of 18 days (16%). The prevalence then decreased, and at slaughter, C. difficile could not be isolated. Six different PCR ribotypes were detected, and strains within a single PCR ribotype could be differentiated further by pulsed-field gel electrophoresis (PFGE). The PCR ribotype diversity was high up to the animal age of 18 days, but at later sampling points, PCR ribotype 078 and the highly related PCR ribotype 126 predominated. Resistance to tetracycline, doxycycline, and erythromycin was detected, while all strains were susceptible to amoxicillin and metronidazole. Multiple variations of the resistance gene tet(M) were present at the same sampling point, and these changed over time. We have shown that PCR ribotypes often associated with cattle (ribotypes 078, 126, and 033) were not clonal but differed in PFGE type, sporulation properties, antibiotic sensitivities, and tetracycline resistance determinants, suggesting that multiple strains of the same PCR ribotype infected the calves and that calves were likely to be infected prior to arrival at the farm. Importantly, strains isolated at later time points were more likely to be resistant to tetracycline and erythromycin and showed higher early sporulation efficiencies in vitro, suggesting that these two properties converge to promote the persistence of C. difficile in the environment or in hosts. PMID:23001653
Optical Ptychographic Microscope for Quantitative Bio-Mechanical Imaging
NASA Astrophysics Data System (ADS)
Anthony, Nicholas; Cadenazzi, Guido; Nugent, Keith; Abbey, Brian
The role that mechanical forces play in biological processes such as cell movement and death is becoming of significant interest as we further develop our understanding of the inner workings of cells. The most common method used to obtain stress information is photoelasticity, which maps a sample's birefringence, or its direction-dependent refractive indices, using polarized light. However, this method only provides qualitative data, and for stress information to be useful, quantitative data are required. Ptychography is a method for quantitatively determining the phase of a sample's complex transmission function. The technique relies upon the collection of multiple overlapping coherent diffraction patterns from laterally displaced points on the sample. The overlap of measurement points provides complementary information that significantly aids the reconstruction of the complex wavefield exiting the sample and allows for quantitative imaging of weakly interacting specimens. Here we describe recent advances at La Trobe University, Melbourne, on achieving quantitative birefringence mapping using polarized light ptychography, with applications in cell mechanics. Australian Synchrotron, ARC Centre of Excellence for Advanced Molecular Imaging.
Multiple Access Points within the Online Classroom: Where Students Look for Information
ERIC Educational Resources Information Center
Steele, John; Nordin, Eric J.; Larson, Elizabeth; McIntosh, Daniel
2017-01-01
The purpose of this study is to examine the impact of information placement within the confines of the online classroom architecture. Also reviewed was the impact of other variables such as course design, teaching presence and student patterns in looking for information. The sample population included students from a major online university in…
Müller, Marco; Wasmer, Katharina; Vetter, Walter
2018-06-29
Countercurrent chromatography (CCC) is an all liquid based separation technique typically used for the isolation and purification of natural compounds. The simplicity of the method makes it easy to scale up CCC separations from analytical to preparative and even industrial scale. However, scale-up of CCC separations requires two different instruments with varying coil dimensions. Here we developed two variants of the CCC multiple injection mode as an alternative to increase the throughput and enhance productivity of a CCC separation when using only one instrument. The concept is based on the parallel injection of samples at different points in the CCC column system and the simultaneous separation using one pump only. The wiring of the CCC setup was modified by the insertion of a 6-port selection valve, multiple T-pieces and sample loops. Furthermore, the introduction of storage sample loops enabled the CCC system to be used with repeated injection cycles. Setup and advantages of both multiple injection modes were shown by the isolation of the furan fatty acid 11-(3,4-dimethyl-5-pentylfuran-2-yl)-undecanoic acid (11D5-EE) from an ethyl ester oil rich in 4,7,10,13,16,19-docosahexaenoic acid (DHA-EE). 11D5-EE was enriched in one step from 1.9% to 99% purity. The solvent consumption per isolated amount of analyte could be reduced by ∼40% compared to increased throughput CCC and by ∼5% in the repeated multiple injection mode which also facilitated the isolation of the major compound (DHA-EE) in the sample. Copyright © 2018 Elsevier B.V. All rights reserved.
Tang, Fengyan; Jang, Heejung; Lingler, Jennifer; Tamres, Lisa K; Erlen, Judith A
2015-01-01
Caring for an older adult with memory loss is stressful. Caregiver stress could produce negative outcomes such as depression. Previous research is limited in examining multiple intermediate pathways from caregiver stress to depressive symptoms. This study addresses this limitation by examining the role of self-efficacy, social support, and problem solving in mediating the relationships between caregiver stressors and depressive symptoms. Using a sample of 91 family caregivers, we tested simultaneously multiple mediators between caregiver stressors and depression. Results indicate that self-efficacy mediated the pathway from daily hassles to depression. Findings point to the importance of improving self-efficacy in psychosocial interventions for caregivers of older adults with memory loss.
La, Moonwoo; Park, Sang Min; Kim, Dong Sung
2015-01-01
In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of a centrifugal circumferential filling flow. The designed LODs were fabricated from a polystyrene substrate through micromachining and thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performance, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays that require different volume ratios of sample and reagent by modifying the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemical markers. PMID:25610516
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
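The cross-correlation step can be sketched as below: speckle fluctuations at two separated detectors remain correlated for singly scattered light but decorrelate for multiply scattered light, so cross-correlating the two records suppresses the multiple-scattering component. This is a minimal illustration of a normalized cross-correlation estimator with hypothetical names, not the patented apparatus itself.

```python
def cross_correlation(a, b, max_lag):
    """Normalized time cross-correlation of two detector signals at integer lags."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    norm = (sum(x * x for x in da) * sum(x * x for x in db)) ** 0.5
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        # Sum over the overlapping portion of the two (shifted) records
        s = sum(da[i] * db[i + lag] for i in range(n) if 0 <= i + lag < n)
        out[lag] = s / norm if norm else 0.0
    return out
```

In the apparatus, one such cross-correlation per detector-separation angle yields one point on the composite single/multiple scattering curve.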
Xie, Wei-Qi; Chai, Xin-Sheng
2016-04-22
This paper describes a new method for the rapid determination of the moisture content in paper materials. The method is based on multiple headspace extraction gas chromatography (MHE-GC) at a temperature above the boiling point of water, from which the integrated water loss from the tested sample due to evaporation can be measured and the moisture content in the sample determined. The results show that the new method has good precision (relative standard deviation < 0.96%), high sensitivity (limit of quantitation = 0.005%) and good accuracy (relative differences < 1.4%). The method is therefore well suited to many research and industrial applications. Copyright © 2016 Elsevier B.V. All rights reserved.
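The extrapolation behind MHE in general can be sketched as follows: successive headspace extractions remove an approximately constant fraction of the volatile, so peak areas decay geometrically and the total (here, the integrated water loss) follows from the first area and the fitted decay ratio. A sketch under that standard MHE assumption, with illustrative names; the paper's exact calibration may differ.

```python
import math

def total_from_mhe(areas):
    """Extrapolate the total analyte signal from successive MHE-GC peak areas,
    assuming a geometric decay A_i = A_1 * q**(i - 1), so total = A_1 / (1 - q).
    q is estimated from the least-squares slope of ln(A_i) vs. extraction number."""
    n = len(areas)
    xs = list(range(n))
    ys = [math.log(a) for a in areas]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    q = math.exp(slope)
    return areas[0] / (1.0 - q)
```

The moisture content would then follow from this total via a calibration against sample mass.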
Hyperspectral microscopic imaging by multiplex coherent anti-Stokes Raman scattering (CARS)
NASA Astrophysics Data System (ADS)
Khmaladze, Alexander; Jasensky, Joshua; Zhang, Chi; Han, Xiaofeng; Ding, Jun; Seeley, Emily; Liu, Xinran; Smith, Gary D.; Chen, Zhan
2011-10-01
Coherent anti-Stokes Raman scattering (CARS) microscopy is a powerful technique for imaging the chemical composition of complex samples in biophysics, biology and materials science. CARS is a four-wave mixing process. The application of a spectrally narrow pump beam and a spectrally wide Stokes beam excites multiple Raman transitions, which are probed by a probe beam. This generates a coherent, directional CARS signal several orders of magnitude more intense than spontaneous Raman scattering. Recent advances in ultrafast lasers, as well as photonic crystal fibers (PCF), enable multiplex CARS. In this study, we employed two scanning imaging methods. In one, detection is performed by a photomultiplier tube (PMT) attached to a spectrometer; acquiring a series of images while tuning the wavelength between images allows subsequent reconstruction of spectra at each image point. The second method detects the CARS spectrum at each point with a cooled charge-coupled device (CCD) camera; coupled with point-by-point scanning, it allows hyperspectral microscopic imaging. We applied this CARS imaging system to study biological samples such as oocytes.
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of proton distribution in a hardened concrete sample was obtained during the thawing process (from -50 °C up to 11 °C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple point acquisition method is presented and the improvement in signal sensitivity is discussed.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
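The core idea, tabulating a minimum-cost input for each attainable output and then looking up an optimal input from a desired output, can be sketched as follows. Binning stands in for the paper's population of agents and spline interpolation, and all names are illustrative; outputs are assumed to span a nonzero range.

```python
def build_inverse(f, cost, inputs, n_bins=20):
    """For a multiple-input, single-output system, keep the min-cost input per
    output bin, yielding a lookup from desired output to a locally optimal input."""
    pts = [(f(u), cost(u), u) for u in inputs]
    lo = min(p[0] for p in pts)
    hi = max(p[0] for p in pts)
    best = {}
    for y, c, u in pts:
        b = min(int((y - lo) / (hi - lo) * n_bins), n_bins - 1)  # output bin index
        if b not in best or c < best[b][0]:
            best[b] = (c, u)  # retain the cheapest input producing this output level
    return {b: u for b, (c, u) in sorted(best.items())}
```

An operator could then adjust a single set point (the desired output) while the lookup supplies near-optimal values for all inputs.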
Minimal Clinically Important Difference of Berg Balance Scale in People With Multiple Sclerosis.
Gervasoni, Elisa; Jonsdottir, Johanna; Montesano, Angelo; Cattaneo, Davide
2017-02-01
To identify the minimal clinically important difference (MCID) that defines clinically meaningful improvement on the Berg Balance Scale (BBS) in people with multiple sclerosis (PwMS) in response to rehabilitation. Cohort study. Neurorehabilitation institute. PwMS (N=110). This study comprised inpatients and outpatients who participated in research on balance and gait rehabilitation. All received 20 rehabilitation sessions of different intensities: inpatients received daily treatments over a period of 4 weeks, while outpatients received 2 to 3 treatments per week for 10 weeks. An anchor-based approach using clinical global impression of improvement in balance (Activities-specific Balance Confidence [ABC] Scale) was used to determine the MCID of the BBS. The MCID was defined as the minimum change in the BBS total score (postintervention - preintervention) needed to perceive at least a 10% improvement on the ABC Scale. Receiver operating characteristic curves were used to define the cutoff for the optimal MCID of the BBS discriminating between improved and non-improved subjects. The MCID for change on the BBS was 3 points for the whole sample, 3 points for inpatients, and 2 points for outpatients. The area under the curve was .65 for the whole sample, .64 for inpatients, and .68 for outpatients. The MCID for improvement in balance as measured by the BBS was 3 points, meaning that PwMS are likely to perceive that as a reproducible and clinically important change in their balance performance. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Sampling scales define occupancy and underlying occupancy-abundance relationships in animals.
Steenweg, Robin; Hebblewhite, Mark; Whittington, Jesse; Lukacs, Paul; McKelvey, Kevin
2018-01-01
Occupancy-abundance (OA) relationships are a foundational ecological phenomenon and field of study, and occupancy models are increasingly used to track population trends and understand ecological interactions. However, these two fields of ecological inquiry remain largely isolated, despite growing appreciation of the importance of integration. For example, using occupancy models to infer trends in abundance is predicated on positive OA relationships. Many occupancy studies collect data that violate geographical closure assumptions due to the choice of sampling scales and application to mobile organisms, which may change how occupancy and abundance are related. Little research, however, has explored how different occupancy sampling designs affect OA relationships. We develop a conceptual framework for understanding how sampling scales affect the definition of occupancy for mobile organisms, which drives OA relationships. We explore how spatial and temporal sampling scales, and the choice of sampling unit (areal vs. point sampling), affect OA relationships. We develop predictions using simulations, and test them using empirical occupancy data from remote cameras on 11 medium-large mammals. Surprisingly, our simulations demonstrate that when using point sampling, OA relationships are unaffected by spatial sampling grain (i.e., cell size). In contrast, when using areal sampling (e.g., species atlas data), OA relationships are affected by spatial grain. Furthermore, OA relationships are also affected by temporal sampling scales, where the curvature of the OA relationship increases with temporal sampling duration. Our empirical results support these predictions, showing that at any given abundance, the spatial grain of point sampling does not affect occupancy estimates, but longer surveys do increase occupancy estimates. For rare species (low occupancy), estimates of occupancy will quickly increase with longer surveys, even while abundance remains constant. 
Our results also clearly demonstrate that occupancy for mobile species without geographical closure is not true occupancy. The independence of occupancy estimates from spatial sampling grain depends on the sampling unit. Point-sampling surveys can, however, provide unbiased estimates of occupancy for multiple species simultaneously, irrespective of home-range size. The use of occupancy for trend monitoring needs to explicitly articulate how the chosen sampling scales define occupancy and affect the occupancy-abundance relationship. © 2017 by the Ecological Society of America.
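The saturating occupancy-abundance curve referred to above can be illustrated with the simplest Poisson placement model (an assumption for illustration, not the paper's simulation framework):

```python
import math

def expected_occupancy(lam):
    """Probability that a sampled cell holds at least one individual when
    counts per cell are Poisson with mean lam: psi = 1 - exp(-lam)."""
    return 1.0 - math.exp(-lam)

# Occupancy rises steeply at low abundance and saturates at high abundance.
# Lengthening the survey of a mobile species acts like inflating lam, so
# estimated occupancy climbs even when abundance is constant.
low = expected_occupancy(0.1)   # rare species
high = expected_occupancy(2.0)  # common species
```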
Empirical analyses of plant-climate relationships for the western United States
Gerald E. Rehfeldt; Nicholas L. Crookston; Marcus V. Warwell; Jeffrey S. Evans
2006-01-01
The Random Forests multiple-regression tree was used to model climate profiles of 25 biotic communities of the western United States and nine of their constituent species. Analyses of the communities were based on a gridded sample of ca. 140,000 points, while those for the species used presence-absence data from ca. 120,000 locations. Independent variables included 35...
ERIC Educational Resources Information Center
Bridgeman, Brent; Pollack, Judith; Burton, Nancy
2008-01-01
Two methods of showing the ability of high school grades (high school grade point averages) and SAT scores to predict cumulative grades in different types of college courses were evaluated in a sample of 26 colleges. Each college contributed data from three cohorts of entering freshmen, and each cohort was followed for at least four years.…
Scanning sequences after Gibbs sampling to find multiple occurrences of functional elements
Tharakaraman, Kannan; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L; Landsman, David; Spouge, John L
2006-01-01
Background Many DNA regulatory elements occur as multiple instances within a target promoter. Gibbs sampling programs for finding DNA regulatory elements de novo can be prohibitively slow in locating all instances of such an element in a sequence set. Results We describe an improvement to the A-GLAM computer program, which predicts regulatory elements within DNA sequences with Gibbs sampling. The improvement adds an optional "scanning step" after Gibbs sampling. Gibbs sampling produces a position specific scoring matrix (PSSM). The new scanning step resembles an iterative PSI-BLAST search based on the PSSM. First, it assigns an "individual score" to each subsequence of appropriate length within the input sequences using the initial PSSM. Second, it computes an E-value from each individual score, to assess the agreement between the corresponding subsequence and the PSSM. Third, it permits subsequences with E-values falling below a threshold to contribute to the underlying PSSM, which is then updated using the Bayesian calculus. A-GLAM iterates its scanning step to convergence, at which point no new subsequences contribute to the PSSM. After convergence, A-GLAM reports predicted regulatory elements within each sequence in order of increasing E-values, so users have a statistical evaluation of the predicted elements in a convenient presentation. Thus, although the Gibbs sampling step in A-GLAM finds at most one regulatory element per input sequence, the scanning step can now rapidly locate further instances of the element in each sequence. Conclusion Datasets from experiments determining the binding sites of transcription factors were used to evaluate the improvement to A-GLAM. Typically, the datasets included several sequences containing multiple instances of a regulatory motif. The improvements to A-GLAM permitted it to predict the multiple instances. PMID:16961919
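A toy sketch of the PSSM scanning idea described above (the matrix and threshold are invented for illustration; A-GLAM's actual scoring adds an E-value calculus not reproduced here):

```python
def pssm_scan(pssm, seq, threshold):
    """Slide a log-odds PSSM along the sequence and report every
    subsequence scoring at or above the threshold."""
    width = len(pssm)
    hits = []
    for i in range(len(seq) - width + 1):
        score = sum(pssm[j][seq[i + j]] for j in range(width))
        if score >= threshold:
            hits.append((i, score))
    return hits

# Invented 2-column log-odds PSSM favoring the motif "TA" (illustration only)
pssm = [
    {"A": -1.0, "C": -1.0, "G": -1.0, "T": 2.0},
    {"A": 2.0, "C": -1.0, "G": -1.0, "T": -1.0},
]
hits = pssm_scan(pssm, "GGTACCTTAG", threshold=3.0)
```

Both occurrences of "TA" are reported, which is the point of the scanning step: more than one instance of the element per input sequence.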
Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.
Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond
2018-04-01
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
McCobb, Timothy D.; LeBlanc, Denis R.
2011-01-01
The U.S. Geological Survey (USGS) collected water-quality data between 2001 and 2010 in the Fishermans Cove area of Ashumet Pond, Falmouth, Massachusetts, where the eastern portion of a treated-wastewater plume, created by more than 60 years of overland disposal, discharges to the pond. Temporary drive points were installed, and shallow pond-bottom groundwater was sampled, at 167 locations in 2001, 150 locations in 2003, and 120 locations in 2004 to delineate the distribution of wastewater-related constituents. In 2004, the Air Force Center for Engineering and the Environment (AFCEE) installed a pond-bottom permeable reactive barrier (PRB) to intercept phosphate in the plume at its discharge point to the pond. The USGS monitored the performance of the PRB by collecting samples from temporary drive points at multiple depth intervals in 2006 (200 samples at 76 locations) and 2009 (150 samples at 90 locations). During the first 5 years after installation of the PRB, water samples were collected periodically from five types of pore-water samplers that had been permanently installed in and near the PRB during the barrier's emplacement. The distribution of wastewater-related constituents in the pond-bottom groundwater and changes in the geochemistry of the pond-bottom groundwater after installation of the PRB have been documented in several published reports that are listed in the references.
P-value interpretation and alpha allocation in clinical trials.
Moyé, L A
1998-08-01
Although much value has been placed on type I error event probabilities in clinical trials, interpretive difficulties often arise that are directly related to clinical trial complexity. Deviations of the trial execution from its protocol, the presence of multiple treatment arms, and the inclusion of multiple end points complicate the interpretation of an experiment's reported alpha level. The purpose of this manuscript is to formulate the discussion of P values (and power for studies showing no significant differences) on the basis of the event whose relative frequency they represent. Experimental discordance (discrepancies between the protocol's directives and the experiment's execution) is linked to difficulty in alpha and beta interpretation. Mild experimental discordance leads to an acceptable adjustment for alpha or beta, while severe discordance results in their corruption. Finally, guidelines are provided for allocating type I error among a collection of end points in a prospectively designed, randomized controlled clinical trial. When considering secondary end point inclusion in clinical trials, investigators should increase the sample size to preserve the type I error rates at acceptable levels.
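A minimal sketch of prospective alpha allocation across end points (a simple weighted Bonferroni-style split; the end-point names and weights are hypothetical):

```python
def allocate_alpha(total_alpha, weights):
    """Split the family-wise type I error among end points in proportion
    to prospectively chosen weights, so the allocations sum to total_alpha."""
    total_w = sum(weights.values())
    return {ep: total_alpha * w / total_w for ep, w in weights.items()}

# Hypothetical trial with one primary and two secondary end points
alpha = allocate_alpha(0.05, {"primary": 8, "secondary_1": 1, "secondary_2": 1})
```

Reserving most of the alpha for the primary end point preserves its power, while the secondary end points still receive prospectively declared, interpretable significance levels.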
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed for automatically or semiautomatically filling gaps and reconstructing broken contour lines in binary images. The key part, the auto-matching and reconnecting of end points, is discussed in depth after introducing the reconstruction procedure, in which several key algorithms and mechanisms are presented and realized, including multiple incremental backing trace to obtain the weighted average direction angle of end points, a maximum-constraint-angle control mechanism based on multiple gradient ranks, combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, bidirectional parabola control, etc. Lastly, experimental comparisons on typical samples are conducted between the proposed method and a representative alternative method; the results indicate that the former achieves higher accuracy and completeness, and better stability and applicability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornwell, Paris A; Bunn, Jeffrey R; Schmidlin, Joshua E
The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the neutron time used for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction in gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days for relating data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment in the same manner it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface (the 'point cloud'), specific features, and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique, because the SCS can be defined in such a way as to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to an 80/20 frame, and fiducial points are attached to the sample or frame and then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as the mounting hardware. GeoMagic Studio 12 is the software package used to construct the model from the point cloud that the scan arm creates.
Once a model, fiducial, and measurement files are created, a special program called SScanSS combines the information and, by simulating the sample on the diffractometer, can help plan the experiment before using neutron time. Finally, the sample is mounted on the relevant stress measurement instrument and the fiducial points are measured again. In the HFIR beam room, a laser tracker is used in conjunction with a program called CAM2 to measure the fiducial points in the NRSF2 instrument's sample positioner coordinate system. SScanSS is then used again to perform a coordinate system transformation of the measurement file locations to the sample positioner coordinate system. A procedure file is then written with the coordinates in the sample positioner coordinate system for the desired measurement locations. This file is often called a script or command file and can be further modified using Excel. It is important to note that this process is not linear but rather often iterative; many of the steps in this guide are interdependent. It is also important to consider the process as it pertains to the specific sample being measured: what works for one sample may not necessarily work for another. This guide attempts to provide a typical workflow that has been successful in most cases.
Tang, Fengyan; Jang, Heejung; Lingler, Jennifer; Tamres, Lisa K.; Erlen, Judith A.
2016-01-01
Caring for an older adult with memory loss is stressful. Caregiver stress could produce negative outcomes such as depression. Previous research is limited in examining multiple intermediate pathways from caregiver stress to depressive symptoms. This study addresses this limitation by examining the role of self-efficacy, social support, and problem-solving in mediating the relationships between caregiver stressors and depressive symptoms. Using a sample of 91 family caregivers, we simultaneously tested multiple mediators between caregiver stressors and depression. Results indicate that self-efficacy mediated the pathway from daily hassles to depression. Findings point to the importance of improving self-efficacy in psychosocial interventions for caregivers of older adults with memory loss. PMID:26317766
Critical fluid light scattering
NASA Technical Reports Server (NTRS)
Gammon, Robert W.
1988-01-01
The objective is to measure the decay rates of critical density fluctuations in a simple fluid (xenon) very near its liquid-vapor critical point using laser light scattering and photon correlation spectroscopy. Such experiments were severely limited on Earth by the presence of gravity, which causes large density gradients in the sample when the compressibility diverges approaching the critical point. The goal is to measure fluctuation decay rates at least two decades closer to the critical point than is possible on Earth, with a resolution of 3 microK. This will require loading the sample to 0.1 percent of the critical density and taking data as close as 100 microK to the critical temperature. The minimum mission time of 100 hours will allow a complete range of temperature points to be covered, limited by the thermal response of the sample. Other technical problems must also be addressed, such as multiple scattering and the effect of wetting layers. The experiment entails measurement of the scattering intensity fluctuation decay rate at two angles for each temperature, while simultaneously recording the scattering intensities and sample turbidity (from the transmission). The analyzed intensity and turbidity data give the correlation length at each temperature and locate the critical temperature. The fluctuation decay rate data from these measurements will provide a severe test of the generalized hydrodynamic theories of transport coefficients in the critical region. When compared to equivalent data from binary liquid critical mixtures, they will test the universality of critical dynamics.
Subpixel resolution from multiple images
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Kanefsky, Rob; Stutz, John; Kraft, Richard
1994-01-01
Multiple images taken from similar locations and under similar lighting conditions contain similar, but not identical, information. Slight differences in instrument orientation and position produce mismatches between the projected pixel grids. These mismatches ensure that any point on the ground is sampled differently in each image. If all the images can be registered with respect to each other to a small fraction of a pixel accuracy, then the information from the multiple images can be combined to increase linear resolution by roughly the square root of the number of images. In addition, the gray-scale resolution of the composite image is also improved. We describe methods for multiple image registration and combination, and discuss some of the problems encountered in developing and extending them. We display test results with 8:1 resolution enhancement, and Viking Orbiter imagery with 2:1 and 4:1 enhancements.
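The gray-scale improvement from combining registered images can be sketched in one dimension (the synthetic scene, noise level, and image count are all assumptions; registration is taken as perfect here):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 64)        # hypothetical 1-D "scene"
images = [truth + rng.normal(0.0, 0.1, truth.size) for _ in range(16)]

# Combine perfectly registered images by averaging
composite = np.mean(images, axis=0)

# Noise shrinks roughly as 1/sqrt(N), improving gray-scale resolution
single_rmse = np.sqrt(np.mean((images[0] - truth) ** 2))
combo_rmse = np.sqrt(np.mean((composite - truth) ** 2))
```

With 16 images the composite's error is roughly a quarter of a single image's; the linear-resolution gain described in the abstract additionally requires the sub-pixel grid mismatches, which this sketch omits.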
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency, and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method (SOPA) proposed in this study had the minimal absolute error of 0.2138, while the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
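The stratified-allocation step can be sketched with simple proportional allocation, a simplified stand-in for the study's Hammond-McCullagh optimal-allocation calculation (the stratum areas and total sample size are hypothetical):

```python
def stratum_sample_sizes(strata_areas, total_n):
    """Allocate a fixed number of sampling points across plant-abundance
    strata in proportion to stratum area (proportional allocation)."""
    total = sum(strata_areas)
    return [round(total_n * a / total) for a in strata_areas]

# Five hypothetical abundance strata in a 50 m x 50 m plot (areas in m^2)
sizes = stratum_sample_sizes([500, 400, 300, 200, 100], total_n=60)
```

Optimal allocation would further weight each stratum by its within-stratum variability, concentrating points where snail density is most uncertain.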
Probabilistic 3D data fusion for multiresolution surface generation
NASA Technical Reports Server (NTRS)
Manduchi, R.; Johnson, A. E.
2002-01-01
In this paper we present an algorithm for adaptive-resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space with samples of large covariance are stored at a coarser level than regions of space containing samples with smaller covariance. The algorithm outputs an adaptive-resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the algorithm. We present results from 3D data generated by scanning lidar and structure from motion.
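The probabilistic fusion rule can be sketched for a single volume of space by inverse-variance weighting of Gaussian measurements (the sensor values are illustrative; the paper's octree bookkeeping and ridge extraction are omitted):

```python
def fuse_gaussian(measurements):
    """Fuse independent Gaussian estimates (mean, variance) of the surface
    location in one volume of space by inverse-variance weighting: samples
    with smaller covariance dominate, as at the octree's finer levels."""
    weights = [1.0 / var for _, var in measurements]
    mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    return mean, 1.0 / sum(weights)

# Two hypothetical sensors observing the same surface point
mean, var = fuse_gaussian([(1.0, 0.04), (1.2, 0.01)])
```

The fused estimate sits nearer the more certain sensor, and its variance is smaller than either input's, which is what lets fused regions be stored at a finer octree level.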
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. 
We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square-corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. [Figure: three sample photographs of the square with the created planform and control points.]
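A minimal sketch of a two-term Brown-model radial correction of the kind mentioned above, applied as a single polynomial pass (the coefficients and normalized coordinates are illustrative, not this study's calibrated values):

```python
def undistort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Single-pass two-term Brown-model correction:
    corrected = observed * (1 + k1*r^2 + k2*r^4), with coordinates
    taken about the distortion center (cx, cy)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

# Illustrative coefficients in normalized image coordinates (assumptions)
xu, yu = undistort(0.5, 0.0, k1=0.1, k2=0.01)
```

Points far from the distortion center are moved more than near ones, straightening the curved image of the square frame before the perspective correction step.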
Evidence-based point-of-care tests and device designs for disaster preparedness.
Brock, T Keith; Mecozzi, Daniel M; Sumner, Stephanie; Kost, Gerald J
2010-01-01
To define pathogen tests and device specifications needed for emerging point-of-care (POC) technologies used in disasters. Surveys included multiple-choice and ranking questions. Multiple-choice questions were analyzed with the chi-square test for goodness-of-fit and the binomial distribution test. Rankings were scored and compared using analysis of variance and Tukey's multiple comparison test. Disaster care experts on the editorial boards of the American Journal of Disaster Medicine and Disaster Medicine and Public Health Preparedness, and the readers of the POC Journal. Vibrio cholerae and Staphylococcus aureus were top-ranked pathogens for testing in disaster settings. Respondents felt that disaster response teams should be equipped with pandemic infectious disease tests for novel 2009 H1N1 and avian H5N1 influenza (disaster care, p < 0.05; POC, p < 0.01). In disaster settings, respondents preferred self-contained test cassettes (disaster care, p < 0.05; POC, p < 0.001) for direct blood sampling (POC, p < 0.01) and disposal of biological waste (disaster care, p < 0.05; POC, p < 0.001). Multiplex testing performed at the POC was preferred in urgent care and emergency room settings. Evidence-based needs assessment identifies pathogen detection priorities in disaster care scenarios, in which Vibrio cholerae, methicillin-sensitive and methicillin-resistant Staphylococcus aureus, and Escherichia coli ranked the highest. POC testing should incorporate setting-specific design criteria such as safe disposable cassettes and direct blood sampling at the site of care.
ERIC Educational Resources Information Center
Tollison, Sean J.; Mastroleo, Nadine R.; Mallett, Kimberly A.; Witkiewitz, Katie; Lee, Christine M.; Ray, Anne E.; Larimer, Mary E.
2013-01-01
The purpose of this study was to replicate and extend previous findings (Tollison et al., 2008) on the association between peer facilitator adherence to motivational interviewing (MI) microskills and college student drinking behavior. This study used a larger sample size, multiple follow-up time-points, and latent variable analyses allowing for…
Charles, Isabel; Sinclair, Ian; Addison, Daniel H
2014-04-01
A new approach to the storage, processing, and interrogation of the quality data for screening samples has improved analytical throughput and confidence and enhanced the opportunities for learning from the accumulating records. The approach has entailed the design, development, and implementation of a database-oriented system, capturing information from the liquid chromatography-mass spectrometry capabilities used for assessing the integrity of samples in AstraZeneca's screening collection. A Web application has been developed to enable the visualization and interactive annotation of the analytical data, monitor the current sample queue, and report the throughput rate. Sample purity and identity are certified automatically on the chromatographic peaks of interest if predetermined thresholds are reached on key parameters. Using information extracted in parallel from the compound registration and container inventory databases, the chromatographic and spectroscopic profiles for each vessel are linked to the sample structures and storage histories. A search engine facilitates the direct comparison of results for multiple vessels of the same or similar compounds, for single vessels analyzed at different time points, or for vessels related by their origin or process flow. Access to this network of information has provided a deeper understanding of the multiple factors contributing to sample quality assurance.
Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples
NASA Astrophysics Data System (ADS)
Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.
2014-12-01
Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width divided by mean velocity, and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
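The link between the 0.003 Hz slope break and the 5.5-minute period can be illustrated with a synthetic record and a periodogram (the sampling rate, noise level, and record length are assumptions, not the LISST data):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1.0                              # 1 Hz sampling (assumption)
t = np.arange(3600) / fs              # one hour of record
# Synthetic concentration series: a slow 0.003 Hz component plus noise
series = np.sin(2 * np.pi * 0.003 * t) + 0.3 * rng.normal(size=t.size)

# One-sided periodogram of the mean-removed series
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak_freq = freqs[np.argmax(power)]
period_minutes = 1.0 / peak_freq / 60.0   # close to 5.5 minutes
```

A spectral feature at frequency f corresponds to a period 1/f, so 0.003 Hz maps to about 333 s, i.e. roughly 5.5 minutes.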
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The accuracy loss in lineal extent estimates with increasing sampling interval varied across impact types, while the response of frequency-of-occurrence estimates was consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sampling intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
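The resampling-simulation idea can be sketched by systematically thinning a synthetic census (the trail layout and impact locations are invented for illustration):

```python
def lineal_extent_estimate(census, interval):
    """Estimate the proportion of trail length affected by an impact from
    systematic point sampling of a metre-by-metre census."""
    points = census[::interval]
    return sum(points) / len(points)

# Hypothetical 1 km trail with impact present on metres 100-149 and 600-699
census = [100 <= m < 150 or 600 <= m < 700 for m in range(1000)]
true_extent = sum(census) / len(census)                    # 0.15
estimates = {d: lineal_extent_estimate(census, d) for d in (1, 10, 100)}
```

The lineal-extent estimate stays close to the census value even at coarse intervals, whereas counting distinct occurrences from sparse points would miss short impact sections entirely, echoing the paper's contrast between the two measures.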
Geometric correction and digital elevation extraction using multiple MTI datasets
Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.
2007-01-01
Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collection is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality of MTI makes it a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.
NASA Astrophysics Data System (ADS)
Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.
2004-06-01
The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty from FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be considered in the model, concluding that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and double-point calibration yielded results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
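To make the calibration comparison concrete, here is a minimal sketch assuming a synthetic linear ICP-MS response (the slope, baseline, and heteroscedastic noise values are invented): it fits a multiple-point weighted linear regression and a double-point (baseline plus one standard) calibration, then quantifies the same unknown with both.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic peak areas: baseline 40 counts, slope 250 counts per µg/L,
# with concentration-proportional (heteroscedastic) noise.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])          # standards, µg/L
area = 40.0 + 250.0 * conc + rng.normal(0, 1 + 2 * conc)    # noisy peak areas

# Multiple-point weighted linear regression: weight each standard by the
# inverse of its assumed signal variance.
w = 1.0 / (1 + 2 * conc) ** 2
W = np.diag(w)
X = np.column_stack([np.ones_like(conc), conc])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ area)          # [intercept, slope]

# Double-point calibration: baseline (blank) plus one standard.
blank, std_conc, std_area = area[0], conc[4], area[4]
slope_dp = (std_area - blank) / std_conc

# Quantify an unknown sample (true concentration 8 µg/L) with both curves.
unknown_area = 40.0 + 250.0 * 8.0
c_multi = (unknown_area - beta[0]) / beta[1]
c_double = (unknown_area - blank) / slope_dp
```

With a well-behaved linear response, both calibrations recover the unknown to within a few percent, which is the abstract's point: the double-point curve trades two measurements for many without losing quality.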
Use of tracers and isotopes to evaluate vulnerability of water in domestic wells to septic waste
Verstraeten, Ingrid M.; Fetterman, G.S.; Meyer, M.J.; Bullen, T.; Sebree, S.K.
2005-01-01
In Nebraska, a large number (>200) of shallow sand-point and cased wells completed in coarse alluvial sediments along rivers and lakes still are used to obtain drinking water for human consumption, even though construction of sand-point wells for consumptive uses has been banned since 1987. The quality of water from shallow domestic wells potentially vulnerable to seepage from septic systems was evaluated by analyzing for the presence of tracers and multiple isotopes. Samples were collected from 26 sand-point and perforated, cased domestic wells and were analyzed for bacteria, coliphages, nitrogen species, nitrogen and boron isotopes, dissolved organic carbon (DOC), prescription and nonprescription drugs, or organic waste water contaminants. At least 13 of the 26 domestic well samples showed some evidence of septic system effects based on the results of several tracers including DOC, coliphages, NH4+, NO3-, N2, ?? 15N[NO3-] and boron isotopes, and antibiotics and other drugs. Sand-point wells within 30 m of a septic system and <14 m deep in a shallow, thin aquifer had the most tracers detected and the highest values, indicating the greatest vulnerability to contamination from septic waste. Copyright ?? 2005 National Ground Water Association.
Nelson, Andrew; Chomitz, Kenneth M.
2011-01-01
Protected areas (PAs) cover a quarter of the tropical forest estate. Yet there is debate over the effectiveness of PAs in reducing deforestation, especially when local people have rights to use the forest. A key analytic problem is the likely placement of PAs on marginal lands with low pressure for deforestation, biasing comparisons between protected and unprotected areas. Using matching techniques to control for this bias, this paper analyzes the global tropical forest biome using forest fires as a high resolution proxy for deforestation; disaggregates impacts by remoteness, a proxy for deforestation pressure; and compares strictly protected vs. multiple use PAs vs indigenous areas. Fire activity was overlaid on a 1 km map of tropical forest extent in 2000; land use change was inferred for any point experiencing one or more fires. Sampled points in pre-2000 PAs were matched with randomly selected never-protected points in the same country. Matching criteria included distance to road network, distance to major cities, elevation and slope, and rainfall. In Latin America and Asia, strict PAs substantially reduced fire incidence, but multi-use PAs were even more effective. In Latin America, where there is data on indigenous areas, these areas reduce forest fire incidence by 16 percentage points, over two and a half times as much as naïve (unmatched) comparison with unprotected areas would suggest. In Africa, more recently established strict PAs appear to be effective, but multi-use tropical forest protected areas yield few sample points, and their impacts are not robustly estimated. These results suggest that forest protection can contribute both to biodiversity conservation and CO2 mitigation goals, with particular relevance to the REDD agenda. Encouragingly, indigenous areas and multi-use protected areas can help to accomplish these goals, suggesting some compatibility between global environmental goals and support for local livelihoods. PMID:21857950
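The matching logic can be illustrated with a deliberately simplified one-dimensional sketch (hypothetical numbers; the study matched on several covariates, not one): protected points are sited on remote land where fire pressure is already low, so a naive comparison overstates the protection effect, while nearest-neighbour matching on the covariate approximately recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(7)

# 'remote' stands in for a matching covariate such as distance to roads.
# Fire probability falls with remoteness; protection lowers it further
# by a true effect of -0.10 (all values invented for illustration).
TRUE_EFFECT = -0.10
n_ctrl, n_trt = 2000, 300
remote_c = rng.uniform(0.0, 1.0, n_ctrl)
remote_t = rng.uniform(0.5, 1.0, n_trt)   # PAs sited on remote land (placement bias)

def fire_rate(remote, protected):
    return 0.4 - 0.3 * remote + (TRUE_EFFECT if protected else 0.0)

fires_c = fire_rate(remote_c, False) + rng.normal(0, 0.01, n_ctrl)
fires_t = fire_rate(remote_t, True) + rng.normal(0, 0.01, n_trt)

# Naive comparison conflates siting with protection.
naive = fires_t.mean() - fires_c.mean()

# Nearest-neighbour matching on the covariate (with replacement).
idx = np.abs(remote_c[None, :] - remote_t[:, None]).argmin(axis=1)
matched = (fires_t - fires_c[idx]).mean()
```

Here the naive difference is roughly -0.18 (protection plus siting), while the matched estimate sits near the true -0.10, the same direction of bias correction the paper reports for indigenous areas.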
Low-Rank Discriminant Embedding for Multiview Learning.
Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke
2017-11-01
This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood, or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both sides. By further considering the duality between data points and features of multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both sample level and feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in recent literatures.
Imaging synthetic aperture radar
Burns, Bryan L.; Cordaro, J. Thomas
1997-01-01
A linear-FM SAR imaging radar method and apparatus to produce a real-time image by first arranging the returned signals into a plurality of subaperture arrays, the columns of each subaperture array having samples of dechirped baseband pulses, and further including a processing of each subaperture array to obtain coarse-resolution in azimuth, then fine-resolution in range, and lastly, to combine the processed subapertures to obtain the final fine-resolution in azimuth. Greater efficiency is achieved because both the transmitted signal and a local oscillator signal mixed with the returned signal can be varied on a pulse-to-pulse basis as a function of radar motion. Moreover, a novel circuit can adjust the sampling location and the A/D sample rate of the combined dechirped baseband signal, which greatly reduces processing time and hardware. The processing steps include implementing a window function, stabilizing either a central reference point and/or all other points of a subaperture with respect to Doppler frequency and/or range as a function of radar motion, and sorting and compressing the signals using standard Fourier transforms. The stabilization of each processing part is accomplished with vector multiplication using waveforms generated as a function of radar motion, wherein these waveforms may be synthesized in integrated circuits. Stabilization of range migration as a function of Doppler frequency by simple vector multiplication is a particularly useful feature of the invention, as is stabilization of azimuth migration by correcting for spatially varying phase errors prior to the application of an autofocus process.
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method, which enhances the precision and generalizability of the RLG bias compensation model. PMID:26633401
Method for determining surface coverage by materials exhibiting different fluorescent properties
NASA Technical Reports Server (NTRS)
Chappelle, Emmett W. (Inventor); Daughtry, Craig S. T. (Inventor); Mcmurtrey, James E., III (Inventor)
1995-01-01
An improved method for detecting, measuring, and distinguishing crop residue, live vegetation, and mineral soil is presented. By measuring fluorescence in multiple bands, live and dead vegetation are distinguished. The surface of the ground is illuminated with ultraviolet radiation, inducing fluorescence in certain molecules. The emitted fluorescent emission induced by the ultraviolet radiation is measured by means of a fluorescence detector, consisting of a photodetector or video camera and filters. The spectral content of the emitted fluorescent emission is characterized at each point sampled, and the proportion of the sampled area covered by residue or vegetation is calculated.
Project VALOR: Trajectories of Change in PTSD in Combat-Exposed Veterans
2014-10-01
Performing organization: Boston VA Research Institute Inc., 150 South Huntington Ave, Boston, MA 02130. ...comprehensive data on PTSD symptoms and related exposures and outcomes at multiple time points in a cohort of VA users with and without PTSD provide... proportion of women in our sample will allow us to examine variation in the associations by gender. Subject terms: risk factors for PTSD, PTSD symptom
Complement pathway biomarkers and age-related macular degeneration
Gemenetzi, M; Lotery, A J
2016-01-01
In the age-related macular degeneration (AMD) ‘inflammation model', local inflammation plus complement activation contributes to the pathogenesis and progression of the disease. Multiple genetic associations have now been established that correlate with the risk of development or progression of AMD. Stratifying patients by their AMD genetic profile may facilitate future AMD therapeutic trials, resulting in meaningful clinical trial end points with smaller sample sizes and shorter study durations. PMID:26493033
Study of coherent reflectometer for imaging internal structures of highly scattering media
NASA Astrophysics Data System (ADS)
Poupardin, Mathieu; Dolfi, Agnes
1996-01-01
Optical reflectometers are potentially useful tools for imaging internal structures of turbid media, particularly biological media. To get a point-by-point image, an active imaging system has to distinguish light scattered from a sample volume from light scattered at other locations in the medium. With coherence-based reflectometers, this discrimination can be realized in two ways: geometric selection or temporal selection. In this paper we present both methods, showing in each case the influence of the different parameters on the size of the sample volume under the assumption of single scattering. We also study the influence on the detection efficiency of the coherence loss of the incident light resulting from multiple scattering. We adapt a model, first developed for atmospheric lidar in a turbulent atmosphere, to obtain an analytical expression for this detection efficiency as a function of the optical coefficients of the medium.
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
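For example, the two classical corrections below, Bonferroni and Holm's step-down procedure, both control the family-wise error rate across m tests; note how Holm is uniformly more powerful on the same p-values.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i if p_i < alpha / m."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm step-down: sort the p-values, compare the k-th smallest to
    alpha / (m - k), and stop at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] < alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject

pvals = [0.04, 0.001, 0.02, 0.01]
print(bonferroni(pvals))  # -> [False, True, False, True]
print(holm(pvals))        # -> [True, True, True, True]
```

Bonferroni compares every p-value to 0.05/4 = 0.0125 and rejects only two hypotheses, while Holm's stepwise thresholds (0.0125, 0.0167, 0.025, 0.05) reject all four, at the same family-wise error rate.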
Cleveland, Danielle; Brumbaugh, William G.; MacDonald, Donald D.
2017-01-01
Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater.
Least Squares Solution of Small Sample Multiple-Master PSInSAR System
NASA Astrophysics Data System (ADS)
Zhang, Lei; Ding, Xiao Li; Lu, Zhong
2010-03-01
In this paper we propose a least squares based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on the arcs constructed by two nearby points at which there are no phase ambiguities. During the estimation an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of correlated observations in the model. The parameters at the points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified by a set of simulated data.
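The fit-flag-refit idea behind the outlier detector can be sketched generically (this is iteratively trimmed least squares on synthetic data, not the paper's adjustment model): observations with gross residuals, standing in for arcs with phase ambiguities, are identified with a robust scale estimate and removed before refitting.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arcs, n_params = 40, 3
A = rng.normal(size=(n_arcs, n_params))        # design matrix (one row per arc)
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + rng.normal(0, 0.01, n_arcs)
y[[3, 11, 27]] += 5.0                          # arcs contaminated by gross errors

keep = np.ones(n_arcs, dtype=bool)
for _ in range(3):                             # iterate: fit, flag, refit
    x_hat, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
    r = y - A @ x_hat
    # Robust scale from the median absolute deviation of the kept residuals.
    scale = 1.4826 * np.median(np.abs(r[keep] - np.median(r[keep])))
    keep = np.abs(r) < 3 * scale               # drop arcs with gross residuals
```

After the contaminated arcs are flagged and excluded, the refit recovers the true parameters to within the noise level.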
NASA Astrophysics Data System (ADS)
Loveday, S.; Harris, D. B.; Schiappa, T.; Pecha, M.
2017-12-01
The specific sources of sediments deposited in the Appalachian basin prior to and immediately following the Alleghenian orogeny have long been a topic of debate. Recent advances in U-Pb dating of detrital zircons have greatly helped to determine some of the sources of these sediments. For this study, sandstone samples were collected from the Pottsville Formation in the northern Appalachian Foreland Basin, Venango County, Pennsylvania, to provide supplementary data for previous work that sought to describe the provenance of the same sediments by point counts of thin sections of the same units. That previous work established that the provenance for these units was transitional recycled orogenic, including multiple recycled sediments, and that a cratonic contribution could not be determined clearly. The previous results suggested that the paleoenvironment was a fluvial-dominated delta prograding to the north; however, that work included no geochronologic data to confirm this interpretation. We sought to verify these results by U-Pb analysis of detrital zircons. Samples were collected from the areas where the previous research took place, and U-Pb ages were obtained from the samples at the highest and lowest elevations. The first sample, 17SL01 (stratigraphically younger), yields U-Pb age-range peaks at 442-468 Ma and 1037-1081 Ma; its probability density plot displays a complete age gap from 500 Ma to 811 Ma. The second sample, 17SL03 (stratigraphically older), yields U-Pb age-range peaks at 424-616 Ma and 975-1057 Ma; it shows no ages younger than 424 Ma and does not display the same age gap as sample 17SL01. The ages of the zircons are consistent with the thin-section point-counting provenance results from the previous research, suggesting zircon transport from the north.
Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.
Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu
2016-11-01
In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2017-06-01
We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution for a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus their reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in a large, loosely coupled reactor core is sensed by multiple sensors measuring point fluxes at various locations inside the reactor core. The flux values are coupled to each other through the diffusion equation, and this coupling provides redundancy in the information. It is shown that multiple independent measurements of the localized flux can be fused together to greatly enhance the estimation accuracy. We also propose a sensor anomaly handling feature in the multisensor PF to maintain the estimation process even when a sensor is faulty or generates anomalous data.
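A minimal bootstrap (SIR) particle filter fusing two noisy sensors of the same scalar state shows the fusion idea in miniature (all dynamics and noise values are invented; this is not the reactor's point kinetic model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy truth: a slowly drifting scalar "flux" observed by two sensors.
T, N = 60, 1000                      # time steps, particles
q, r1, r2 = 0.02, 0.2, 0.3           # process / sensor noise std devs
x = 1.0
xs, y1s, y2s = [], [], []
for _ in range(T):
    x += rng.normal(0, q)
    xs.append(x)
    y1s.append(x + rng.normal(0, r1))
    y2s.append(x + rng.normal(0, r2))

# Bootstrap (SIR) particle filter fusing both sensors each step.
particles = rng.normal(1.0, 0.5, N)
estimates = []
for y1, y2 in zip(y1s, y2s):
    particles += rng.normal(0, q, N)                   # propagate
    logw = -0.5 * ((y1 - particles) / r1) ** 2 \
           - 0.5 * ((y2 - particles) / r2) ** 2        # fuse both likelihoods
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * particles))            # weighted-mean estimate
    particles = rng.choice(particles, size=N, p=w)     # resample
```

Because the weight update simply multiplies the two sensor likelihoods, the fused estimate tracks the state more tightly than either sensor alone; dropping one likelihood term from `logw` is the natural place to handle a sensor flagged as anomalous.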
Improvement on Timing Accuracy of LIDAR for Remote Sensing
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.
2018-05-01
The traditional timing discrimination technique for laser rangefinding in remote sensing has lower measurement performance and a larger error, and has been unable to meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted with algorithms implemented in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and fitting of the timing points, and a timing accuracy of 4.63 ps is achieved.
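The forward shift of a fixed-threshold crossing under amplification can be reproduced with a simple Gaussian pulse model (an assumption for illustration; the paper's pulse shape and MATLAB fitting routines are not specified here). For s(t) = A·exp(-(t - t0)²/2σ²), the leading-edge crossing is t_c = t0 - σ·√(2 ln(A/V_th)), which is linear in u = √(2 ln(A/V_th)), so a straight-line fit to the sampled timing points recovers the true arrival time t0:

```python
import numpy as np

# Gaussian pulse with fixed leading-edge threshold V_th (values invented).
t0, sigma, v_th = 50.0, 4.0, 0.1
gains = np.array([2.0, 4.0, 8.0, 16.0, 32.0])       # successive amplifications

# Rising-edge crossing times: they move forward (earlier) as gain grows.
t_c = t0 - sigma * np.sqrt(2 * np.log(gains / v_th))

# In u = sqrt(2 ln(A / V_th)) the timing walk is exactly linear,
# so a first-order fit to the sampled timing points recovers t0 and sigma.
u = np.sqrt(2 * np.log(gains / v_th))
slope, intercept = np.polyfit(u, t_c, 1)            # intercept ~ t0, slope ~ -sigma
```

Here the fit removes the amplitude-dependent timing walk entirely; with real, noisy samples the fitted intercept is what compresses the residual timing error.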
Katchman, Benjamin A.; Smith, Joseph T.; Obahiagbon, Uwadiae; Kesiraju, Sailaja; Lee, Yong-Kyun; O’Brien, Barry; Kaftanoglu, Korhan; Blain Christen, Jennifer; Anderson, Karen S.
2016-01-01
Point-of-care molecular diagnostics can provide efficient and cost-effective medical care, and they have the potential to fundamentally change our approach to global health. However, most existing approaches are not scalable to include multiple biomarkers. As a solution, we have combined commercial flat panel OLED display technology with protein microarray technology to enable high-density fluorescent, programmable, multiplexed biorecognition in a compact and disposable configuration with clinical-level sensitivity. Our approach leverages advances in commercial display technology to reduce pre-functionalized biosensor substrate costs to pennies per cm2. Here, we demonstrate quantitative detection of IgG antibodies to multiple viral antigens in patient serum samples with detection limits for human IgG in the 10 pg/mL range. We also demonstrate multiplexed detection of antibodies to the HPV16 proteins E2, E6, and E7, which are circulating biomarkers for cervical as well as head and neck cancers. PMID:27374875
Ławryńczuk, Maciej
2017-03-01
This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Cleanthous, Sophie; Kinter, Elizabeth; Marquis, Patrick; Petrillo, Jennifer; You, Xiaojun; Wakeford, Craig; Sabatella, Guido
2017-01-01
Background: Study objectives were to evaluate the Multiple Sclerosis Impact Scale (MSIS-29) and explore an optimized scoring structure based on empirical post-hoc analyses of data from the Phase III ADVANCE clinical trial. Methods: ADVANCE MSIS-29 data from six time-points were analyzed in a sample of patients with relapsing–remitting multiple sclerosis (RRMS). Rasch Measurement Theory (RMT) analysis was undertaken to examine three broad areas: sample-to-scale targeting, measurement scale properties, and sample measurement validity. Interpretation of results led to an alternative MSIS-29 scoring structure, further evaluated alongside responsiveness of the original and revised scales at Week 48. Results: RMT analysis provided mixed evidence for Physical and Psychological Impact scales that were sub-optimally targeted at the lower functioning end of the scales. Their conceptual basis could also stand to improve based on item fit results. The revised MSIS-29 rescored scales improved but did not resolve the measurement scale properties and targeting of the MSIS-29. In two out of three revised scales, responsiveness analysis indicated strengthened ability to detect change. Conclusion: The revised MSIS-29 provides an initial evidence-based improved patient-reported outcome (PRO) instrument for evaluating the impact of MS. Revised scoring improves conceptual clarity and interpretation of scores by refining scale structure to include Symptoms, Psychological Impact, and General Limitations. Clinical trial: ADVANCE (ClinicalTrials.gov identifier NCT00906399). PMID:29104758
Griffith, J.A.; Stehman, S.V.; Sohl, Terry L.; Loveland, Thomas R.
2003-01-01
Temporal trends in landscape pattern metrics describing texture, patch shape and patch size were evaluated in the US Middle Atlantic Coastal Plain Ecoregion. The landscape pattern metrics were calculated for a sample of land use/cover data obtained for four points in time from 1973-1992. The multiple sampling dates permit evaluation of trend, whereas availability of only two sampling dates allows only evaluation of change. Observed statistically significant trends in the landscape pattern metrics demonstrated that the sampling-based monitoring protocol was able to detect a trend toward a more fine-grained landscape in this ecoregion. This sampling and analysis protocol is being extended spatially to the remaining 83 ecoregions in the US and temporally to the year 2000 to provide a national and regional synthesis of the temporal and spatial dynamics of landscape pattern covering the period 1973-2000.
On spatial coalescents with multiple mergers in two dimensions.
Heuer, Benjamin; Sturm, Anja
2013-08-01
We consider the genealogy of a sample of individuals taken from a spatially structured population when the variance of the offspring distribution is relatively large. The space is structured into discrete sites of a graph G. If the population size at each site is large, spatial coalescents with multiple mergers, so called spatial Λ-coalescents, for which ancestral lines migrate in space and coalesce according to some Λ-coalescent mechanism, are shown to be appropriate approximations to the genealogy of a sample of individuals. We then consider as the graph G the two dimensional torus with side length 2L+1 and show that as L tends to infinity, and time is rescaled appropriately, the partition structure of spatial Λ-coalescents of individuals sampled far enough apart converges to the partition structure of a non-spatial Kingman coalescent. From a biological point of view this means that in certain circumstances both the spatial structure as well as larger variances of the underlying offspring distribution are harder to detect from the sample. However, supplemental simulations show that for moderately large L the different structure is still evident. Copyright © 2012 Elsevier Inc. All rights reserved.
Pointright: a system to redirect mouse and keyboard control among multiple machines
Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA
2008-09-30
The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.
Estimating vehicle height using homographic projections
Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter
2013-07-16
Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T
2011-04-01
Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂), which have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). 
For the six-, seven-, and eight-time-point equations, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.40 to 5.97), and -0.72% (-5.34 to 3.89), and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.10), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the measured AUC and can be advocated according to the priorities of the user.
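The first approach above, a trapezoidal AUC over the sampled points plus an extrapolation to 12 hours, can be sketched generically. This is a minimal illustration of the standard pharmacokinetic calculation, assuming a log-linear terminal phase estimated from the last two samples; it is not the authors' published equation.

```python
import math

def trapezoid_auc(times, conc):
    """Linear trapezoidal AUC over the sampled interval."""
    return sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def auc_0_12_extrapolated(times, conc, t_end=12.0):
    """AUC from time 0 to the last sample, plus a log-linear extrapolation
    of the tail out to t_end (e.g. sampling to 8 h, extrapolated to 12 h)."""
    auc = trapezoid_auc(times, conc)
    # terminal elimination rate constant from the last two samples
    k = (math.log(conc[-2]) - math.log(conc[-1])) / (times[-1] - times[-2])
    # area of the exponential tail between the last sample time and t_end
    auc += conc[-1] / k * (1 - math.exp(-k * (t_end - times[-1])))
    return auc
```

With six samples taken out to 8 hours, this corresponds in spirit to the 8hrAUC₀₋₁₂exp quantity described in the abstract.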
Spatial distribution of Legionella pneumophila MLVA-genotypes in a drinking water system.
Rodríguez-Martínez, Sarah; Sharaby, Yehonatan; Pecellín, Marina; Brettar, Ingrid; Höfle, Manfred; Halpern, Malka
2015-06-15
Bacteria of the genus Legionella cause water-based infections, resulting in severe pneumonia. To improve our knowledge of Legionella spp. ecology, its prevalence and its relationships with environmental factors were studied. Seasonal samples were taken from both water and biofilm at seven sampling points of a small drinking water distribution system in Israel. Representative isolates were obtained from each sample and identified to the species level. Legionella pneumophila was further determined to the serogroup and genotype level. High-resolution genotyping of L. pneumophila isolates was achieved by Multiple-Locus Variable number of tandem repeat Analysis (MLVA). Within the studied water system, Legionella plate counts were higher in summer and highly variable even between adjacent sampling points. Legionella was present in six out of the seven selected sampling points, with counts ranging from 1.0 × 10¹ to 5.8 × 10³ cfu/l. Water counts were significantly higher at points where Legionella was present in biofilms. The main fraction of the isolated Legionella was L. pneumophila serogroup 1. Serogroup 3 and Legionella sainthelensis were also isolated. Legionella counts were positively correlated with heterotrophic plate counts at 37 °C and negatively correlated with chlorine. Five MLVA-genotypes of L. pneumophila were identified at different buildings of the sampled area. The presence of a specific genotype, "MLVA-genotype 4", consistently co-occurred with high Legionella counts and seemed to "trigger" high Legionella counts in cold water. Our hypothesis is that both the presence of L. pneumophila in biofilm and the presence of specific genotypes may indicate and/or even lead to high Legionella concentrations in water. This observation deserves further study in a broad range of drinking water systems to assess its potential for general use in drinking water monitoring and management.
Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip
Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.
2017-01-01
Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10⁵ copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028
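Digital amplification of the kind described above is quantified by Poisson statistics on the fraction of positive partitions. The sketch below uses the generic digital-assay arithmetic with the 224 wells and 100 nl per well from the abstract; it is not the authors' calibration, and the function name is an assumption.

```python
import math

def digital_copies_per_ul(positive, total=224, well_volume_nl=100.0):
    """Estimate template concentration from a digital amplification readout.
    Poisson correction: mean copies per well = -ln(fraction of negative wells)."""
    if positive >= total:
        raise ValueError("all wells positive: concentration above dynamic range")
    lam = -math.log(1.0 - positive / total)   # mean copies per well
    return lam / (well_volume_nl * 1e-3)      # copies per microliter
```

The correction matters because at higher concentrations a single well often receives more than one template molecule, so counting positive wells alone would underestimate the input.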
Different EGFR gene mutations in two patients with synchronous multiple lung cancers: A case report
Sakai, Hiroki; Kimura, Hiroyuki; Tsuda, Masataka; Wakiyama, Yoichi; Miyazawa, Tomoyuki; Marushima, Hideki; Kojima, Koji; Hoshikawa, Masahiro; Takagi, Masayuki; Nakamura, Haruhiko
2017-01-01
Routine clinical and pathological evaluations to determine the relationship between different lesions are often not completely conclusive. Interestingly, detailed genetic analysis of tumor samples may provide important additional information and identify second primary lung cancers. In the present study, we report cases of two synchronous lung adenocarcinomas composed of two distinct pathological subtypes with different EGFR gene mutations: a homozygous deletion in exon 19 of the papillary adenocarcinoma subtype and a point mutation of L858R in exon 21 of the tubular adenocarcinoma. The present report highlights the clinical importance of molecular cancer biomarkers to guide management decisions in cases involving multiple lung tumors. PMID:29090842
Generalized estimators of avian abundance from count survey data
Royle, J. Andrew
2004-01-01
I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
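One concrete instance of such a hierarchical model for repeated point counts is the binomial-Poisson (N-mixture) formulation: local abundance N_i is Poisson, and each repeated count thins N_i binomially through the detection probability. The sketch below marginalizes the latent abundance numerically; it is a minimal illustration under those assumptions, not the paper's full framework.

```python
import math

def nmix_loglik(counts, lam, p, n_max=100):
    """Log-likelihood of a binomial-Poisson (N-mixture) model:
    N_i ~ Poisson(lam), y_it | N_i ~ Binomial(N_i, p).
    counts: list of per-site lists of repeated point counts."""
    def binom_pmf(y, n, prob):
        return math.comb(n, y) * prob**y * (1 - prob)**(n - y)

    ll = 0.0
    for site in counts:
        m = max(site)
        # marginalize the latent local abundance N over a finite support
        lik = sum(
            math.exp(-lam) * lam**n / math.factorial(n)
            * math.prod(binom_pmf(y, n, p) for y in site)
            for n in range(m, n_max + 1)
        )
        ll += math.log(lik)
    return ll
```

A quick sanity check: with a single count per site, binomial thinning of a Poisson gives y ~ Poisson(lam × p), so the marginal likelihood should match that pmf.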
Sampling challenges in a study examining refugee resettlement
Sulaiman-Hill, Cheryl M R; Thompson, Sandra C
2011-01-01
Background: As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. Methods: A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. Results: A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring.
Conclusions: Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation. PMID:21406104
NASA Astrophysics Data System (ADS)
Riyanto, J.; Sudibya; Cahyadi, M.; Aji, A. P.
2018-01-01
The aim of this study was to determine the nutritional content of the brisket point end of Simental Ongole Crossbred beef after boiling for various durations. The cattle had been fattened for 9 months and slaughtered at a slaughterhouse, and the brisket point end of the meat was prepared for analysis of its nutritional content using FoodScan. The samples were boiled at 100°C for 0 (TR), 15 (R15), and 30 (R30) minutes, respectively. The data were analysed using a Completely Randomized Design (CRD), and Duncan's multiple range test (DMRT) was conducted to differentiate among the three treatments. The results showed that boiling duration significantly affected the moisture and cholesterol contents of the beef (P<0.05), while fat content was not significantly affected. Boiling decreased the water content of the beef from 72.77 to 70.84%; on the other hand, the treatment increased the protein and cholesterol contents from 20.77 to 25.14% and from 47.55 to 50.45 mg/100 g of sample, respectively. The conclusion of this study was that boiling beef at 100°C for 15 or 30 minutes decreased the water content and increased the protein and cholesterol contents of the brisket point end of Simental Ongole Crossbred beef.
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and the Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature. PMID:28459831
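Point multiplication of the kind targeted by the hardware design above reduces to a ladder of point doublings and point additions. The sketch below is an illustrative affine double-and-add over a small prime-field curve (the toy curve y² = x³ + 2x + 2 mod 17 with base point (5, 1) is a standard textbook example); it is not the paper's binary-field, Jacobian-coordinate design, which avoids the modular inversions used here.

```python
def ec_point_mul(k, P, a, p):
    """Scalar multiplication k*P on y^2 = x^3 + a*x + b over F_p via the
    left-to-right double-and-add ladder (None is the point at infinity).
    Hardware designs typically use projective (e.g. Jacobian) coordinates
    so that each step needs no field inversion."""
    def add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                 # P + (-P) = infinity
        if P == Q:
            s = (3*x1*x1 + a) * pow(2*y1, -1, p) % p    # tangent slope (doubling)
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, p) % p     # chord slope (addition)
        x3 = (s*s - x1 - x2) % p
        return (x3, (s*(x1 - x3) - y1) % p)

    R = None
    for bit in bin(k)[2:]:      # most-significant bit first
        R = add(R, R)           # point doubling every step
        if bit == "1":
            R = add(R, P)       # conditional point addition
    return R
```

The combined PDPA datapath described in the abstract fuses exactly these two group operations (doubling and addition) into one pipelined unit.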
Wickremsinhe, Enaksha R; Perkins, Everett J
2015-03-01
Traditional pharmacokinetic analysis in nonclinical studies is based on the concentration of a test compound in plasma and requires approximately 100 to 200 μL blood collected per time point. However, the total blood volume of mice limits the number of samples that can be collected from an individual animal-often to a single collection per mouse-thus necessitating dosing multiple mice to generate a pharmacokinetic profile in a sparse-sampling design. Compared with traditional methods, dried blood spot (DBS) analysis requires smaller volumes of blood (15 to 20 μL), thus supporting serial blood sampling and the generation of a complete pharmacokinetic profile from a single mouse. Here we compare plasma-derived data with DBS-derived data, explain how to adopt DBS sampling to support discovery mouse studies, and describe how to generate pharmacokinetic and pharmacodynamic data from a single mouse. Executing novel study designs that use DBS enhances the ability to identify and streamline better drug candidates during drug discovery. Implementing DBS sampling can reduce the number of mice needed in a drug discovery program. In addition, the simplicity of DBS sampling and the smaller numbers of mice needed translate to decreased study costs. Overall, DBS sampling is consistent with 3Rs principles by achieving reductions in the number of animals used, decreased restraint-associated stress, improved data quality, direct comparison of interanimal variability, and the generation of multiple endpoints from a single study.
Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.
Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan
2013-06-01
The aim of this work was to reduce the sampling required for estimating the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were thus developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the 80 mg dosage regimen enabled accurate prediction of AUC0-60t by the LSS model, and shows that 12, 6, 4 and 2 h after administration are the key sampling times. The combinations (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling times for predicting AUC0-60t in practical application, according to requirements.
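A limited sampling strategy of this regression type is just an ordinary least-squares model AUC ≈ b0 + b1·C(t1) + b2·C(t2) + …, fitted on full profiles and then applied to sparse ones. The sketch below fits such a model via the normal equations; the two-point design and the synthetic data are illustrative assumptions, not the study's published coefficients.

```python
def fit_lss(conc_points, auc_values):
    """Ordinary least squares for a limited-sampling model
    AUC ~ b0 + b1*C(t1) + b2*C(t2) + ...
    conc_points: concentration tuples per subject; auc_values: measured AUCs."""
    X = [[1.0, *row] for row in conc_points]
    n = len(X[0])
    # normal equations (X^T X) b = X^T y, solved by Gaussian elimination
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * auc_values[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                        # forward elimination with pivoting
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][j] - f * A[i][j] for j in range(n)]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def predict_auc(coef, conc):
    """Apply a fitted LSS equation to a sparse concentration sample."""
    return coef[0] + sum(c * x for c, x in zip(coef[1:], conc))
```

In practice the model would be fitted on the intensively sampled training profiles (e.g. with C12 and C2 as predictors) and validated on held-out profiles using APE and RMSE, as the abstract describes.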
1991-11-01
"Tilted Rough Disc," Donald J. Schertler and Nicholas George; "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George, Institute of Optics.
The impact of multiple endpoint dependency on Q and I² in meta-analysis.
Thompson, Christopher Glen; Becker, Betsy Jane
2014-09-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on the homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses, whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates for univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I² values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership.
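The two homogeneity measures under study have simple closed forms under a fixed-effects model: Q is the inverse-variance-weighted sum of squared deviations from the pooled effect, and I² = max(0, (Q − df)/Q) × 100. A minimal sketch of the standard (univariate) computation, not the paper's GLS machinery:

```python
def cochran_q_i2(effects, variances):
    """Cochran's Q and Higgins' I^2 for a fixed-effects meta-analysis.
    effects: study effect sizes; variances: their sampling variances."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

Under independence Q follows a chi-squared distribution with df = k − 1; the paper's point is that correlated multiple-endpoint effect sizes inflate Q's Type I error rate when this univariate formula is applied naively.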
Harding-Esch, Emma M; Nori, Achyuta V; Hegazi, Aseel; Pond, Marcus J; Okolo, Olanike; Nardone, Anthony; Lowndes, Catherine M; Hay, Phillip; Sadiq, S Tariq
2017-09-01
To assess the clinical service value of STI point-of-care test (POCT) use in a 'sample first' clinical pathway (patients providing samples on arrival at clinic, before clinician consultation). Specific outcomes were: patient acceptability; whether a rapid nucleic acid amplification test (NAAT) for Chlamydia trachomatis/Neisseria gonorrhoeae (CT/NG) could be used as a POCT in practice; feasibility of non-NAAT POCT implementation for Trichomonas vaginalis (TV) and bacterial vaginosis (BV); and impact on patient diagnosis and treatment. Service evaluation in a south London sexual health clinic. Symptomatic female and male patients and sexual contacts of CT/NG-positive individuals provided samples for diagnostic testing on clinic arrival, prior to clinical consultation. Tests included routine culture and microscopy; a CT/NG (GeneXpert) NAAT; and non-NAAT POCTs for TV and BV. All 70 patients approached (35 males, 35 females) participated. The 'sample first' pathway was acceptable, with >90% reporting they were happy to give samples on arrival and receive results in the same visit. Non-NAAT POCT results were available for all patients prior to leaving clinic; rapid CT/NG results were available for only 21.4% (15/70; 5 males, 10 females) of patients prior to leaving clinic. Known negative CT/NG results led to two females avoiding presumptive treatment, and one male receiving treatment directed at possible Mycoplasma genitalium infection causing non-gonococcal urethritis. Non-NAAT POCTs detected more positives than routine microscopy (TV 3 vs 2; BV 24 vs 7), resulting in more patients receiving treatment. A 'sample first' clinical pathway enabling multiple POCT use was acceptable to patients and feasible in a busy sexual health clinic, but the rapid CT/NG processing time was too long to enable POCT use. There is a need for further development to improve test processing times to enable point-of-care use of rapid NAATs.
Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert
2014-05-01
LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. 
This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
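The channel assignment described above (amplitude to Red, echo width to Green, normalized height to Blue) amounts to clamping and rescaling each attribute to an 8-bit byte. A minimal sketch of that mapping, with illustrative attribute names and scaling ranges; the authors' OPALS batch script is not reproduced here.

```python
def attrs_to_rgb(points, scale):
    """Map three LIDAR point attributes to 8-bit RGB channels:
    echo amplitude -> R, echo width -> G, normalized height -> B.
    points: dicts with 'amplitude', 'echo_width', 'height' keys;
    scale: {attr: (lo, hi)} clamping range per attribute."""
    def to_byte(value, lo, hi):
        t = (min(max(value, lo), hi) - lo) / (hi - lo)   # clamp, then normalize
        return round(255 * t)

    return [(to_byte(pt["amplitude"], *scale["amplitude"]),
             to_byte(pt["echo_width"], *scale["echo_width"]),
             to_byte(pt["height"], *scale["height"]))
            for pt in points]
```

Because the result is written into the point cloud file's own RGB fields, any standard viewer renders the combined attribute information without needing to know how the colours were derived.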
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data and information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. The methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) needed to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
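A linear (first-order) data-worth calculation of this general kind updates the prior parameter covariance with each candidate observation set and scores the set by the resulting predictive variance. The sketch below is a generic linear-Bayes illustration, not the authors' method: C is a prior parameter covariance, each candidate is a sensitivity row, R holds observation noise variances, and a is the prediction's sensitivity vector.

```python
from itertools import combinations

def mat_inv(M):
    """Invert a small dense matrix by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        A[i] = [x / A[i][i] for x in A[i]]
        for r in range(n):
            if r != i:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [row[n:] for row in A]

def predictive_variance(C, J, R, a):
    """First-order predictive variance a^T C' a after collecting the
    observations with sensitivity rows J and noise variances R (diagonal):
    C' = C - C J^T (J C J^T + R)^-1 J C."""
    if not J:
        Cp = C
    else:
        m, n = len(J), len(C)
        JC = [[sum(J[i][k] * C[k][j] for k in range(n)) for j in range(n)]
              for i in range(m)]
        S = [[sum(JC[i][k] * J[j][k] for k in range(n)) + (R[i] if i == j else 0.0)
              for j in range(m)] for i in range(m)]
        Si = mat_inv(S)
        Cp = [[C[i][j] - sum(JC[u][i] * Si[u][v] * JC[v][j]
                             for u in range(m) for v in range(m))
               for j in range(n)] for i in range(n)]
    return sum(a[i] * Cp[i][j] * a[j] for i in range(len(C)) for j in range(len(C)))

def best_campaign(C, candidates, R, a, n_obs):
    """Choose the n_obs candidate observations (by index) that minimize the
    linear estimate of the forecast's predictive uncertainty."""
    return min(combinations(range(len(candidates)), n_obs),
               key=lambda idx: predictive_variance(
                   C, [candidates[i] for i in idx], [R[i] for i in idx], a))
```

Evaluating combinations jointly, rather than one observation at a time, is what lets the approach account for redundancy between nearby sampling points.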
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups in several variables. For example, studies of microarray data seek to find, for each variable, a significant difference of the location parameters from zero (or from one for ratios thereof). However, in some studies a significant deviation of the difference in locations from zero (or from 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses that achieve exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation approach and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. In that case the second procedure may be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
Wang, Zhuo; Jin, Shuilin; Liu, Guiyou; Zhang, Xiurui; Wang, Nan; Wu, Deliang; Hu, Yang; Zhang, Chiping; Jiang, Qinghua; Xu, Li; Wang, Yadong
2017-05-23
The development of single-cell RNA sequencing has enabled profound discoveries in biology, ranging from the dissection of the composition of complex tissues to the identification of novel cell types and dynamics in specialized cellular environments. However, the large-scale generation of single-cell RNA-seq (scRNA-seq) data collected at multiple time points remains a challenge for effectively measuring gene expression patterns in transcriptome analysis. We present an algorithm based on the Dynamic Time Warping score (DTWscore) combined with time-series data that enables the detection of gene expression changes across scRNA-seq samples and the recovery of potential cell types from complex mixtures of multiple cell types. The DTWscore successfully classifies cells of different types using the most highly variable genes from time-series scRNA-seq data. The study was confined to methods that are implemented and available within the R framework. Sample datasets and R packages are available at https://github.com/xiaoxiaoxier/DTWscore .
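The core of any DTW-based score is the classic dynamic-programming alignment of two time series, sketched below. This is plain textbook DTW, assumed here as the kernel underlying DTWscore; the abstract does not spell out the exact scoring, so the cost function and recursion are the standard ones, not necessarily the package's.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D expression
    profiles.  D[i, j] = local cost + best of the three predecessor
    cells (insertion, deletion, match)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike a pointwise Euclidean distance, DTW assigns zero distance to profiles that are identical up to a time shift, which is why it suits time-series expression comparisons.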
Barnett, Jacqueline M.; Wraith, Patrick; Kiely, Janice; Persad, Raj; Hurley, Katrina; Hawkins, Peter; Luxton, Richard
2014-01-01
We describe the detection characteristics of a device, the Resonant Coil Magnetometer (RCM), used to quantify paramagnetic particles (PMPs) in immunochromatographic (lateral flow) assays. Lateral flow assays were developed using PMPs for the measurement of total prostate-specific antigen (PSA) in serum samples. A detection limit of 0.8 ng/mL was achieved for total PSA using the RCM, which is within the clinically significant concentration range. Comparison of data obtained in a pilot study from the analysis of serum samples with commercially available immunoassays shows good agreement. The development of a quantitative magneto-immunoassay in lateral flow format for total PSA suggests the potential of the RCM to operate with many immunoassay formats. The RCM could also be modified to quantify multiple analytes in this format. This research shows promise for the development of an inexpensive device capable of quantifying multiple analytes at the point-of-care using a magneto-immunoassay in lateral flow format. PMID:25587419
A multiple pointing-mount control strategy for space platforms
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1992-01-01
A new disturbance-adaptive control strategy for multiple pointing-mount space platforms is proposed and illustrated by consideration of a simplified 3-link dynamic model of a multiple pointing-mount space platform. Simulation results demonstrate the effectiveness of the new platform control strategy. The simulation results also reveal a system 'destabilization phenomenon' that can occur if the set of individual platform-mounted experiment controllers is 'too responsive.'
NASA Astrophysics Data System (ADS)
Haider, Shahid A.; Tran, Megan Y.; Wong, Alexander
2018-02-01
Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and provide diagnostic clues for treatment. Methods for this kind of analysis involve tabletop devices that weigh tens of kilograms with costs on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder in between two linear polarizers and a two-dimensional photodetector array to provide an overall complexity reduction in the system. This enables the measurement of polarization variations across multiple polarizations after they interact with a biological sample, simultaneously, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.
Stricklin, Mary Lou; Bierer, S Beth; Struk, Cynthia
2003-01-01
Point-of-care technology for home care use will be the final step in enterprise-wide healthcare electronic communications. Successful implementation of home care point-of-care technology hinges upon nurses' attitudes toward point-of-care technology and its use in clinical practice. This study addresses the factors associated with home care nurses' attitudes using Stronge and Brodt's Nurses Attitudes Toward Computers instrument. In this study, the Nurses Attitudes Toward Computers instrument was administered to a convenience sample of 138 nurses employed by a large midwestern home care agency, with an 88% response rate. Confirmatory factor analysis corroborated the instrument's 3-dimensional factor structure for practicing nurses, with factors labeled nurses' work, security issues, and perceived barriers. Results from the confirmatory factor analysis also suggest that these 3 factors are internally correlated and represent multiple dimensions of a higher-order construct labeled nurses' attitudes toward computers. Additionally, two of these factors, nurses' work and perceived barriers, each appears to explain more variance in nurses' attitudes toward computers than security issues. Instrument reliability was high for the sample (.90), with subscale reliabilities ranging from .86 to .70.
The Importance and Role of Intracluster Correlations in Planning Cluster Trials
Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark
2008-01-01
There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
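A back-of-the-envelope version of such a sample size calculation can be sketched as follows. Caution: the variance inflation factor 1 + (m-1)*rho_within - m*rho_between used here is a commonly cited form for the "time x condition" contrast in nested cross-sectional designs, assumed for illustration; the paper derives the exact formulae, and the function name and z-value defaults are mine.

```python
def clusters_needed(delta, sigma2, m, rho_within, rho_between,
                    z_alpha=1.96, z_beta=0.84):
    """Approximate clusters per condition for a pretest-posttest
    nested cross-sectional 'time x condition' comparison.
    m            -- members sampled per cluster per time point
    rho_within   -- ICC among individuals, same cluster, same time
    rho_between  -- ICC among individuals, same cluster, different times
    Uses an assumed variance inflation 1 + (m-1)*rho_w - m*rho_b and a
    difference-in-differences variance of 4*sigma2*deff/m."""
    deff = 1 + (m - 1) * rho_within - m * rho_between
    var = 4 * sigma2 * deff / m
    return (z_alpha + z_beta) ** 2 * var / delta ** 2
```

Note the qualitative behavior the abstract emphasizes: a larger between-time ICC reduces the required number of clusters, so ignoring it (a posttest-only calculation) misstates the power.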
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.
Li, Jilong; Cheng, Jianlin
2016-05-10
Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10, and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19%, respectively, on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
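The accept/reject step of such a simulated-annealing resampler can be sketched on a toy scale: propose a perturbed position for one point, recount clashes, accept improvements always and worsenings with a Boltzmann probability that shrinks as the temperature cools. This is a deliberately simplified sketch: the isotropic Gaussian proposals and pairwise clash-count "energy" are stand-ins for MTMG's per-position multivariate normals and its actual clash model.

```python
import math, random

random.seed(1)

def clashes(coords, cutoff=3.0):
    # Toy "energy": number of point pairs closer than a clash cutoff
    n = len(coords)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if math.dist(coords[i], coords[j]) < cutoff)

def anneal(coords, sigma=1.0, t0=1.0, cool=0.95, steps=500):
    """Resample positions with a simulated-annealing accept/reject
    rule: always accept non-worsening moves, accept worsening moves
    with probability exp(-dE / T), and cool T geometrically."""
    coords = [tuple(p) for p in coords]
    e, t = clashes(coords), t0
    for _ in range(steps):
        i = random.randrange(len(coords))
        prop = tuple(x + random.gauss(0, sigma) for x in coords[i])
        trial = coords[:i] + [prop] + coords[i + 1:]
        e2 = clashes(trial)
        if e2 <= e or random.random() < math.exp((e - e2) / t):
            coords, e = trial, e2
        t *= cool
    return coords, e
```

Early on, high temperature lets the sampler escape local clash arrangements; as T falls, only clash-reducing moves survive, which is the mechanism the abstract credits for clash removal.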
Vana, Kimberly D; Silva, Graciela E; Muzyka, Diann; Hirani, Lorraine M
2011-06-01
It has been proposed that students' use of an audience response system, commonly called clickers, may promote comprehension and retention of didactic material. Whether this method actually improves students' grades, however, has not been determined. The purpose of this study was to evaluate whether a lecture format utilizing multiple-choice PowerPoint slides and an audience response system was more effective than a lecture format using only multiple-choice PowerPoint slides in the comprehension and retention of pharmacological knowledge in baccalaureate nursing students. The study also assessed whether the additional use of clickers positively affected students' satisfaction with their learning. Results from 78 students who attended lecture classes with multiple-choice PowerPoint slides plus clickers were compared with those of 55 students who utilized multiple-choice PowerPoint slides only. Test scores between these two groups were not significantly different. A satisfaction questionnaire showed that 72.2% of the control students did not desire the opportunity to use clickers. Of the group utilizing the clickers, 92.3% recommended the use of this system in future courses. The use of multiple-choice PowerPoint slides and an audience response system did not seem to improve the students' comprehension or retention of pharmacological knowledge as compared with those who used solely multiple-choice PowerPoint slides.
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also appears able to accelerate the convergence of a few chains that start from suboptimal points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
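The multi-chain setup can be sketched with parallel random-walk Metropolis chains and the Gelman-Rubin potential scale reduction factor as the cross-chain convergence check. The standard-normal target below is only a cheap stand-in for an expensive forward model such as the Community Land Model; the step size and chain lengths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    # Stand-in for an expensive forward model: N(0, 1) log-density
    return -0.5 * theta ** 2

def metropolis_chain(theta0, n, step=1.0):
    # One random-walk Metropolis chain
    samples = np.empty(n)
    theta, lp = theta0, log_post(theta0)
    for i in range(n):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

def gelman_rubin(chains):
    # Potential scale reduction factor across parallel chains:
    # compares between-chain (B) and within-chain (W) variance
    chains = np.asarray(chains)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)
    W = chains.var(axis=1, ddof=1).mean()
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Chains started from deliberately suboptimal points; discard burn-in
chains = [metropolis_chain(t0, 4000)[2000:] for t0 in (-8.0, 0.0, 8.0)]
```

An R-hat near 1 indicates the chains have forgotten their (suboptimal) starting points and are sampling the same distribution, which is the ensemble effect the abstract describes.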
Brichta-Harhay, Dayna M.; Kalchayanand, Norasak; Bosilevac, Joseph M.; Shackelford, Steven D.; Wheeler, Tommy L.; Koohmaraie, Mohammad
2012-01-01
The objective of this study was to characterize Salmonella enterica contamination on carcasses in two large U.S. commercial pork processing plants. The carcasses were sampled at three points, before scalding (prescald), after dehairing/polishing but before evisceration (preevisceration), and after chilling (chilled final). The overall prevalences of Salmonella on carcasses at these three sampling points, prescald, preevisceration, and after chilling, were 91.2%, 19.1%, and 3.7%, respectively. At one of the two plants, the prevalence of Salmonella was significantly higher (P < 0.01) for each of the carcass sampling points. The prevalences of carcasses with enumerable Salmonella at prescald, preevisceration, and after chilling were 37.7%, 4.8%, and 0.6%, respectively. A total of 294 prescald carcasses had Salmonella loads of >1.9 log CFU/100 cm2, but these carcasses were not equally distributed between the two plants, as 234 occurred at the plant with higher Salmonella prevalences. Forty-one serotypes were identified on prescald carcasses with Salmonella enterica serotypes Derby, Typhimurium, and Anatum predominating. S. enterica serotypes Typhimurium and London were the most common of the 24 serotypes isolated from preevisceration carcasses. The Salmonella serotypes Johannesburg and Typhimurium were the most frequently isolated serotypes of the 9 serotypes identified from chilled final carcasses. Antimicrobial susceptibility was determined for selected isolates from each carcass sampling point. Multiple drug resistance (MDR), defined as resistance to three or more classes of antimicrobial agents, was identified for 71.2%, 47.8%, and 77.5% of the tested isolates from prescald, preevisceration, and chilled final carcasses, respectively. The results of this study indicate that the interventions used by pork processing plants greatly reduce the prevalence of Salmonella on carcasses, but MDR Salmonella was isolated from 3.2% of the final carcasses sampled. PMID:22327585
Spatially explicit models for inference about density in unmarked or partially marked populations
Chandler, Richard B.; Royle, J. Andrew
2013-01-01
Recently developed spatial capture–recapture (SCR) models represent a major advance over traditional capture–recapture (CR) models because they yield explicit estimates of animal density instead of population size within an unknown area. Furthermore, unlike nonspatial CR methods, SCR models account for heterogeneity in capture probability arising from the juxtaposition of animal activity centers and sample locations. Although the utility of SCR methods is gaining recognition, the requirement that all individuals can be uniquely identified excludes their use in many contexts. In this paper, we develop models for situations in which individual recognition is not possible, thereby allowing SCR concepts to be applied in studies of unmarked or partially marked populations. The data required for our model are spatially referenced counts made on one or more sample occasions at a collection of closely spaced sample units such that individuals can be encountered at multiple locations. Our approach includes a spatial point process for the animal activity centers and uses the spatial correlation in counts as information about the number and location of the activity centers. Camera-traps, hair snares, track plates, sound recordings, and even point counts can yield spatially correlated count data, and thus our model is widely applicable. A simulation study demonstrated that while the posterior mean exhibits frequentist bias on the order of 5–10% in small samples, the posterior mode is an accurate point estimator as long as adequate spatial correlation is present. Marking a subset of the population substantially increases posterior precision and is recommended whenever possible. We applied our model to avian point count data collected on an unmarked population of the northern parula (Parula americana) and obtained a density estimate (posterior mode) of 0.38 (95% CI: 0.19–1.64) birds/ha. 
Our paper challenges sampling and analytical conventions in ecology by demonstrating that neither spatial independence nor individual recognition is needed to estimate population density—rather, spatial dependence can be informative about individual distribution and density.
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Cheng, Hsiao-Fen; Li, Chia-Chun; Shih, Ching-Tien; Chiang, Ming-Shan
2010-01-01
This study evaluated whether four persons (two groups) with developmental disabilities would be able to improve their collaborative pointing performance through a Multiple Cursor Automatic Pointing Assistive Program (MCAPAP) with a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and is able to…
Kodogiannis, Vassilis S; Lygouras, John N; Tarczynski, Andrzej; Chowdrey, Hardial S
2008-11-01
Current clinical diagnostics are based on biochemical, immunological, or microbiological methods. However, these methods are operator dependent, time-consuming, expensive, and require special skills, and are therefore not suitable for point-of-care testing. Recent developments in gas-sensing technology and pattern recognition methods make electronic nose technology an interesting alternative for medical point-of-care devices. An electronic nose has been used to detect urinary tract infection from 45 suspected cases that were sent for analysis in a U.K. Public Health Registry. These samples were analyzed by incubation in a volatile generation test tube system for 4-5 h. Two issues are addressed: the implementation of an advanced neural network based on a modified expectation-maximization scheme that incorporates a dynamic structure methodology, and the concept of a fusion of multiple classifiers dedicated to specific feature parameters. This study has shown the potential for early detection of microbial contaminants in urine samples using electronic nose technology.
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
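The sign-ambiguity argument above can be demonstrated numerically: with a single fixed detector phase, a cosine at +ω and at −ω produce identical samples (cosine is even), while randomizing the known detector phase per sample point makes the two frequencies distinguishable. This is a least-squares toy demonstration of the principle, not the paper's NMR reconstruction; the frequency, sample grid, and unit amplitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def residual(y, t, phases, omega):
    # Residual of a unit-amplitude cosine model with known detector
    # phases, evaluated at a trial frequency omega
    model = np.cos(omega * t + phases)
    return np.sum((y - model) ** 2)

t = np.arange(64) * 0.01
omega_true = 40.0

# Fixed single-phase detection: +omega and -omega fit equally well
fixed = np.zeros_like(t)
y_fixed = np.cos(omega_true * t + fixed)
amb = abs(residual(y_fixed, t, fixed, omega_true)
          - residual(y_fixed, t, fixed, -omega_true))

# Random-phase detection: the wrong sign now fits strictly worse
rand = rng.uniform(0, 2 * np.pi, t.size)
y_rand = np.cos(omega_true * t + rand)
gap = (residual(y_rand, t, rand, -omega_true)
       - residual(y_rand, t, rand, omega_true))
```

`amb` is zero (the sign is undetermined with a fixed phase) while `gap` is large, which is exactly the extra information random phase detection buys per sample point.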
A Semiparametric Change-Point Regression Model for Longitudinal Observations.
Xing, Haipeng; Ying, Zhiliang
2012-12-01
Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when experimental environment undergoes abrupt changes or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines the recent advance in semiparametric analysis based on counting process argument and multiple change-points inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling
2010-01-01
The latest research adopted software technology to redesign the mouse driver, and turned a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily or possibly use a standard mouse, to improve their pointing performance through a new operation method, Extended Dynamic Pointing Assistive Program (EDPAP),…
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the k-means algorithm's randomly chosen initial cluster centers make the clustering results sensitive to outlier samples and unstable across multiple runs, a center-initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and data samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
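One reasonable reading of "larger distance and higher density" can be sketched as: score each point's density as the reciprocal of its mean distance to all points, keep only the denser half as candidates (so isolated outliers can never become centers), and then pick centers that are far from those already chosen. The candidate filter and selection rule below are my assumptions for illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def density_distance_init(X, k):
    """Pick k initial k-means centers: density = 1 / (mean distance
    to all points); restrict candidates to the denser half; take the
    densest point first, then repeatedly the candidate farthest from
    the already-chosen centers."""
    X = np.asarray(X, float)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = 1.0 / (D.mean(axis=1) + 1e-12)
    cand = np.flatnonzero(density >= np.median(density))
    centers = [cand[np.argmax(density[cand])]]
    while len(centers) < k:
        d_min = D[np.ix_(cand, centers)].min(axis=1)
        centers.append(cand[int(np.argmax(d_min))])
    return X[centers]
```

On two tight clusters plus a distant outlier, the outlier's low density excludes it from candidacy, so the two centers land one in each cluster, which is the stability property the abstract claims over random initialization.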
NASA Technical Reports Server (NTRS)
Bunting, Charles F.; Yu, Shih-Pin
2006-01-01
This paper emphasizes the application of numerical methods to explore ideas related to shielding effectiveness from a statistical view. An empty rectangular box is examined using a hybrid modal/moment method. The basic computational method is presented, followed by results for single and multiple observation points within the over-moded empty structure. The statistics of the field are obtained by using frequency stirring, borrowed from ideas connected with reverberation chamber techniques, and the ideas of shielding effectiveness are extended well into the multiple-resonance region. The study presented in this paper addresses the average shielding effectiveness over a broad spatial sample within the enclosure as the frequency is varied.
Muthu, Pravin; Lutz, Stefan
2016-04-05
Fast, simple, and cost-effective methods for detecting and quantifying pharmaceutical agents in patients are highly sought after to replace equipment- and labor-intensive analytical procedures. The development of new diagnostic technology, including portable detection devices, also enables point-of-care testing by non-specialists in resource-limited environments. We have focused on the detection and dose monitoring of nucleoside analogues used in viral and cancer therapies. Using deoxyribonucleoside kinases (dNKs) as biosensors, our chemometric model compares observed time-resolved kinetics of unknown analytes to known substrate interactions across multiple enzymes. The resulting dataset can simultaneously identify and quantify multiple nucleosides and nucleoside analogues in complex sample mixtures. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℜ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dark Signal Characterization of 1.7 micron cutoff devices for SNAP
NASA Astrophysics Data System (ADS)
Smith, R. M.; SNAP Collaboration
2004-12-01
We report initial progress characterizing non-photometric sources of error -- dark current, noise, and zero-point drift -- for 1.7 micron cutoff HgCdTe and InGaAs detectors under development by Raytheon, Rockwell, and Sensors Unlimited for SNAP. Dark current specifications can already be met with several detector types. Changes to the manufacturing process are being explored to improve the noise reduction available through multiple sampling. In some cases, a significant number of pixels suffer from popcorn noise, with a few percent of all pixels exhibiting a tenfold noise increase. A careful study of zero-point drifts is also under way, since these errors can dominate dark current and may contribute to the noise degradation seen in long exposures.
TIGGERC: Turbomachinery Interactive Grid Generator for 2-D Grid Applications and Users Guide
NASA Technical Reports Server (NTRS)
Miller, David P.
1994-01-01
A two-dimensional multi-block grid generator has been developed for a new design and analysis system for studying multiple blade-row turbomachinery problems. TIGGERC is a mouse driven, interactive grid generation program which can be used to modify boundary coordinates and grid packing and generates surface grids using a hyperbolic tangent or algebraic distribution of grid points on the block boundaries. The interior points of each block grid are distributed using a transfinite interpolation approach. TIGGERC can generate a blocked axisymmetric H-grid, C-grid, I-grid or O-grid for studying turbomachinery flow problems. TIGGERC was developed for operation on Silicon Graphics workstations. Detailed discussion of the grid generation methodology, menu options, operational features and sample grid geometries are presented.
Micro/Nano-scale Strain Distribution Measurement from Sampling Moiré Fringes.
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi
2017-05-23
This work describes the measurement procedure and principles of a sampling moiré technique for full-field micro/nano-scale deformation measurements. The developed technique can be performed in two ways: using the reconstructed multiplication moiré method or the spatial phase-shifting sampling moiré method. When the specimen grid pitch is around 2 pixels, 2-pixel sampling moiré fringes are generated to reconstruct a multiplication moiré pattern for a deformation measurement. Both the displacement and strain sensitivities are twice as high as in the traditional scanning moiré method in the same wide field of view. When the specimen grid pitch is around or greater than 3 pixels, multi-pixel sampling moiré fringes are generated, and a spatial phase-shifting technique is combined for a full-field deformation measurement. The strain measurement accuracy is significantly improved, and automatic batch measurement is easily achievable. Both methods can measure the two-dimensional (2D) strain distributions from a single-shot grid image without rotating the specimen or scanning lines, as in traditional moiré techniques. As examples, the 2D displacement and strain distributions, including the shear strains of two carbon fiber-reinforced plastic specimens, were measured in three-point bending tests. The proposed technique is expected to play an important role in the non-destructive quantitative evaluations of mechanical properties, crack occurrences, and residual stresses of a variety of materials.
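The spatial phase-shifting variant above can be illustrated in one dimension: down-sample a grating of pitch p at sampling pitch T with start offsets 0..T-1 to obtain T phase-shifted moiré profiles, extract the moiré phase per point by a DFT over the shift index, and recover the grating pitch from the phase slope. This is a hedged 1-D analogue of the 2-D method, with synthetic pitch values chosen for illustration.

```python
import numpy as np

def sampled_phase(intensity, T):
    """1-D spatial phase-shifting sampling moire: build T phase-shifted
    moire profiles by down-sampling at pitch T with offsets 0..T-1,
    then take the first DFT bin over the shift index to get the moire
    phase at each down-sampled point."""
    n = (len(intensity) - T) // T
    shifts = np.arange(T)
    moire = np.stack([intensity[s:s + n * T:T] for s in shifts])
    dft = (moire * np.exp(-2j * np.pi * shifts / T)[:, None]).sum(axis=0)
    return np.angle(dft)

# Synthetic grating: pitch 10.2 px, sampled with pitch T = 10 px
p_true, T = 10.2, 10
x = np.arange(4000)
grating = np.cos(2 * np.pi * x / p_true)

phase = np.unwrap(sampled_phase(grating, T))
slope = np.polyfit(np.arange(len(phase)), phase, 1)[0]
# Moire phase slope per sample is 2*pi*(T/p - 1), so invert for p
p_est = T / (1 + slope / (2 * np.pi))
```

The moiré phase varies slowly (its slope encodes the small mismatch between grid and sampling pitches), which is why the technique magnifies deformation: a change in p shows up as a large change in the fringe phase gradient.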
NASA Astrophysics Data System (ADS)
Tully, B. J.; Heidelberg, J. F.; Kraft, B.; Girguis, P. R.; Huber, J. A.
2016-12-01
The oceanic crust contains the largest aquifer on Earth, with a volume approximately 2% of the global ocean. Ongoing research at the North Pond (NP) site, west of the Mid-Atlantic Ridge, provides an environment representative of oxygenated crustal aquifers beneath oligotrophic surface waters. Using subseafloor CORK observatories with multiple sampling depths beneath the seafloor, crustal fluids were sampled along the predicted aquifer fluid flow path over a two-year period. DNA was extracted and sequenced for metagenomic analysis from 22 crustal fluid samples, along with the overlying bottom water. At broad taxonomic groupings, the aquifer system is highly dynamic over time and space, with shifts in dominant taxa and "blooms" of transient groups that appear at discrete time points and sample depths. We were able to reconstruct 194 high-quality, low-contamination bacterial and archaeal metagenome-assembled genomes (MAGs) with estimated completeness >50% (429 MAGs >20% complete). Environmental genomes were assigned to phylogenies from the major bacterial phyla, putative novel groups, and poorly sampled phylogenetic groups, including the Marinimicrobia, Candidate Phyla Radiation, and Planctomycetes. Biogeochemically relevant processes were assigned to MAGs, including denitrification, dissimilatory sulfur and hydrogen cycling, and carbon fixation. Collectively, the oxic NP aquifer system represents a diverse, dynamic microbial habitat with the metabolic potential to impact multiple globally relevant biogeochemical cycles, including nitrogen, sulfur, and carbon.
NASA Astrophysics Data System (ADS)
Vech, Daniel; Chen, Christopher
2016-04-01
One of the most important features of plasma turbulence is anisotropy, which arises due to the presence of the magnetic field. Understanding the anisotropy is particularly important to reveal how the turbulent cascade operates. It is well known that anisotropy exists with respect to the mean magnetic field; however, recent theoretical studies have suggested anisotropy with respect to the radial direction as well. The purpose of this study is to investigate the variance and spectral anisotropies of solar wind turbulence with multiple-point spacecraft observations. The study includes Advanced Composition Explorer (ACE), WIND, and Cluster spacecraft data. The second-order structure functions are derived for two different spacecraft configurations: when the pair of spacecraft are separated radially (with respect to the spacecraft-Sun line) and when they are separated along the transverse direction. We analyze the effect of the different sampling directions on the variance anisotropy, global spectral anisotropy, and local 3D spectral anisotropy, and discuss the implications for our understanding of solar wind turbulence.
Multiple Point Dynamic Gas Density Measurements Using Molecular Rayleigh Scattering
NASA Technical Reports Server (NTRS)
Seasholtz, Richard; Panda, Jayanta
1999-01-01
A nonintrusive technique for measuring dynamic gas density properties is described. Molecular Rayleigh scattering is used to measure the time-history of gas density simultaneously at eight spatial locations at a 50 kHz sampling rate. The data are analyzed using the Welch method of modified periodograms to reduce measurement uncertainty. Cross-correlations, power spectral density functions, cross-spectral density functions, and coherence functions may be obtained from the data. The technique is demonstrated using low speed co-flowing jets with a heated inner jet.
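The abstract names the Welch method of modified periodograms as the variance-reduction step applied to the density time histories. The following is a minimal numpy sketch of that estimator on a synthetic 50 kHz signal; the tone frequency, segment length, and Hann window are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, overlap=0.5):
    """Welch's method of modified (Hann-windowed) periodograms.

    Splits the signal into overlapping segments, windows each one,
    and averages the segment periodograms, reducing the variance of
    the PSD estimate at the cost of frequency resolution."""
    step = int(nperseg * (1 - overlap))
    window = np.hanning(nperseg)
    scale = fs * np.sum(window**2)          # normalization for a density estimate
    segments = [x[i:i + nperseg] * window
                for i in range(0, len(x) - nperseg + 1, step)]
    periodograms = [np.abs(np.fft.rfft(seg))**2 / scale for seg in segments]
    psd = np.mean(periodograms, axis=0)
    psd[1:-1] *= 2                          # one-sided spectrum: fold negative freqs
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    return freqs, psd

# Synthetic density-like signal: a 5 kHz tone in noise, 50 kHz sampling rate
fs = 50_000
t = np.arange(100_000) / fs
x = np.sin(2 * np.pi * 5_000 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
freqs, psd = welch_psd(x, fs)
print(freqs[np.argmax(psd)])   # peak near 5000 Hz
```

Averaging many windowed segment periodograms is what reduces the measurement uncertainty of the spectral estimates, which is the role the technique plays for the Rayleigh-scattering time histories.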
2008-01-01
strategies, increasing the prevalence of both hypoglycemia and anemia in the ICU.14–20 The change in allogeneic blood transfusion practices occurred in...measurements in samples with low HCT levels.4,5,7,8,12 The error occurs because decreased red blood cells cause less displacement of plasma, resulting...Nonlinear component regression was performed because HCT has a nonlinear effect on the accuracy of POC glucometers. A dual parameter correction factor was
Inci, Fatih; Filippini, Chiara; Baday, Murat; Ozen, Mehmet Ozgun; Calamak, Semih; Durmus, Naside Gozde; Wang, ShuQi; Hanhauser, Emily; Hobbs, Kristen S; Juillard, Franceline; Kuang, Ping Ping; Vetter, Michael L; Carocci, Margot; Yamamoto, Hidemi S; Takagi, Yuko; Yildiz, Umit Hakan; Akin, Demir; Wesemann, Duane R; Singhal, Amit; Yang, Priscilla L; Nibert, Max L; Fichorova, Raina N; Lau, Daryl T-Y; Henrich, Timothy J; Kaye, Kenneth M; Schachter, Steven C; Kuritzkes, Daniel R; Steinmetz, Lars M; Gambhir, Sanjiv S; Davis, Ronald W; Demirci, Utkan
2015-08-11
Recent advances in biosensing technologies present great potential for medical diagnostics, thus improving clinical decisions. However, creating a label-free general sensing platform capable of detecting multiple biotargets in various clinical specimens over a wide dynamic range, without lengthy sample-processing steps, remains a considerable challenge. In practice, these barriers prevent broad applications in clinics and at patients' homes. Here, we demonstrate the nanoplasmonic electrical field-enhanced resonating device (NE(2)RD), which addresses all these impediments on a single platform. The NE(2)RD employs an immunodetection assay to capture biotargets, and precisely measures spectral color changes by their wavelength and extinction intensity shifts in nanoparticles without prior sample labeling or preprocessing. We present, through multiple examples, a label-free, quantitative, portable, multitarget platform by rapidly detecting various protein biomarkers, drugs, protein allergens, bacteria, eukaryotic cells, and distinct viruses. The linear dynamic range of NE(2)RD is five orders of magnitude broader than ELISA, with a sensitivity down to 400 fg/mL. This range and sensitivity are achieved by self-assembling gold nanoparticles to generate hot spots on a 3D-oriented substrate for ultrasensitive measurements. We demonstrate that this precise platform handles multiple clinical samples such as whole blood, serum, and saliva without sample preprocessing under diverse conditions of temperature, pH, and ionic strength. The NE(2)RD's broad dynamic range, detection limit, and portability integrated with a disposable fluidic chip have broad applications, potentially enabling the transition toward precision medicine at the point-of-care or primary care settings and at patients' homes.
Nurgul, Keser; Nursan, Cinar; Dilek, Kose; Over, Ozcelik Tijen; Sevin, Altinkaynak
2015-01-01
Once limited to face-to-face courses, health education has now moved into the web environment following new developments in information technology. This study was carried out in order to provide training to university academic and administrative female staff who have difficulty attending health education planned for specific times and places. The web-supported training focuses on healthy diet, the importance of physical activity, the damage caused by smoking, and stress management. The study was carried out at Sakarya University between 2012 and 2013 as a descriptive and quasi-experimental study. The sample consisted of 30 participants who agreed to take part in the survey, filled in the forms, and completed the whole training. The data were collected via a "Personal Information Form", the "Health Promotion Life-Style Profile (HPLSP)", and a "Multiple Choice Questionnaire (MCQ)". There was a statistically significant difference between the total points from the "Health Promotion Life-Style Profile" and the total points from the sub-scale after and before the training (t=3.63, p=0.001). When the points from the multiple choice questionnaire after and before the training were compared, the average points were higher after the training (t=8.57, p<0.001). It was found that web-supported health training has a positive effect on the healthy living behaviour of female staff working at a Turkish university and on their knowledge of health promotion.
A precision multi-sampler for deep-sea hydrothermal microbial mat studies
NASA Astrophysics Data System (ADS)
Breier, J. A.; Gomez-Ibanez, D.; Reddington, E.; Huber, J. A.; Emerson, D.
2012-12-01
A new tool for deep-sea microbial mat studies by remotely operated vehicles was developed and successfully deployed during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Mat Sampler allows for discrete, controlled material collection from complex microbial structures, vertical profiling within thick microbial mats, and particulate and fluid sample collection from venting seafloor fluids. It has a reconfigurable and expandable sample capacity based on magazines of 6 syringes, filters, or water bottles. Multiple magazines can be used, such that 12-36 samples can be collected routinely during a single dive, and several times more if the dive is dedicated to this purpose. It is capable of hosting in situ physical, electrochemical, and optical sensors, including temperature and oxygen probes, to guide sampling and to record critical environmental parameters at the time and point of sample collection. The precision sampling capability of this instrument will greatly enhance efforts to understand the structured, delicate microbial mat communities that grow in diverse benthic habitats.
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
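The stated allocation rule (time proportional to noise level) follows from minimizing summed estimator variance under a fixed time budget: sampling source i for time t_i leaves variance sigma_i**2 / t_i, and minimizing the sum subject to sum(t_i) = T gives t_i proportional to sigma_i. A minimal sketch under that assumption; the function name and numbers are ours, not the paper's.

```python
import numpy as np

def allocate_time(noise_sds, total_time):
    """Allocate a fixed time budget across noisy sources.

    Sampling source i for time t_i yields a mean estimate with variance
    sigma_i**2 / t_i; minimizing the summed variance subject to a fixed
    total time gives t_i proportional to sigma_i (Neyman-style allocation)."""
    sds = np.asarray(noise_sds, dtype=float)
    return total_time * sds / sds.sum()

# Two motion patterns, the second twice as noisy as the first:
t = allocate_time([1.0, 2.0], total_time=3.0)
print(t)   # [1. 2.]  -> more time on the noisier source
```

The noisier source receives proportionally more sampling time, matching the qualitative behaviour the study reports in human observers.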
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources that have specified space, angle, and energy dependence. Using importance sampling, the program calculates the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
Generic and Automated Data Evaluation in Analytical Measurement.
Adam, Martin; Fleischer, Heidi; Thurow, Kerstin
2017-04-01
In recent years, automation has become more and more important in the field of elemental and structural chemical analysis, to reduce the high degree of manual operation, the processing time, and human errors. As a result, a high number of data points are generated, which requires fast and automated data evaluation. To handle the preprocessed export data from different analytical devices running software from various vendors, a standardized solution that requires no programming knowledge is preferable. In modern laboratories, multiple users will use this software on multiple personal computers with different operating systems (e.g., Windows, Macintosh, Linux); mobile devices such as smartphones and tablets have also gained growing importance. The developed software, Project Analytical Data Evaluation (ADE), is implemented as a web application. To transmit the preevaluated data from the device software to the Project ADE, the exported XML report files are detected and the included data are imported into the entities database using the Data Upload software. Different calculation types for a sample within one measurement series (e.g., method validation) are identified using information tags inside the sample name. The results are presented in tables and diagrams at different information levels (general, or detailed for one analyte or sample).
Culture adaptation of malaria parasites selects for convergent loss-of-function mutants.
Claessens, Antoine; Affara, Muna; Assefa, Samuel A; Kwiatkowski, Dominic P; Conway, David J
2017-01-24
Cultured human pathogens may differ significantly from source populations. To investigate the genetic basis of laboratory adaptation in malaria parasites, clinical Plasmodium falciparum isolates were sampled from patients and cultured in vitro for up to three months. Genome sequence analysis was performed on multiple culture time point samples from six monoclonal isolates, and single nucleotide polymorphism (SNP) variants emerging over time were detected. Out of a total of five positively selected SNPs, four represented nonsense mutations resulting in stop codons, three of these in a single ApiAP2 transcription factor gene, and one in SRPK1. To survey further for nonsense mutants associated with culture, genome sequences of eleven long-term laboratory-adapted parasite strains were examined, revealing four independently acquired nonsense mutations in two other ApiAP2 genes, and five in Epac. No mutants of these genes exist in a large database of parasite sequences from uncultured clinical samples. This implicates putative master regulator genes in which multiple independent stop codon mutations have convergently led to culture adaptation, affecting most laboratory lines of P. falciparum. Understanding the adaptive processes should guide development of experimental models, which could include targeted gene disruption to adapt fastidious malaria parasite species to culture.
Emergency Dose Estimation Using Optically Stimulated Luminescence from Human Tooth Enamel.
Sholom, S; Dewitt, R; Simon, S L; Bouville, A; McKeever, S W S
2011-09-01
Human teeth were studied for potential use as emergency Optically Stimulated Luminescence (OSL) dosimeters. By using multiple-teeth samples in combination with a custom-built sensitive OSL reader, 60Co-equivalent doses below 0.64 Gy were measured immediately after exposure, with the lowest value being 27 mGy for the most sensitive sample. The variability of OSL sensitivity from individual to individual, using multiple-teeth samples, was determined to be 53%. X-ray and beta exposure were found to produce OSL curves with the same shape, which differed from those due to ultraviolet (UV) exposure; as a result, correlation was observed between OSL signals after X-ray and beta exposure, but was absent in comparison with OSL signals after UV exposure. Fading of the OSL signal was "typical" for most teeth, with just a few incisors showing atypical behavior. Typical fading dependences were described by a bi-exponential decay function with "fast" (decay time of around 12 min) and "slow" (decay time of about 14 h) components. OSL detection limits, based on the techniques developed to date, were found to be satisfactory from the point of view of medical triage requirements if measurements are conducted within 24 hours of the exposure.
Biopsychosocial correlates of lifetime major depression in a multiple sclerosis population.
Patten, S B; Metz, L M; Reimer, M A
2000-04-01
The objective of this paper was to evaluate the lifetime and point prevalence of major depression in a population-based Multiple Sclerosis (MS) clinic sample, and to describe associations between selected biopsychosocial variables and the prevalence of lifetime major depression in this sample. Subjects who had participated in an earlier study were re-contacted for additional data collection. Eighty-three per cent (n=136) of those eligible consented to participate. Each subject completed the Composite International Diagnostic Interview (CIDI) and an interviewer-administered questionnaire evaluating a series of biopsychosocial variables. The lifetime prevalence of major depression in this sample was 22.8%, somewhat lower than previous estimates in MS clinic populations. Women, those under 35, and those with a family history of major depression had a higher prevalence. Also, subjects reporting high levels of stress and heavy ingestion of caffeine (>400 mg) had a higher prevalence of major depression. As this was a cross-sectional analysis, the direction of causal effect for the observed associations could not be determined. By identifying variables that are associated with lifetime major depression, these data generate hypotheses for future prospective studies. Such studies will be needed to further understand the etiology of depressive disorders in MS.
Understanding cracking failures of coatings: A fracture mechanics approach
NASA Astrophysics Data System (ADS)
Kim, Sung-Ryong
A fracture mechanics analysis of coating (paint) cracking was developed. A strain energy release rate (Gc) expression for the formation of a new crack in a coating was derived for bending and tension loadings in terms of the moduli, thicknesses, Poisson's ratios, load, residual strain, etc. Four-point bending and instrumented impact tests were used to determine the in-situ fracture toughness of coatings as a function of increasing baking (drying) time. The system used was a thin coating layer on a thick substrate layer. The substrates included steel, aluminum, polycarbonate, acrylonitrile-butadiene-styrene (ABS), and Noryl. The coatings included newly developed automotive paints. The four-point bending configuration promoted well-defined transverse multiple coating cracks on both steel and polymeric substrates. The crosslinked-type automotive coatings on steel substrates showed large cracks without microcracks. When theoretical predictions for the energy release rate were compared to experimental data for coating/steel-substrate samples with multiple cracking, the agreement was good. Crosslinked-type coatings on polymeric substrates showed more cracks than theory predicted, and the Gc values were high. Solvent-evaporation-type coatings on polymeric substrates showed clean multiple cracking, and the Gc values were higher than those obtained from tension analysis of tension experiments on the same substrates. All the polymeric samples showed surface embrittlement after long baking times in the four-point bending tests. The most apparent surface embrittlement was observed in the acrylonitrile-butadiene-styrene (ABS) substrate system. The impact properties of coatings as a function of baking time were also investigated, using an instrumented impact tester. There was a rapid decrease in Gc at short baking times and convergence to a constant value at long baking times.
Conditions for surface embrittlement and an embrittlement toughness were determined under impact loading. This analysis provides a basis for a quantitative approach to measuring coating toughness.
Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang
2014-11-01
Eliminating the effect of turbidity is a key technical problem in the direct spectroscopic detection of COD. Accurate UV-visible spectroscopic detection of water quality depends on an effective analytical model relating spectra to water quality parameters, and turbidity is an important parameter affecting that model. In this paper, formazine turbidity solutions and standard solutions of potassium hydrogen phthalate were selected to study the effect of turbidity on the UV-visible absorption spectroscopic detection of COD. At the characteristic wavelengths of 245, 300, 360, and 560 nm, the variation of absorbance with turbidity was fitted by the method of least squares. The results show that in the ultraviolet range of 240 to 380 nm, because particles form compounds with the organics, the effect of turbidity on the ultraviolet spectra of water samples is relatively complicated; in the visible region of 380 to 780 nm, the effect of turbidity on the spectrum weakens as the wavelength increases. Based on this, the multiplicative scatter correction method was studied for calibrating water-sample spectra affected by turbidity. Comparison with the spectra before treatment shows that the baseline shifts caused by turbidity are effectively corrected and that the spectral features in the ultraviolet region are not diminished. Multiplicative scatter correction was then applied to the three selected UV-visible absorption spectra; the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method improves the signal-to-noise ratio of spectroscopic COD detection and provides an efficient data-conditioning step for establishing accurate chemical measurement methods.
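Multiplicative scatter correction, as named in the abstract, regresses each spectrum on a reference spectrum and removes the fitted additive offset and multiplicative gain. A minimal numpy sketch (the choice of the mean spectrum as reference, and the toy data, are our assumptions, not the paper's):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction.

    Regress each spectrum x on a reference spectrum (here the mean
    spectrum), x ~ a + b * ref, then remove the fitted offset a and
    gain b: corrected = (x - a) / b."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(ref, x, deg=1)    # slope b, intercept a
        corrected[i] = (x - a) / b
    return corrected

# Toy check: scaled/offset copies of one spectrum collapse back together.
base = np.linspace(0.1, 1.0, 50) ** 2
spectra = np.array([1.5 * base + 0.2, 0.8 * base - 0.1])
print(np.allclose(msc(spectra), msc(spectra)[0]))   # True: scatter removed
```

Because turbidity mainly adds a baseline shift and a gain change to the measured absorbance, removing the per-spectrum offset and slope is what corrects the wavelength baseline shifts while leaving the UV absorption features intact.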
NASA Astrophysics Data System (ADS)
Hanyu, Ryosuke; Tsuji, Toshiaki
This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single supporting point has previously been developed by the present authors. However, that system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points is proposed and its performance verified. In this paper, the basic concept of the mechanism is first introduced. Next, an evaluation of the proposed method through experiments is presented.
Baker, Laurie L; Mills Flemming, Joanna E; Jonsen, Ian D; Lidgard, Damian C; Iverson, Sara J; Bowen, W Don
2015-01-01
Paired with satellite location telemetry, animal-borne instruments can collect spatiotemporal data describing the animal's movement and environment at a scale relevant to its behavior. Ecologists have developed methods for identifying the area(s) used by an animal (e.g., home range) and those used most intensely (utilization distribution) based on location data. However, few have extended these models beyond their traditional roles as descriptive 2D summaries of point data. Here we demonstrate how the home range method T-LoCoH can be expanded to quantify collective sampling coverage by multiple instrumented animals, using grey seals (Halichoerus grypus) equipped with GPS tags and acoustic transceivers on the Scotian Shelf (Atlantic Canada) as a case study. At the individual level, we illustrate how time and space-use metrics quantifying individual sampling coverage may be used to determine the rate of acoustic transmissions received. Grey seals collectively sampled an area of 11,308 km² and intensely sampled an area of 31 km² from June-December. The largest area sampled was in July (2094.56 km²) and the smallest area sampled occurred in August (1259.80 km²), with changes in sampling coverage observed through time. T-LoCoH provides an effective means to quantify changes in collective sampling effort by multiple instrumented animals and to compare these changes across time. We also illustrate how time and space-use metrics of individual instrumented seal movement calculated using T-LoCoH can be used to account for differences in the amount of time a bioprobe (biological sampling platform) spends in an area.
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which makes the comparison of two CADe systems involve multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both FWER and power.
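The step-up procedure the study compares against Bonferroni is not fully specified in the abstract; a standard instance is Hochberg's step-up method, sketched here alongside Bonferroni for four hypothetical operating-point comparisons (the p-values are illustrative only, not from the study):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha / m; controls FWER but is conservative."""
    p = np.asarray(pvals, float)
    return p <= alpha / p.size

def hochberg_step_up(pvals, alpha=0.05):
    """Hochberg's step-up procedure: find the largest k such that the k-th
    smallest p-value satisfies p_(k) <= alpha / (m - k + 1), then reject
    all hypotheses with p-values at or below p_(k)."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)                     # indices of ascending p-values
    reject = np.zeros(m, dtype=bool)
    for k in range(m, 0, -1):                 # step up from the largest p
        if p[order[k - 1]] <= alpha / (m - k + 1):
            reject[order[:k]] = True
            break
    return reject

# Four operating-point comparisons (illustrative p-values):
p = [0.010, 0.020, 0.030, 0.040]
print(bonferroni(p).sum())        # 1: only p=0.010 clears 0.05/4
print(hochberg_step_up(p).sum())  # 4: p_(4)=0.040 <= 0.05/1, so all rejected
```

The example shows the paper's point about conservativeness: on the same p-values, Bonferroni rejects one comparison while the step-up procedure rejects all four, and with correlated test statistics the gap can widen further.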
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21%, with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates of the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code, and the results for the ft8 cap multiplicity tally option, with the values of ɛ, fd, and ft measured with a strong source, most closely match the measurement results, to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point-model equation relationship between 244Cm and 252Cf; the calibration coefficient for a non-multiplying sample is 2.78×10⁵ (Doubles counts/s/g 244Cm). Preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of the calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
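For a non-multiplying sample, the quoted point-model calibration can be inverted directly: the Doubles rate is linear in 244Cm mass with the coefficient given in the abstract. A minimal sketch; the function name is ours, and only the 2.78×10⁵ counts/s per gram coefficient comes from the abstract.

```python
def cm244_mass_from_doubles(doubles_rate, k=2.78e5):
    """Invert the preliminary ASNC calibration for a non-multiplying sample:
    Doubles rate [counts/s] = k [counts/s per g of 244Cm] * mass [g],
    so mass = Doubles rate / k. Not valid for multiplying samples (metal
    ingot, UO2 powder), where neutron multiplication raises the Doubles rate."""
    return doubles_rate / k

print(cm244_mass_from_doubles(2.78e5))   # 1.0 g of 244Cm
```

The caveat in the docstring mirrors the abstract: for multiplying samples the simple linear inversion underestimates nothing so simply, which is why separate MCNPX-derived curves are needed.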
Shibata, Tomoyuki; Solo-Gabriele, Helena M.; Fleming, Lora E.; Elmir, Samir
2008-01-01
The microbial water quality at two beaches, Hobie Beach and Crandon Beach, in Miami-Dade County, Florida, USA was measured using multiple microbial indicators for the purpose of evaluating correlations between microbes and for identifying possible sources of contamination. The indicator microbes chosen for this study (enterococci, Escherichia coli, fecal coliform, total coliform and C. perfringens) were evaluated through three different sampling efforts. These efforts included daily measurements at four locations during a wet season month and a dry season month, spatially intensive water sampling during low- and high-tide periods, and a sand sampling effort. Results indicated that concentrations did not vary in a consistent fashion between one indicator microbe and another. Daily water quality frequently exceeded guideline levels at Hobie Beach for all indicator microbes except for fecal coliform, which never exceeded the guideline. Except for total coliform, the concentrations of microbes did not change significantly between seasons in spite of the fact that the physical–chemical parameters (rainfall, temperature, pH, and salinity) changed significantly between the two monitoring periods. Spatially intense water sampling showed that the concentrations of microbes were significantly different with distance from the shoreline. The highest concentrations were observed at shoreline points and decreased at offshore points. Furthermore, the highest concentrations of indicator microbe concentrations were observed at high tide, when the wash zone area of the beach was submerged. Beach sands within the wash zone tested positive for all indicator microbes, thereby suggesting that this zone may serve as the source of indicator microbes. Ultimate sources of indicator microbes to this zone may include humans, animals, and possibly the survival and regrowth of indicator microbes due to the unique environmental conditions found within this zone. 
Overall, the results of this study indicated that the concentrations of indicator microbes do not necessarily correlate with one another. Exceedance of water quality guidelines, and thus the frequency of beach advisories, depends upon which indicator microbe is chosen. PMID:15261551
Floyd A. Johnson
1961-01-01
This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...
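The ratio double-sampling design described in the report reduces to a classical ratio estimator: the volume/count ratio from the small, fully measured sample of points is applied to the more precise mean count from the large, count-only sample. A minimal sketch with hypothetical per-point cruise data (the numbers are illustrative, not from the report):

```python
import numpy as np

def ratio_double_sampling(counts_large, counts_small, volumes_small):
    """Ratio double-sampling estimate of mean volume per point.

    counts_large:  tree counts at every point in the large sample
    counts_small:  tree counts at the subsample of points where volume
                   was also measured
    volumes_small: measured volumes at those same subsample points

    The volume/count ratio from the small sample is applied to the
    large-sample mean count."""
    ratio = np.mean(volumes_small) / np.mean(counts_small)
    return ratio * np.mean(counts_large)

# Hypothetical data: counts at 10 points, volumes measured at 4 of them.
counts_large = [3, 5, 4, 6, 2, 4, 5, 3, 4, 4]      # mean count = 4.0
counts_small = [3, 5, 4, 4]                        # mean count = 4.0
volumes_small = [30.0, 55.0, 40.0, 43.0]           # mean volume = 42.0
print(ratio_double_sampling(counts_large, counts_small, volumes_small))  # 42.0
```

Counting trees at every point is cheap, while measuring volume is expensive; the ratio estimator lets the large count-only sample stabilize the volume estimate at the cost of measuring only a subsample.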
Pointing with Power or Creating with Chalk
ERIC Educational Resources Information Center
Rudow, Sasha R.; Finck, Joseph E.
2015-01-01
This study examines the attitudes of students on the use of PowerPoint and chalk/white boards in college science lecture classes. Students were asked to complete a survey regarding their experiences with PowerPoint and chalk/white boards in their science classes. Both multiple-choice and short answer questions were used. The multiple-choice…
Sampling through time and phylodynamic inference with coalescent and birth–death models
Volz, Erik M.; Frost, Simon D. W.
2014-01-01
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173
Habbous, Steven; Chu, Karen P.; Lau, Harold; Schorr, Melissa; Belayneh, Mathieos; Ha, Michael N.; Murray, Scott; O’Sullivan, Brian; Huang, Shao Hui; Snow, Stephanie; Parliament, Matthew; Hao, Desiree; Cheung, Winson Y.; Xu, Wei; Liu, Geoffrey
2017-01-01
BACKGROUND: The incidence of oropharyngeal cancer has risen over the past 2 decades. This rise has been attributed to human papillomavirus (HPV), but information on temporal trends in incidence of HPV-associated cancers across Canada is limited. METHODS: We collected social, clinical and demographic characteristics and p16 protein status (p16-positive or p16-negative, using this immunohistochemistry variable as a surrogate marker of HPV status) for 3643 patients with oropharyngeal cancer diagnosed between 2000 and 2012 at comprehensive cancer centres in British Columbia (6 centres), Edmonton, Calgary, Toronto and Halifax. We used receiver operating characteristic curves and multiple imputation to estimate the p16 status for missing values. We chose a best-imputation probability cut point on the basis of accuracy in samples with known p16 status and through an independent relation between p16 status and overall survival. We used logistic and Cox proportional hazard regression. RESULTS: We found no temporal changes in p16-positive status initially, but there was significant selection bias, with p16 testing significantly more likely to be performed in males, lifetime never-smokers, patients with tonsillar or base-of-tongue tumours and those with nodal involvement (p < 0.05 for each variable). We used the following variables associated with p16-positive status for multiple imputation: male sex, tonsillar or base-of-tongue tumours, smaller tumours, nodal involvement, less smoking and lower alcohol consumption (p < 0.05 for each variable). Using sensitivity analyses, we showed that different imputation probability cut points for p16-positive status each identified a rise from 2000 to 2012, with the best-probability cut point identifying an increase from 47.3% in 2000 to 73.7% in 2012 (p < 0.001). INTERPRETATION: Across multiple centres in Canada, there was a steady rise in the proportion of oropharyngeal cancers attributable to HPV from 2000 to 2012. PMID:28808115
Casper, Andrew; Liu, Dalong; Ebbini, Emad S
2012-01-01
A system for the real-time generation and control of multiple-focus ultrasound phased-array heating patterns is presented. The system employs a 1-MHz, 64-element array and driving electronics capable of fine spatial and temporal control of the heating pattern. The driver is integrated with a real-time 2-D temperature imaging system implemented on a commercial scanner. The coordinates of the temperature control points are defined on B-mode guidance images from the scanner, together with the temperature set points and controller parameters. The temperature at each point is controlled by an independent proportional, integral, and derivative controller that determines the focal intensity at that point. Optimal multiple-focus synthesis is applied to generate the desired heating pattern at the control points. The controller dynamically reallocates the power available among the foci from the shared power supply upon reaching the desired temperature at each control point. Furthermore, anti-windup compensation is implemented at each control point to improve the system dynamics. In vitro experiments in a tissue-mimicking phantom demonstrate the robustness of the controllers for short (2-5 s) and longer multiple-focus high-intensity focused ultrasound exposures. Thermocouple measurements in the vicinity of the control points confirm the dynamics of the temperature variations obtained through noninvasive feedback. © 2011 IEEE
Connolly, Kiah; Beier, Lancelot; Langdorf, Mark I; Anderson, Craig L; Fox, John C
2015-01-01
Our objective was to evaluate the effectiveness of hands-on training at a bedside ultrasound (US) symposium ("Ultrafest") to improve both clinical knowledge and image acquisition skills of medical students. The primary outcome measure was improvement in multiple-choice questions on pulmonary or Focused Assessment with Sonography in Trauma (FAST) US knowledge. The secondary outcome was improvement in image acquisition for either pulmonary or FAST. Prospective cohort study of 48 volunteers at "Ultrafest," a free symposium where students received five contact training hours. Students were evaluated before and after training for proficiency in either pulmonary US or FAST. Proficiency was assessed for clinical knowledge through a written multiple-choice exam, and for clinical skills through accuracy of image acquisition. We used paired sample t-tests with students as their own controls. Pulmonary knowledge scores increased by a mean of 10.1 points (95% CI [8.9-11.3], p<0.00005), from 8.4 to a posttest average of 18.5/21 possible points. The FAST knowledge scores increased by a mean of 7.5 points (95% CI [6.3-8.7] p<0.00005), from 8.1 to a posttest average of 15.6/21. We analyzed clinical skills data on 32 students. The mean score was 1.7 pretest and 4.7 posttest of 12 possible points. Mean improvement was 3.0 points (p<0.00005) overall, 3.3 (p=0.0001) for FAST, and 2.6 (p=0.003) for the pulmonary US exam. This study suggests that a symposium on US can improve clinical knowledge, but has a limited effect on image acquisition skills for pulmonary and FAST US assessments. US training external to the official medical school curriculum may augment students' education.
Multivariate survivorship analysis using two cross-sectional samples.
Hill, M E
1999-11-01
As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
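The core idea of the two-cross-section approach can be sketched numerically: for an irreversible single-decrement process, the crude survivorship of a closed cohort over the interval is the ratio of the cohort's (weighted) counts in the two samples, and a log-probability model expresses the log of that ratio as a function of covariates. The counts below are invented for illustration and are not from the cited IPUMS data.

```python
# Illustrative sketch (not Hill's exact estimator): interval survival for
# subgroups of a closed cohort observed in two cross-sectional samples.
# All counts are hypothetical, for demonstration only.
import math

# Weighted counts of the same birth cohort in two censuses 10 years apart,
# split by a time-invariant factor (here, educational attainment).
counts_t1 = {"low_edu": 5000, "high_edu": 3000}
counts_t2 = {"low_edu": 3900, "high_edu": 2550}

def survival_and_log_prob(n1, n2):
    """Crude survivorship ratio and its log, the quantity a
    log-probability survivorship model links to covariates."""
    s = n2 / n1
    return s, math.log(s)

for group in counts_t1:
    s, log_s = survival_and_log_prob(counts_t1[group], counts_t2[group])
    print(group, round(s, 3), round(log_s, 3))
```

A multivariate version would regress the log-survival probabilities on the time-invariant covariates (race, parity, education) rather than tabulating them by group.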
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs that control the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammersley, S.; Dawson, P.; Kappers, M. J.
2015-09-28
InGaN-based light emitting diodes and multiple quantum wells designed to emit in the green spectral region exhibit, in general, lower internal quantum efficiencies than their blue-emitting counterparts, a phenomenon referred to as the “green gap.” One of the main differences between green-emitting and blue-emitting samples is that the quantum well growth temperature is lower for structures designed to emit at longer wavelengths, in order to reduce the effects of In desorption. In this paper, we report on the impact of the quantum well growth temperature on the optical properties of InGaN/GaN multiple quantum wells designed to emit at 460 nm and 530 nm. It was found that for both sets of samples, increasing the temperature at which the InGaN quantum well was grown, while maintaining the same indium composition, led to an increase in the internal quantum efficiency measured at 300 K. These increases in internal quantum efficiency are shown to be due to reductions in the non-radiative recombination rate, which we attribute to reductions in point defect incorporation.
Genome-Wide Association Study of Multiple Sclerosis Confirms a Novel Locus at 5p13.1
Sanna, Serena; Gayán, Javier; Urcelay, Elena; Zara, Ilenia; Pitzalis, Maristella; Cavanillas, María L.; Arroyo, Rafael; Zoledziewska, Magdalena; Marrosu, Marisa; Fernández, Oscar; Leyva, Laura; Alcina, Antonio; Fedetz, Maria; Moreno-Rey, Concha; Velasco, Juan; Real, Luis M.; Ruiz-Peña, Juan Luis; Cucca, Francesco
2012-01-01
Multiple Sclerosis (MS) is the most common progressive and disabling neurological condition affecting young adults in the world today. From a genetic point of view, MS is a complex disorder resulting from the combination of genetic and non-genetic factors. We aimed to identify previously unidentified loci by conducting a new GWAS of MS in a sample of 296 MS cases and 801 controls from the Spanish population. A meta-analysis of our data in combination with previous GWAS was performed. A total of 17 GWAS-significant SNPs, corresponding to three different loci, were identified: HLA, IL2RA, and 5p13.1. All three have been previously reported as GWAS-significant. We confirmed our observation in 5p13.1 for rs9292777 using two additional independent Spanish samples, for a total of 4912 MS cases and 7498 controls (ORpooled = 0.84; 95% CI: 0.80–0.89; p = 1.36×10-9). This SNP differs from the one reported within this locus in a recent GWAS. Although it is unclear whether both signals are tapping the same genetic association, it seems clear that this locus plays an important role in the pathogenesis of MS. PMID:22570697
Rogers, Geoffrey
2018-06-01
The Yule-Nielsen effect is an influence on halftone color caused by the diffusion of light within the paper upon which the halftone ink is printed. The diffusion can be characterized by a point spread function. In this paper, a point spread function for paper is derived using the multiple-path model of reflection. This model treats the interaction of light with turbid media as a random walk. Using the multiple-path point spread function, a general expression is derived for the average reflectance of light from a frequency-modulated halftone, in which dot size is constant and the number of dots is varied, with the arrangement of dots random. It is also shown that the line spread function derived from the multiple-path model has the form of a Lorentzian function.
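The abstract reports that the line spread function derived from the multiple-path model takes a Lorentzian form. As a small numerical check of that functional form, the sketch below evaluates a normalized Lorentzian and verifies by trapezoidal integration that it integrates to (nearly) unity over a wide window; the half-width `a` is an arbitrary illustrative value, not a parameter from the paper.

```python
# Normalized Lorentzian line spread function, the functional form the
# multiple-path model is reported to yield. Half-width `a` is illustrative.
import math

def lorentzian_lsf(x, a):
    """(1/pi) * a / (x^2 + a^2): integrates to 1 over the whole real line."""
    return (1.0 / math.pi) * a / (x * x + a * a)

a = 0.1  # illustrative half-width (e.g. mm)
xs = [i * 0.001 - 50.0 for i in range(100001)]  # grid from -50 to 50
vals = [lorentzian_lsf(x, a) for x in xs]
# Trapezoidal rule; slightly below 1 because the tails extend to infinity.
area = sum((vals[i] + vals[i + 1]) * 0.0005 for i in range(len(vals) - 1))
print(round(area, 3))
```

The heavy tails of the Lorentzian are what make light diffusion in paper a long-range effect compared with, say, a Gaussian spread of the same width.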
Use of Thematic Mapper for water quality assessment
NASA Technical Reports Server (NTRS)
Horn, E. M.; Morrissey, L. A.
1984-01-01
The evaluation of simulated TM data obtained on an ER-2 aircraft at twenty-five predesignated sample sites for mapping water quality factors such as conductivity, pH, suspended solids, turbidity, temperature, and depth is discussed. Using a multiple regression on the seven TM bands, an equation is developed for suspended solids. TM bands 1, 2, 3, 4, and 6 are used with the logarithm of conductivity in a multiple regression. Regression equations are assessed for a high coefficient of determination (R-squared) and statistical significance. Confidence intervals about the mean regression point are calculated in order to assess the robustness of the regressions used for mapping conductivity, turbidity, and suspended solids, and cross-validation is conducted by regressing random subsamples of sites and comparing the resultant range of R-squared.
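The band-regression step described above can be sketched as an ordinary least squares fit with an R-squared check. The data here are synthetic (random band values plus noise), standing in for the actual TM radiances and water-quality measurements, which are not reproduced in the abstract.

```python
# Illustrative multiple regression: predict a water-quality variable from
# several band values. Data are synthetic, not real Thematic Mapper values.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_bands = 25, 5                      # 25 sample sites, 5 bands
X = rng.normal(size=(n_sites, n_bands))       # stand-in band radiances
true_coef = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y = X @ true_coef + rng.normal(scale=0.1, size=n_sites)  # e.g. log conductivity

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_sites), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination (R-squared).
resid = y - A @ coef
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))
```

Cross-validation of the kind the abstract describes would repeat this fit on random subsamples of the 25 sites and compare the spread of R-squared values.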
Shame, pride, and suicidal ideation in a military clinical sample.
Bryan, Craig J; Ray-Sannerud, Bobbie; Morrow, Chad E; Etienne, Neysa
2013-05-01
Suicide risk among U.S. military personnel has been increasing over the past decade. Fluid vulnerability theory (FVT; Rudd, 2006) posits that acute suicidal episodes increase in severity when trait-based (e.g., shame) and state-based (e.g., hopelessness) risk factors interact, especially among individuals who have been previously suicidal. In contrast, trait-based protective factors (e.g., pride) should buffer the deleterious effects of risk factors. 77 active duty military personnel (95% Air Force; 58.4% male, 39.0% female; 67.5% Caucasian, 19.5% African-American, 1.3% Native American, 1.3% Native Hawaiian/Pacific Islander, 1.3% Asian, and 5.2% other) engaged in outpatient mental health treatment completed self-report surveys of shame, hopelessness, pride, and suicidal ideation. Generalized multiple regression was utilized to test the associations and interactive effects of shame, hopelessness, and worst-point past suicidal ideation on severity of current suicidal ideation. Shame significantly interacted with hopelessness (B=-0.013, SE=0.004, p<0.001) and worst-point suicidal ideation (B=0.027, SE=0.010, p=0.010), augmenting each variable's effect on severity of current suicidal ideation. A significant three-way interaction among shame, worst-point suicidal ideation, and pride was also observed (B=-0.010, SE=0.0043, p=0.021), indicating that pride buffered the interactive effects of shame with worst-point suicidal ideation. Limitations include the small sample size, cross-sectional design, and primarily Air Force sample. Among military outpatients with histories of severe suicidal episodes, pride buffers the effects of hopelessness on current suicidal ideation. Results are consistent with FVT. Copyright © 2013 Elsevier B.V. All rights reserved.
A multi-species framework for landscape conservation planning
Schwenk, W. Scott; Donovan, Therese
2011-01-01
Rapidly changing landscapes have spurred the need for quantitative methods for conservation assessment and planning that encompass large spatial extents. We devised and tested a multispecies framework for conservation planning to complement single-species assessments and ecosystem-level approaches. Our framework consisted of 4 elements: sampling to effectively estimate population parameters, measuring how human activity affects landscapes at multiple scales, analyzing the relation between landscape characteristics and individual species occurrences, and evaluating and comparing the responses of multiple species to landscape modification. We applied the approach to a community of terrestrial birds across 25,000 km2 with a range of intensities of human development. Human modification of land cover, road density, and other elements of the landscape, measured at multiple spatial extents, had large effects on occupancy of the 67 species studied. Forest composition within 1 km of points had a strong effect on occupancy of many species and a range of negative, intermediate, and positive associations. Road density within 1 km of points, percent evergreen forest within 300 m, and distance from patch edge were also strongly associated with occupancy for many species. We used the occupancy results to group species into 11 guilds that shared patterns of association with landscape characteristics. Our multispecies approach to conservation planning allowed us to quantify the trade-offs of different scenarios of land-cover change in terms of species occupancy.
Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu
2016-08-02
The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. 
Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.
Active machine learning for rapid landslide inventory mapping with VHR satellite images (Invited)
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2013-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfalls. Visual image interpretation is still the prevailing standard method for operational purposes but is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms allow classification rules for complex image patterns to be learned from labelled examples and can be adapted straightforwardly with available training data. To reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas.
The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. By sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded only accuracies between 28% and 50% at equal sampling cost. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and of the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground-truth uncertainties to achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
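The query-selection loop at the heart of AL can be illustrated with a deliberately tiny stand-in for the Random Forest setup above: a 1-D threshold classifier whose most uncertain point is the middle of the interval where the decision boundary could still lie. Querying that point each round is the 1-D analog of sampling where the classifier is least certain. The hidden threshold and the labeling oracle are invented for this sketch.

```python
# Minimal uncertainty-sampling sketch of the active-learning loop, using a
# toy 1-D threshold classifier instead of Random Forests. Purely illustrative.

def label(x, true_threshold=0.37):
    """Oracle standing in for the human expert labeling a sample."""
    return int(x >= true_threshold)

lo, hi = 0.0, 1.0  # interval where the decision boundary could still lie

# Each query picks the single most uncertain point: the interval midpoint.
# With 10 queries the boundary is pinned down to ~1/2**10 of the range,
# far better than 10 randomly placed labeled samples would achieve.
for _ in range(10):
    query = (lo + hi) / 2
    if label(query):
        hi = query
    else:
        lo = query

boundary = (lo + hi) / 2
print(round(boundary, 3))  # close to the hidden threshold 0.37
```

The real method differs in selecting compact spatial *regions* of uncertain pixels rather than single points, but the accuracy-per-label advantage over random sampling comes from the same principle.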
The effect of surface grain reversal on the AC losses of sintered Nd-Fe-B permanent magnets
NASA Astrophysics Data System (ADS)
Moore, Martina; Roth, Stefan; Gebert, Annett; Schultz, Ludwig; Gutfleisch, Oliver
2015-02-01
Sintered Nd-Fe-B magnets are exposed to AC magnetic fields in many applications, e.g. in permanent magnet electric motors. We have measured the AC losses of sintered Nd-Fe-B magnets in a closed circuit arrangement using AC fields with root mean square-values up to 80 mT (peak amplitude 113 mT) over the frequency range 50 to 1000 Hz. Two magnet grades with different dysprosium content were investigated. Around the remanence point the low grade material (1.7 wt% Dy) showed significant hysteresis losses; whereas the losses in the high grade material (8.9 wt% Dy) were dominated by classical eddy currents. Kerr microscopy images revealed that the hysteresis losses measured for the low grade magnet can be mainly ascribed to grains at the sample surface with multiple domains. This was further confirmed when the high grade material was subsequently exposed to DC and AC magnetic fields. Here a larger number of surface grains with multiple domains are also present once the step in the demagnetization curve attributed to the surface grain reversal is reached and a rise in the measured hysteresis losses is evident. If in the low grade material the operating point is slightly offset from the remanence point, such that zero field is not bypassed, its AC losses can also be fairly well described with classical eddy current theory.
Floating-Point Units and Algorithms for field-programmable gate arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, Keith D.; Hemmert, K. Scott
2005-11-01
The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work has been published multiple times and wemore » are working on a publication to discuss the techniques we use to implement the floating-point units, For some more background, FPGAS are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place- and-route tool. 
These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and constructs the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.« less
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
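The preferential-path idea can be sketched in a few lines: order the grid cells so that those with the most informative soft data are simulated first. Here "informativeness" for a binary facies is simply the distance of the soft probability from the uninformed value 0.5; the grid and probabilities are invented for illustration and do not come from the paper.

```python
# Sketch of a preferential simulation path: visit cells with the most
# informative soft (uncertain) data first. Probabilities are hypothetical.
soft_prob = {
    (0, 0): 0.95,  # near-certain facies 1
    (0, 1): 0.50,  # completely uninformed
    (1, 0): 0.10,  # strong evidence of facies 0
    (1, 1): 0.65,  # weakly informed
}

def informativeness(p):
    """Distance from the uninformed probability 0.5 for a binary facies."""
    return abs(p - 0.5)

# Preferential path: most informed cells first. A random path (as in
# standard SNESIM/Direct Sampling) would ignore this ordering entirely.
path = sorted(soft_prob, key=lambda cell: informativeness(soft_prob[cell]),
              reverse=True)
print(path)  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Simulating well-informed cells early lets their (near-certain) outcomes condition the poorly informed cells through the MPS pattern statistics, instead of being overridden by early draws at uninformed locations.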
The Multiple Control of Verbal Behavior
Michael, Jack; Palmer, David C; Sundberg, Mark L
2011-01-01
Amid the novel terms and original analyses in Skinner's Verbal Behavior, the importance of his discussion of multiple control is easily missed, but multiple control of verbal responses is the rule rather than the exception. In this paper we summarize and illustrate Skinner's analysis of multiple control and introduce the terms convergent multiple control and divergent multiple control. We point out some implications for applied work and discuss examples of the role of multiple control in humor, poetry, problem solving, and recall. Joint control and conditional discrimination are discussed as special cases of multiple control. We suggest that multiple control is a useful analytic tool for interpreting virtually all complex behavior, and we consider the concepts of derived relations and naming as cases in point. PMID:22532752
Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms
NASA Astrophysics Data System (ADS)
Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.
1997-09-01
This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
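A minimize-the-maximum (bottleneck) assignment of this kind can be sketched by binary-searching over candidate travel distances and testing feasibility at each threshold with a bipartite matching. This is one standard way to solve the problem, not necessarily the algorithm the paper used; the robot and grid-slot coordinates below are invented for the example.

```python
# Bottleneck assignment sketch: binary search over distinct distances, with
# an augmenting-path bipartite matching as the feasibility test.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feasible(costs, limit, n):
    """Can every robot be matched to a distinct slot within `limit`?"""
    match = [-1] * n  # slot index -> robot index

    def try_assign(r, seen):
        for s in range(n):
            if costs[r][s] <= limit and s not in seen:
                seen.add(s)
                if match[s] == -1 or try_assign(match[s], seen):
                    match[s] = r
                    return True
        return False

    return all(try_assign(r, set()) for r in range(n))

def bottleneck_assignment(robots, slots):
    """Smallest D such that all robots reach distinct slots within D."""
    n = len(robots)
    costs = [[dist(r, s) for s in slots] for r in robots]
    candidates = sorted({c for row in costs for c in row})
    lo, hi = 0, len(candidates) - 1
    while lo < hi:  # binary search over the sorted distinct distances
        mid = (lo + hi) // 2
        if feasible(costs, candidates[mid], n):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]  # the minimized maximum travel distance

robots = [(0, 0), (4, 0), (0, 3)]
slots = [(1, 0), (4, 1), (0, 4)]
print(bottleneck_assignment(robots, slots))  # → 1.0
```

Since assignment problems of this size solve in milliseconds, the "seconds for one hundred robots" figure in the abstract is consistent with the hardware of the era.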
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
Intrinsic properties of cupric oxide nanoparticles enable effective filtration of arsenic from water
McDonald, Kyle J.; Reynolds, Brandon; Reddy, K. J.
2015-01-01
The contamination of arsenic in human drinking water supplies is a serious global health concern. Despite multiple years of research, sustainable arsenic treatment technologies have yet to be developed. This study demonstrates the intrinsic abilities of cupric oxide nanoparticles (CuO-NP) towards arsenic adsorption and the development of a point-of-use filter for field application. X-ray diffraction and X-ray photoelectron spectroscopy experiments were used to examine adsorption, desorption, and readsorption of aqueous arsenite and arsenate by CuO-NP. Field experiments were conducted with a point-of-use filter, coupled with real-time arsenic monitoring, to remove arsenic from domestic groundwater samples. The CuO-NP were regenerated by desorbing arsenate via increasing pH above the zero point of charge. Results suggest an effective oxidation of arsenite to arsenate on the surface of CuO-NP. Naturally occurring arsenic was effectively removed by both as-prepared and regenerated CuO-NP in a field demonstration of the point-of-use filter. A sustainable arsenic mitigation model for contaminated water is proposed. PMID:26047164
NASA Astrophysics Data System (ADS)
Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.
2017-08-01
In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. Thereby, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
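The "fundamental geometric features" for distinguishing linear, planar, and volumetric structures are commonly derived from the eigenvalues of a neighborhood's 3D covariance matrix. A small illustrative sketch (the specific feature definitions are standard dimensionality features, not quoted from this paper):

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Linearity, planarity, and sphericity of a 3D point neighborhood.

    Computed from the sorted eigenvalues l1 >= l2 >= l3 of the
    neighborhood's covariance matrix; a line scores high linearity,
    a roof patch high planarity, vegetation high sphericity.
    """
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity
```

For points sampled along a straight edge, linearity approaches 1 while planarity and sphericity vanish.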
Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.
Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael
2014-09-01
In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.
New fast DCT algorithms based on Loeffler's factorization
NASA Astrophysics Data System (ADS)
Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon
2012-10-01
This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3 to 4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
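The direct matrix-multiplication baseline against which such factorizations are compared can be sketched as follows; this builds the orthonormal DCT-II matrix (N² multiplies per transform, the count that Loeffler-style butterflies reduce) and is a generic illustration, not the paper's integer realization:

```python
import numpy as np
from scipy.fft import dct

def dct_matrix(N):
    """Orthonormal DCT-II matrix in direct (matrix-multiplication) form."""
    k = np.arange(N)[:, None]   # output frequency index
    n = np.arange(N)[None, :]   # input sample index
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] *= np.sqrt(1.0 / N)    # DC row scaling for orthonormality
    C[1:] *= np.sqrt(2.0 / N)
    return C
```

Applying `dct_matrix(32) @ x` reproduces `scipy.fft.dct(x, norm='ortho')` for a 32-point input, at the cost of 1024 multiplications that fast algorithms largely eliminate.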
Fu, Glenn K; Wilhelmy, Julie; Stern, David; Fan, H Christina; Fodor, Stephen P A
2014-03-18
We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification.
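The core counting step, in which PCR duplicates collapse onto a shared molecular barcode so that distinct barcodes per gene recover the absolute starting-molecule count, can be sketched as below (hypothetical reads, not data from the study):

```python
from collections import defaultdict

def count_molecules(reads):
    """Count distinct molecular barcodes (UMIs) per gene.

    reads: iterable of (gene, barcode) pairs observed after amplification.
    Duplicated reads of the same original molecule carry the same barcode,
    so the number of distinct barcodes equals the number of starting
    molecules, without calibration to standards.
    """
    barcodes = defaultdict(set)
    for gene, umi in reads:
        barcodes[gene].add(umi)
    return {gene: len(umis) for gene, umis in barcodes.items()}
```

This is why amplification bias does not distort the final quantities: extra copies only re-add an already-counted barcode to a set.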
Caruccio, Nicholas
2011-01-01
DNA library preparation is a common entry point and bottleneck for next-generation sequencing. Current methods generally consist of distinct steps that often involve significant sample loss and hands-on time: DNA fragmentation, end-polishing, and adaptor-ligation. In vitro transposition with Nextera™ Transposomes simultaneously fragments and covalently tags the target DNA, thereby combining these three distinct steps into a single reaction. Platform-specific sequencing adaptors can be added, and the sample can be enriched and bar-coded using limited-cycle PCR to prepare di-tagged DNA fragment libraries. Nextera technology offers a streamlined, efficient, and high-throughput method for generating bar-coded libraries compatible with multiple next-generation sequencing platforms.
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
On the difficulties in characterizing ZnO nanowires.
Schlenker, E; Bakin, A; Weimann, T; Hinze, P; Weber, D H; Gölzhäuser, A; Wehmann, H-H; Waag, A
2008-09-10
The electrical properties of single ZnO nanowires grown by vapor phase transport were investigated. While some samples were contacted by Ti/Au electrodes, another set of samples was investigated using a manipulator tip in a low energy electron point-source microscope. The deduced resistivities range from 1 to 10³ Ω cm. Additionally, the resistivities of nanowires from multiple publications were brought together and compared to the values obtained from our measurements. The overview of all data shows enormous differences (10⁻³-10⁵ Ω cm) in the measured resistivities. In order to reveal the origin of the discrepancies, the influences of growth parameters, measuring methods, contact resistances, crystal structures and ambient conditions are investigated and discussed in detail.
Hermanrud, Christina; Ryner, Malin; Luft, Thomas; Jensen, Poul Erik; Ingenhoven, Kathleen; Rat, Dorothea; Deisenhammer, Florian; Sørensen, Per Soelberg; Pallardy, Marc; Sikkema, Dan; Bertotti, Elisa; Kramer, Daniel; Creeke, Paul; Fogdell-Hahn, Anna
2016-03-01
Neutralizing anti-drug antibodies (NAbs) against therapeutic interferon beta (IFNβ) in people with multiple sclerosis (MS) are measured with cell-based bioassays. The aim of this study was to redevelop and validate two luciferase reporter-gene bioassays, LUC and iLite, using a cut-point approach to identify NAb positive samples. Such an approach is favored by the pharmaceutical industry and governmental regulatory agencies as it has a clear statistical basis and overcomes the limitations of the current assays based on the Kawade principle. The work was conducted following the latest assay guidelines. The assays were re-developed and validated as part of the "Anti-Biopharmaceutical Immunization: Prediction and analysis of clinical relevance to minimize the risk" (ABIRISK) consortium and involved a joint collaboration between four academic laboratories and two pharmaceutical companies. The LUC assay was validated at Innsbruck Medical University (LUCIMU) and at Rigshospitalet (LUCRH) Copenhagen, and the iLite assay at Karolinska Institutet, Stockholm. For both assays, the optimal serum sample concentration in relation to sensitivity and recovery was 2.5% (v/v) in assay media. A Shapiro-Wilk test indicated a normal distribution for the majority of runs, allowing a parametric approach for cut-point calculation to be used, where NAb positive samples could be identified with 95% confidence. An analysis of means and variances indicated that a floating cut-point should be used for all assays. The assays demonstrated acceptable sensitivity for being cell-based assays, with a confirmed limit of detection in neat serum of 1519 ng/mL for LUCIMU, 814 ng/mL for LUCRH, and 320 ng/mL for iLite. Use of the validated cut-point assay, in comparison with the previously used Kawade method, identified 14% more NAb positive samples. In conclusion, implementation of the cut-point design resulted in increased sensitivity to detect NAbs. 
However, the clinical significance of these low positive titers needs to be further evaluated. Copyright © 2016 Elsevier B.V. All rights reserved.
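A parametric screening cut point of the kind validated above is conventionally the negative-control mean plus 1.645 standard deviations, so that roughly 5% of true negatives exceed it (95% confidence for positives). A minimal sketch with hypothetical control values, not the ABIRISK data or its exact normalization:

```python
import numpy as np

def parametric_cut_point(negative_controls, z=1.645):
    """One-sided 95% parametric screening cut point.

    Assumes approximately normally distributed negative-control responses
    (e.g., after a Shapiro-Wilk check, as in the study above); responses
    beyond mean + z*SD are flagged positive.
    """
    x = np.asarray(negative_controls, dtype=float)
    return x.mean() + z * x.std(ddof=1)
```

A "floating" cut point re-applies the same formula per run or per plate, which is what the analysis of means and variances motivated here.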
Proposing a Tentative Cut Point for the Compulsive Sexual Behavior Inventory
Storholm, Erik David; Fisher, Dennis G.; Napper, Lucy E.; Reynolds, Grace L.
2015-01-01
Bivariate analyses were utilized in order to identify the relations between scores on the Compulsive Sexual Behavior Inventory (CSBI) and self-report of risky sexual behavior and drug abuse among 482 racially and ethnically diverse men and women. CSBI scores were associated with both risky sexual behavior and drug abuse among a diverse non-clinical sample, thereby providing evidence of criterion-related validity. The variables that demonstrated a high association with the CSBI were subsequently entered into a multiple regression model. Four variables (number of sexual partners in the last 30 days, self-report of trading drugs for sex, having paid for sex, and perceived chance of acquiring HIV) were retained as variables with good model fit. Receiver operating characteristic (ROC) curve analyses were conducted in order to determine the optimal tentative cut point for the CSBI. The four variables retained in the multiple regression model were utilized as exploratory gold standards in order to construct ROC curves. The ROC curves were then compared to one another in order to determine the point that maximized both sensitivity and specificity in the identification of compulsive sexual behavior with the CSBI scale. The current findings suggest that a tentative cut point of 40 may prove clinically useful in discriminating between persons who exhibit compulsive sexual behavior and those who do not. Because of the association between compulsive sexual behavior and HIV, STIs, and drug abuse, it is paramount that a psychometrically sound measure of compulsive sexual behavior is made available to all healthcare professionals working in disease prevention and other areas. PMID:21203814
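The step of locating the point that maximizes both sensitivity and specificity on an ROC curve is usually operationalized as maximizing Youden's J (sensitivity + specificity - 1). A toy sketch of that selection rule, with made-up scores rather than the CSBI sample:

```python
import numpy as np

def youden_cut_point(scores, labels):
    """Cut point maximizing sensitivity + specificity (Youden's J).

    scores: continuous scale values (e.g., inventory scores).
    labels: boolean gold-standard outcome per subject.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_cut, best_j = None, -1.0
    for c in np.unique(scores):
        pred = scores >= c
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut
```

Comparing the resulting cut across several exploratory gold standards, as the authors did with four regression-retained variables, guards against any single criterion dominating the choice.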
Pakula, Basia; Marshall, Brandon D L; Shoveller, Jean A; Chesney, Margaret A; Coates, Thomas J; Koblin, Beryl; Mayer, Kenneth; Mimiaga, Matthew; Operario, Don
2016-08-01
This study examines gradients in depressive symptoms by socioeconomic position (SEP; i.e., income, education, employment) in a sample of men who have sex with men (MSM). Data were used from EXPLORE, a randomized, controlled behavioral HIV prevention trial for HIV-uninfected MSM in six U.S. cities (n = 4,277). Depressive symptoms were assessed using the Center for Epidemiologic Studies Depression scale (short form). Multiple linear regressions were fitted with interaction terms to assess additive and multiplicative relationships between SEP and depressive symptoms. Depressive symptoms were more prevalent among MSM with lower income, lower educational attainment, and those in the unemployed/other employment category. Income, education, and employment made significant contributions in additive models after adjustment. The employment-income interaction was statistically significant, indicating a multiplicative effect. This study revealed gradients in depressive symptoms across SEP of MSM, pointing to income and employment status and, to a lesser extent, education as key factors for understanding heterogeneity of depressive symptoms.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
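One standard combination function behind such adaptive multistage inference is the inverse-normal rule: stage-wise p-values are transformed to z-scores and combined with prespecified weights. A minimal sketch with equal weights (an illustration of the general tool, not the paper's specific mixture-based procedure):

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    """Combine two stage-wise p-values via the inverse-normal method.

    Weights must be prespecified with w1**2 + w2**2 = 1; because each
    stage enters only through its own p-value, data-driven adaptations
    between stages (e.g., sample-size increase) do not inflate type I error.
    """
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return norm.sf(z)
```

With p1 = p2 = 0.5 the combined p-value is exactly 0.5, and two small stage-wise p-values combine to something smaller than either.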
Calvasina, Paola; Muntaner, Carles; Quiñonez, Carlos
2014-12-03
Immigrants are often considered to have poorer oral health than native-born populations. One possible explanation for immigrants' poor oral health is lack of access to dental care. There is very little information on Canadian immigrants' access to dental care and unmet dental care needs. This study examines predictors of unmet dental care needs among a sample of adult immigrants to Canada over a 3.5-year post-migration period. A secondary data analysis was conducted on the Longitudinal Survey of Immigrants to Canada (LSIC). Sampling and bootstrap weights were applied to make the data nationally representative. Simple descriptive analyses were conducted to describe the demographic characteristics of the sample. Bivariate and multiple logistic regression analyses were applied to identify factors associated with immigrants' unmet dental care needs over the 3.5-year period. Approximately 32% of immigrants reported unmet dental care needs. Immigrants lacking dental insurance (OR = 2.63; 95% CI: 2.05-3.37), those with an average household income of $20,000 to $40,000 per year (OR = 1.62; 95% CI: 1.01-2.61), and those earning lower than $20,000 (OR = 2.25; 95% CI: 1.31-3.86) were more likely to report unmet dental care needs than those earning more than $60,000 per year. In addition, South Asian (OR = 1.85; CI: 1.25-2.73) and Chinese (OR = 2.17; CI: 1.47-3.21) immigrants had significantly higher odds of reporting unmet dental care needs than Europeans. Lack of dental insurance, low income and ethnicity predicted unmet dental care needs over a 3.5-year period in a sample of immigrants to Canada.
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges: the sampled ELFs are usually very heterogeneous, can originate from various sources (so that so-called "data gaps" can appear), and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log-log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
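The log-log transform-then-spline strategy can be sketched as follows. Note the hedge: SciPy does not ship Steffen's spline (it is available in GSL, for instance), so this sketch substitutes PCHIP, a different but likewise local, monotonicity-preserving cubic scheme; the data are hypothetical:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def loglog_spline(x, y):
    """Fit sampled positive data in log-log space, evaluate in linear space.

    Taking logs compresses the wide dynamic range typical of optical ELF
    data, so sample spacing becomes far more uniform before the local
    monotonicity-preserving spline is fitted.
    """
    fit = PchipInterpolator(np.log(x), np.log(y))
    return lambda xq: np.exp(fit(np.log(xq)))
```

For data following a power law the log-log representation is exactly linear, so the interpolant reproduces the law between samples without spurious oscillations.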
Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William
2015-09-01
In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry
NASA Astrophysics Data System (ADS)
Dai, Jisheng; Liu, An; Lau, Vincent K. N.
2018-05-01
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
du Prel, Jean-Baptist; Iskenius, Mario; Peter, Richard
2014-12-01
To investigate multiple mediations of the association between education and depressive symptoms (BDI-V) by work-related stress (ERI) and social isolation, the regional variation of the first mediation and a potential moderating effect of regional unemployment rate. 6339 employees born in 1959 and 1965 were randomly recruited from 222 sample points in a German cohort study on work, age, health and work participation. A multilevel model of moderated lower-level mediation was used to investigate the confirmatory research question. Multiple mediations were tested corresponding to Baron and Kenny. These analyses were stratified for age and adjusted for sex, negative affectivity and overcommitment. In the association between education and depressive symptoms, indirect effects of work-related stress and social isolation were significant in both age cohorts whereas a direct association was observable in the younger cohort, only. The significant regional variation in the association between work-related stress and depressive symptoms was not statistically explained by regional unemployment rate. Our findings point out that work-related stress and social isolation play an intermediary role between education and depressive symptoms in middle-aged employees.
Dendrimer-Linked Antifreeze Proteins Have Superior Activity and Thermal Recovery.
Stevens, Corey A; Drori, Ran; Zalis, Shiran; Braslavsky, Ido; Davies, Peter L
2015-09-16
By binding to ice, antifreeze proteins (AFPs) depress the freezing point of a solution and inhibit ice recrystallization if freezing does occur. Previous work showed that the activity of an AFP was incrementally increased by fusing it to another protein. Even larger increases in activity were achieved by doubling the number of ice-binding sites by dimerization. Here, we have combined the two strategies by linking multiple outward-facing AFPs to a dendrimer to significantly increase both the size of the molecule and the number of ice-binding sites. Using a heterobifunctional cross-linker, we attached between 6 and 11 type III AFPs to a second-generation polyamidoamine (G2-PAMAM) dendrimer with 16 reactive termini. This heterogeneous sample of dendrimer-linked type III constructs showed a greater than 4-fold increase in freezing point depression over that of monomeric type III AFP. This multimerized AFP was particularly effective at ice recrystallization inhibition activity, likely because it can simultaneously bind multiple ice surfaces. Additionally, attachment to the dendrimer has afforded the AFP superior recovery from heat denaturation. Linking AFPs together via polymers can generate novel reagents for controlling ice growth and recrystallization.
Emergency Dose Estimation Using Optically Stimulated Luminescence from Human Tooth Enamel
Sholom, S.; DeWitt, R.; Simon, S.L.; Bouville, A.; McKeever, S.W.S.
2011-01-01
Human teeth were studied for potential use as emergency Optically Stimulated Luminescence (OSL) dosimeters. By using multiple-teeth samples in combination with a custom-built sensitive OSL reader, ⁶⁰Co-equivalent doses below 0.64 Gy were measured immediately after exposure with the lowest value being 27 mGy for the most sensitive sample. The variability of OSL sensitivity, from individual to individual using multiple-teeth samples, was determined to be 53%. X-ray and beta exposure were found to produce OSL curves with the same shape that differed from those due to ultraviolet (UV) exposure; as a result, correlation was observed between OSL signals after X-ray and beta exposure and was absent if compared to OSL signals after UV exposure. Fading of the OSL signal was "typical" for most teeth, with just a few incisors showing atypical behavior. Typical fading dependences were described by a bi-exponential decay function with "fast" (decay time around 12 min) and "slow" (decay time about 14 h) components. OSL detection limits, based on the techniques developed to date, were found to be satisfactory from the point-of-view of medical triage requirements if conducted within 24 hours of the exposure. PMID:21949479
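The bi-exponential fading model can be fitted with ordinary nonlinear least squares. A sketch on synthetic, noiseless data: the decay times mirror the reported ~12 min fast and ~14 h (840 min) slow components, while the amplitudes and time grid are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_fast, tau_fast, a_slow, tau_slow):
    """Bi-exponential OSL fading: fast plus slow decay components."""
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.linspace(0.0, 2000.0, 400)        # minutes since exposure
y = biexp(t, 0.4, 12.0, 0.6, 840.0)      # synthetic signal, 14 h = 840 min
p0 = [0.5, 10.0, 0.5, 500.0]             # rough starting guesses
params, _ = curve_fit(biexp, t, y, p0=p0)
```

In practice the fitted decay times calibrate how much signal loss to correct for when a tooth is read hours after exposure, which is what makes the 24-hour triage window workable.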
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification. The LiDAR data is represented as a 4-order tensor. Sparse representation for classification(SRC) method is used for LiDAR tensor classification. It turns out SRC need only a few of training samples from each class, meanwhile can achieve good classification result. Multiple features are extracted from raw LiDAR points to generate a high-dimensional vector at each point. Then the LiDAR tensor is built by the spatial distribution and feature vectors of the point neighborhood. The entries of LiDAR tensor are accessed via four indexes. Each index is called mode: three spatial modes in direction X ,Y ,Z and one feature mode. Sparse representation for classification(SRC) method is proposed in this paper. The sparsity algorithm is to find the best represent the test sample by sparse linear combination of training samples from a dictionary. To explore the sparsity of LiDAR tensor, the tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices could be considered as the principal components in each mode. The entries of core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structure dictionary along each mode is built. The overall structure dictionary composes of class-specified sub-dictionaries. Then the sparse core tensor is calculated by tensor OMP(Orthogonal Matching Pursuit) method based on dictionaries along each mode. 
It is expected that the original tensor will be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes will be nearly zero. SRC therefore uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is classified into 6 classes: ground, roofs, vegetation, covered ground, walls, and other points. Only 6 training samples are taken from each class. For the final classification result, ground and covered ground are merged into one class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls, and 20.39% for other objects.
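The reconstruction-error classification rule can be illustrated in the simpler vector case (the paper's tensor OMP and Tucker machinery is more involved). The sketch below implements a minimal OMP and assigns a test sample to the class whose sub-dictionary reconstructs it with the smallest residual; the dimensions, subspaces, and the 6-samples-per-class setup are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate y with k columns of D."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    return idx, coef

def src_classify(y, dictionaries, k=2):
    """Assign y to the class whose sub-dictionary reconstructs it best."""
    errs = []
    for D in dictionaries:
        idx, coef = omp(D, y, k)
        errs.append(np.linalg.norm(y - D[:, idx] @ coef))
    return int(np.argmin(errs))

rng = np.random.default_rng(1)
# Two toy classes living in distinct 2-D subspaces of a 20-D feature space,
# with 6 training atoms each (mirroring the few-samples-per-class setting).
base0 = rng.normal(size=(20, 2)); base1 = rng.normal(size=(20, 2))
D0 = base0 @ rng.normal(size=(2, 6)); D1 = base1 @ rng.normal(size=(2, 6))
D0 /= np.linalg.norm(D0, axis=0); D1 /= np.linalg.norm(D1, axis=0)

sample = base0 @ np.array([1.0, -0.5])      # lies in the class-0 subspace
print(src_classify(sample, [D0, D1]))       # expected: 0
```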
Multiple excitation nano-spot generation and confocal detection for far-field microscopy.
Mondal, Partha Pratim
2010-03-01
An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSFs (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming point-by-point excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most bioimaging techniques and may find potential application in high-resolution fluorescence microscopy and nanoscale imaging.
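A toy one-dimensional model conveys why counter-propagating fields produce multiple axial excitation spots: the interference term gives an intensity proportional to cos²(kz), modulated by the axial envelope. This is an idealized plane-wave sketch with an assumed wavelength and envelope width, not the vectorial DoF-PSF computation of the paper.

```python
import numpy as np

# Two counter-propagating fields with a common axial envelope interfere
# to give I(z) ~ |A(z)|^2 * cos^2(k z): a comb of spots along the axis.
wavelength = 0.5e-6                       # 500 nm excitation (assumed)
k = 2 * np.pi / wavelength
z = np.linspace(-2e-6, 2e-6, 4001)        # 4 um axial window, 1 nm steps
envelope = np.exp(-z**2 / (1e-6) ** 2)    # assumed depth-of-focus envelope
intensity = envelope * np.cos(k * z) ** 2

# Count interference maxima exceeding half the peak intensity
above = intensity > 0.5 * intensity.max()
n_spots = int(np.count_nonzero(np.diff(above.astype(int)) == 1))
print("number of excitation spots:", n_spots)
```

With these assumed parameters the spots are spaced by half a wavelength (250 nm), and the envelope limits how many rise above the half-maximum threshold.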
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
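The pivoted-QR sensor selection can be sketched in a few lines: compute POD modes from snapshot data via the SVD, run a column-pivoted QR on the transposed mode matrix to rank candidate sensor locations, and reconstruct a new state from its point measurements. The data below are synthetic and low-rank by construction; dimensions are illustrative.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)

# Synthetic flow-like data: snapshots built from r coherent structures
n, m, r = 200, 80, 4                      # state size, snapshots, structures
modes_true = rng.normal(size=(n, r))
X = modes_true @ rng.normal(size=(r, m))  # snapshot matrix

# POD modes from the SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Psi = U[:, :r]                            # leading POD modes

# Column-pivoted QR on Psi^T greedily ranks rows (sensor locations)
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]                         # first r pivots = sensor indices

# Reconstruct a new snapshot from its r point measurements
x = modes_true @ rng.normal(size=r)
y = x[sensors]                            # sparse sensor measurements
a = np.linalg.solve(Psi[sensors, :], y)   # modal amplitudes from samples
x_hat = Psi @ a
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print("relative reconstruction error:", rel_err)
```

Because the synthetic state lies exactly in the span of the leading modes, r well-placed point sensors suffice for essentially exact reconstruction; real flow data would add truncation error.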
Drivers of microbiological quality of household drinking water - a case study in rural Ethiopia.
Usman, Muhammed A; Gerber, Nicolas; Pangaribowo, Evita H
2018-04-01
This study aims at assessing the determinants of microbiological contamination of household drinking water under multiple-use water systems in rural areas of Ethiopia. For this analysis, a random sample of 454 households was surveyed between February and March 2014, and water samples from community sources and household storage containers were collected and tested for fecal contamination. The number of Escherichia coli (E. coli) colony-forming units per 100 mL water was used as an indicator of fecal contamination. The microbiological tests demonstrated that 58% of household stored water samples and 38% of protected community water sources were contaminated with E. coli. Moreover, most improved water sources, often considered to provide safe water, showed the presence of E. coli. The results show that households' stored water collected from unprotected wells/springs had higher levels of E. coli than stored water from alternative sources. Distance to water sources and water collection containers are also strongly associated with stored water quality. To ensure the quality of stored water, the study suggests that there is a need to promote water safety from the point-of-source to point-of-use, with due considerations for the linkages between water and agriculture to advance the Sustainable Development Goal 6 of ensuring access to clean water for everyone.
A versatile electrophoresis-based self-test platform.
Staal, Steven; Ungerer, Mathijn; Floris, Arjan; Ten Brinke, Hans-Willem; Helmhout, Roy; Tellegen, Marian; Janssen, Kjeld; Karstens, Erik; van Arragon, Charlotte; Lenk, Stefan; Staijen, Erik; Bartholomew, Jody; Krabbe, Hans; Movig, Kris; Dubský, Pavel; van den Berg, Albert; Eijkel, Jan
2015-03-01
This paper reports on recent research creating a family of electrophoresis-based point-of-care devices for the determination of a wide range of ionic analytes in various sample matrices. These devices are based on a first version for the point-of-care measurement of Li(+), reported in 2010 by Floris et al. (Lab Chip 2010, 10, 1799-1806). With respect to this device, significant improvements in accuracy, precision, detection limit, and reliability have been obtained, especially by the use of multiple injections of one sample on a single chip and integrated data analysis. Internal and external validation by clinical laboratories for the determination of analytes in real patients by a self-test is reported. For Li(+) in blood, better precision than the standard clinical determination was achieved. For Na(+) in human urine, the method was found to be within the clinical acceptability limits. In a veterinary application, Ca(2+) and Mg(2+) were determined in bovine blood by means of the same chip, but using a different platform. Finally, promising preliminary results are reported with the Medimate platform for the determination of creatinine in whole blood and quantification of both cations and anions through replicate measurements on the same sample with the same chip. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
1991-11-01
"Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George. The same report also lists "Backscatter from a Tilted Rough Disc," Donald J. Schertler and Nicholas George.
Protein crystallography prescreen kit
Segelke, Brent W.; Krupka, Heike I.; Rupp, Bernhard
2007-10-02
A kit for prescreening protein concentration for crystallization includes a multiplicity of vials, a multiplicity of pre-selected reagents, and a multiplicity of sample plates. The reagents and a corresponding multiplicity of samples of the protein in solutions of varying concentrations are placed on sample plates. The sample plates containing the reagents and samples are incubated. After incubation, the sample plates are examined to determine which of the sample concentrations are too low and which are too high. The sample concentrations that are optimal for protein crystallization are selected and used.
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Chang, Man-Ling; Shih, Ching-Tien
2009-01-01
This study evaluated whether two people with multiple disabilities and minimal motor behavior would be able to improve their pointing performance using finger poke ability with a mouse wheel through a Dynamic Pointing Assistive Program (DPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, changes a…
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Chiu, Sheng-Kai; Chu, Chiung-Ling; Shih, Ching-Tien; Liao, Yung-Kun; Lin, Chia-Chen
2010-01-01
This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance using hand swing with a standard mouse through an Extended Dynamic Pointing Assistive Program (EDPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and changes a mouse into a precise…
Dynamics Sampling in Transition Pathway Space.
Zhou, Hongyu; Tao, Peng
2018-01-09
The minimum energy pathway contains important information describing the transition between two states on a potential energy surface (PES). Chain-of-states methods were developed to efficiently calculate minimum energy pathways connecting two stable states. In the chain-of-states framework, a series of structures is generated and optimized to represent the minimum energy pathway connecting the two states. However, multiple pathways may exist between two states and should be identified to obtain a full view of the transitions. Therefore, we developed an enhanced sampling method, named the direct pathway dynamics sampling (DPDS) method, to facilitate exploration of a PES for multiple pathways connecting two stable states as well as additional minima and their associated transition pathways. In the DPDS method, molecular dynamics simulations are carried out on the target PES within a chain-of-states framework to directly sample the transition pathway space. The simulations in DPDS are regulated by two parameters controlling the distance among states along the pathway and the smoothness of the pathway. One advantage of the chain-of-states framework is that no specific reaction coordinates are necessary to generate the reaction pathway, because such information is implicitly represented by the structures along the pathway. The chain-of-states setup in DPDS greatly enhances sampling in the high-energy space between the two end states, such as transition states. By removing the constraint on the end states of the pathway, DPDS will also sample pathways connecting minima on a PES in addition to the end points of the starting pathway. This feature makes DPDS an ideal method to directly explore transition pathway space. Three examples demonstrate the efficiency of DPDS in sampling the high-energy regions of the PES that are important for reactions.
Multiple introductions of the dengue vector, Aedes aegypti, into California
Gloria-Soria, Andrea; Evans, Benjamin R.; Kramer, Vicki; Bolling, Bethany G.; Tabachnick, Walter J.; Powell, Jeffrey R.
2017-01-01
The yellow fever mosquito Aedes aegypti inhabits much of the tropical and subtropical world and is a primary vector of dengue, Zika, and chikungunya viruses. Breeding populations of A. aegypti were first reported in California (CA) in 2013. Initial genetic analyses using 12 microsatellites on collections from Northern CA in 2013 indicated the South Central US region as the likely source of the introduction. We expanded genetic analyses of CA A. aegypti by: (a) examining additional Northern CA samples and including samples from Southern CA, (b) including more southern US populations for comparison, and (c) genotyping a subset of samples at 15,698 SNPs. Major results are: (1) Northern and Southern CA populations are distinct. (2) Northern populations are more genetically diverse than Southern CA populations. (3) Northern and Southern CA groups were likely founded by two independent introductions which came from the South Central US and Southwest US/northern Mexico regions respectively. (4) Our genetic data suggest that the founding events giving rise to the Northern CA and Southern CA populations likely occurred before the populations were first recognized in 2013 and 2014, respectively. (5) A Northern CA population analyzed at multiple time-points (two years apart) is genetically stable, consistent with permanent in situ breeding. These results expand previous work on the origin of California A. aegypti with the novel finding that this species entered California on multiple occasions, likely some years before its initial detection. This work has implications for mosquito surveillance and vector control activities not only in California but also in other regions where the distribution of this invasive mosquito is expanding. PMID:28796789
Kunze-Szikszay, Nils; Krack, Lennart A; Wildenauer, Pauline; Wand, Saskia; Heyne, Tim; Walliser, Karoline; Spering, Christopher; Bauer, Martin; Quintel, Michael; Roessler, Markus
2016-10-10
Hyperfibrinolysis (HF) is a major contributor to coagulopathy and mortality in trauma patients. This study investigated (i) the rate of HF during the pre-hospital management of patients with multiple injuries and (ii) the effects of pre-hospital tranexamic acid (TxA) administration on the coagulation system. Blood was obtained at the scene and on admission to the emergency department (ED) from 27 trauma patients with a pre-hospital estimated injury severity score (ISS) ≥16 points. All patients received 1 g of TxA after the first blood sample was taken. Rotational thrombelastometry (ROTEM) was performed for both blood samples, and the results were compared. HF was defined as a maximum lysis (ML) >15 % in EXTEM. The median (min-max) ISS was 17 points (4-50 points). Four patients (15 %) had HF diagnosed via ROTEM at the scene, and 2 patients (7.5 %) had HF diagnosed via ROTEM on admission to the ED. The median ML before TxA administration was 11 % (3-99 %) vs. 10 % after TxA administration (4-18 %; p > 0.05). TxA was administered 37 min (10-85 min) before ED arrival. The ROTEM results before and after TxA administration did not differ significantly. No adverse drug reactions were observed after TxA administration. HF can be present in severely injured patients during pre-hospital care. Antifibrinolytic therapy administered at the scene is a significant time saver. Even in milder trauma, fibrinogen can be decreased to critically low levels. Early administration of TxA cannot reverse or entirely stop this decrease. The pre-hospital use of TxA should be considered for severely injured patients to prevent the worsening of trauma-induced coagulopathy and unnecessarily high fibrinogen consumption. ClinicalTrials.gov ID NCT01938768 (Registered 5 September 2013).
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path-delay variations due to tropospheric water vapor and the ionosphere. Time-series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline have substantial geometrical decorrelation for distributed targets. Short-baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers that do not exhibit the geometrical decorrelation associated with large baselines can be identified. In this approach, interferograms are formed from a stack of SAR complex images using a single reference scene; stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time-series methods is phase-unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital-tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase gradient per pixel.
Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-Band) substantially improve the ability to unwrap the phase correctly by directly reducing both interferometric phase amplitude and temporal decorrelation.
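The short-baseline least-squares combination reduces to a small linear system: each unwrapped interferogram phase is the difference of the unknown cumulative deformation phase at its two acquisition dates. The sketch below recovers a synthetic deformation series from a redundant pair network; the dates, pair list, and noise level are illustrative assumptions.

```python
import numpy as np

# Network of short-baseline pairs over 6 acquisition dates; date 0 is the
# phase reference, so the unknowns are the phases at dates 1..5.
rng = np.random.default_rng(3)
n_dates = 6
truth = np.cumsum(rng.normal(0.5, 0.2, n_dates - 1))   # phase at dates 1..5

pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5), (3, 5)]
A = np.zeros((len(pairs), n_dates - 1))
for row, (i, j) in enumerate(pairs):
    if i > 0:
        A[row, i - 1] = -1.0                           # phase(j) - phase(i)
    A[row, j - 1] = 1.0

phase = A @ truth + rng.normal(0, 0.01, len(pairs))    # unwrapped, noisy
est, *_ = np.linalg.lstsq(A, phase, rcond=None)
max_err = np.abs(est - truth).max()
print("max abs error (rad):", max_err)
```

Redundant pairs (more interferograms than dates) average down the noise; a disconnected network would make the design matrix rank-deficient, which is why multiple reference scenes matter.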
Human blood metabolite timetable indicates internal body time
Kasukawa, Takeya; Sugimoto, Masahiro; Hida, Akiko; Minami, Yoichi; Mori, Masayo; Honma, Sato; Honma, Ken-ichi; Mishima, Kazuo; Soga, Tomoyoshi; Ueda, Hiroki R.
2012-01-01
A convenient way to estimate internal body time (BT) is essential for chronotherapy and time-restricted feeding, both of which use body-time information to maximize potency and minimize toxicity during drug administration and feeding, respectively. Previously, we proposed a molecular timetable based on circadian-oscillating substances in multiple mouse organs or blood to estimate internal body time from samples taken at only a few time points. Here we applied this molecular-timetable concept to estimate and evaluate internal body time in humans. We constructed a 1.5-d reference timetable of oscillating metabolites in human blood samples with 2-h sampling frequency while simultaneously controlling for the confounding effects of activity level, light, temperature, sleep, and food intake. By using this metabolite timetable as a reference, we accurately determined internal body time within 3 h from just two anti-phase blood samples. Our minimally invasive, molecular-timetable method with human blood enables highly optimized and personalized medicine. PMID:22927403
Selbig, William R.; ,; Roger T. Bannerman,
2011-01-01
A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.
Momentary assessment of PTSD symptoms and sexual risk behavior in male OEF/OIF/OND Veterans.
Black, Anne C; Cooney, Ned L; Justice, Amy C; Fiellin, Lynn E; Pietrzak, Robert H; Lazar, Christina M; Rosen, Marc I
2016-01-15
Post-traumatic stress disorder (PTSD) in Veterans is associated with increased sexual risk behaviors, but the nature of this association is not well understood. Typical PTSD measurement, deriving a summary estimate of symptom severity over a period of time, precludes inferences about symptom variability and whether momentary changes in symptom severity predict risk behavior. We assessed the feasibility of measuring daily PTSD symptoms, substance use, and high-risk sexual behavior in Veterans using ecological momentary assessment (EMA). Feasibility indicators were survey completion, PTSD symptom variability, and variability in rates of substance use and sexual risk behavior. Nine male Veterans completed web-based questionnaires by cell phone three times per day for 28 days. Median within-day survey completion rates remained near 90%, and PTSD symptoms showed high within-person variability, ranging up to 59 points on the 80-point scale. Six Veterans reported alcohol or substance use, and substance users reported use of more than one drug. Eight Veterans reported 1 to 28 high-risk sexual events. Heightened PTSD-related negative affect and externalizing behaviors preceded high-risk sexual events. Greater PTSD symptom instability was associated with having multiple sexual partners in the 28-day period. These results are preliminary, given the small sample size and multiple comparisons, and should be verified with larger Veteran samples. Results support the feasibility and utility of using EMA to better understand the relationship between PTSD symptoms and sexual risk behavior in Veterans. Specific antecedent-risk behavior patterns provide promise for focused clinical interventions. Published by Elsevier B.V.
Teunissen, Hanneke A; Spijkerman, Renske; Kuntsche, Emmanuel; Engels, Rutger C M E; Scholte, Ron H J
2017-04-16
There is still limited understanding of how different kinds of drinker prototypes are associated with adolescent drinking. This study uses the strengths of multiple time-point diary measures (enhanced validity of alcohol use measurement) to test the predictive value of abstainer, moderate and heavy drinker prototypes in social situations. We examined whether the favorability of these prototypes (i.e., "prototype evaluation"), the perceived similarity of these prototypes to one's self-image (i.e., "prototype similarity") assessed at baseline, and their interaction predict alcohol use assessed in social situations. Drinker prototypes were assessed in a baseline sample of 599 adolescents. Subsequently, a sample of 77 alcohol-using 16 to 18-year-old males reported their Friday and Saturday evening drinking behavior the next day during eight weeks (resulting in 495 daily measures). Alcohol use was assessed in the company of peers. The more adolescents perceived themselves as similar to heavy drinker prototypes the higher their alcohol consumption in social situations. The more adolescents held favorable abstainer prototypes, the lower their alcohol consumption. The interaction between prototype evaluation and similarity was not significant. By using a more reliable and valid method to assess adolescents' alcohol use, the present study showed that more "extreme" drinker prototypes (i.e., heavy drinker and abstainer prototypes) are most predictive of adolescent alcohol use in social situations. Increasing the perceived dissimilarity to heavy drinker prototypes and the favorability of abstainer prototypes may therefore be important targets in interventions aimed at reducing adolescents' alcohol consumption.
NASA Technical Reports Server (NTRS)
Moustafa, Samiah E.; Rennermalm, Asa K.; Roman, Miguel O.; Wang, Zhuosen; Schaaf, Crystal B.; Smith, Laurence C.; Koenig, Lora S.; Erb, Angela
2017-01-01
MODerate resolution Imaging Spectroradiometer (MODIS) albedo products have been validated over spatially uniform, snow-covered areas of the Greenland ice sheet (GrIS) using the so-called single 'point-to-pixel' method. This study expands on that methodology by applying a 'multiple-point-to-pixel' method and an examination of spatial autocorrelation (here using semivariogram analysis), drawing on in situ observations, high-resolution WorldView-2 (WV-2) surface reflectances, and MODIS Collection V006 daily blue-sky albedo over spatially heterogeneous surfaces in the lower ablation zone of southwest Greenland. Our results, using 232 ground-based samples within two MODIS pixels, one more spatially heterogeneous than the other, show little difference in accuracy among narrowband and broadband albedos (except for Band 2). Within the more homogeneous pixel area, in situ and MODIS albedos were very close (error varied from -4% to +7%) and within the range of ASD standard errors. The semivariogram analysis revealed that the minimum observational footprint needed for a spatially representative sample is 30 m. In contrast, over the more spatially heterogeneous pixel, a minimum footprint size was not quantifiable due to spatial autocorrelation and far exceeds the effective resolution of the MODIS retrievals. Over the highly heterogeneous pixel, MODIS is lower than ground measurements by 4-7%, partly due to a known in situ undersampling of darker surfaces that are often impassable by foot (e.g., meltwater features and shadowing effects over crevasses). Despite the sampling issue, our analysis errors are very close to the stated general accuracy of the MODIS product of 5%. Thus, our study suggests that the MODIS albedo product performs well in a very heterogeneous, low-albedo area of the ice sheet ablation zone.
Furthermore, we demonstrate that single 'point-to-pixel' methods alone are insufficient for characterizing and validating the variation of surface albedo displayed in the lower ablation area. This is because the distribution of deviations of in situ data from MODIS albedo shows a substantial range, with average values for the 10th and 90th percentiles of -0.30 and 0.43 across all bands. Thus, if only a single point is taken for ground validation and is randomly selected from either distribution tail, the error would appear considerable. Given the need for multiple in situ points, concurrent albedo measurements derived from existing AWSs, low-flying vehicles (airborne or unmanned), and high-resolution imagery (WV-2) are needed to resolve high sub-pixel variability in the ablation zone and thus further improve our characterization of Greenland's surface albedo.
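The semivariogram analysis used to judge the minimum representative footprint can be reproduced in outline: bin half the squared differences of point samples by separation distance and average within each bin. The sketch below uses a synthetic 1-D albedo transect with an assumed variation scale and noise level, not the study's field data.

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs with lag in each bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)       # unique pairs only
    d, sq = d[iu], sq[iu]
    gamma = [sq[(d >= lo) & (d < hi)].mean()
             for lo, hi in zip(bins[:-1], bins[1:])]
    return np.array(gamma)

# Synthetic albedo samples on a 100 m transect with ~40 m-scale variation
rng = np.random.default_rng(4)
x = rng.uniform(0, 100, 300)
albedo = 0.4 + 0.1 * np.sin(2 * np.pi * x / 40) + rng.normal(0, 0.01, x.size)
coords = x[:, None]

bins = np.arange(0, 60, 5.0)                     # 5 m lag bins
gamma = empirical_semivariogram(coords, albedo, bins)
print(np.round(gamma[:4], 4))
```

The semivariance rises with lag before leveling off near the correlation range; the lag at which it flattens is what motivates a minimum footprint estimate like the 30 m figure above.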
Equation of state of solid, liquid and gaseous tantalum from first principles
Miljacic, Ljubomir; Demers, Steven; Hong, Qi-Jun; ...
2015-09-18
Here, we present ab initio calculations of the phase diagram and the equation of state of Ta over a wide range of volumes and temperatures, with volumes from 9 to 180 Å³/atom, temperatures as high as 20,000 K, and pressures up to 7 Mbar. The calculations are based on first principles, in combination with techniques of molecular dynamics, thermodynamic integration, and statistical modeling. Multiple phases are studied, including the solid, fluid, and gas single phases, as well as two-phase coexistences. We calculate the critical point by direct molecular dynamics sampling, and extend the equation of state to very low density through virial series fitting. The accuracy of the equation of state is assessed by comparing both the predicted melting curve and the critical point with previous experimental and theoretical investigations.
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. 
Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point-within-transect and park-level effect. Our results suggest that this model can provide insight into the detection process during avian surveys and reduce bias in estimates of relative abundance but is best applied to surveys of species with greater availability (e.g., breeding songbirds).
Reviving common standards in point-count surveys for broad inference across studies
Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.
2014-01-01
We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. 
Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.
NASA Astrophysics Data System (ADS)
Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.
2017-12-01
Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: 1. distributed, top-down permafrost degradation, and 2. discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2, and included three distinct types of Arctic landscapes - mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 to 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially distributed mass balance revealed conservative transport of DOC and nitrogen and indicated possible strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.
Churkin, Alexander; Barash, Danny
2008-01-01
Background: RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single-point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single-point mutations to the general case of multiple-point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple-point mutations, a process that requires traversing all possible mutations, becomes highly expensive, since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects, based on stability considerations, only those mutations that are likely to be conformationally rearranging. The approach is best examined using the dot-plot representation for RNA secondary structure. Results: Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nt and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion: A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure.
A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
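The combinatorial growth that motivates this stability-based pre-selection is easy to quantify; a minimal sketch (the counting formula, C(n, m) position subsets times 3 alternative bases per mutated position, is standard combinatorics and not specific to MultiRNAmute):

```python
from math import comb

def num_mutants(n, m):
    """Number of distinct m-point mutants of a sequence of length n:
    C(n, m) choices of positions, times 3 alternative bases at each."""
    return comb(n, m) * 3 ** m

# Exhaustive analysis therefore grows roughly as O(n^m):
for m in (1, 2, 3):
    print(m, num_mutants(100, m))
# prints:
# 1 300
# 2 44550
# 3 4365900
```

Over four million structure predictions for a 100-nt sequence at m = 3 makes clear why pruning to likely conformation-changing mutations is needed before folding.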
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Li-hong; Li, Hui; Li, Jin-ping
2011-12-09
Highlights: • miR-125b is frequently down-regulated in osteosarcoma samples and human osteosarcoma cell lines. • Ectopic restoration of miR-125b suppresses cell proliferation and migration in vitro. • STAT3 is the direct and functional downstream target of miR-125b. • STAT3 can bind to the promoter region of miR-125b and serves as a transactivator. -- Abstract: There is accumulating evidence that microRNAs are involved in multiple processes in development and tumor progression. Abnormally expressed miR-125b was found to play a fundamental role in several types of cancer; however, whether miR-125b participates in regulating the initiation and progression of osteosarcoma still remains unclear. Here we demonstrate that miR-125b is frequently down-regulated in osteosarcoma samples and human osteosarcoma cell lines. The ectopic restoration of miR-125b expression in human osteosarcoma cells suppresses proliferation and migration in vitro and inhibits tumor formation in vivo. We further identified signal transducer and activator of transcription 3 (STAT3) as the direct and functional downstream target of miR-125b. Interestingly, we discovered that the expression of miR-125b is regulated by STAT3 at the level of transcription. STAT3 binds to the promoter region of miR-125b in vitro and serves as a transactivator. Taken together, our findings point to an important role for miR-125b in the molecular etiology of osteosarcoma and suggest that miR-125b is a potential target in the treatment of osteosarcoma.
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
In recent years, updating the inventory of road infrastructures based on field work is labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point or object based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework by combining multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside-trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single level (point, segment, or object) based features using large-scale highway scene point clouds.
Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
A multispecies framework for landscape conservation planning.
Schwenk, W Scott; Donovan, Therese M
2011-10-01
Rapidly changing landscapes have spurred the need for quantitative methods for conservation assessment and planning that encompass large spatial extents. We devised and tested a multispecies framework for conservation planning to complement single-species assessments and ecosystem-level approaches. Our framework consisted of 4 elements: sampling to effectively estimate population parameters, measuring how human activity affects landscapes at multiple scales, analyzing the relation between landscape characteristics and individual species occurrences, and evaluating and comparing the responses of multiple species to landscape modification. We applied the approach to a community of terrestrial birds across 25,000 km² with a range of intensities of human development. Human modification of land cover, road density, and other elements of the landscape, measured at multiple spatial extents, had large effects on occupancy of the 67 species studied. Forest composition within 1 km of points had a strong effect on occupancy of many species and a range of negative, intermediate, and positive associations. Road density within 1 km of points, percent evergreen forest within 300 m, and distance from patch edge were also strongly associated with occupancy for many species. We used the occupancy results to group species into 11 guilds that shared patterns of association with landscape characteristics. Our multispecies approach to conservation planning allowed us to quantify the trade-offs of different scenarios of land-cover change in terms of species occupancy. Conservation Biology © 2011 Society for Conservation Biology. No claim to original US government works.
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2011-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfall. Visual image interpretation is still the prevailing standard method for operational purposes but is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms allow learning classification rules for complex image patterns from labelled examples and can be adapted straightforwardly with available training data. To reduce the amount of costly training data, active learning (AL) has evolved as a key concept for guiding sampling in many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas.
The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time, and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded accuracies of only 28% to 50% at equal sampling costs. Compared to the commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground-truth uncertainties to truly achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
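The uncertainty-sampling loop at the core of such AL methods can be sketched as follows; a minimal illustration on synthetic data, with a k-NN vote-fraction classifier standing in for the Random Forest and without the spatial-compactness criterion of the actual method:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic two-class data: two Gaussian blobs (stand-in for image features)
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)), rng.normal(1.0, 1.0, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def knn_proba(X_train, y_train, X_query, k=5):
    """Class-probability estimates from k-NN vote fractions (a simple
    stand-in for the Random Forest class probabilities of the paper)."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    p1 = y_train[nn].mean(axis=1)
    return np.column_stack([1 - p1, p1])

labelled = list(rng.choice(n, size=10, replace=False))
pool = [i for i in range(n) if i not in labelled]

for _ in range(20):  # 20 AL iterations: query the most uncertain pool sample
    proba = knn_proba(X[labelled], y[labelled], X[pool])
    uncertainty = 1.0 - proba.max(axis=1)          # low max-probability = uncertain
    query = pool.pop(int(np.argmax(uncertainty)))  # oracle labels it (y is known here)
    labelled.append(query)

# Accuracy of the final classifier over the full data set
pred = knn_proba(X[labelled], y[labelled], X).argmax(axis=1)
accuracy = (pred == y).mean()
```

The queried samples concentrate near the class boundary, which is why few labels suffice; the published method additionally groups queries into compact spatial regions to cut labeling time.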
Strickland, Karen; Worth, Allison; Kennedy, Catriona
2015-12-01
This study explores the experience of the diagnosis of Multiple Sclerosis for the support person and identifies the impact on their lives. At the time of diagnosis, the support person may not be readily identified in a traditional caring role; however, the diagnosis itself brings with it the possibility of changes to the roles in the relationship and possible consequences for biographical construction. A hermeneutic phenomenological study. A convenience sample of nine support persons was interviewed between December 2008 and March 2010. The data were analysed using interpretative phenomenological analysis. The participants in this study were often not readily identifiable as 'carers'; however, the diagnosis of Multiple Sclerosis implied a shift towards a caring role at some point in the future. The uncertainty surrounding the nature and progression of the condition left this identity hanging and incomplete, and as such contributed to a liminal way of being. This paper reveals that biographical disruption is not limited to the person diagnosed with Multiple Sclerosis: the support person also undergoes a transition in their sense of self to that of 'anticipatory carer'. The findings provide insight into the biographical and emotional impact of Multiple Sclerosis on support persons early in the development of the condition. © 2015 John Wiley & Sons Ltd.
Knifsend, Casey A; Graham, Sandra
2012-03-01
Although adolescents often participate in multiple extracurricular activities, little research has examined how the breadth of activities in which an adolescent is involved relates to school-related affect and academic performance. Relying on a large, multi-ethnic sample (N = 864; 55.9% female), the current study investigated linear and non-linear relationships of 11th grade activity participation in four activity domains (academic/leadership groups, arts activities, clubs, and sports) to adolescents' sense of belonging at school, academic engagement, and grade point average, concurrently and in 12th grade. Results of multiple regression models revealed curvilinear relationships for sense of belonging at school in 11th and 12th grade, grade point average in 11th grade, and academic engagement in 12th grade. Adolescents who were moderately involved (i.e., in two domains) reported a greater sense of belonging at school in 11th and 12th grade, a higher grade point average in 11th grade, and greater academic engagement in 12th grade, relative to those who were more or less involved. Furthermore, adolescents' sense of belonging at school in 11th grade mediated the relationship of domain participation in 11th grade to academic engagement in 12th grade. This study suggests that involvement in a moderate number of activity domains promotes positive school-related affect and greater academic performance. School policy implications and recommendations are discussed.
Sensory and Instrumental Flavor Changes in Green Tea Brewed Multiple Times
Lee, Jeehyun; Chambers, Delores; Chambers, Edgar
2013-01-01
Green teas in leaf form are brewed multiple times, a common selling point. However, the flavor changes, both sensory and volatile compounds, of green teas that have been brewed multiple times are unknown. The objectives of this study were to determine how the aroma and flavor of green teas change as they are brewed multiple times, to determine if a relationship exists between green tea flavors and green tea volatile compounds, and to suggest the number of times that green tea leaves can be brewed. The first and second brews of the green tea samples provided similar flavor intensities. The third and fourth brews provided milder flavors and lower bitterness and astringency when measured using descriptive sensory analysis. In the brewed liquor of green tea mostly linalool, nonanal, geraniol, jasmone, and β-ionone volatile compounds were present at low levels (using gas chromatography-mass spectrometry). The geraniol, linalool, and linalool oxide compounds in green tea may contribute to the floral/perfumy flavor. Green teas in leaf form may be brewed up to four times: the first two brews providing stronger flavor, bitterness, and astringency whereas the third and fourth brews will provide milder flavor, bitterness, and astringency. PMID:28239138
Pyne, Saumyadipta; Lee, Sharon X; Wang, Kui; Irish, Jonathan; Tamayo, Pablo; Nazaire, Marc-Danie; Duong, Tarn; Ng, Shu-Kay; Hafler, David; Levy, Ronald; Nolan, Garry P; Mesirov, Jill; McLachlan, Geoffrey J
2014-01-01
In biomedical applications, an experimenter encounters different potential sources of variation in data such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples, it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template--used for registering populations across samples, and classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in an R package EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.
Information Foraging for Perceptual Decisions
2016-01-01
We tested an information foraging framework to characterize the mechanisms that drive active (visual) sampling behavior in decision problems that involve multiple sources of information. Experiments 1 through 3 involved participants making an absolute judgment about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between 2 motion patterns that could only be sampled sequentially. Our results show that: (a) Information (about noisy motion information) grows to an asymptotic level that depends on the quality of the information source; (b) The limited growth is attributable to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (c) Little information is lost once a new source of information is being sampled; and (d) The point at which the observer switches from 1 source to another is governed by online monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behavior. PMID:27819455
Synchronizing data from irregularly sampled sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uluyol, Onder
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
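A minimal sketch of such re-sampling and synchronization, assuming linear interpolation onto a common uniform timeline (the patent text does not specify the interpolation scheme, and the sensor data below are hypothetical):

```python
import numpy as np

def synchronize(sensors, rate_hz):
    """Re-sample each sensor's irregular (times, values) series onto a
    common uniform timeline at rate_hz, via linear interpolation.

    `sensors` maps sensor name -> (times, values) arrays. The common
    timeline spans the overlap of all sensors; rate_hz should exceed
    every sensor's native rate, per the method described above.
    """
    t_start = max(t[0] for t, _ in sensors.values())
    t_end = min(t[-1] for t, _ in sensors.values())
    grid = np.arange(t_start, t_end, 1.0 / rate_hz)
    return grid, {name: np.interp(grid, t, v) for name, (t, v) in sensors.items()}

# Two sensors sampled at irregular intervals and different rates
sensors = {
    "temp": (np.array([0.0, 0.7, 1.1, 2.3, 3.0]),
             np.array([20.0, 20.5, 21.0, 21.2, 21.5])),
    "pressure": (np.array([0.0, 0.4, 1.9, 3.0]),
                 np.array([101.0, 101.2, 101.1, 100.9])),
}
grid, synced = synchronize(sensors, rate_hz=10.0)  # 10 Hz > both native rates
```

After this step, every sensor has a value at every grid instant, so the streams can be compared or fused sample-by-sample.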
FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.
Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri
2015-11-01
There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.
Gschwind, Michael K [Chappaqua, NY
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
NASA Astrophysics Data System (ADS)
Bartholomeusz, Daniel A.; Davies, Rupert H.; Andrade, Joseph D.
2006-02-01
A centrifugal-based microfluidic device was built with lyophilized bioluminescent reagents for measuring multiple metabolites from a sample of less than 15 μL. Microfluidic channels, reaction wells, and valves were cut in adhesive vinyl film using a knife plotter with features down to 30 μm and transferred to metalized polycarbonate compact disks (CDs). The fabrication method was simple enough to test over 100 prototypes within a few months. It also allowed enzymes to be packaged in microchannels without exposure to heat or chemicals. The valves were rendered hydrophobic using liquid phase deposition. Microchannels were patterned using soft lithography to make them hydrophilic. Reagents and calibration standards were deposited and lyophilized in different wells before being covered with another adhesive film. Sample delivery was controlled by a modified CD-ROM drive. The CD was capable of distributing 200 nL sample aliquots to 36 channels, each with a different set of reagents that mixed with the sample before initiating the luminescent reactions. Reflection of light from the metalized layer and lens configuration allowed for 20% of the available light to be collected from each channel. ATP was detected down to 0.1 μM. Creatinine, glucose, and galactose were also measured in micromolar and millimolar ranges. Other optical-based analytical assays can easily be incorporated into the device design. The minimal sample size needed and expandability of the device make it easier to simultaneously measure a variety of clinically relevant analytes in point-of-care settings.
Effects of electrofishing gear type on spatial and temporal variability in fish community sampling
Meador, M.R.; McIntyre, J.P.
2003-01-01
Fish community data collected from 24 major river basins between 1993 and 1998 as part of the U.S. Geological Survey's National Water-Quality Assessment Program were analyzed to assess multiple-reach (three consecutive reaches) and multiple-year (three consecutive years) variability in samples collected at a site. Variability was assessed using the coefficient of variation (CV; SD/mean) of species richness, the Jaccard index (JI), and the percent similarity index (PSI). Data were categorized by three electrofishing sample collection methods: backpack, towed barge, and boat. Overall, multiple-reach CV values were significantly lower than those for multiple years, whereas multiple-reach JI and PSI values were significantly greater than those for multiple years. Multiple-reach and multiple-year CV values did not vary significantly among electrofishing methods, although JI and PSI values were significantly greatest for backpack electrofishing across multiple reaches and multiple years. The absolute difference between mean species richness for multiple-reach samples and mean species richness for multiple-year samples was 0.8 species (9.5% of total species richness) for backpack samples, 1.7 species (10.1%) for towed-barge samples, and 4.5 species (24.4%) for boat-collected samples. Review of boat-collected fish samples indicated that representatives of four taxonomic families - Catostomidae, Centrarchidae, Cyprinidae, and Ictaluridae - were collected at all sites. Of these, catostomids exhibited greater interannual variability than centrarchids, cyprinids, or ictalurids. Caution should be exercised when combining boat-collected fish community data from different years because of relatively high interannual variability, which is primarily due to certain relatively mobile species. Such variability may obscure longer-term trends.
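The three variability measures used in this study are straightforward to compute; a minimal sketch with hypothetical reach samples (PSI is taken here in its standard percent-similarity form, the sum of element-wise minimum relative abundances):

```python
import numpy as np

def cv(richness):
    """Coefficient of variation: SD / mean (here of species richness)."""
    x = np.asarray(richness, dtype=float)
    return x.std(ddof=1) / x.mean()

def jaccard(a, b):
    """Jaccard index: species shared / species present in either sample."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def percent_similarity(p, q):
    """Percent similarity: 100 * sum of element-wise minimum relative abundances."""
    p = np.asarray(p, dtype=float) / sum(p)
    q = np.asarray(q, dtype=float) / sum(q)
    return 100.0 * np.minimum(p, q).sum()

# Hypothetical counts from two consecutive reaches at one site
reach1 = {"bass": 10, "carp": 5, "sucker": 2}
reach2 = {"bass": 8, "carp": 7, "darter": 1}
species = sorted(set(reach1) | set(reach2))
counts1 = [reach1.get(s, 0) for s in species]
counts2 = [reach2.get(s, 0) for s in species]

ji = jaccard(reach1, reach2)      # 2 shared / 4 total = 0.5
psi = percent_similarity(counts1, counts2)
richness_cv = cv([3, 3, 4])       # richness over three consecutive reaches
```

High JI and PSI with low CV across consecutive reaches, as the backpack samples showed, indicates that a single reach is reasonably representative of the site.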
Graph transformation method for calculating waiting times in Markov chains.
Trygubenko, Semen A; Wales, David J
2006-06-21
We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
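For small chains, the mean first-passage time the authors extract can also be obtained by the standard linear-algebra route; a sketch under that assumption (this illustrates the quantity, not the graph-transformation algorithm itself):

```python
# Mean first-passage time (MFPT) to a target set in a finite discrete-time
# Markov chain: for the transient states, tau = (I - Q)^{-1} * 1, where Q is
# the transition matrix restricted to non-target states.
import numpy as np

def mean_first_passage_times(P, targets):
    """MFPT from every state to the target set, for row-stochastic P."""
    n = P.shape[0]
    transient = [i for i in range(n) if i not in set(targets)]
    Q = P[np.ix_(transient, transient)]
    tau = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    result = np.zeros(n)          # zero for the target states themselves
    for k, i in enumerate(transient):
        result[i] = tau[k]
    return result

# Illustrative three-state chain; MFPT to the absorbing state 2.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])
print(mean_first_passage_times(P, targets=[2]))
```

The graph-transformation approach of the paper computes the same averages by iteratively removing states while renormalising transition probabilities and waiting times, which avoids the dense linear solve for large databases.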
NASA Technical Reports Server (NTRS)
Hewes, C. R.; Bosshart, P. W.; Eversole, W. L.; Dewit, M.; Buss, D. D.
1976-01-01
Two CCD techniques were discussed for performing an N-point sampled data correlation between an input signal and an electronically programmable reference function. The design and experimental performance of an implementation of the direct time correlator utilizing two analog CCDs and MOS multipliers on a single IC were evaluated. The performance of a CCD implementation of the chirp z transform was described, and the design of a new CCD integrated circuit for performing correlation by multiplication in the frequency domain was presented. This chip provides a discrete Fourier transform (DFT) or inverse DFT, multipliers, and complete support circuitry for the CCD CZT. The two correlation techniques are compared.
Microfluidic point-of-care blood panel based on a novel technique: Reversible electroosmotic flow
Mohammadi, Mahdi; Madadi, Hojjat; Casals-Terré, Jasmina
2015-01-01
A wide range of diseases and conditions are monitored or diagnosed from blood plasma, but the ability to analyze a whole blood sample with the requirements for a point-of-care device, such as robustness, user-friendliness, and simple handling, remains unmet. Microfluidics technology offers the possibility not only to work with fresh thumb-pricked whole blood but also to maximize the amount of plasma obtained from the initial sample, and therefore the possibility to implement multiple tests in a single cartridge. The microfluidic design presented in this paper combines cross-flow filtration with a reversible electroosmotic flow that prevents clogging at the filter entrance and maximizes the amount of separated plasma. The main advantage of this design is its efficiency: from a small amount of sample (a single droplet, ∼10 μl), almost 10% (approximately 1 μl) is extracted and collected with high purity (more than 99%) in a reasonable time (5–8 min). To validate the quality and quantity of the separated plasma and to show its potential as a clinical tool, the microfluidic chip has been combined with lateral flow immunochromatography technology to perform a qualitative detection of thyroid-stimulating hormone and a blood panel measuring cardiac Troponin and Creatine Kinase MB. The results from the microfluidic system are comparable to previous commercial lateral flow assays that required more sample to implement fewer tests. PMID:26396660
Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.
2009-01-01
To estimate astronaut health risk due to space radiation, one must be able to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass-averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled, that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX, and that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
Downregulation of tumor suppressor QKI in gastric cancer and its implication in cancer prognosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bian, Yongqian; Wang, Li; Lu, Huanyu
2012-05-25
Highlights: • QKI expression is decreased in gastric cancer samples. • Promoter hypermethylation contributes to the downregulation of QKI. • QKI inhibits the growth of gastric cancer cells. • Decreased QKI expression predicts poor survival. -- Abstract: Gastric cancer (GC) is the fourth most common cancer and the second leading cause of cancer-related death worldwide. RNA-binding protein Quaking (QKI) is a newly identified tumor suppressor in multiple cancers, but its role in GC is largely unknown. Our study aimed to clarify the relationship between QKI expression and the clinicopathologic characteristics and prognosis of GC. In specimens from 222 GC patients, QKI expression was significantly decreased in most GC tissues, largely because of promoter hypermethylation. QKI overexpression reduced the proliferative ability of a GC cell line in vitro. In addition, reduced QKI expression correlated well with poor differentiation status, depth of invasion, gastric lymph node metastasis, distant metastasis, advanced TNM stage, and poor survival. Multivariate analysis showed that QKI expression was an independent prognostic factor for patient survival.
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
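The donor-search kernel described above reduces to repeated point-containment tests; a minimal 2-D sketch using barycentric coordinates follows (a production overset assembler would use 3-D cells and spatial search trees; names here are illustrative):

```python
# Point containment via barycentric coordinates: a point lies inside a
# triangle iff all three barycentric coordinates are non-negative.
def contains(tri, p, eps=1e-12):
    """True if point p lies inside (or on the boundary of) triangle tri."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    return min(l1, l2, l3) >= -eps

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(contains(tri, (0.25, 0.25)))  # True: inside
print(contains(tri, (0.8, 0.8)))    # False: beyond the hypotenuse
```

In a partitioned unstructured mesh, this test must be wrapped in an efficient candidate-cell search, which is exactly where the irregular partition boundaries make donor search hard.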
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
2017-06-13
Beer, Sebastian; Dobler, Dorota; Gross, Alexander; Ost, Martin; Elseberg, Christiane; Maeder, Ulf; Schmidts, Thomas Michael; Keusgen, Michael; Fiebich, Martin; Runkel, Frank
2013-01-30
Multiple emulsions offer applications in a wide range of fields such as pharmaceutical, cosmetics, and food technology. Two features, encapsulation efficiency and prolonged stability, are known to strongly influence multiple emulsion quality and utility. To achieve prolonged stability, the production of the emulsions has to be observed and controlled, preferably in line. In-line measurements provide the relevant parameters in a short time frame without the sample having to be removed from the process stream, thereby enabling continuous process control. In this study, information about the physical state of multiple emulsions obtained from dielectric spectroscopy (DS) is evaluated for this purpose. Results from dielectric measurements performed in line during the production cycle are compared to theoretically expected results and to well-established off-line measurements. Thus, a first step toward including the production of multiple emulsions in the process analytical technology (PAT) guidelines of the Food and Drug Administration (FDA) is achieved. DS proved beneficial in determining the crucial stopping criterion, which is essential in the production of multiple emulsions: stopping the process at a less-than-ideal point can severely lower the encapsulation efficiency and stability, and thereby the quality, of the emulsion. DS is also expected to provide further information about the multiple emulsion, such as encapsulation efficiency. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Porter, Kristin E.
2018-01-01
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
NASA Astrophysics Data System (ADS)
Link, Paul Karl; Fanning, C. Mark; Beranek, Luke P.
2005-12-01
Detrital-zircon age-spectra effectively define provenance in Holocene and Neogene fluvial sands from the Snake River system of the northern Rockies, U.S.A. SHRIMP U-Pb dates have been measured for forty-six samples (about 2700 zircon grains) of fluvial and aeolian sediment. The detrital-zircon age distributions are repeatable and demonstrate predictable longitudinal variation. By lumping multiple samples to attain populations of several hundred grains, we recognize distinctive, provenance-defining zircon-age distributions or "barcodes," for fluvial sedimentary systems of several scales, within the upper and middle Snake River system. Our detrital-zircon studies effectively define the geochronology of the northern Rocky Mountains. The composite detrital-zircon grain distribution of the middle Snake River consists of major populations of Neogene, Eocene, and Cretaceous magmatic grains plus intermediate and small grain populations of multiply recycled Grenville (˜950 to 1300 Ma) grains and Yavapai-Mazatzal province grains (˜1600 to 1800 Ma) recycled through the upper Belt Supergroup and Cretaceous sandstones. A wide range of older Paleoproterozoic and Archean grains are also present. The best-case scenario for using detrital-zircon populations to isolate provenance is when there is a point-source pluton with known age, that is only found in one location or drainage. We find three such zircon age-populations in fluvial sediments downstream from the point-source plutons: Ordovician in the southern Beaverhead Mountains, Jurassic in northern Nevada, and Oligocene in the Albion Mountains core complex of southern Idaho. Large detrital-zircon age-populations derived from regionally well-defined, magmatic or recycled sedimentary, sources also serve to delimit the provenance of Neogene fluvial systems. 
In the Snake River system, defining populations include those derived from Cretaceous Atlanta lobe of the Idaho batholith (80 to 100 Ma), Eocene Challis Volcanic Group and associated plutons (˜45 to 52 Ma), and Neogene rhyolitic Yellowstone-Snake River Plain volcanics (˜0 to 17 Ma). For first-order drainage basins containing these zircon-rich source terranes, or containing a point-source pluton, a 60-grain random sample is sufficient to define the dominant provenance. The most difficult age-distributions to analyze are those that contain multiple small zircon age-populations and no defining large populations. Examples of these include streams draining the Proterozoic and Paleozoic Cordilleran miogeocline in eastern Idaho and Pleistocene loess on the Snake River Plain. For such systems, large sample bases of hundreds of grains, plus the use of statistical methods, may be necessary to distinguish detrital-zircon age-spectra.
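The claim that a 60-grain random sample suffices for dominant populations follows from a standard sampling argument: the probability of entirely missing an age-population of relative abundance f in n random grains is (1 − f)^n. A sketch:

```python
# Why ~60 grains can suffice: the chance of detecting at least one grain
# from a zircon age-population of relative abundance f in n random grains.
def prob_detect(f, n):
    """P(at least one of n grains comes from a population of abundance f)."""
    return 1.0 - (1.0 - f) ** n

for f in (0.05, 0.10, 0.20):
    print(f"f = {f:.2f}: P(detect in 60 grains) = {prob_detect(f, 60):.3f}")
```

Even a 5% population is detected with roughly 95% probability in 60 grains, whereas reliably resolving several small populations, as in the miogeoclinal and loess examples above, requires samples of hundreds of grains.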
Cleveland, Danielle; Brumbaugh, William G; MacDonald, Donald D
2017-11-01
Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater. Environ Toxicol Chem 2017;36:2906-2915. Published 2017 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
Maile, Michael D; Standiford, Theodore J; Engoren, Milo C; Stringer, Kathleen A; Jewell, Elizabeth S; Rajendiran, Thekkelnaycke M; Soni, Tanu; Burant, Charles F
2018-04-10
It is unknown if the plasma lipidome is a useful tool for improving our understanding of the acute respiratory distress syndrome (ARDS). Therefore, we measured the plasma lipidome of individuals with ARDS at two time points to determine if changes in the plasma lipidome distinguished survivors from non-survivors. We hypothesized that both the absolute concentration and the change in concentration over time of plasma lipids are associated with 28-day mortality in this population. Samples for this longitudinal observational cohort study were collected at multiple tertiary-care academic medical centers as part of a previous multicenter clinical trial. A shotgun mass spectrometry lipidomic assay was used to quantify the lipidome in plasma samples from 30 individuals. Samples from two different days were analyzed for each subject. After removing lipids with a coefficient of variation > 30%, differences between cohorts were identified using repeated measures analysis of variance. The false discovery rate was used to adjust for multiple comparisons. Relationships between significant compounds were explored using hierarchical clustering of the Pearson correlation coefficients, and the magnitude of these relationships was described using receiver operating characteristic curves. The mass spectrometry assay reliably measured 359 lipids. After adjusting for multiple comparisons, 90 compounds differed between survivors and non-survivors. Survivors had higher levels for each of these lipids except for five membrane lipids. Glycerolipids, particularly those containing polyunsaturated fatty acid side-chains, represented many of the lipids with higher concentrations in survivors. The change in lipid concentration over time did not differ between survivors and non-survivors. The concentration of multiple plasma lipids is associated with mortality in this group of critically ill patients with ARDS. 
Absolute lipid levels provided more information than the change in concentration over time. These findings support future research aimed at integrating lipidomics into critical care medicine.
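The false-discovery-rate adjustment mentioned above is commonly implemented with the Benjamini-Hochberg step-up procedure; a minimal sketch follows (the p-values are illustrative, not from the study):

```python
# Benjamini-Hochberg step-up: reject the k smallest p-values, where k is the
# largest rank with p_(k) <= q * k / m.
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1]
```

With 359 lipids tested, such a correction is what keeps the 90 reported differences from being dominated by chance findings.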
Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
Estimating Statistical Power When Making Adjustments for Multiple Tests
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…
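The power cost of adjusting for multiple tests can be illustrated with a Bonferroni split of alpha and a normal approximation; a sketch (the effect size and sample sizes are illustrative, and this is a generic calculation, not the paper's method):

```python
# Approximate power of a two-sided two-sample z-test when alpha is divided
# across M hypothesis tests (Bonferroni).
from math import sqrt
from statistics import NormalDist

def power(effect_size, n_per_arm, alpha):
    """Normal-approximation power for a two-sided two-sample z-test."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = effect_size * sqrt(n_per_arm / 2)   # noncentrality parameter
    return 1 - NormalDist().cdf(z_crit - ncp)

for m in (1, 5, 10):   # number of hypothesis tests
    print(f"M = {m:2d}: power = {power(0.25, 200, 0.05 / m):.3f}")
```

Power falls monotonically as the per-test alpha shrinks, which is why power estimation under multiple testing procedures needs explicit treatment.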
Multi-point laser coherent detection system and its application on vibration measurement
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, C.; Xu, Y. J.; Liu, H.; Yan, K.; Guo, M.
2015-05-01
Laser Doppler vibrometry (LDV) is a well-known interferometric technique to measure the motions, vibrations, and mode shapes of machine components and structures. The drawback of commercial LDV is that it offers only a pointwise measurement. In order to build up a vibrometric image, a scanning device is normally adopted to scan the laser point along two spatial axes. These scanning laser Doppler vibrometers (SLDVs) assume that the measurement conditions remain invariant while multiple identical, sequential measurements are performed. This assumption makes SLDVs impractical for measuring transient events. In this paper, we introduce a new multiple-point laser coherent detection system based on spatial-encoding technology and a fiber configuration. Simultaneous vibration measurement at multiple points is realized using a single photodetector. A prototype 16-point laser coherent detection system is built and applied to measure the vibration of various objects, such as the body of a car or a motorcycle with the engine running and under shock tests. The results show the promise of multi-point laser coherent detection in the areas of nondestructive testing and precise dynamic measurement.
Magnarelli, L A; Dumler, J S; Anderson, J F; Johnson, R C; Fikrig, E
1995-11-01
Serum specimens from persons with or without Lyme borreliosis were analyzed by indirect fluorescent antibody staining methods for total immunoglobulins to Babesia microti, Ehrlichia chaffeensis (Arkansas strain), and Ehrlichia equi (MRK strain). There was serologic evidence of human exposure to multiple tick-borne agents in 15 (6.6%) of 227 serum samples obtained in Connecticut and Minnesota. Of these, 10 serum samples were from Connecticut patients who had erythema migrans and antibodies to Borrelia burgdorferi (range, 1:160 to 1:40,960). A maximal antibody titer of 1:640 was noted for a B. microti infection, whereas titration end points of 1:640 and 1:1,280 were recorded for E. chaffeensis and E. equi seropositives, respectively. In specificity tests, there was no cross-reactivity among the antisera and antigens tested for the four tick-borne pathogens. On the basis of serologic testing, a small group of persons who had Lyme borreliosis had been exposed to one or more other tick-borne agents, but there was no clinical diagnosis of babesiosis or ehrlichiosis. Therefore, if the clinical picture is unclear or multiple tick-associated illnesses are suspected, more extensive laboratory testing is suggested.
NASA Astrophysics Data System (ADS)
Stokes, Robert J.; Smith, W. Ewen; Foulger, Brian; Lewis, Colin
2008-10-01
A low-cost technique is reported for the rapid screening of containers for materials that could potentially be used for terrorist activities. For peroxide-based samples it is demonstrated that full characterisation can be achieved in a continuous curve-fitting monitoring mode acquiring up to 10 spectra per second. This clearly demonstrates the potential for a Raman-based method to be incorporated into a checkpoint whilst retaining fast throughput. A number of precursor compounds to nerve agents and peroxide- and nitrate-based improvised explosive materials have been studied. The potential strengths and weaknesses of using Raman for multiple target identification are discussed with regard to the common vibrations associated with each group of agents. Within this context we also introduce the use of fast Raman line mapping into the trace analysis of multiple-component targets. The method presented is suited to volatile or light-sensitive samples (such as derived peroxides) and can be employed on a variety of surfaces. As speed and throughput are traded against spectral bandwidth, categorising threat compounds into groups based on common functionalities allows the full potential of multiplexed targeting to be realised.
Otsu, Yo; Bormuth, Volker; Wong, Jerome; Mathieu, Benjamin; Dugué, Guillaume P; Feltz, Anne; Dieudonné, Stéphane
2008-08-30
Two-photon microscopy offers the promise of monitoring brain activity at multiple locations within intact tissue. However, serial sampling of voxels has been difficult to reconcile with the millisecond timescales characteristic of neuronal activity, owing to the conflicting constraints of scanning speed and signal amplitude. The recent use of acousto-optic deflector scanning to implement random-access multiphoton microscopy (RAMP) potentially allows long illumination dwell times to be preserved while sampling multiple points of interest at high rates. However, the real-life abilities of RAMP microscopy with respect to the sensitivity and phototoxicity issues that have so far impeded prolonged optical recordings at high frame rates have not been assessed. Here, we describe the design, implementation and characterisation of an optimised RAMP microscope. We demonstrate the application of the microscope by monitoring calcium transients in Purkinje cells and in cortical pyramidal cell dendrites and spines. We quantify the illumination constraints imposed by phototoxicity and show that stable continuous high-rate recordings can be obtained. During these recordings the fluorescence signal is large enough to detect spikes with a temporal resolution limited only by the calcium dye dynamics, improving upon previous techniques by at least an order of magnitude.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Reconstruction of three-dimensional porous media using a single thin section
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2012-06-01
The purpose of any reconstruction method is to generate realizations of two- or multiphase disordered media that honor limited data for them, with the hope that the realizations provide accurate predictions for those properties of the media for which there are no data available, or their measurement is difficult. An important example of such stochastic systems is porous media for which the reconstruction technique must accurately represent their morphology—the connectivity and geometry—as well as their flow and transport properties. Many of the current reconstruction methods are based on low-order statistical descriptors that fail to provide accurate information on the properties of heterogeneous porous media. On the other hand, due to the availability of high resolution two-dimensional (2D) images of thin sections of a porous medium, and at the same time, the high cost, computational difficulties, and even unavailability of complete 3D images, the problem of reconstructing porous media from 2D thin sections remains an outstanding unsolved problem. We present a method based on multiple-point statistics in which a single 2D thin section of a porous medium, represented by a digitized image, is used to reconstruct the 3D porous medium to which the thin section belongs. The method utilizes a 1D raster path for inspecting the digitized image, and combines it with a cross-correlation function, a grid splitting technique for deciding the resolution of the computational grid used in the reconstruction, and the Shannon entropy as a measure of the heterogeneity of the porous sample, in order to reconstruct the 3D medium. It also utilizes an adaptive technique for identifying the locations and optimal number of hard (quantitative) data points that one can use in the reconstruction process. The method is tested on high resolution images for Berea sandstone and a carbonate rock sample, and the results are compared with the data. 
To make the comparison quantitative, two sets of statistical tests consisting of the autocorrelation function, histogram matching of the local coordination numbers, the pore and throat size distributions, multiple-points connectivity, and single- and two-phase flow permeabilities are used. The comparison indicates that the proposed method reproduces the long-range connectivity of the porous media, with the computed properties being in good agreement with the data for both porous samples. The computational efficiency of the method is also demonstrated.
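The Shannon entropy used above as a heterogeneity measure can be sketched for a binary pore/solid image; the patch size and test images below are illustrative assumptions, not the authors' implementation:

```python
# Mean Shannon entropy (bits) of the phase proportions in small tiles of a
# binary image (pore = 1, solid = 0); patchwise computation captures spatial
# structure rather than just the overall porosity.
import numpy as np

def patch_entropy(img, patch=4):
    """Average two-phase Shannon entropy over patch x patch tiles."""
    h, w = img.shape
    entropies = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch].mean()  # pore fraction in tile
            e = 0.0
            for q in (p, 1 - p):
                if q > 0:
                    e -= q * np.log2(q)
            entropies.append(e)
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
random_img = (rng.random((32, 32)) < 0.3).astype(int)       # disordered medium
layered_img = np.zeros((32, 32), int); layered_img[:10] = 1  # ordered bands
print(patch_entropy(random_img) > patch_entropy(layered_img))  # True
```

A more heterogeneous (disordered) sample has higher patchwise entropy, which is the sense in which entropy steers the reconstruction's grid-splitting decisions.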
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability distribution functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. 
The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
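The misfit-to-likelihood step described above can be sketched generically as Gaussian likelihood weights over an ensemble of candidate fields (the sigma and misfit values are illustrative; this is not the authors' full MRE-Bayesian machinery):

```python
# Gaussian likelihood weights for an ensemble of sampled models:
# L_k ∝ exp(-misfit_k^2 / (2 sigma^2)), normalised to posterior weights.
import numpy as np

def posterior_weights(misfits, sigma):
    """Normalised Gaussian likelihood weights from data misfits."""
    misfits = np.asarray(misfits, dtype=float)
    log_l = -0.5 * (misfits / sigma) ** 2
    log_l -= log_l.max()          # guard against numerical underflow
    w = np.exp(log_l)
    return w / w.sum()

w = posterior_weights([0.1, 0.5, 1.0, 2.0], sigma=0.5)
print(w)   # largest weight goes to the smallest misfit
```

In the paper's framework, weights of this kind over QMC-sampled permittivity fields are what the Gaussian-kernel posterior construction and the time-lapse memory function build upon.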
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
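The sampling-and-weighting step described above (QMC sampling of a prior, likelihood weights computed from misfits, and a Gaussian-kernel posterior) can be sketched minimally as follows. This is an illustrative 1-D stand-in rather than the authors' implementation: the Halton sequence, the Box-Muller mapping, and all function names are assumptions, and the SGSim/curved-ray forward model is reduced to a caller-supplied `forward` function.

```python
import numpy as np

def halton(n, base):
    """1-D Halton sequence in (0, 1): a simple quasi-Monte Carlo generator."""
    seq = np.empty(n)
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i - 1] = r
    return seq

def posterior_kde(prior_mean, prior_std, forward, observed, sigma,
                  n=512, bandwidth=0.1):
    """QMC-sample a Gaussian prior, weight the samples by a Gaussian misfit
    likelihood, and return a weighted kernel-density posterior on a grid."""
    u1, u2 = halton(n, 2), halton(n, 3)
    z = np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)  # Box-Muller on QMC pairs
    theta = prior_mean + prior_std * z                      # samples from the prior
    misfit = forward(theta) - observed
    w = np.exp(-0.5 * (misfit / sigma) ** 2)                # likelihood weights
    w /= w.sum()
    grid = np.linspace(theta.min(), theta.max(), 200)
    kern = np.exp(-0.5 * ((grid[None, :] - theta[:, None]) / bandwidth) ** 2)
    pdf = (w[:, None] * kern).sum(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return grid, pdf
```

With an identity forward model, a standard normal prior, and an observation of 1.0 with noise 0.5, the posterior mode lands near the analytic value of 0.8, illustrating how the misfit weights pull the prior toward the data.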
Jalava, Katri; Rintala, Hanna; Ollgren, Jukka; Maunula, Leena; Gomez-Alvarez, Vicente; Revez, Joana; Palander, Marja; Antikainen, Jenni; Kauppinen, Ari; Räsänen, Pia; Siponen, Sallamaari; Nyholm, Outi; Kyyhkynen, Aino; Hakkarainen, Sirpa; Merentie, Juhani; Pärnänen, Martti; Loginov, Raisa; Ryu, Hodon; Kuusi, Markku; Siitonen, Anja; Miettinen, Ilkka; Santo Domingo, Jorge W; Hänninen, Marja-Liisa; Pitkänen, Tarja
2014-01-01
Failures in the drinking water distribution system cause gastrointestinal outbreaks with multiple pathogens. A water distribution pipe breakage caused a community-wide waterborne outbreak in Vuorela, Finland, in July 2012. We investigated this outbreak with advanced epidemiological and microbiological methods. A total of 473/2931 inhabitants (16%) responded to a web-based questionnaire. Water and patient samples were subjected to analysis of multiple microbial targets, molecular typing and microbial community analysis. Spatial analysis of the water distribution network was performed, and a spatial logistic regression model was applied. The course of the illness was mild. Drinking untreated tap water from the defined outbreak area was significantly associated with illness (RR 5.6, 95% CI 1.9-16.4), increasing in a dose-response manner. The closer a person lived to the water distribution breakage point, the higher the risk of becoming ill. Sapovirus, enterovirus, single Campylobacter jejuni and EHEC O157:H7 findings, as well as virulence genes for the EPEC, EAEC and EHEC pathogroups, were detected by molecular or culture methods in the faecal samples of the patients. EPEC, EAEC and EHEC virulence genes and faecal indicator bacteria were also detected in water samples. Microbial community sequencing of contaminated tap water revealed an abundance of Arcobacter species. The polyphasic approach improved the understanding of the source of the infections and helped define the extent and magnitude of this outbreak.
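The reported association (RR 5.6, 95% CI 1.9-16.4) is a risk ratio with a Wald-type confidence interval. A minimal sketch of how such an estimate is computed from a 2x2 exposure table follows; the counts in the usage example are hypothetical, not the study's data.

```python
import math

def risk_ratio(ill_exp, n_exp, ill_unexp, n_unexp):
    """Risk ratio of illness for exposed vs. unexposed groups,
    with a 95% Wald confidence interval computed on the log scale."""
    r1, r0 = ill_exp / n_exp, ill_unexp / n_unexp
    rr = r1 / r0
    # standard error of log(RR) from the usual delta-method formula
    se = math.sqrt(1 / ill_exp - 1 / n_exp + 1 / ill_unexp - 1 / n_unexp)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# hypothetical counts: 50/200 exposed ill, 5/100 unexposed ill
rr, ci = risk_ratio(50, 200, 5, 100)
```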
Higher certainty of the laser-induced damage threshold test with a redistributing data treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Lars; Mrohs, Marius; Gyamfi, Mark
2015-10-15
As a consequence of its statistical nature, the measurement of the laser-induced damage threshold always carries a risk of over- or underestimating the real threshold value. As one of the established measurement procedures, the results of S-on-1 (and 1-on-1) tests outlined in the corresponding ISO 21254 standard depend on the number of data points and their distribution over the fluence scale. With the limited space on a test sample as well as the requirements on test site separation and beam sizes, the amount of data from one test is restricted. This paper reports on a way to treat damage test data in order to reduce the statistical error and therefore the measurement uncertainty. Three simple assumptions allow for the assignment of one data point to multiple data bins and therefore virtually increase the available data base.
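A minimal sketch of such a redistributing treatment, assuming one plausible set of rules (a shot that caused damage at fluence F would also have caused damage at any higher fluence; a shot that left the site intact would also have left it intact at any lower fluence). These are illustrative assumptions, not necessarily the paper's exact three:

```python
import numpy as np

def damage_probability(fluences, damaged, bin_edges):
    """Per-fluence-bin damage probability with redistributed counts:
    each shot contributes to several bins under the monotonicity
    assumptions stated in the lead-in."""
    bin_edges = np.asarray(bin_edges, dtype=float)
    n_bins = len(bin_edges) - 1
    dmg = np.zeros(n_bins)
    tot = np.zeros(n_bins)
    for f, d in zip(fluences, damaged):
        i = int(np.clip(np.searchsorted(bin_edges, f) - 1, 0, n_bins - 1))
        if d:               # damaged here -> counts as damaged in this and all higher bins
            dmg[i:] += 1
            tot[i:] += 1
        else:               # survived here -> counts as survived in this and all lower bins
            tot[:i + 1] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(tot > 0, dmg / tot, np.nan)
```

Because every shot is reused across bins, the per-bin statistics are built from more counts than the raw number of test sites, which is the sense in which the data base is "virtually" increased.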
Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...
2016-04-13
Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and the underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak, which may arise as a failure mode of carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.
Polski, J M; Kimzey, S; Percival, R W; Grosso, L E
1998-01-01
AIM: To provide a more efficient method for isolating DNA from peripheral blood for use in diagnostic DNA mutation analysis. METHODS: The use of blood impregnated filter paper and Chelex-100 in DNA isolation was evaluated and compared with standard DNA isolation techniques. RESULTS: In polymerase chain reaction (PCR) based assays of five point mutations, identical results were obtained with DNA isolated routinely from peripheral blood and isolated using the filter paper and Chelex-100 method. CONCLUSION: In the clinical setting, this method provides a useful alternative to conventional DNA isolation. It is easily implemented and inexpensive, and provides sufficient, stable DNA for multiple assays. The potential for specimen contamination is reduced because most of the steps are performed in a single microcentrifuge tube. In addition, this method provides for easy storage and transport of samples from the point of acquisition. PMID:9893748
3D Printing and Assay Development for Point-of-Care Applications
NASA Astrophysics Data System (ADS)
Jagadeesh, Shreesha
Existing centralized labs do not serve patients adequately in remote areas. To enable universal, timely healthcare, there is a need to develop low-cost, portable systems that can diagnose multiple diseases (point-of-care (POC) devices). Future POC diagnostics can be more multi-functional if medical device vendors develop interoperability standards. This thesis developed the following medical diagnostic modules: plasma from 25 µl of blood was extracted through a filter membrane to demonstrate a 3D printed sample preparation module; the sepsis biomarker C-reactive protein was quantified through adsorption on nylon beads to demonstrate a bead-based assay suitable for a 3D printed disposable cartridge module; and finally, a modular fluorescent detection kit was built using 3D printed parts to detect CD4 cells in a disposable cartridge from ChipCare Corp. Due to the modularity enabled by the 3D printing technique, the developed units can be easily adapted to detect other diseases.
Leveraging the U.S. Criminal Justice System to Access Women for HIV Interventions.
Meyer, Jaimie P; Muthulingam, Dharushana; El-Bassel, Nabila; Altice, Frederick L
2017-12-01
The criminal justice (CJ) system can be leveraged to access women for HIV prevention and treatment programs. Research is lacking on effective implementation strategies tailored to the specific needs of CJ-involved women. We conducted a scoping review of published studies in English from the United States that described HIV interventions, involved women or girls, and used the CJ system as an access point for sampling or intervention delivery. We identified 350 studies and synthesized data from 42 unique interventions, based in closed (n = 26), community (n = 7), or multiple/other CJ settings (n = 9). A minority of reviewed programs incorporated women-specific content or conducted gender-stratified analyses. CJ systems are comprised of diverse access points, each with unique strengths and challenges for implementing HIV treatment and prevention programs for women. Further study is warranted to develop women-specific and trauma-informed content and evaluate program effectiveness.
Turbidity very near the critical point of methanol-cyclohexane mixtures
NASA Technical Reports Server (NTRS)
Kopelman, R. B.; Gammon, R. W.; Moldover, M. R.
1984-01-01
The turbidity of a critical mixture of methanol and cyclohexane has been measured extremely close to the consolute point. The data span the reduced-temperature range between 10^-7 and 10^-3, which is two decades closer to Tc than previous measurements. In this temperature range, the turbidity varies approximately as ln t, as expected from the integrated form for Ornstein-Zernike scattering. A thin cell (200-micron optical path) with a very small volume (0.08 ml) was used to avoid multiple scattering. A carefully controlled temperature history was used to mix the sample and to minimize the effects of critical wetting layers. The data are consistent with a correlation-length amplitude of 3.9 ± 1.0 Å, in agreement with the value 3.5 Å calculated from two-scale-factor universality and heat-capacity data from the literature.
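The quoted logarithmic dependence can be checked with a least-squares fit of tau = A + B ln t over the measured reduced-temperature range. A minimal sketch with synthetic data and hypothetical function names:

```python
import numpy as np

def fit_log_turbidity(t, tau):
    """Least-squares fit of tau = A + B * ln(t), with t the reduced
    temperature: the form expected from integrated Ornstein-Zernike
    scattering near the consolute point."""
    X = np.column_stack([np.ones_like(t), np.log(t)])
    (A, B), *_ = np.linalg.lstsq(X, tau, rcond=None)
    return A, B

# synthetic turbidity data spanning the paper's reduced-temperature range
t = np.logspace(-7, -3, 50)
tau = 2.0 - 0.3 * np.log(t)
A, B = fit_log_turbidity(t, tau)
```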
High-speed spatial scanning pyrometer
NASA Technical Reports Server (NTRS)
Cezairliyan, A.; Chang, R. F.; Foley, G. M.; Miller, A. P.
1993-01-01
A high-speed spatial scanning pyrometer has been designed and developed to measure spectral radiance temperatures at multiple target points along the length of a rapidly heating/cooling specimen in dynamic thermophysical experiments at high temperatures (above about 1800 K). The design, which is based on a self-scanning linear silicon array containing 1024 elements, enables the pyrometer to measure spectral radiance temperatures (nominally at 650 nm) at 1024 equally spaced points along a 25-mm target length. The elements of the array are sampled consecutively every 1 microsec, thereby permitting one cycle of measurements to be completed in approximately 1 msec. Procedures for calibration and temperature measurement as well as the characteristics and performance of the pyrometer are described. The details of sources and estimated magnitudes of possible errors are given. An example of measurements of radiance temperatures along the length of a tungsten rod, during its cooling following rapid resistive pulse heating, is presented.
An interactive modular design for computerized photometry in spectrochemical analysis
NASA Technical Reports Server (NTRS)
Bair, V. L.
1980-01-01
A general functional description of totally automatic photometry of emission spectra is not available for an operating environment in which the sample compositions and analysis procedures are low-volume and non-routine. The advantages of using an interactive approach to computer control in such an operating environment are demonstrated. This approach includes modular subroutines selected at multiple-option, menu-style decision points. This style of programming is used to trace elemental determinations, including the automated reading of spectrographic plates produced by a 3.4 m Ebert mount spectrograph using a dc-arc in an argon atmosphere. The simplified control logic and modular subroutine approach facilitates innovative research and program development, yet is easily adapted to routine tasks. Operator confidence and control are increased by the built-in options including degree of automation, amount of intermediate data printed out, amount of user prompting, and multidirectional decision points.
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-10-01
A simple model is presented of a possible inspection regimen applied to each leg of a cargo container's journey between its point of origin and destination. Several candidate modalities are proposed to be used at multiple remote locations to act as a pre-screen inspection as the target approaches a perimeter, and as the primary inspection modality at the portal. Information from multiple data sets is fused to optimize the costs and performance of a network of such inspection systems. A series of image processing algorithms is presented that automatically process X-ray images of containerized cargo. The goal of this processing is to locate the container in a real-time stream of traffic traversing a portal without impeding the flow of commerce. Such processing may facilitate the inclusion of unmanned/unattended inspection systems in such a network. Several samples of the processing applied to data collected from deployed systems are included. Simulated data from a notional cargo inspection system with multiple sensor modalities and advanced data fusion algorithms are also included to show the potential increased detection and throughput performance of such a configuration.
NASA Astrophysics Data System (ADS)
Li, J.; Dong, J.; Zhu, F.
2017-12-01
Melting plays an unparalleled role in planetary differentiation processes including the formation of metallic cores, basaltic crusts, and atmospheres. Knowledge of the melting behavior of Earth materials provides critical constraints for establishing the Earth's thermal structure, interpreting regional seismic anomalies, and understanding the nature of chemical heterogeneity. Measuring the melting points of compressed materials, however, has remained challenging, mainly because melts are often mobile and reactive, and temperature and pressure gradients across millimeter- or micron-sized samples introduce large uncertainties in melting detection. Here the melting curve of KCl was determined through in situ ionic conductivity measurements, using the multi-anvil apparatus at the University of Michigan. The method improves upon the symmetric configuration that was used recently for studying the melting behaviors of NaCl, Na2CO3, and CaCO3 (Li and Li 2015 American Mineralogist, Li et al. 2017 Earth and Planetary Science Letters). In the new configuration, the thermocouple and electrodes are placed together with the sample at the center of a cylindrical heater where the temperature is the highest along the axis, in order to minimize uncertainties in temperature measurements and increase the stability of the sample and electrodes. With 1% reproducibility in melting point determination at pressures up to 20 GPa, this method allows us to determine the sample pressure to oil load relationship at high temperatures during multiple heating and cooling cycles, on the basis of the well-known melting curves of ionic compounds. This approach enables more reliable pressure measurements than relying on a small number of fixed-point phase transitions. The new data on KCl bridge the gap between the piston-cylinder results up to 4 GPa (Pistorius 1965 J. of Physics and Chemistry of Solids) and several diamond-anvil cell data points above 20 GPa (Boehler et al. 1996 Physical Review).
We will examine the effect of solid-state phase transition on the melting curves of halides and test the validity of various melting theories.
Detection of fungal hyphae using smartphone and pocket magnifier: going cellular.
Agarwal, Tushar; Bandivadekar, Pooja; Satpathy, Gita; Sharma, Namrata; Titiyal, Jeewan S
2015-03-01
The aim of this study was to detect fungal hyphae in a corneal scraping sample using a cost-effective assembly of smartphone and pocket magnifier. In this case report, a tissue sample was obtained by conventional corneal scraping from a clinically suspicious case of mycotic keratitis. The smear was stained with Gram stain, and a 10% potassium hydroxide mount was prepared. It was imaged using a smartphone coupled with a compact pocket magnifier and integrated light-emitting diode assembly at point-of-care. Photographs of multiple sections of slides were viewed using smartphone screen and pinch-to-zoom function. The same slides were subsequently screened under a light microscope by an experienced microbiologist. The scraping from the ulcer was also inoculated on blood agar and Sabouraud dextrose agar. Smartphone-based digital imaging revealed the presence of gram-positive organism with hyphae. Examination under a light microscope also yielded similar findings. Fusarium was cultured from the corneal scraping, confirming the diagnosis of mycotic keratitis. The patient responded to topical 5% natamycin therapy, with resolution of the ulcer after 4 weeks. Smartphones can be successfully used as novel point-of-care, cost-effective, reliable microscopic screening tools.
NASA Astrophysics Data System (ADS)
Hong, Changki; Park, Jinhong; Chung, Yunchul; Choi, Hyungkook; Umansky, Vladimir
2017-11-01
Transmission through a quantum point contact (QPC) in the quantum Hall regime usually exhibits multiple resonances as a function of gate voltage and high nonlinearity in bias. Such behavior is unpredictable and varies from sample to sample. Here, we report the observation of a sharp transition of the transmission through an open QPC at finite bias, which was observed consistently for all the tested QPCs. It is found that the bias dependence of the transition can be fitted to the Fermi-Dirac distribution function through universal scaling. The fitted temperature matches quite well the electron temperature measured via shot-noise thermometry. While the origin of the transition is unclear, we propose a phenomenological model based on our experimental results that may help to understand such a sharp transition. Similar transitions are observed in the fractional quantum Hall regime, and it is found that the temperature of the system can be measured by rescaling the quasiparticle energy with the effective charge (e* = e/3). We believe that the observed phenomena can be exploited as a tool for measuring the electron temperature of the system and for studying the quasiparticle charges of the fractional quantum Hall states.
Microfluidic-integrated biosensors: prospects for point-of-care diagnostics.
Kumar, Suveen; Kumar, Saurabh; Ali, Md Azahar; Anand, Pinki; Agrawal, Ved Varun; John, Renu; Maji, Sagar; Malhotra, Bansi D
2013-11-01
There is a growing demand to integrate biosensors with microfluidics to provide miniaturized platforms with many favorable properties, such as reduced sample volume, decreased processing time, low cost analysis and low reagent consumption. These microfluidics-integrated biosensors would also have numerous advantages such as laminar flow, minimal handling of hazardous materials, multiple sample detection in parallel, portability and versatility in design. Microfluidics involves the science and technology of manipulation of fluids at the micro- to nano-liter level. It is predicted that combining biosensors with microfluidic chips will yield enhanced analytical capability, and widen the possibilities for applications in clinical diagnostics. The recent developments in microfluidics have helped researchers working in industries and educational institutes to adopt some of these platforms for point-of-care (POC) diagnostics. This review focuses on the latest advancements in the fields of microfluidic biosensing technologies, and on the challenges and possible solutions for translation of this technology for POC diagnostic applications. We also discuss the fabrication techniques required for developing microfluidic-integrated biosensors, recently reported biomarkers, and the prospects of POC diagnostics in the medical industry. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Feature point based 3D tracking of multiple fish from multi-view images
Qian, Zhi-Ming; Chen, Yan Qiu
2017-01-01
A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966
Liu, Liang-Ying; Salamova, Amina; Venier, Marta; Hites, Ronald A
2016-01-01
Air (vapor and particle phase) samples were collected every 12 days at five sites near the North American Great Lakes from 1 January 2005 to 31 December 2013 as a part of the Integrated Atmospheric Deposition Network (IADN). The concentrations of 35 polybrominated diphenyl ethers (PBDEs) and eight other halogenated flame retardants were measured in each of the ~1,300 samples. The levels of almost all of these flame retardants, except for pentabromoethylbenzene (PBEB), hexabromobenzene (HBB), and Dechlorane Plus (DP), were significantly higher in Chicago, Cleveland, and Sturgeon Point. The concentrations of PBEB and HBB were relatively high at Eagle Harbor and Sturgeon Point, respectively, and the concentrations of DP were relatively high at Cleveland and Sturgeon Point, the two sites closest to this compound's production site. The data were analyzed using a multiple linear regression model to determine significant temporal trends in these atmospheric concentrations. The concentrations of PBDEs were decreasing at the urban sites, Chicago and Cleveland, but were generally unchanging at the remote sites, Sleeping Bear Dunes and Eagle Harbor. The concentrations of PBEB were decreasing at almost all sites except for Eagle Harbor, where the highest PBEB levels were observed. HBB concentrations were decreasing at all sites except for Sturgeon Point, where HBB levels were the highest. DP concentrations were increasing with doubling times of 3-9 years at all sites except those closest to its source (Cleveland and Sturgeon Point). The levels of 1,2-bis(2,4,6-tribromophenoxy)ethane (TBE) were unchanging at the urban sites, Chicago and Cleveland, but decreasing at the suburban and remote sites, Sturgeon Point and Eagle Harbor. The atmospheric concentrations of 2-ethylhexyl-2,3,4,5-tetrabromobenzoate (EHTBB) and bis(2-ethylhexyl)-tetrabromophthalate (BEHTBP) were increasing at almost every site with doubling times of 3-6 years. Copyright © 2016 Elsevier Ltd. All rights reserved.
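The doubling (or halving) times quoted above follow from the slope of a log-linear trend: if ln C = a + b·t, the concentration doubles every ln(2)/b years. A minimal sketch assuming a plain log-linear fit (the IADN analyses use a fuller multiple regression, e.g. with seasonal terms, so this is an illustration of the arithmetic, not the study's model):

```python
import numpy as np

def temporal_trend(years, conc):
    """Fit ln(concentration) = a + b * year. A positive slope b gives a
    doubling time of ln(2)/b years; a negative slope, a halving time."""
    b, a = np.polyfit(years, np.log(conc), 1)
    time_const = np.log(2) / abs(b)
    label = "doubling" if b > 0 else "halving"
    return b, time_const, label

# synthetic series that doubles every 6 years, within the 3-6 year range reported
years = np.linspace(2005.0, 2013.0, 100)
conc = 10.0 * 2.0 ** ((years - 2005.0) / 6.0)
slope, t2, label = temporal_trend(years, conc)
```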
Where and when should sensors move? Sampling using the expected value of information.
de Bruin, Sytze; Ballari, Daniela; Bregt, Arnold K
2012-11-26
In case of an environmental accident, initially available data are often insufficient for properly managing the situation. In this paper, new sensor observations are iteratively added to an initial sample by maximising the global expected value of information of the points for decision making. This is equivalent to minimizing the aggregated expected misclassification costs over the study area. The method considers measurement error and different costs for class omissions and false class commissions. Constraints imposed by a mobile sensor web are accounted for using cost distances to decide which sensor should move to the next sample location. The method is demonstrated using synthetic examples of static and dynamic phenomena. This allowed computation of the true misclassification costs and comparison with other sampling approaches. The probability of local contamination levels being above a given critical threshold were computed by indicator kriging. In the case of multiple sensors being relocated simultaneously, a genetic algorithm was used to find sets of suitable new measurement locations. Otherwise, all grid nodes were searched exhaustively, which is computationally demanding. In terms of true misclassification costs, the method outperformed random sampling and sampling based on minimisation of the kriging variance.
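A crude stand-in for the selection criterion described above, assuming the simplest possible ingredients: a kriged exceedance probability per grid node, asymmetric omission/commission costs, and a cost-distance penalty for sensor travel. All names are hypothetical, and this omits the genetic-algorithm search over sets of sensors:

```python
import numpy as np

def next_sample_location(p_exceed, cost_omission, cost_commission, travel_cost):
    """Pick the grid node where sampling is most worthwhile.

    At each node the best achievable expected misclassification cost is
    min(p * C_omission, (1 - p) * C_commission); a new observation there can
    remove most of that residual cost, so we send the sensor where the
    residual cost, discounted by the cost distance it must travel, is largest.
    """
    p = np.asarray(p_exceed, dtype=float)
    residual = np.minimum(p * cost_omission, (1 - p) * cost_commission)
    score = residual / (1.0 + np.asarray(travel_cost, dtype=float))
    return int(np.argmax(score))
```

Note the asymmetry: with omissions five times costlier than commissions, a node at p = 0.4 carries more residual cost than a near-certain node at p = 0.9, so uncertainty near the decision boundary, not high probability, attracts the sensor.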
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.; Green, David
2005-03-29
Methods and apparatus for analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically coupled with the vessel body. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Atila-Pektaş, B; Yurdakul, P; Gülmez, D; Görduysus, O
2013-05-01
To compare the antimicrobial activities of Activ Point (Roeko, Langenau, Germany), Calcium Hydroxide Plus Point (Roeko, Langenau, Germany), calcium hydroxide, 1% chlorhexidine gel and bioactive glass (S53P4) against Enterococcus faecalis and Streptococcus mutans. One hundred and twenty extracted single-rooted human teeth were used. After removing the crowns, root canals were prepared by using the Protaper rotary system. Following autoclave sterilization, root canals were incubated at 37 °C with E. faecalis ATCC 29212 and S. mutans RSHM 676 for 1 week. The specimens, which were divided into five treatment groups for each microorganism according to the intracanal medicament used, were tested in 10 experimental runs. In each experimental run, 10 roots were included as treatment, one root as positive control and one root as sterility control. Sterile paper points were utilized to take samples from root canals after the incubation of teeth in thioglycollate medium at 37 °C for 1 week. Samples taken from teeth by sterile paper points were inoculated onto sheep blood agar, and following an overnight incubation, the colonies grown on sheep blood agar were counted and interpreted as colony-forming units. Results were tested statistically by using Kruskal-Wallis and Conover's nonparametric multiple comparison tests. CHX gel (P < 0.001 and P < 0.001), Activ Point (P = 0.003 and P = 0.002) and Ca(OH)₂ (P = 0.010 and P = 0.005) were significantly more effective against E. faecalis than that of Ca(OH)₂ Plus Point and bioactive glass, respectively. On the other hand, compared with Ca(OH)₂ , CHX gel (P < 0.001), and Activ Point (P < 0.001), bioactive glass (P = 0.014) produced significantly lower colony counts of S. mutans. When compared with the positive control, treatment with Ca(OH)₂ Plus Point (P = 0.085 and P = 0.066) did not produce significantly lower colony counts of E. faecalis and S. mutans, respectively. 
Compared with the medicaments having an antimicrobial effect because of their alkaline pH, the medicaments containing chlorhexidine were effective against both E. faecalis and S. mutans. © 2012 International Endodontic Journal.
NULL Convention Floating Point Multiplier
Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part of high dynamic range and computationally intensive digital signal processing applications, which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using the asynchronous NULL convention logic paradigm. Rounding has not been implemented, to suit high precision applications. The novelty of the research is that it is the first NULL convention logic multiplier designed to perform floating point multiplication. The proposed multiplier offers a substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
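The field-by-field decomposition such a multiplier implements in hardware can be sketched in software. The following Python model is illustrative only (it is not the paper's NCL design): it multiplies two normal IEEE 754 single-precision values by XORing the signs, adding the exponents, multiplying the significands, and truncating the result rather than rounding, matching the paper's choice to omit rounding. Subnormals, infinities and NaNs are ignored for brevity.

```python
import struct

def fp32_fields(x):
    """Pack a Python float to IEEE 754 single precision and split it
    into its sign, biased exponent, and mantissa fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def fp32_multiply(a, b):
    """Multiply two normal single-precision values field by field.
    Truncates instead of rounding, as in the paper's multiplier."""
    sa, ea, ma = fp32_fields(a)
    sb, eb, mb = fp32_fields(b)
    sign = sa ^ sb                           # sign bit: XOR of input signs
    sig = (ma | 0x800000) * (mb | 0x800000)  # 24x24-bit significand product
    exp = ea + eb - 127                      # biased exponents add; drop one bias
    if sig & (1 << 47):                      # product in [2, 4): renormalise
        sig >>= 1
        exp += 1
    bits = (sign << 31) | (exp << 23) | ((sig >> 23) & 0x7FFFFF)
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

For exactly representable inputs the truncated product agrees with hardware floating point, e.g. `fp32_multiply(1.5, 2.0)` gives `3.0`.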
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
ERIC Educational Resources Information Center
Chandler, Terrell N.
1996-01-01
The System for Training of Aviation Regulations (STAR) provides comprehensive training in understanding and applying Federal aviation regulations. STAR gives multiple vantage points with multimedia presentations and storytelling within four categories of learning environments: overviews, scenarios, challenges, and resources. Discusses the…
Multiwavelength counterparts of the point sources in the Chandra Source Catalog
NASA Astrophysics Data System (ADS)
Reynolds, Michael; Civano, Francesca Maria; Fabbiano, Giuseppina; D'Abrusco, Raffaele
2018-01-01
The most recent release of the Chandra Source Catalog (CSC), version 2.0, comprises more than ~350,000 point sources, down to fluxes of ~10⁻¹⁶ erg/cm²/s, covering ~500 deg² of the sky, making it one of the best available X-ray catalogs to date. There are many reasons to have multiwavelength counterparts for these sources; in particular, X-ray information alone is not enough to identify the sources and divide them between galactic and extragalactic origin, so multiwavelength data associated with each X-ray source are crucial for classification and scientific analysis of the sample. To perform this multiwavelength association, we employ the recently released versatile tool NWAY (Salvato et al. 2017), based on a Bayesian algorithm for cross-matching multiple catalogs. NWAY allows the combination of multiple catalogs at the same time, provides a probability for the matches, even in case of non-detection due to the different depths of the matching catalogs, and can be used with priors on the nature of the sources (e.g. colors, magnitudes, etc.). In this poster, we present the preliminary analysis using the CSC sources above the galactic plane matched to the WISE All-Sky catalog, SDSS, Pan-STARRS and GALEX.
Whiley, Harriet; Keegan, Alexandra; Fallowfield, Howard; Bentham, Richard
2014-01-01
Inhalation of potable water presents a potential route of exposure to opportunistic pathogens and hence warrants significant public health concern. This study used qPCR to detect opportunistic pathogens Legionella spp., L. pneumophila and MAC at multiple points along two potable water distribution pipelines. One used chlorine disinfection and the other chloramine disinfection. Samples were collected four times over the year to provide seasonal variation and the chlorine or chloramine residual was measured during collection. Legionella spp., L. pneumophila and MAC were detected in both distribution systems throughout the year and were all detected at a maximum concentration of 10³ copies/mL in the chlorine disinfected system and 10⁶, 10³ and 10⁴ copies/mL respectively in the chloramine disinfected system. The concentrations of these opportunistic pathogens were primarily controlled throughout the distribution network through the maintenance of disinfection residuals. At a dead-end, and when the disinfection residual was not maintained, significant (p < 0.05) increases in concentration were observed when compared to the concentration measured closest to the processing plant in the same pipeline and sampling period. Total coliforms were not present in any water sample collected. This study demonstrates the ability of Legionella spp., L. pneumophila and MAC to survive the potable water disinfection process and highlights the need for greater measures to control these organisms along the distribution pipeline and at point of use. PMID:25046636
Effective population sizes of a major vector of human diseases, Aedes aegypti.
Saarman, Norah P; Gloria-Soria, Andrea; Anderson, Eric C; Evans, Benjamin R; Pless, Evlyn; Cosme, Luciano V; Gonzalez-Acosta, Cassandra; Kamgang, Basile; Wesson, Dawn M; Powell, Jeffrey R
2017-12-01
The effective population size (Ne) is a fundamental parameter in population genetics that determines the relative strength of selection and random genetic drift, the effect of migration, levels of inbreeding, and linkage disequilibrium. In many cases where it has been estimated in animals, Ne is on the order of 10%-20% of the census size. In this study, we use 12 microsatellite markers and 14,888 single nucleotide polymorphisms (SNPs) to empirically estimate Ne in Aedes aegypti, the major vector of yellow fever, dengue, chikungunya, and Zika viruses. We used the method of temporal sampling to estimate Ne on a global dataset made up of 46 samples of Ae. aegypti that included multiple time points from 17 widely distributed geographic localities. Our Ne estimates for Ae. aegypti fell within a broad range (~25-3,000) and averaged between 400 and 600 across all localities and time points sampled. Adult census size (Nc) estimates for this species range between one and five thousand, so the Ne/Nc ratio is about the same as for most animals. These Ne values are lower than estimates available for other insects and have important implications for the design of genetic control strategies to reduce the impact of this species of mosquito on human health.
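As a concrete reference for the temporal method, one classical moment estimator is Waples' plan II formula: a standardised variance of allele-frequency change between the two time points is averaged over loci, corrected for sampling noise at each point, and inverted. The sketch below implements that textbook version in Python; it is a generic illustration, not necessarily the exact estimator used in this study.

```python
def temporal_ne(p0, pt, t, s0, st):
    """Waples-style temporal estimator of effective population size Ne.

    p0, pt : per-locus allele frequencies at the first and second sampling
    t      : generations elapsed between the two samples
    s0, st : number of individuals sampled at each time point
    """
    # Standardised variance of allele-frequency change, averaged over loci.
    fc = sum((x - y) ** 2 / ((x + y) / 2 - x * y)
             for x, y in zip(p0, pt)) / len(p0)
    # Subtract the expected sampling contribution at both time points.
    denom = 2 * (fc - 1 / (2 * s0) - 1 / (2 * st))
    # If drift is swamped by sampling noise, Ne is effectively unbounded.
    return t / denom if denom > 0 else float("inf")
```

For a single locus drifting from 0.5 to 0.6 over five generations with 50 individuals sampled each time, this returns a point estimate of 125.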
A multichannel smartphone optical biosensor for high-throughput point-of-care diagnostics.
Wang, Li-Ju; Chang, Yu-Chung; Sun, Rongrong; Li, Lei
2017-01-15
Current reported smartphone spectrometers can only monitor or measure one sample at a time. For the first time, we demonstrate a multichannel smartphone spectrometer (MSS) as an optical biosensor that can optically sense multiple samples simultaneously. In this work, we developed a novel method to achieve multichannel optical spectral sensing with nanometer resolution on a smartphone. A 3D printed cradle held the smartphone integrated with optical components. This optical sensor performed accurate and reliable spectral measurements via optical intensity changes at specific wavelengths or optical spectral shifts. A custom smartphone multi-view App was developed to control the optical sensing parameters and to align each sample to the corresponding channel. The captured images were converted to transmission spectra in the visible wavelength range from 400 nm to 700 nm with a high resolution of 0.2521 nm per pixel. We validated the performance of this MSS by measuring protein concentrations and immunoassaying a human cancer biomarker. Compared to a standard laboratory instrument, the results showed that this MSS can achieve comparable detection limits, accuracy and sensitivity. We envision that this multichannel smartphone optical biosensor will be useful in high-throughput point-of-care diagnostics owing to its small size, light weight, low cost and data transmission function. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of an add-on kit for scanning confocal microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Guo, Kaikai; Zheng, Guoan
2017-03-01
Scanning confocal microscopy is a standard choice for many fluorescence imaging applications in basic biomedical research. It is able to produce optically sectioned images and provide acquisition versatility to address many samples and application demands. However, scanning a focused point across the specimen limits the speed of image acquisition. As a result, scanning confocal microscopes only work well with stationary samples. Researchers have performed parallel confocal scanning using a digital micromirror device (DMD), which was used to project a scanning multi-point pattern across the sample. DMD-based parallel confocal systems increase the imaging speed while maintaining the optical sectioning ability. In this paper, we report the development of an add-on kit for high-speed and low-cost confocal microscopy. By adapting this add-on kit to an existing regular microscope, one can convert it into a confocal microscope without significant hardware modifications. Compared with current DMD-based implementations, the reported approach is able to recover multiple layers along the z axis simultaneously. It may find applications in wafer inspection and 3D metrology of semiconductor circuits. The dissemination of the proposed add-on kit under a $1000 budget could also lead to new types of experimental designs for biological research labs, e.g., cytology analysis in cell culture experiments, genetic studies on multicellular organisms, pharmaceutical drug profiling, RNA interference studies, investigation of microbial communities in environmental systems, etc.
Tube bundle system: for monitoring of coal mine atmosphere.
Zipf, R Karl; Marchewka, W; Mohamed, K; Addis, J; Karnack, F
2013-05-01
A tube bundle system (TBS) is a mechanical system for continuously drawing gas samples through tubes from multiple monitoring points located in an underground coal mine. The gas samples are drawn via vacuum pump to the surface and are typically analyzed for oxygen, methane, carbon dioxide and carbon monoxide. Results of the gas analyses are displayed and recorded for further analysis. Trends in the composition of the mine atmosphere, such as increasing methane or carbon monoxide concentration, can be detected early, permitting rapid intervention that prevents problems, such as a potentially explosive atmosphere behind seals, fire or spontaneous combustion. TBS is a well-developed technology and has been used in coal mines around the world for more than 50 years. Most longwall coal mines in Australia deploy a TBS, usually with 30 to 40 monitoring points as part of their atmospheric monitoring. The primary uses of a TBS are detecting spontaneous combustion and maintaining sealed areas inert. The TBS might also provide mine atmosphere gas composition data after a catastrophe occurs in an underground mine, if the sampling tubes are not damaged. TBSs are not an alternative to statutory gas and ventilation airflow monitoring by electronic sensors or people; rather, they are an option to consider in an overall mine atmosphere monitoring strategy. This paper describes the hardware, software and operation of a TBS and presents one example of typical data from a longwall coal mine.
Grey W. Pendleton
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
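The design-inversion idea can be caricatured with a plain weighted resample: units with large design weights (i.e. low selection probabilities) are replicated more often, so the resulting pseudo-population can be treated as if it were simply randomly sampled. The Python sketch below is only a stand-in for intuition; the paper's actual method is a finite population Bayesian bootstrap drawing from a posterior predictive distribution, not this simple weighted draw.

```python
import random

def synthetic_population(units, weights, n_pop, seed=0):
    """Draw a synthetic population by weighted resampling of survey units.

    units   : observed survey records
    weights : design weights (inverse selection probabilities)
    n_pop   : size of the synthetic population to generate

    Replicating each unit in proportion to its design weight undoes the
    unequal-probability design, so downstream analyses can treat the
    output as a simple random sample from a superpopulation.
    """
    rng = random.Random(seed)
    return rng.choices(units, weights=weights, k=n_pop)
```

With weights 9:1 on two units, the first unit appears roughly nine times as often in the synthetic population.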
A multiple-point spatially weighted k-NN method for object-based classification
NASA Astrophysics Data System (ADS)
Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.
2016-10-01
Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
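The weighting idea, folding externally derived class probabilities into the k-NN vote, can be sketched generically. The function below is an illustrative stand-in: the inverse-distance vote and the names are ours, not the paper's MPk-NN formulation, and the `class_prob` argument merely plays the role of the multiple-point probabilities that the method derives from a training image.

```python
import math
from collections import Counter

def weighted_knn(train, query, k, class_prob):
    """k-NN vote where each neighbour's vote is weighted by inverse
    distance and by an external per-class probability.

    train      : list of (feature_vector, label) pairs
    query      : feature vector to classify
    class_prob : {label: probability} supplied by a spatial prior
    """
    # k nearest neighbours by Euclidean distance.
    nearest = sorted((math.dist(f, query), lab) for f, lab in train)[:k]
    score = Counter()
    for d, lab in nearest:
        # Inverse-distance vote scaled by the external class probability.
        score[lab] += class_prob.get(lab, 0.0) / (d + 1e-9)
    return score.most_common(1)[0][0]
```

A strongly skewed prior can overturn the raw neighbour majority, which is the point of injecting spatial information into the vote.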
A portable foot-parameter-extracting system
NASA Astrophysics Data System (ADS)
Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan
2016-03-01
In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple frequency phase shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model has been put forward to get the initial calibration parameters, and a feature point detection scheme for calibration board images is developed. Then, an almost perfect match of two clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot parameter extracting experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
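The ICP refinement step alternates nearest-neighbour matching with a closed-form least-squares rigid fit. The following is a minimal 2-D toy in pure Python to illustrate that loop; the system itself works on 3-D clouds and uses SAC-IA for initialisation, which is not reproduced here.

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping the
    paired points src -> dst: the inner step of ICP."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= csx; y -= csy; u -= cdx; v -= cdy
        sxx += x * u + y * v   # cosine component of the optimal rotation
        sxy += x * v - y * u   # sine component
    th = math.atan2(sxy, sxx)
    c, s = math.cos(th), math.sin(th)
    # Translation aligns the rotated source centroid with the target centroid.
    return th, cdx - (c * csx - s * csy), cdy - (s * csx + c * csy)

def icp_2d(src, dst, iters=20):
    """Iterative Closest Point: alternate nearest-neighbour matching
    with the closed-form rigid fit above."""
    pts = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda q, p=p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                 for p in pts]
        th, tx, ty = best_rigid_2d(pts, pairs)
        c, s = math.cos(th), math.sin(th)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts
```

For a noiseless rigid displacement small enough that the nearest-neighbour matching is correct, a single fit already recovers the transform exactly; real clouds with noise and partial overlap need the iteration.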
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Systems and Methods for Imaging of Falling Objects
NASA Technical Reports Server (NTRS)
Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)
2014-01-01
Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments in both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Acoustic field in unsteady moving media
NASA Technical Reports Server (NTRS)
Bauer, F.; Maestrello, L.; Ting, L.
1995-01-01
In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution, q(t,x) with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at point P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains Vᵢ of D with the union of the Vᵢ equal to D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains Vᵢ with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
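For the special case of a constant subsonic convection velocity, the retarded-time condition can be solved numerically in a few lines. The sketch below is a plausible toy reading of the setup (a wavefront emitted at the source drifts with the medium while expanding at the sound speed c), not the paper's formulation for unsteady U(t); all names are illustrative.

```python
import math

def retarded_time(xp, xs, U, c, t):
    """Retarded time tau for a 2-D medium convecting at constant speed U
    along the x axis (frame fixed on the airframe). A front emitted at
    source xs at time tau drifts with the medium, so it reaches the
    observer xp when
        |xp - xs - U*(t - tau)*ex| = c*(t - tau).
    For subsonic U (|U| < c) the left side varies with tau more slowly
    than the right, so there is exactly one root (multiplicity 1),
    found here by bisection.
    """
    def f(tau):
        dt = t - tau
        return math.hypot(xp[0] - xs[0] - U * dt, xp[1] - xs[1]) - c * dt
    lo, hi = 0.0, t  # f(0) < 0 for reachable observers, f(t) >= 0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With U = 0 this reduces to the familiar tau = t - r/c; a supersonic U would admit multiple roots, which bisection alone cannot enumerate, consistent with the multiplicity discussion above.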
Omori, Yoshinori; Honmou, Osamu; Harada, Kuniaki; Suzuki, Junpei; Houkin, Kiyohiro; Kocsis, Jeffery D
2008-10-21
The systemic injection of human mesenchymal stem cells (hMSCs) prepared from adult bone marrow has therapeutic benefits after cerebral artery occlusion in rats, and may have multiple therapeutic effects at various sites and times within the lesion as the cells respond to a particular pathological microenvironment. However, the comparative therapeutic benefits of multiple injections of hMSCs at different time points after cerebral artery occlusion in rats remain unclear. In this study, we induced middle cerebral artery occlusion (MCAO) in rats using intra-luminal vascular occlusion, and infused hMSCs intravenously at a single 6 h time point (low and high cell doses) and various multiple time points after MCAO. From MRI analyses lesion volume was reduced in all hMSC cell injection groups as compared to serum alone injections. However, the greatest therapeutic benefit was achieved following a single high cell dose injection at 6 h post-MCAO, rather than multiple lower cell infusions over multiple time points. Three-dimensional analysis of capillary vessels in the lesion indicated that the capillary volume was equally increased in all of the cell-injected groups. Thus, differences in functional outcome in the hMSC transplantation subgroups are not likely the result of differences in angiogenesis, but rather from differences in neuroprotective effects.
Effects of developmental training of basketball cadets realised in the competitive period.
Trninić, S; Marković, G; Heimer, S
2001-12-01
The analysis of effects of a two-month developmental training cycle realised within a basketball season revealed statistically significant positive changes at the multivariate level in components of motor-functional conditioning (fitness) status of the sample of talented basketball cadets (15-16 years). The greatest correlations with discriminant function were found in variables with statistically significant changes at the univariate level, more explicitly in variables of explosive and repetitive power of the upper body and trunk, anaerobic lactic endurance, as well as in jumping type explosive leg power. The presented developmental conditioning training programme, although implemented within the competitive period, induced multiple positive fitness effects between the two control time points in this sample of basketball players. The authors suggest that, to assess power of shoulders and upper back, the test overgrip pull-up should not be applied to basketball players of this age due to its poor sensitivity. Instead, they propose the undergrip pull-up test, which is a facilitated version of the same test. The results presented in this article reinforce experienced opinion of experts that, in the training process with youth teams, the developmental conditioning training programme is effectively applicable throughout the entire competitive season. The proposed training model is a system of various training procedures, operating synergistically, aimed at enhancing integral fitness (preparedness) of basketball players. Further investigations should be focused on assessing effects of both the proposed and other developmental training cycle programmes, by means of assessing and monitoring actual quality (overall performance) of players, on the one hand, and, on the other, by following-up hormonal and biochemical changes over multiple time points.
Ionospheric Scintillation Explorer (ISX)
NASA Astrophysics Data System (ADS)
Iuliano, J.; Bahcivan, H.
2015-12-01
NSF has recently selected Ionospheric Scintillation Explorer (ISX), a 3U Cubesat mission to explore the three-dimensional structure of scintillation-scale ionospheric irregularities associated with Equatorial Spread F (ESF). ISX is a collaborative effort between SRI International and Cal Poly. This project addresses the science question: To what distance along a flux tube does an irregularity of certain transverse-scale extend? It has been difficult to measure the magnetic field-alignment of scintillation-scale turbulent structures because of the difficulty of sampling a flux tube at multiple locations within a short time. This measurement is now possible due to the worldwide transition to DTV, which presents unique signals of opportunity for remote sensing of ionospheric irregularities from numerous vantage points. DTV spectra, in various formats, contain phase-stable, narrowband pilot carrier components that are transmitted simultaneously. A 4-channel radar receiver will simultaneously record up to 4 spatially separated transmissions from the ground. Correlations of amplitude and phase scintillation patterns corresponding to multiple points on the same flux tube will be a measure of the spatial extent of the structures along the magnetic field. A subset of geometries where two or more transmitters are aligned with the orbital path will be used to infer the temporal development of the structures. ISX has the following broad impact. Scintillation of space-based radio signals is a space weather problem that is intensively studied. ISX is a step toward a CubeSat constellation to monitor worldwide TEC variations and radio wave distortions on thousands of ionospheric paths. 
Furthermore, the rapid sampling along spacecraft orbits provides a unique dataset to deterministically reconstruct ionospheric irregularities at scintillation-scale resolution using diffraction radio tomography, a technique that enables prediction of scintillations at other radio frequencies, and potentially, mitigation of phase distortions.
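The correlation analysis described in this record can be illustrated with a minimal sketch: a normalized (Pearson) correlation between two scintillation time series, where values near +1 would indicate that the two propagation paths sampled the same field-aligned structure. The function name and the data are illustrative assumptions, not the mission's actual processing chain.

```python
import math

def normalized_correlation(a, b):
    """Pearson correlation coefficient of two equal-length time series
    (e.g. amplitude scintillation recorded on two receiver channels).
    Values near +1 suggest the two ray paths sampled the same structure."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)
```

In the ISX concept, this statistic would be computed between channels whose pierce points lie on the same flux tube, so its decay with along-field separation measures the spatial extent of the irregularities.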
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Boltzmann sampling from the Ising model using quantum heating of coupled nonlinear oscillators.
Goto, Hayato; Lin, Zhirong; Nakamura, Yasunobu
2018-05-08
A network of Kerr-nonlinear parametric oscillators without dissipation has recently been proposed for solving combinatorial optimization problems via quantum adiabatic evolution through its bifurcation point. Here we investigate the behavior of the quantum bifurcation machine (QbM) in the presence of dissipation. Our numerical study suggests that the output probability distribution of the dissipative QbM is Boltzmann-like, where the energy in the Boltzmann distribution corresponds to the cost function of the optimization problem. We explain the Boltzmann distribution by generalizing the concept of quantum heating in a single nonlinear oscillator to the case of multiple coupled nonlinear oscillators. The present result also suggests that such driven dissipative nonlinear oscillator networks can be applied to Boltzmann sampling, which is used, e.g., for Boltzmann machine learning in the field of artificial intelligence.
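As a classical point of comparison for the Boltzmann-like output distribution reported here, a Metropolis sampler draws Ising spin configurations with probability proportional to exp(-βE), where E is the cost function of the optimization problem. This sketch illustrates Boltzmann sampling of an Ising energy only; it is not a simulation of the dissipative oscillator network.

```python
import random, math

def ising_energy(spins, J):
    # E = -sum_{i<j} J[i][j] * s_i * s_j  (the optimization cost function)
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def metropolis_sample(J, beta=1.0, n_sweeps=2000, seed=0):
    """Classical Metropolis sampler: after burn-in, configurations occur with
    probability proportional to exp(-beta * E), i.e. a Boltzmann distribution."""
    rng = random.Random(seed)
    n = len(J)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(n_sweeps * n):
        i = rng.randrange(n)
        # energy change from flipping spin i
        dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n) if j != i)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins, ising_energy(spins, J)

# ferromagnetic pair: aligned states (E = -1) dominate at moderate beta
spins, E = metropolis_sample([[0.0, 1.0], [1.0, 0.0]], beta=2.0)
```

For the two-spin example the aligned states should appear with relative frequency e^(2β) over the anti-aligned ones, mirroring the Boltzmann-like statistics the paper reports for the dissipative machine.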
Simultaneous determination of three anticonvulsants using hydrophilic interaction LC-MS.
Oertel, Reinhard; Arenz, Norman; Pietsch, Jörg; Kirch, Wilhelm
2009-01-01
A specific and automated method was developed to quantify the anticonvulsants gabapentin, pregabalin and vigabatrin simultaneously in human serum. Samples were prepared by protein precipitation. Hydrophilic interaction chromatography (HILIC) with a mobile-phase gradient was used to separate matrix ions from the analytes and to resolve the analytes from one another. Four different HILIC columns and two column temperatures were tested; the Tosoh Amide column gave the best results, yielding single, narrow peaks. The anticonvulsants were detected in multiple reaction monitoring (MRM) mode with ESI-MS-MS. Using a 100 microL biological sample, the lowest point of the standard curve, i.e. the lower LOQ, was 312 ng/mL. The described HILIC-MS-MS method is suitable for therapeutic drug monitoring and for clinical and pharmacokinetic investigations of these anticonvulsants.
Why Quantify Uncertainty in Ecosystem Studies: Obligation versus Discovery Tool?
NASA Astrophysics Data System (ADS)
Harmon, M. E.
2016-12-01
There are multiple motivations for quantifying uncertainty in ecosystem studies. One is as an obligation; the other is as a tool useful in moving ecosystem science toward discovery. While reporting uncertainty should become a routine expectation, a more convincing motivation involves discovery. By clarifying what is known and to what degree it is known, uncertainty analyses can point the way toward improvements in measurements, sampling designs, and models. While some of these improvements (e.g., better sampling designs) may lead to incremental gains, those involving models (particularly model selection) may require large gains in knowledge. To be fully harnessed as a discovery tool, attitudes toward uncertainty may have to change: rather than viewing uncertainty as a negative assessment of what was done, it should be viewed as positive, helpful assessment of what remains to be done.
Molinaro, Ross J; Ritchie, James C
2010-01-01
The following chapter describes a method to measure iothalamate in plasma and urine samples using high performance liquid chromatography combined with electrospray positive ionization tandem mass spectrometry (HPLC-ESI-MS/MS). Methanol and water are spiked with the internal standard (IS) iohexol. Iothalamate is isolated from plasma after IS spiked methanol extraction and from urine by IS spiked water addition and quick-spin filtration. The plasma extractions are dried under a stream of nitrogen. The residue is reconstituted in ammonium acetate-formic acid-water. The reconstituted plasma and filtered urine are injected into the HPLC-ESI-MS/MS. Iothalamate and iohexol show similar retention times in plasma and urine. Quantification of iothalamate in the samples is made by multiple reaction monitoring using the hydrogen adduct mass transitions, from a five-point calibration curve.
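Quantification against a multi-point calibration curve like the five-point curve described here is usually a least-squares line through response ratio versus concentration, inverted for unknowns. The calibrator concentrations and response ratios below are made-up illustrations, not the assay's actual values.

```python
# Hypothetical five-point calibration: fit a least-squares line to
# analyte/IS peak-area ratio vs. concentration, then invert it to
# quantify an unknown sample.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [5.0, 10.0, 25.0, 50.0, 100.0]    # calibrator concentrations (illustrative)
ratios = [0.11, 0.20, 0.52, 1.01, 2.02]  # analyte/IS response ratios (made up)
slope, intercept = fit_line(conc, ratios)
unknown = (0.75 - intercept) / slope     # concentration for a measured ratio of 0.75
```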
Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P
2015-07-30
A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, utilising more information and reducing the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated from three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
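The three-part score map can be sketched as a weighted sum of rescaled information maps: facies uncertainty (here taken as the binary entropy of a facies probability map), model-response sensitivity, and local data mismatch, with the highest-scoring cells chosen as pilot points. The function names, the entropy choice, and the equal weights are assumptions for illustration; the paper's exact scoring rule is not reproduced.

```python
import numpy as np

def pilot_point_scores(facies_prob, sensitivity, mismatch, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of three rescaled maps: (i) facies uncertainty as binary
    entropy, (ii) model-response sensitivity, (iii) local data mismatch."""
    p = np.clip(facies_prob, 1e-12, 1.0 - 1e-12)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

    def rescale(a):  # map each field onto [0, 1] so the sum is balanced
        span = a.max() - a.min()
        return (a - a.min()) / span if span > 0 else np.zeros_like(a)

    return (weights[0] * rescale(entropy)
            + weights[1] * rescale(sensitivity)
            + weights[2] * rescale(mismatch))

def place_pilot_points(score, n_points):
    """Return (row, col) indices of the n highest-scoring grid cells."""
    flat = np.argsort(score, axis=None)[::-1][:n_points]
    return np.column_stack(np.unravel_index(flat, score.shape))
```

A facies probability near 0.5 maximizes the entropy term, so cells where the current ensemble is most undecided (and where the flow response is sensitive) attract pilot points, matching the placement rationale described above.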
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) reconstructing N x N independent resolution points would ordinarily require a hologram made up of N x N sampling cells. For dependent sampling points of Fourier-transform CGHs, the memory required for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram consisting of K x K subholograms, each with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK x NK sample points is synthesized from K x K subholograms, which are successively calculated from the data of N x N sample points and successively plotted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iqbal, Muhammad Javed, E-mail: mjiqauchem@yahoo.com; Ahmad, Zahoor; Meydan, Turgut
2012-02-15
Graphical abstract: Variation of saturation magnetization (M_S) and magnetocrystalline anisotropy coefficient (K_1) with Ni-Cr content for Mg(1-x)Ni(x)Cr(x)Fe(2-x)O4 (x = 0.0-0.5). Highlights: Mg(1-x)Ni(x)Cr(x)Fe(2-x)O4 nanocrystallites are synthesized by a novel PEG-assisted microemulsion method; the high-field regimes of the M-H loops are modeled using the Law of Approach to saturation; a considerable increase in M_S from 148 kA/m to 206 kA/m is achieved; room-temperature dc resistivity is enhanced to the order of 10^9 Ohm cm in the potential operational range around 300 K. Abstract: The effect of variation of composition on the structural, morphological, magnetic and electric properties of Mg(1-x)Ni(x)Cr(x)Fe(2-x)O4 (x = 0.0-0.5) nanocrystallites is presented. The samples were prepared by a novel polyethylene glycol (PEG) assisted microemulsion method, with average crystallite sizes of 15-47 nm. The microstructure, chemical composition and phases of the samples were studied by scanning electron microscopy (SEM), atomic force microscopy (AFM), energy-dispersive X-ray fluorescence (ED-XRF) and X-ray diffraction (XRD). Compositional variation greatly affected the magnetic and structural properties. The high-field regimes of the magnetic loops are modelled using the Law of Approach (LOA) to saturation in order to extract information about the anisotropy and the saturation magnetization. Thermal demagnetization measurements were carried out using VSM, and a significant enhancement of the Curie temperature, from 681 K to 832 K, was achieved by substitution of different contents of Ni-Cr. The dc electrical resistivity in the potential operational range around 300 K increases from 7.5 x 10^8 to 4.85 x 10^9 Ohm cm with increasing Ni-Cr content. Moreover, the results of the present study provide sufficient evidence that the electric and magnetic properties of Mg-ferrite are improved significantly by substituting low contents of Ni-Cr.
NASA Astrophysics Data System (ADS)
Giannone, Domenico; Kazmierczak, Andrzej; Dortu, Fabian; Vivien, Laurent; Sohlström, Hans
2010-04-01
We present here research work on two optical biosensors which have been developed within two separate European projects (6th and 7th EU Framework Programmes). The biosensors are based on the idea of a disposable biochip, integrating photonics and microfluidics, optically interrogated by a multichannel interrogation platform. The objective is to develop versatile tools suitable for performing screening tests at the point of care or, for example, at schools or in the field. The two projects explore different options in terms of optical design and materials: while SABIO used Si3N4/SiO2 ring resonator structures, P3SENS aims at the use of photonic crystal devices based on polymers, potentially a much more economical option. We discuss both approaches to show how they enable high sensitivity and multiple-channel detection. The medium-term objective is to develop a new detection system that is low cost and portable while offering high sensitivity, selectivity and multiparametric detection from a sample containing various components (e.g. blood, serum, saliva). Most biological sensing devices already on the market suffer from limitations in multichannel operation, whether the detection of multiple analytes indicating a given pathology or the simultaneous detection of multiple pathologies; in other words, the number of different analytes that can be detected on a single chip is very limited. This limitation is a main issue addressed by the two projects. The excessive cost per test of conventional biosensing devices is a second issue that is addressed.
Association of a novel point mutation in MSH2 gene with familial multiple primary cancers.
Hu, Hai; Li, Hong; Jiao, Feng; Han, Ting; Zhuo, Meng; Cui, Jiujie; Li, Yixue; Wang, Liwei
2017-10-03
Multiple primary cancers (MPC) have been identified as two or more cancers without any subordinate relationship that occur either simultaneously or metachronously in the same or different organs of an individual. Lynch syndrome is an autosomal dominant genetic disorder that increases the risk of many types of cancers. Lynch syndrome patients who suffer more than two cancers can also be considered as MPC; patients of this kind provide unique resources to learn how genetic mutation causes MPC in different tissues. We performed a whole genome sequencing on blood cells and two tumor samples of a Lynch syndrome patient who was diagnosed with five primary cancers. The mutational landscape of the tumors, including somatic point mutations and copy number alternations, was characterized. We also compared Lynch syndrome with sporadic cancers and proposed a model to illustrate the mutational process by which Lynch syndrome progresses to MPC. We revealed a novel pathologic mutation on the MSH2 gene (G504 splicing) that associates with Lynch syndrome. Systematical comparison of the mutation landscape revealed that multiple cancers in the proband were evolutionarily independent. Integrative analysis showed that truncating mutations of DNA mismatch repair (MMR) genes were significantly enriched in the patient. A mutation progress model that included germline mutations of MMR genes, double hits of MMR system, mutations in tissue-specific driver genes, and rapid accumulation of additional passenger mutations was proposed to illustrate how MPC occurs in Lynch syndrome patients. Our findings demonstrate that both germline and somatic alterations are driving forces of carcinogenesis, which may resolve the carcinogenic theory of Lynch syndrome.
Subclonal diversification of primary breast cancer revealed by multiregion sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yates, Lucy R.; Gerstung, Moritz; Knappskog, Stian
Sequencing cancer genomes may enable tailoring of therapeutics to the underlying biological abnormalities driving a particular patient's tumor. However, sequencing-based strategies rely heavily on representative sampling of tumors. To understand the subclonal structure of primary breast cancer, we applied whole-genome and targeted sequencing to multiple samples from each of 50 patients' tumors (303 samples in total). The extent of subclonal diversification varied among cases and followed spatial patterns. No strict temporal order was evident, with point mutations and rearrangements affecting the most common breast cancer genes, including PIK3CA, TP53, PTEN, BRCA2 and MYC, occurring early in some tumors and late in others. In 13 out of 50 cancers, potentially targetable mutations were subclonal. Landmarks of disease progression, such as resistance to chemotherapy and the acquisition of invasive or metastatic potential, arose within detectable subclones of antecedent lesions. These findings highlight the importance of including analyses of subclonal structure and tumor evolution in clinical trials of primary breast cancer.
Watershed-based survey designs
Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.
2005-01-01
Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
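The unequal-probability element of such survey designs can be illustrated with a simple weighted draw of sites without replacement, for instance weighting subcatchment polygons by area or stream length. This is only a sketch of the sampling idea; operational watershed surveys typically use spatially balanced designs (e.g. GRTS), which this code does not implement.

```python
import random

def weighted_site_sample(sites, weights, k, seed=1):
    """Draw k sites without replacement, with selection probability at each
    draw proportional to the site's weight (e.g. stream length)."""
    rng = random.Random(seed)
    pool = list(zip(sites, weights))
    chosen = []
    while pool and len(chosen) < k:
        total = sum(w for _, w in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, (site, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(site)
                pool.pop(i)
                break
        else:
            # floating-point edge case: r landed past the last cumulative weight
            chosen.append(pool.pop()[0])
    return chosen
```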
A Tube Seepage Meter for In Situ Measurement of Seepage Rate and Groundwater Sampling.
Solder, John E; Gilmore, Troy E; Genereux, David P; Solomon, D Kip
2016-07-01
We designed and evaluated a "tube seepage meter" for point measurements of vertical seepage rates (q), collecting groundwater samples, and estimating vertical hydraulic conductivity (K) in streambeds. Laboratory testing in artificial streambeds shows that seepage rates from the tube seepage meter agree well with expected values. Field testing of the tube seepage meter in a sandy-bottom stream with a mean seepage rate of about 0.5 m/day agreed well with Darcian estimates (vertical hydraulic conductivity times head gradient) when averaged over multiple measurements. The uncertainties in q and K were evaluated with a Monte Carlo method and are typically 20% and 60%, respectively, for field data, depending on the magnitude of the hydraulic gradient and the uncertainty in head measurements. The primary advantages of the tube seepage meter are its small footprint, concurrent and colocated assessments of q and K, and that it can also be configured as a self-purging groundwater-sampling device. © 2015, National Ground Water Association.
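The Monte Carlo uncertainty evaluation mentioned here can be sketched by drawing the measured quantities from assumed error distributions and recomputing K = qL/Δh for each draw. The Darcian relation is standard, but the independent-Gaussian error model and all numbers are illustrative assumptions, not the authors' actual analysis.

```python
import random, statistics

def monte_carlo_K(q_mean, q_sd, dh_mean, dh_sd, L, n=20000, seed=0):
    """Propagate measurement uncertainty into vertical hydraulic conductivity
    K = q * L / dh (Darcy's law rearranged), returning (mean, std dev)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        q = rng.gauss(q_mean, q_sd)     # seepage rate, m/day
        dh = rng.gauss(dh_mean, dh_sd)  # head difference across tube length L, m
        if abs(dh) > 1e-9:              # guard against division by ~zero
            draws.append(q * L / dh)
    return statistics.fmean(draws), statistics.stdev(draws)

# seepage ~0.5 m/day with 20% error; head gradient dh/L = 0.05 m / 0.3 m
K_mean, K_sd = monte_carlo_K(0.5, 0.1, 0.05, 0.01, 0.3)
```

As the abstract notes, the resulting relative uncertainty in K grows as the head difference shrinks toward its own measurement error, which is why K is less certain than q.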
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-15
... interest. Accordingly, at the $22.00 price point, both the entire amount of B4 and the remaining balance of...-side interest, Exchange systems would cancel the remaining balance of the incoming STPN order that... STPN could execute at multiple price points, the incoming STPN would execute at the multiple prices...
Ilizaliturri, Victor M; Suarez-Ahedo, Carlos; Acuña, Marco
2015-10-01
To report the frequency of presentation of bifid or multiple iliopsoas tendons in patients who underwent endoscopic release for internal snapping hip syndrome (ISHS) and to compare both groups. A consecutive series of patients with ISHS were treated with endoscopic transcapsular release of the iliopsoas tendon at the central compartment and prospectively followed up. The inclusion criteria were patients with a diagnosis of ISHS with failure of conservative treatment. During the procedure, the presence of a bifid tendon was intentionally looked for. Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores were evaluated preoperatively and at last follow-up. Four patients presented with a bifid tendon and one patient had 3 tendons. At a minimum of 12 months' follow-up, the presence of snapping recurrence was evaluated and the WOMAC scores were compared between both groups. Among 279 hip arthroscopies, 28 patients underwent central transcapsular iliopsoas tendon release. The mean age was 29.25 years (range, 16 to 65 years; 6 left and 22 right hips). Group 1 included 5 patients with multiple tendons; the remaining patients formed group 2 (n = 23). None of the patients presented with ISHS recurrence. The mean WOMAC score in group 1 was 39 points (95% confidence interval [CI], 26.2 to 55.4 points) preoperatively and 73.6 points (95% CI, 68.4 to 79.6 points) at last follow-up. In group 2 the mean WOMAC score was 47.21 points (95% CI, 44.4 to 58.2 points) preoperatively and 77.91 points (95% CI, 67.8 to 83.4 points) at last follow-up. We identified a bifid tendon retrospectively on magnetic resonance arthrograms in 3 of the 5 cases that were found to have multiple tendons during surgery. None of these were recognized before the procedures. In this series the surgeon intentionally looked for multiple tendons, which were found in 17.85% of the cases. Clinical results in patients with single- and multiple-tendon snapping seem to be similarly adequate. 
However, the possibility of a type II error should be considered given the small number of patients. Level IV. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Soil specific re-calibration of water content sensors for a field-scale sensor network
NASA Astrophysics Data System (ADS)
Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.
2015-04-01
Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. 
In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting point) were represented in the sensor readings. We anticipate that obtaining water retention curves for field soils will improve the re-calibration accuracy by providing more precise estimates of saturation, field capacity, and wilting point. This approach may serve as an alternative method for sensor calibration in lieu of or to complement pre-installation calibration.
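The re-scaling step described above can be illustrated as a piecewise-linear map anchored at the three reference states (wilting point, field capacity, saturation): the sensor's own readings at those states are mapped onto the values expected from soil physical properties. All reference values below are hypothetical, and the paper's exact re-scaling procedure is not reproduced.

```python
def rescale_reading(theta_sensor, sensor_refs, soil_refs):
    """Piecewise-linear re-calibration of a volumetric water content reading.
    sensor_refs: the sensor's readings at (wilting point, field capacity,
    saturation); soil_refs: the corresponding values expected from soil
    physical properties / pedotransfer functions."""
    (x0, y0), (x1, y1), (x2, y2) = sorted(zip(sensor_refs, soil_refs))
    x = min(max(theta_sensor, x0), x2)  # clamp to the physically expected range
    if x <= x1:
        return y0 + (x - x0) * (y1 - y0) / (x1 - x0)
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# sensor reads 0.05 / 0.20 / 0.45 at wilting point / field capacity / saturation,
# while pedotransfer functions predict 0.10 / 0.30 / 0.50 for this soil
calibrated = rescale_reading(0.30, (0.05, 0.20, 0.45), (0.10, 0.30, 0.50))
```

Using all three anchors constrains both ends of the sensor's response curve, consistent with the finding that re-calibration was most accurate when all three reference points appeared in the sensor record.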
Bernard, Elyse D; Nguyen, Kathy C; DeRosa, Maria C; Tayabali, Azam F; Aranda-Rodriguez, Rocio
2017-01-01
Aptamers are short oligonucleotide sequences used in detection systems because of their high-affinity binding to a variety of macromolecules. With the introduction of aptamers over 25 years ago came the exploration of their use in many different applications as a substitute for antibodies. Aptamers have several advantages: they are easy to synthesize, can bind to analytes for which it is difficult to obtain antibodies, and in some cases bind better than antibodies. As such, aptamer applications have significantly expanded as an adjunct to a variety of different immunoassay designs. The Multiple-Analyte Profiling (xMAP) technology developed by Luminex Corporation commonly uses antibodies for the detection of analytes in small sample volumes through the use of fluorescently coded microbeads. This technology permits the simultaneous detection of multiple analytes in each sample tested and hence could be applied in many research fields. Although little work has been performed adapting this technology for use with aptamers, optimizing aptamer-based xMAP assays would dramatically increase the versatility of analyte detection. We report herein on the development of an xMAP bead-based aptamer/antibody sandwich assay for a biomarker of inflammation (C-reactive protein, or CRP). Protocols for the coupling of aptamers to xMAP beads, validation of coupling, and an aptamer/antibody sandwich-type assay for CRP are detailed. The optimized conditions, protocols and findings described in this research could serve as a starting point for the development of new aptamer-based xMAP assays.
Somatic Coliphage Profiles of Produce and Environmental Samples from Farms in Northern México.
Bartz, Faith E; Hodge, Domonique Watson; Heredia, Norma; de Aceituno, Anna Fabiszewski; Solís, Luisa; Jaykus, Lee-Ann; Garcia, Santos; Leon, Juan S
2016-09-01
Somatic coliphages were quantified in 459 produce and environmental samples from 11 farms in Northern Mexico to compare amounts of somatic coliphages among different types of fresh produce and environmental samples across the production steps on farms. Rinsates from cantaloupe melons, jalapeño peppers, tomatoes, and the hands of workers, soil, and water were collected during 2011-2012 at four successive steps on each farm, from the field before harvest through the packing facility, and assayed by FastPhage MPN Quanti-tray method. Cantaloupe farm samples contained more coliphages than jalapeño or tomato (p range <0.01-0.03). Across production steps, jalapeños had higher coliphage percentages before harvest than during packing (p = 0.03), while tomatoes had higher coliphage concentrations at packing than all preceding production steps (p range <0.01-0.02). These findings support the use of targeted produce-specific interventions at multiple points in the process of growing and packing produce to reduce the risk of enteric virus contamination and improve food safety during fruit and vegetable production.
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Simmons, Sandra F; Bell, Susan; Saraf, Avantika A; Coelho, Chris S; Long, Emily A; Jacobsen, J M L; Schnelle, John F; Vasilevskis, Eduard E
2016-10-01
To assess multiple geriatric syndromes in a sample of older hospitalized adults discharged to skilled nursing facilities (SNFs) and subsequently to home to determine the prevalence and stability of each geriatric syndrome at the point of these care transitions. Descriptive, prospective study. One large university-affiliated hospital and four area SNFs. Fifty-eight hospitalized Medicare beneficiaries discharged to SNFs (N = 58). Research personnel conducted standardized assessments of the following geriatric syndromes at hospital discharge and 2 weeks after SNF discharge to home: cognitive impairment, depression, incontinence, unintentional weight loss, loss of appetite, pain, pressure ulcers, history of falls, mobility impairment, and polypharmacy. The average number of geriatric syndromes per participant was 4.4 ± 1.2 at hospital discharge and 3.8 ± 1.5 after SNF discharge. There was low to moderate stability for most syndromes. On average, participants had 2.9 syndromes that persisted across both care settings, 1.4 syndromes that resolved, and 0.7 new syndromes that developed between hospital and SNF discharge. Geriatric syndromes were prevalent at the point of each care transition but also reflected significant within-individual variability. These findings suggest that multiple geriatric syndromes present during a hospital stay are not transient and that most syndromes are not resolved before SNF discharge. These results underscore the importance of conducting standardized screening assessments at the point of each care transition and effectively communicating this information to the next provider to support the management of geriatric conditions. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.
Cone penetrometer testing (CPT), combined with discrete-depth ground water sampling methods, can significantly reduce the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can then be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs). To expedite characterization, a five-week field screening program was implemented that consisted of a shallow ground water survey, CPT soundings and pore-pressure measurements, and discrete-depth ground water sampling. Based on continuous lithologic information provided by the CPT soundings, four predominantly coarse-grained, water-yielding stratigraphic packages were identified. Seventy-nine discrete-depth ground water samples were collected using either shallow ground water survey techniques, the BAT Enviroprobe, or the QED HydroPunch I, depending on subsurface conditions. Using results from these efforts, a 20-well monitoring network was designed and installed to monitor critical points within each stratigraphic package. Good correlation was found for hydraulic head and chemical results between discrete-depth screening data and monitoring well data. Understanding the vertical VOC distribution and concentrations produced substantial time and cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings that had to be installed. Additionally, significant long-term cost savings will result from reduced sampling costs, because fewer wells comprise the monitoring network. The authors estimate these savings to be 50% for site characterization costs, 65% for site characterization time, and 60% for long-term monitoring costs.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, the latter first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least amount of distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
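The core of the adaptive strategy, recursive subdivision of triangle patches wherever the surface departs from flat, can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the toy surface `f`, the centroid-deviation error measure, and all tolerances are assumptions for demonstration.

```python
# Adaptive sampling of a surface z = f(x, y) by recursive triangle
# subdivision: a triangle is split at its edge midpoints whenever the
# surface at its centroid deviates from the flat (linear) interpolation.
import math

def f(x, y):
    # Toy "part" surface with a sharp ridge at x = 0.5 (illustration only).
    return math.exp(-200.0 * (x - 0.5) ** 2)

def flat_error(tri):
    (x1, y1), (x2, y2), (x3, y3) = tri
    cx, cy = (x1 + x2 + x3) / 3.0, (y1 + y2 + y3) / 3.0
    lin = (f(x1, y1) + f(x2, y2) + f(x3, y3)) / 3.0
    return abs(f(cx, cy) - lin)

def subdivide(tri, tol, depth, out):
    if depth == 0 or flat_error(tri) < tol:
        out.extend(tri)            # keep this patch's vertices as samples
        return
    a, b, c = tri
    mab = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    mbc = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)
    mca = ((c[0] + a[0]) / 2, (c[1] + a[1]) / 2)
    for t in [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]:
        subdivide(t, tol, depth - 1, out)

samples = []
for tri in [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]:
    subdivide(tri, 1e-3, 6, samples)
samples = sorted(set(samples))
near_ridge = sum(1 for (x, y) in samples if abs(x - 0.5) < 0.15)
print(len(samples), near_ridge)
```

As in the paper's preliminary results, the sample points end up concentrated around the high-curvature feature while flat regions terminate early.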
Strain Level Streptococcus Colonization Patterns during the First Year of Life
Wright, Meredith S.; McCorrison, Jamison; Gomez, Andres M.; Beck, Erin; Harkins, Derek; Shankar, Jyoti; Mounaud, Stephanie; Segubre-Mercado, Edelwisa; Mojica, Aileen May R.; Bacay, Brian; Nzenze, Susan A.; Kimaro, Sheila Z. M.; Adrian, Peter; Klugman, Keith P.; Lucero, Marilla G.; Nelson, Karen E.; Madhi, Shabir; Sutton, Granger G.; Nierman, William C.; Losada, Liliana
2017-01-01
Pneumococcal pneumonia has decreased significantly since the implementation of the pneumococcal conjugate vaccine (PCV); nevertheless, in many developing countries pneumonia mortality in infants remains high. We have undertaken a study of the nasopharyngeal (NP) microbiome during the first year of life in infants from The Philippines and South Africa. The study entailed the determination of Streptococcus sp. carriage using a lytA qPCR assay, whole metagenomic sequencing, and in silico serotyping of Streptococcus pneumoniae, as well as 16S rRNA amplicon based community profiling. The lytA carriage in both populations increased with infant age, and lytA+ samples ranged from 24 to 85% of the samples at each sampling time point. We next developed informatic tools for determining Streptococcus community composition and pneumococcal serotype from metagenomic sequences derived from a subset of longitudinal lytA-positive Streptococcus enrichment cultures from The Philippines (n = 26 infants, 50% vaccinated) and South Africa (n = 7 infants, 100% vaccinated). NP samples from infants were passaged in enrichment media, and metagenomic DNA was purified and sequenced. In silico capsular serotyping of these 51 metagenomic assemblies assigned known serotypes in 28 samples, and the co-occurrence of serotypes in 5 samples. Eighteen samples were not typeable using known serotypes but did encode capsule biosynthetic cluster genes similar to non-encapsulated reference sequences. In addition, we performed metagenomic assembly and 16S rRNA amplicon profiling to understand co-colonization dynamics of Streptococcus sp. and other NP genera, revealing the presence of multiple Streptococcus species as well as potential respiratory pathogens in healthy infants. A range of virulence and drug-resistance elements were identified as circulating in the NP microbiomes of these infants. This study revealed the frequent co-occurrence of multiple S. 
pneumoniae strains along with Streptococcus sp. and other potential pathogens such as S. aureus in the NP microbiome of these infants. In addition, the in silico serotype analysis proved powerful in determining the serotypes in S. pneumoniae carriage, and may lead to developing better targeted vaccines to prevent invasive pneumococcal disease (IPD) in these countries. These findings suggest that NP colonization by S. pneumoniae during the first years of life is a dynamic process involving multiple serotypes and species. PMID:28932211
Serum levels of perfluoroalkyl compounds in human maternal and umbilical cord blood samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monroy, Rocio; Morrison, Katherine; Teo, Koon
2008-09-15
Perfluoroalkyl compounds (PFCs) are end-stage metabolic products from industrial fluorochemicals used in the manufacture of plastics, textiles, and electronics that are widely distributed in the environment. The objective of the present study was to quantify exposure to perfluorooctane sulfonate (PFOS), perfluorooctanoate (PFOA), perfluorodecanoic acid (PFDeA), perfluorohexane sulfonate (PFHxS), perfluoroheptanoic acid (PFHpA), and perfluorononanoic acid (PFNA) in serum samples collected from pregnant women and the umbilical cord at delivery. Pregnant women (n=101) presenting for second trimester ultrasound were recruited and PFC residue levels were quantified in maternal serum at 24-28 weeks of pregnancy, at delivery, and in umbilical cord blood (UCB; n=105) by liquid chromatography-mass spectrometry. Paired t-test and multiple regression analysis were performed to determine the relationship between the concentrations of each analyte at different sample collection time points. PFOA and PFOS were detectable in all serum samples analyzed, including the UCB. PFOS serum levels (mean ± S.D.) were significantly higher (p<0.001) in second trimester maternal serum (18.1 ± 10.9 ng/mL) than maternal serum levels at delivery (16.2 ± 10.4 ng/mL), which were higher than the levels found in UCB (7.3 ± 5.8 ng/mL; p<0.001). PFHxS was quantifiable in 46/101 (45.5%) maternal and 21/105 (20%) UCB samples, with mean concentrations of 4.05 ± 12.3 and 5.05 ± 12.9 ng/mL, respectively. There was no association between serum PFCs at any time point studied and birth weight. Taken together, our data demonstrate that although there is widespread exposure to PFCs during development, these exposures do not affect birth weight.
Erdman, William L.; Lettenmaier, Terry M.
2006-07-04
An approach to wind farm design using variable speed wind turbines with low pulse number electrical output. The outputs of multiple wind turbines are aggregated to create a high pulse number electrical output at a point of common coupling with a utility grid network. Power quality at each individual wind turbine falls short of utility standards, but the aggregated output at the point of common coupling is within acceptable tolerances for utility power quality. The approach for aggregating low pulse number electrical output from multiple wind turbines relies upon a pad-mounted transformer at each wind turbine that performs phase multiplication on the output of each wind turbine. Phase multiplication converts a modified square wave from the wind turbine into a 6-pulse output. Phase shifting of the 6-pulse output from each wind turbine allows the aggregated output of multiple wind turbines to be a 24-pulse approximation of a sine wave. Additional filtering and VAR control is embedded within the wind farm to take advantage of the wind farm's electrical impedance characteristics to further enhance power quality at the point of common coupling.
Purdy, P H; Tharp, N; Stewart, T; Spiller, S F; Blackburn, H D
2010-10-15
Boar semen is typically collected, diluted and cooled for AI use over numerous days, or frozen immediately after shipping to capable laboratories. The storage temperature and pH of the diluted, cooled boar semen could influence the fertility of boar sperm. Therefore, the purpose of this study was to determine the effects of pH and storage temperature on fresh and frozen-thawed boar sperm motility end points. Semen samples (n = 199) were collected at four boar stud facilities, then diluted, cooled and shipped overnight to the National Animal Germplasm Program laboratory for freezing and analysis. The temperature, pH and motility characteristics, determined using computer-automated semen analysis, were measured on arrival. Samples were then cryopreserved and post-thaw motility determined. The commercial stud was a significant source of variation for mean semen temperature and pH, as well as total and progressive motility, and numerous other sperm motility characteristics. Based on multiple regression analysis, pH was not a significant source of variation for fresh or frozen-thawed boar sperm motility end points. However, significant models were derived which demonstrated that storage temperature, boar, and the commercial stud influenced sperm motility end points and the potential success for surviving cryopreservation. We inferred that maintaining cooled boar semen at approximately 16 °C during storage will result in higher fresh and frozen-thawed boar sperm quality, which should result in greater fertility. Copyright © 2010 Elsevier Inc. All rights reserved.
Hornig, Katlin J; Byers, Stacey R; Callan, Robert J; Holt, Timothy; Field, Megan; Han, Hyungchul
2013-08-01
To compare β-hydroxybutyrate (BHB) and glucose concentrations measured with a dual-purpose point-of-care (POC) meter designed for use in humans and a laboratory biochemical analyzer (LBA) to determine whether the POC meter would be reliable for on-farm measurement of blood glucose and BHB concentrations in sheep in various environmental conditions and nutritional states. 36 pregnant mixed-breed ewes involved in a maternal feed restriction study. Blood samples were collected from each sheep at multiple points throughout gestation and lactation to allow for tracking of gradually increasing metabolic hardship. Whole blood glucose and BHB concentrations were measured with the POC meter and compared with serum results obtained with an LBA. 464 samples were collected. Whole blood BHB concentrations measured with the POC meter compared well with LBA results, and error grid analysis showed the POC values were acceptable. Whole blood glucose concentrations measured with the POC meter had more variation, compared with LBA values, over the glucose ranges evaluated. Results of error grid analysis of POC-measured glucose concentrations were not acceptable, indicating errors likely to result in needless treatment with glucose or other supplemental energy sources in normoglycemic sheep. The POC meter was user-friendly and performed well across a wide range of conditions. The meter was adequate for detection of pregnancy toxemia in sheep via whole blood BHB concentration. Results should be interpreted with caution when the POC meter is used to measure blood glucose concentrations.
THE CHANDRA COSMOS SURVEY. III. OPTICAL AND INFRARED IDENTIFICATION OF X-RAY POINT SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Civano, F.; Elvis, M.; Aldcroft, T.
2012-08-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that has imaged the central 0.9 deg^2 of the COSMOS field down to limiting depths of 1.9 × 10^-16 erg cm^-2 s^-1 in the soft (0.5-2 keV) band, 7.3 × 10^-16 erg cm^-2 s^-1 in the hard (2-10 keV) band, and 5.7 × 10^-16 erg cm^-2 s^-1 in the full (0.5-10 keV) band. In this paper we report the i, K, and 3.6 μm identifications of the 1761 X-ray point sources. We use the likelihood ratio technique to derive the association of optical/infrared counterparts for 97% of the X-ray sources. For most of the remaining 3%, the presence of multiple counterparts or the faintness of the possible counterpart prevented a unique association. For only 10 X-ray sources we were not able to associate a counterpart, mostly due to the presence of a very bright field source close by. Only two sources are truly empty fields. The full catalog, including spectroscopic and photometric redshifts and classification described here in detail, is available online. Making use of the large number of X-ray sources, we update the 'classic locus' of active galactic nuclei (AGNs) defined 20 years ago in soft X-ray surveys and define a new locus containing 90% of the AGNs in the survey with full-band luminosity >10^42 erg s^-1. We present the linear fit between the total i-band magnitude and the X-ray flux in the soft and hard bands, drawn over two orders of magnitude in X-ray flux, obtained using the combined C-COSMOS and XMM-COSMOS samples. We focus on the X-ray to optical flux ratio (X/O) and we test its known correlation with redshift and luminosity, and a recently introduced anti-correlation with the concentration index (C). We find a strong anti-correlation (though the dispersion is of the order of 0.5 dex) between X/O computed in the hard band and C, and that 90% of the obscured AGNs in the sample with morphological information live in galaxies with regular morphology (bulgy and disky/spiral), suggesting that secular processes govern a significant fraction of the black hole growth at X-ray luminosities of 10^43-10^44.5 erg s^-1. We also investigate the degree of obscuration of the sample using the hardness ratio, and we compare the X-ray color with the near-infrared to optical color.
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
A review of radiative detachment studies in tokamak advanced magnetic divertor configurations
Soukhanovskii, V. A.
2017-04-28
The present vision for a plasma–material interface in the tokamak is an axisymmetric poloidal magnetic X-point divertor. Four tasks are accomplished by the standard poloidal X-point divertor: plasma power exhaust; particle control (D/T and He pumping); reduction of impurity production (source); and impurity screening by the divertor scrape-off layer. A low-temperature, low heat flux divertor operating regime called radiative detachment is viewed as the main option that addresses these tasks for present and future tokamaks. Advanced magnetic divertor configurations have the capability to modify divertor parallel and cross-field transport, radiative and dissipative losses, and detachment front stability. Advanced magnetic divertor configurations are divided into four categories based on their salient qualitative features: (1) multiple standard X-point divertors; (2) divertors with higher order nulls; (3) divertors with multiple X-points; and (4) long poloidal leg divertors (and also with multiple X-points). This paper reviews experiments and modeling in the area of radiative detachment in the advanced magnetic divertor configurations.
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
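The sample-network thinning that the abstract describes, retaining fewer soil sample points while preserving coverage, can be sketched with a plain simulated annealing loop over subsets. Everything below is an illustrative stand-in: the space-filling objective (mean distance to the nearest retained point), the toy coordinates, and the cooling schedule are assumptions, not the paper's improved SA algorithm or its variogram-based accuracy criterion.

```python
import math
import random

random.seed(1)

# Toy candidate soil sample points in a unit square; the goal is a
# smaller subset that still covers the area well.
pts = [(random.random(), random.random()) for _ in range(60)]
K = 12   # retained sample size

def cost(subset):
    # Space-filling criterion: mean distance from every candidate
    # point to its nearest retained point (lower is better coverage).
    kept = [pts[i] for i in subset]
    return sum(min(math.dist(p, q) for q in kept) for p in pts) / len(pts)

# Simulated annealing over subsets: swap one kept point for one dropped
# point; accept worse moves with probability exp(-delta / T).
cur = random.sample(range(len(pts)), K)
cur_c = cost(cur)
T = 0.1
for step in range(3000):
    cand = cur[:]
    i = random.randrange(K)
    cand[i] = random.choice([j for j in range(len(pts)) if j not in cur])
    cand_c = cost(cand)
    if cand_c < cur_c or random.random() < math.exp(-(cand_c - cur_c) / T):
        cur, cur_c = cand, cand_c
    T *= 0.999   # geometric cooling

print(f"mean nearest-kept distance: {cur_c:.3f}")
```

The swap-one-point neighborhood keeps the subset size fixed, mirroring the paper's goal of a markedly reduced but still representative sample layout.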
Imaging graphite in air by scanning tunneling microscopy - Role of the tip
NASA Technical Reports Server (NTRS)
Colton, R. J.; Baker, S. M.; Driscoll, R. J.; Youngquist, M. G.; Baldeschwieler, J. D.; Kaiser, W. J.
1988-01-01
Atomically resolved images of highly oriented pyrolytic graphite (HOPG) in air at point contact have been obtained. Direct contact between tip and sample or contact through a contamination layer provides a conduction mechanism in addition to the exponential tunneling mechanism responsible for scanning tunneling microscopy (STM) imaging. Current-voltage (I-V) spectra were obtained while scanning in the current imaging mode with the feedback circuit interrupted in order to study the graphite imaging mechanism. Multiple tunneling tips are probably responsible for images without the expected hexagonal or trigonal symmetry. The observations indicate that the use of HOPG for testing and calibration of STM instrumentation may be misleading.
Array Biosensor for Toxin Detection: Continued Advances
Taitt, Chris Rowe; Shriver-Lake, Lisa C.; Ngundi, Miriam M.; Ligler, Frances S.
2008-01-01
The following review focuses on progress made in the last five years with the NRL Array Biosensor, a portable instrument for rapid and simultaneous detection of multiple targets. Since 2003, the Array Biosensor has been automated and miniaturized for operation at the point-of-use. The Array Biosensor has also been used to demonstrate (1) quantitative immunoassays against an expanded number of toxins and toxin indicators in food and clinical fluids, and (2) the efficacy of semi-selective molecules as alternative recognition moieties. Blind trials, with unknown samples in a variety of matrices, have demonstrated the versatility, sensitivity, and reliability of the automated system. PMID:27873991
Biostatistics Series Module 3: Comparing Groups: Numerical Variables.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is the Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. 
These include Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
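The test pairings summarized in this module map directly onto `scipy.stats`. The sketch below runs each parametric test alongside its nonparametric counterpart on made-up data (the group values are invented for illustration; the function names are standard SciPy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10.0, 2.0, 30)   # group A
b = rng.normal(11.0, 2.0, 30)   # group B
c = rng.normal(12.0, 2.0, 30)   # group C

# Two independent groups: unpaired t-test (parametric) and
# Mann-Whitney U test (its nonparametric counterpart).
t_stat, t_p = stats.ttest_ind(a, b)
u_stat, u_p = stats.mannwhitneyu(a, b)

# Three or more independent groups: one-way ANOVA (parametric) and
# Kruskal-Wallis test (its nonparametric counterpart).
f_stat, f_p = stats.f_oneway(a, b, c)
h_stat, h_p = stats.kruskal(a, b, c)

print(f"t-test p={t_p:.4f}, Mann-Whitney p={u_p:.4f}")
print(f"ANOVA p={f_p:.4f}, Kruskal-Wallis p={h_p:.4f}")
```

As the module notes, a significant ANOVA only signals that some difference exists; identifying which pair of groups differs requires a post hoc test such as Tukey's honestly significant difference.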
An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics
Eskinazi, Ilan
2016-01-01
Goal Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Canessa, S.
2013-01-01
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order
Favalli, Andrea; Croft, Stephen; Santi, Peter
2015-06-15
Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time-correlation-analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates; we call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations, the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher-order multiplets using the probability generating functions approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher-order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
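The reduced factorial multiplet rates mentioned above can be illustrated with a short sketch. Assuming Feynman-style sampling, in which the number of detected pulses n is recorded per coincidence gate, the k-th reduced factorial moment is the gate-average of the binomial coefficient C(n, k). The function name and interface are illustrative, not from the paper, and the sketch ignores dead-time and efficiency corrections.

```python
from math import comb

def reduced_factorial_moments(gate_counts, order):
    """Gate-averaged reduced factorial moments m_k = <C(n, k)>,
    k = 1..order; m_1 relates to singlets, m_2 to doublets,
    m_3 to triplets, and so on."""
    n_gates = len(gate_counts)
    return [sum(comb(n, k) for n in gate_counts) / n_gates
            for k in range(1, order + 1)]
```

Since C(n, k) = n(n-1)...(n-k+1)/k!, this gate average is exactly the k-th factorial moment of the count distribution divided by k!.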
Sinhal, Tapati Manohar; Shah, Ruchi Rani Purvesh; Jais, Pratik Subhas; Shah, Nimisha Chinmay; Hadwani, Krupali Dhirubhai; Rothe, Tushar; Sinhal, Neha Nilesh
2018-01-01
The aim of this study is to compare and evaluate the sealing ability of the newly introduced C-point system, cold lateral condensation, and the thermoplasticized gutta-percha obturating technique using a dye extraction method. Sixty extracted maxillary central incisors were decoronated below the cementoenamel junction. Working length was established, and biomechanical preparation was done using K3 rotary files with a standard irrigation protocol. Teeth were divided into three groups according to the obturation protocol: Group I, cold lateral condensation; Group II, thermoplasticized gutta-percha; and Group III, C-point obturating system. After obturation, all samples were subjected to microleakage assessment using the dye extraction method. The obtained scores were statistically analyzed using the ANOVA test and post hoc Tukey's test. One-way analysis of variance revealed a significant difference among the three groups (P < 0.05). Tukey's HSD post hoc tests for multiple comparisons showed that Groups II and III performed significantly better than Group I. Group III performed better than Group II, with no significant difference. All the obturating techniques showed some degree of microleakage. Root canals filled with the C-point system showed the least microleakage, followed by the thermoplasticized obturating technique, with no significant difference between them. The C-point obturation system could be an alternative to the cold lateral condensation technique.
Automatic multiple-sample applicator and electrophoresis apparatus
NASA Technical Reports Server (NTRS)
Grunbaum, B. W. (Inventor)
1977-01-01
An apparatus for performing electrophoresis and a multiple-sample applicator is described. Electrophoresis is a physical process in which electrically charged molecules and colloidal particles, upon the application of a dc current, migrate along a gel or a membrane that is wetted with an electrolyte. A multiple-sample applicator is provided which coacts with a novel tank cover to permit an operator either to depress a single button, thus causing multiple samples to be deposited on the gel or on the membrane simultaneously, or to depress one or more sample applicators separately by means of a separate button for each applicator.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. None of the proposed algorithms requires prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
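To illustrate the flavor of a parallel (simultaneous) iteration for a common fixed point, without reproducing the paper's algorithms: projections onto closed intervals are firmly quasi-nonexpansive, and averaging them at each step drives the iterate toward a point fixed by all of them. The code below is a one-dimensional toy sketch; all names are illustrative.

```python
def parallel_projection(x0, operators, iters=60):
    """Parallel iteration x_{k+1} = mean of P_i(x_k); for firmly
    quasi-nonexpansive P_i with a common fixed point, the sequence
    converges to such a point."""
    x = x0
    for _ in range(iters):
        x = sum(p(x) for p in operators) / len(operators)
    return x

def clip(lo, hi):
    """Metric projection onto the interval [lo, hi]."""
    return lambda x: min(max(x, lo), hi)

# Common fixed points of the two projections form [1, 2]; starting
# from 5.0 the iteration settles on the boundary point 2.0.
x_star = parallel_projection(5.0, [clip(0, 2), clip(1, 3)])
```

A cyclic variant would instead apply the operators one after another within each sweep; the mixed algorithms in the paper interleave the two patterns.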
Miller, Arthur L; Drake, Pamela L; Murphy, Nathaniel C; Cauda, Emanuele G; LeBouf, Ryan F; Markevicius, Gediminas
Miners are exposed to silica-bearing dust, which can lead to silicosis, a potentially fatal lung disease. Currently, airborne silica is measured by collecting filter samples and sending them to a laboratory for analysis. Since this may take weeks, a field method is needed to inform decisions aimed at reducing exposures. This study investigates a field-portable Fourier transform infrared (FTIR) method for end-of-shift (EOS) measurement of silica on filter samples. Since the method entails localized analyses, spatial uniformity of dust deposition can affect accuracy and repeatability. The study, therefore, assesses the influence of radial deposition uniformity on the accuracy of the method. Using laboratory-generated Minusil and coal dusts and three different types of sampling systems, multiple sets of filter samples were prepared. All samples were collected in pairs to create parallel sets for training and validation. Silica was measured by FTIR at nine locations across the face of each filter, and the data were analyzed using a multiple regression technique that compared various models for predicting silica mass on the filters using different numbers of "analysis shots." It was shown that deposition uniformity is independent of particle type (kaolin vs. silica), which suggests the role of aerodynamic separation is negligible. Results also reflected the correlation between the location and number of shots and the predictive accuracy of the models. The coefficient of variation (CV) for the models when predicting mass of validation samples was 4%-51%, depending on the number of points analyzed and the type of sampler used, which affected the uniformity of radial deposition on the filters. It was shown that using a single shot at the center of the filter yielded predictivity adequate for a field method (93% return, CV approximately 15%) for samples collected with 3-piece cassettes.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-15
... interest. Accordingly, at the $22.00 price point, both the entire amount of B4 and the remaining balance of...-STP opposite-side interest, Exchange systems would cancel the remaining balance of the incoming STPN.... If an STPN could execute at multiple price points, the incoming STPN would execute at the multiple...
ERIC Educational Resources Information Center
Shih, Ching-Hsiang
2013-01-01
This study provided that people with multiple disabilities can have a collaborative working chance in computer operations through an Enhanced Multiple Cursor Dynamic Pointing Assistive Program (EMCDPAP, a new kind of software that replaces the standard mouse driver, changes a mouse wheel into a thumb/finger poke detector, and manages mouse…
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Shih, Ching-Tien; Peng, Chin-Ling
2011-01-01
This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance through an Automatic Target Acquisition Program (ATAP) and a newly developed mouse driver (i.e. a new mouse driver replaces standard mouse driver, and is able to monitor mouse movement and intercept click action). Initially, both…
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates, taking sources of future variability into account, such as runs, days, analysts, gender, drug spiking, and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may already contain ADA-positive specimens or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers, and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
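For orientation, the simplest textbook baseline against which the model-based approaches above can be compared is a parametric screening cut point placed at the upper 95th percentile of the drug-naive signal distribution, optionally after a log transform. The sketch below (names illustrative) shows that idea only; it deliberately ignores the plate effects, outliers, and mixture issues the paper addresses.

```python
from statistics import mean, stdev
from math import log, exp

def normal_cut_point(signals, z=1.645, log_scale=False):
    """Parametric screening cut point: mean + z * SD of drug-naive
    signals (z = 1.645 targets the upper 95th percentile under a
    normal assumption). With log_scale=True the cut point is computed
    on log-signals and back-transformed, matching a log-normal
    assumption, which illustrates phenomenon (v) above."""
    xs = [log(s) for s in signals] if log_scale else list(signals)
    cp = mean(xs) + z * stdev(xs)
    return exp(cp) if log_scale else cp

cp_plain = normal_cut_point([1, 2, 3, 4, 5])
cp_log = normal_cut_point([2.718281828459045] * 5, log_scale=True)
```

The random-effects approach in the paper replaces the single mean + z * SD bound with a prediction interval that also carries the between-run and between-plate variance components.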
Thomas B. Lynch; Jeffrey H. Gove
2013-01-01
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
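The estimator described above reduces to a one-line computation: volume per unit area is the HPS basal area factor times the sum of the critical heights of the tallied trees. A minimal sketch (function name illustrative):

```python
def chs_volume_per_area(critical_heights, basal_area_factor):
    """Critical height sampling estimator: cubic volume per unit area
    equals the basal area factor of the horizontal point sample times
    the sum of critical heights measured on the tallied trees."""
    return basal_area_factor * sum(critical_heights)
```

For example, two tallied trees with critical heights of 10.0 and 12.5 m under a basal area factor of 2.0 m²/ha give 45.0 m³/ha.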
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function.
In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of a M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. 
In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
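The contrast between SR and LH sampling can be sketched in one dimension: LH sampling draws exactly one uniform value per equal-probability stratum and maps it through the quantile function, so every part of the distribution is hit by construction, unlike n independent draws. The sketch below (names illustrative) does this for a standard normal; a lognormal conductivity would simply exponentiate the result.

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n, seed=0):
    """One-dimensional Latin hypercube sample of a standard normal:
    one uniform draw per equal-probability stratum, transformed by the
    normal quantile function. The shuffle randomizes the pairing of
    strata across dimensions in a multi-dimensional design."""
    rng = random.Random(seed)
    u = [(i + rng.random()) / n for i in range(n)]   # one point per stratum
    rng.shuffle(u)
    nd = NormalDist()
    return [nd.inv_cdf(p) for p in u]

xs = latin_hypercube_normal(100)
# verify the stratification: sorted CDF values land one per bin
hits = sorted(NormalDist().cdf(x) for x in xs)
one_per_stratum = all(i / 100 - 1e-9 <= c <= (i + 1) / 100 + 1e-9
                      for i, c in enumerate(hits))
```

With SR sampling, by contrast, some strata are typically hit several times and others not at all, which is the source of the extra sampling variability the abstract discusses.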
Path optimization method for the sign problem
NASA Astrophysics Data System (ADS)
Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji
2018-03-01
We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
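The average phase factor that POM maximizes can be demonstrated on an even simpler toy than the paper's: a Gaussian action with a complex linear term, S(z) = z²/2 + iβz, for which a constant imaginary shift z = t - iβ happens to remove the phase entirely. The sketch below (all names illustrative; the paper optimizes a general f(t), not a constant shift) compares the average phase factor on the original real axis with the shifted path.

```python
import cmath

def average_phase_factor(action, path, n=4001, t_max=8.0):
    """|<e^{-S} dz/dt>| / <|e^{-S} dz/dt|> along a parametrized path
    z(t); a value near 1 means a mild sign problem, near 0 a severe
    one. Uses a simple rectangle rule and a central-difference dz/dt."""
    h = 2 * t_max / (n - 1)
    num = 0j
    den = 0.0
    for k in range(n):
        t = -t_max + k * h
        z = path(t)
        dz = (path(t + 1e-6) - path(t - 1e-6)) / 2e-6   # numerical dz/dt
        w = cmath.exp(-action(z)) * dz                  # complex weight
        num += w * h
        den += abs(w) * h
    return abs(num) / den

beta = 2.0
S = lambda z: 0.5 * z * z + 1j * beta * z   # toy complex action

apf_real = average_phase_factor(S, lambda t: t)              # original axis
apf_shift = average_phase_factor(S, lambda t: t - 1j * beta)  # shifted path
```

On the real axis the average phase factor is e^{-β²/2} (about 0.135 for β = 2), while on the shifted path the action becomes real and the average phase factor is 1, which is the behavior POM seeks for actions where no closed-form shift exists.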
Multiscale study on stochastic reconstructions of shale samples
NASA Astrophysics Data System (ADS)
Lili, J.; Lin, M.; Jiang, W. B.
2016-12-01
Shales are known to have multiscale pore systems, composed of macroscale fractures, micropores, and nanoscale pores within gas- or oil-producing organic material. Shales are also fissile and laminated, and their horizontal heterogeneity is quite different from the vertical. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time-consuming. The purpose of our paper is therefore to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi Formation are obtained by X-ray microtomography, and nanoscale images are obtained by scanning electron microscopy. Each image is representative for all given scales and phases. Notably, the macroscale is four times coarser than the microscale, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross-correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, and thus the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images with the same resolution are developed through downscaling and upscaling by interpolation, and then we merge the multiscale categorical spatial data into a single 3D image with predefined resolution (the microscale image). Thirty realizations using the given images and the proposed method are generated. The result reveals that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distributions for both the original 3D sample and the generated 3D realizations are compared.
The result indicates that the agreement between the original 3D sample and the generated stochastic realizations is excellent. This work is supported by "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).
NASA Astrophysics Data System (ADS)
Khodabakhshi, M.; Jafarpour, B.
2013-12-01
Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions that can be computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short period of sampling burn-in time where unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning.
This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of ground laser point clouds and close-range images is key to high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in the cultural relic field, registering point cloud and image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the pairwise registration of the two kinds of data is realized by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points between the image and the point cloud; this process not only greatly reduces working efficiency but also affects the precision of the registration and causes texture seams in the colored point cloud. To solve these problems, this paper takes the whole-object image as intermediate data and uses matching technology to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching of the point cloud's central-projection reflection-intensity image against the optical image is applied to automatically match corresponding feature points, and the Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to achieve automatic high-accuracy registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has scientific research value and practical significance.
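The rotation ingredient of a Rodrigues-matrix similarity transformation can be sketched via the Rodrigues formula, R = I + sin(θ)K + (1 - cos(θ))K², where K is the skew-symmetric cross-product matrix of the unit rotation axis. The function name is illustrative, and the full registration model additionally estimates scale and translation.

```python
from math import sin, cos, sqrt, pi

def rodrigues(axis, theta):
    """Rotation matrix from an axis-angle pair via the Rodrigues
    formula R = I + sin(theta)*K + (1 - cos(theta))*K^2."""
    n = sqrt(sum(a * a for a in axis))
    x, y, z = (a / n for a in axis)                  # unit axis
    K = [[0, -z, y], [z, 0, -x], [-y, x, 0]]         # cross-product matrix
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[I[i][j] + sin(theta) * K[i][j] + (1 - cos(theta)) * K2[i][j]
             for j in range(3)] for i in range(3)]

# rotating the x unit vector by 90 degrees about z yields the y unit vector
R = rodrigues((0.0, 0.0, 1.0), pi / 2)
v = [sum(R[i][j] * (1.0, 0.0, 0.0)[j] for j in range(3)) for i in range(3)]
```

The Rodrigues parameterization is popular in photogrammetric adjustment because it avoids trigonometric singularities in the normal equations when linearizing small rotations.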
Ayaz, Shirazi Muhammad; Kim, Min Young
2018-01-01
In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The multi-view registration approach is divided into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm against other variants of ICP was performed. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
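The refinement stage of ICP alternates nearest-neighbor correspondence search with a closed-form rigid alignment of the matched points. As a hypothetical reduced sketch of the alignment step only, in 2D with correspondences assumed known (the paper's system works on 3D clouds, where an SVD-based solution is typical), the optimal rotation angle and translation have a simple closed form:

```python
from math import atan2, sin, cos

def rigid_align_2d(src, dst):
    """Least-squares rotation + translation mapping src onto dst with
    known point correspondences: center both sets, recover the angle
    from the summed dot and cross products, then solve for the shift."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s = c = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        c += ax * bx + ay * by          # dot products -> cosine term
        s += ax * by - ay * bx          # cross products -> sine term
    theta = atan2(s, c)
    tx = cdx - (cos(theta) * csx - sin(theta) * csy)
    ty = cdy - (sin(theta) * csx + cos(theta) * csy)
    return theta, tx, ty

# a 90-degree rotation plus a (2, 3) shift is recovered exactly
theta, tx, ty = rigid_align_2d([(0, 0), (1, 0), (0, 1)],
                               [(2, 3), (2, 4), (1, 3)])
```

A full ICP loop would re-estimate the correspondences after each such alignment and iterate until the residual stops decreasing.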
Assisting People with Multiple Disabilities to Use Computers with Multiple Mice
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Shih, Ching-Tien
2009-01-01
This study assessed the combination of multiple mice aid with two persons with multiple disabilities. Complete mouse operation which needed the physically functional sound, was distributed among their limbs with remaining ability. Through these decentralized operations, they could still reach complete mouse pointing control. Initially, both…
Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules
Panzeri, Francesco
2017-01-01
We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out potential limitations of the current single-laser-excitation design, as well as analysis challenges and their solutions. PMID:28419142
Confirmatory Factor Analysis of the Minnesota Nicotine Withdrawal Scale
Toll, Benjamin A.; O’Malley, Stephanie S.; McKee, Sherry A.; Salovey, Peter; Krishnan-Sarin, Suchitra
2008-01-01
The authors examined the factor structure of the Minnesota Nicotine Withdrawal Scale (MNWS) using confirmatory factor analysis in clinical research samples of smokers trying to quit (n = 723). Three confirmatory factor analytic models, based on previous research, were tested with each of the 3 study samples at multiple points in time. A unidimensional model including all 8 MNWS items was found to be the best explanation of the data. This model produced fair to good internal consistency estimates. Additionally, these data revealed that craving should be included in the total score of the MNWS. Factor scores derived from this single-factor, 8-item model showed that increases in withdrawal were associated with poor smoking outcome for 2 of the clinical studies. Confirmatory factor analyses of change scores showed that the MNWS symptoms cohere as a syndrome over time. Future investigators should report a total score using all of the items from the MNWS. PMID:17563141
Electrochemical Detection in Stacked Paper Networks.
Liu, Xiyuan; Lillehoj, Peter B
2015-08-01
Paper-based electrochemical biosensors are a promising technology that enables rapid, quantitative measurements on an inexpensive platform. However, the control of liquids in paper networks is generally limited to a single sample delivery step. Here, we propose a simple method to automate the loading and delivery of liquid samples to sensing electrodes on paper networks by stacking multiple layers of paper. Using these stacked paper devices (SPDs), we demonstrate a unique strategy to fully immerse planar electrodes by aqueous liquids via capillary flow. Amperometric measurements of xanthine oxidase revealed that electrochemical sensors on four-layer SPDs generated detection signals up to 75% higher compared with those on single-layer paper devices. Furthermore, measurements could be performed with minimal user involvement and completed within 30 min. Due to its simplicity, enhanced automation, and capability for quantitative measurements, stacked paper electrochemical biosensors can be useful tools for point-of-care testing in resource-limited settings. © 2015 Society for Laboratory Automation and Screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, which model the actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker- and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
Psychosocial correlates of suicidal ideation in rural South African adolescents.
Shilubane, Hilda N; Ruiter, Robert A C; Bos, Arjan E R; van den Borne, Bart; James, Shamagonam; Reddy, Priscilla S
2014-01-01
Suicide is a prevalent problem among young people in Southern Africa, but prevention programs are largely absent. This survey aimed to identify the behavioral and psychosocial correlates of suicidal ideation among adolescents in Limpopo. A two-stage cluster sample design was used to establish a representative sample of 591 adolescents. Bivariate correlations and multiple linear regression analyses were conducted. Findings show that suicidal ideation is prevalent among adolescents. The psychosocial factors (perceived social support and negative feelings about the family) and the behavioral factors (forced sexual intercourse and physical violence by a partner) were found to increase the risk of suicidal ideation. Depression mediated the relationship between these psychosocial and behavioral risk factors and suicidal ideation. This study increased our understanding of the psychosocial and behavioral predictors of adolescent suicidal ideation. The findings provide target points for future intervention programs and call for supportive structures to assist adolescents with suicidal ideation.
Odontological approach to sexual dimorphism in southeastern France.
Lladeres, Emilie; Saliba-Serre, Bérengère; Sastre, Julien; Foti, Bruno; Tardivo, Delphine; Adalian, Pascal
2013-01-01
The aim of this study was to establish a prediction formula to allow for the determination of sex among the southeastern French population using dental measurements. The sample consisted of 105 individuals (57 males and 48 females, aged between 18 and 25 years). Dental measurements were calculated using Euclidean distances, in three-dimensional space, from point coordinates obtained by a Microscribe. A multiple logistic regression analysis was performed to establish the prediction formula. Among 12 selected dental distances, a stepwise logistic regression analysis highlighted the two most significant discriminate predictors of sex: one located at the mandible and the other at the maxilla. A cutpoint was proposed for the prediction of true sex. The prediction formula was then tested on a validation sample (20 males and 34 females, aged between 18 and 62 years and with a history of orthodontics or restorative care) to evaluate the accuracy of the method. © 2012 American Academy of Forensic Sciences.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Jalava, Katri; Rintala, Hanna; Ollgren, Jukka; Maunula, Leena; Gomez-Alvarez, Vicente; Revez, Joana; Palander, Marja; Antikainen, Jenni; Kauppinen, Ari; Räsänen, Pia; Siponen, Sallamaari; Nyholm, Outi; Kyyhkynen, Aino; Hakkarainen, Sirpa; Merentie, Juhani; Pärnänen, Martti; Loginov, Raisa; Ryu, Hodon; Kuusi, Markku; Siitonen, Anja; Miettinen, Ilkka; Santo Domingo, Jorge W.; Hänninen, Marja-Liisa; Pitkänen, Tarja
2014-01-01
Failures in the drinking water distribution system cause gastrointestinal outbreaks with multiple pathogens. A water distribution pipe breakage caused a community-wide waterborne outbreak in Vuorela, Finland, July 2012. We investigated this outbreak with advanced epidemiological and microbiological methods. A total of 473/2931 inhabitants (16%) responded to a web-based questionnaire. Water and patient samples were subjected to analysis of multiple microbial targets, molecular typing and microbial community analysis. Spatial analysis on the water distribution network was done and we applied a spatial logistic regression model. The course of the illness was mild. Drinking untreated tap water from the defined outbreak area was significantly associated with illness (RR 5.6, 95% CI 1.9–16.4) increasing in a dose response manner. The closer a person lived to the water distribution breakage point, the higher the risk of becoming ill. Sapovirus, enterovirus, single Campylobacter jejuni and EHEC O157:H7 findings as well as virulence genes for EPEC, EAEC and EHEC pathogroups were detected by molecular or culture methods from the faecal samples of the patients. EPEC, EAEC and EHEC virulence genes and faecal indicator bacteria were also detected in water samples. Microbial community sequencing of contaminated tap water revealed abundance of Arcobacter species. The polyphasic approach improved the understanding of the source of the infections, and aided to define the extent and magnitude of this outbreak. PMID:25147923
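A relative risk with its 95% confidence interval, as reported above, is conventionally computed from a 2x2 exposure-by-illness table with a log-scale (Katz) interval. The counts below are illustrative stand-ins chosen to reproduce RR = 5.6; they are not the study's data, and the resulting CI differs from the published one.

```python
import math

# Hypothetical 2x2 counts (illustrative only, not the study's data):
a, b = 90, 60   # exposed (drank untreated tap water): ill, not ill
c, d = 6, 50    # unexposed: ill, not ill

rr = (a / (a + b)) / (c / (c + d))

# 95% CI on the log scale (standard Katz method).
se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # RR = 5.60
```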
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.
[Development of the automatic dental X-ray film processor].
Bai, J; Chen, H
1999-07-01
This paper introduces a multiple-point technique for detecting the density of dental X-ray films. With the infrared multiple-point detecting technique, a single-chip microcomputer control system is used to analyze the effectiveness of the film developing in real time in order to achieve a good image. Based on this technology, we designed an intelligent automatic dental X-ray film processor.
ERIC Educational Resources Information Center
Murayama, Taku
2016-01-01
This paper focuses on a project in teacher education through art activities at the undergraduate level. The main theme is art activities by university students and multiple and severe handicapped students. This project has two significant points for the preparation of special education teachers. One point is the opportunity for field work. Even…
Cumulative risk assessment (CRA) methods promote the use of a conceptual site model (CSM) to apportion exposures and integrate risk from multiple stressors. While CSMs may encompass multiple species, evaluating end points across taxa can be challenging due to data availability an...
Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.
NASA Astrophysics Data System (ADS)
Stossel, Bryan Joseph
1995-01-01
Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. 
Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
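The core idea, a GCV-selected cubic smoothing spline over merged, noise-weighted channels, can be sketched with SciPy's `make_smoothing_spline`, which selects the smoothing parameter by GCV when `lam=None`. This is a minimal stand-in, not the authors' Hutchinson-deHoog implementation: the two channels, their noise levels, and the Gaussian test signal are all assumed for illustration, and exactly redundant time samples are simply dropped because the SciPy routine requires strictly increasing abscissae.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(1)

# Two hypothetical overlapping channels with different noise levels,
# standing in for stitched multi-channel diagnostic data.
def signal(t):
    return np.exp(-0.5 * (t - 5.0) ** 2)

t1 = np.linspace(0.0, 6.0, 120)
t2 = np.linspace(4.0, 10.0, 120)
y1 = signal(t1) + rng.normal(0, 0.05, t1.size)
y2 = signal(t2) + rng.normal(0, 0.02, t2.size)

# Merge channels, weight by inverse noise variance, sort, and drop
# exactly redundant time samples.
t = np.concatenate([t1, t2])
y = np.concatenate([y1, y2])
w = np.concatenate([np.full(t1.size, 1 / 0.05**2),
                    np.full(t2.size, 1 / 0.02**2)])
order = np.argsort(t)
t, y, w = t[order], y[order], w[order]
keep = np.concatenate([[True], np.diff(t) > 0])
t, y, w = t[keep], y[keep], w[keep]

# lam=None lets SciPy pick the smoothing level by minimizing GCV,
# mirroring the automatic selection described in the abstract.
spline = make_smoothing_spline(t, y, w=w, lam=None)
fit = spline(t)
print(np.max(np.abs(fit - signal(t))))
```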
Frndak, Seth E; Kordovski, Victoria M; Cookfair, Diane; Rodgers, Jonathan D; Weinstock-Guttman, Bianca; Benedict, Ralph H B
2015-02-01
Unemployment is common in multiple sclerosis (MS) and detrimental to quality of life. Studies suggest disclosure of diagnosis is an adaptive strategy for patients. However, the role of cognitive deficits and psychiatric symptoms in disclosure are not well studied. The goals of this paper were to (a) determine clinical factors most predictive of disclosure, and (b) measure the effects of disclosure on workplace problems and accommodations in employed patients. We studied two overlapping cohorts: a cross-sectional sample (n = 143) to determine outcomes associated with disclosure, and a longitudinal sample (n = 103) compared at four time points over one year on reported problems and accommodations. A case study of six patients, disclosing during monitoring, was also included. Disclosure was associated with greater physical disability but not cognitive impairment. Logistic regression predicting disclosure status retained physical disability, accommodations and years of employment (p < 0.0001). Disclosed patients reported more work problems and accommodations over time. The case study revealed that reasons for disclosing are multifaceted, including connection to employer, decreased mobility and problems at work. Although cognitive impairment is linked to unemployment, it does not appear to inform disclosure decisions. Early disclosure may help maintain employment if followed by appropriate accommodations. © The Author(s), 2014.
Klingenberg, Jennifer M; McFarland, Kevin L; Friedman, Aaron J; Boyce, Steven T; Aronow, Bruce J; Supp, Dorothy M
2010-02-01
Bioengineered skin substitutes can facilitate wound closure in severely burned patients, but deficiencies limit their outcomes compared with native skin autografts. To identify gene programs associated with their in vivo capabilities and limitations, we extended previous gene expression profile analyses to now compare engineered skin after in vivo grafting with both in vitro maturation and normal human skin. Cultured skin substitutes were grafted on full-thickness wounds in athymic mice, and biopsy samples for microarray analyses were collected at multiple in vitro and in vivo time points. Over 10,000 transcripts exhibited large-scale expression pattern differences during in vitro and in vivo maturation. Using hierarchical clustering, 11 different expression profile clusters were partitioned on the basis of differential sample type and temporal stage-specific activation or repression. Analyses show that the wound environment exerts a massive influence on gene expression in skin substitutes. For example, in vivo-healed skin substitutes gained the expression of many native skin-expressed genes, including those associated with epidermal barrier and multiple categories of cell-cell and cell-basement membrane adhesion. In contrast, immunological, trichogenic, and endothelial gene programs were largely lacking. These analyses suggest important areas for guiding further improvement of engineered skin for both increased homology with native skin and enhanced wound healing.
Wardell, Jeffrey D.; Rogers, Michelle L.; Simms, Leonard J.; Jackson, Kristina M.; Read, Jennifer P.
2014-01-01
This study investigated inconsistent responding to survey items by participants involved in longitudinal, web-based substance use research. We also examined cross-sectional and prospective predictors of inconsistent responding. Middle school (N = 1,023) and college students (N = 995) from multiple sites in the United States responded to online surveys assessing substance use and related variables in three waves of data collection. We applied a procedure for creating an index of inconsistent responding at each wave that involved identifying pairs of items with considerable redundancy and calculating discrepancies in responses to these items. Inconsistent responding was generally low in the Middle School sample and moderate in the College sample, with individuals showing only modest stability in inconsistent responding over time. Multiple regression analyses identified several baseline variables—including demographic, personality, and behavioral variables—that were uniquely associated with inconsistent responding both cross-sectionally and prospectively. Alcohol and substance involvement showed some bivariate associations with inconsistent responding, but these associations largely were accounted for by other factors. The results suggest that high levels of carelessness or inconsistency do not appear to characterize participants’ responses to longitudinal web-based surveys of substance use and support the use of inconsistency indices as a tool for identifying potentially problematic responders. PMID:24092819
Zhao, Ziyan; Henowitz, Liza; Zweifach, Adam
2018-05-01
We previously developed a flow cytometry assay that monitored lytic granule exocytosis in cytotoxic T lymphocytes stimulated by contacting beads coated with activating anti-CD3 antibodies. That assay was multiplexed in that responses of cells that did or did not receive the activating stimulus were distinguished via changes in light scatter accompanying binding of cells to beads, allowing us to discriminate compounds that activate responses on their own from compounds that enhance responses in cells that received the activating stimulus, all within a single sample. Here we add a second dimension of multiplexing by developing means to assess in a single sample the effects of treating cells with test compounds for different times. Bar-coding cells before adding them to test wells lets us determine compound treatment time while also monitoring activation status and response amplitude at the point of interrogation. This multiplexed assay is suitable for screening 96-well plates. We used it to screen compounds from the National Cancer Institute, identifying several compounds that enhance anti-LAMP1 responses. Multiple-treatment-time (MTT) screening enabled by bar-coding and read via high-throughput flow cytometry may be a generally useful method for facilitating the discovery of compounds of interest.
Spectroscopic sensitivity of real-time, rapidly induced phytochemical change in response to damage.
Couture, John J; Serbin, Shawn P; Townsend, Philip A
2013-04-01
An ecological consequence of plant-herbivore interactions is the phytochemical induction of defenses in response to insect damage. Here, we used reflectance spectroscopy to characterize the foliar induction profile of cardenolides in Asclepias syriaca in response to damage, tracked in vivo changes and examined the influence of multiple plant traits on cardenolide concentrations. Foliar cardenolide concentrations were measured at specific time points following damage to capture their induction profile. Partial least-squares regression (PLSR) modeling was employed to calibrate cardenolide concentrations to reflectance spectroscopy. In addition, subsets of plants were either repeatedly sampled to track in vivo changes or modified to reduce latex flow to damaged areas. Cardenolide concentrations and the induction profile of A. syriaca were well predicted using models derived from reflectance spectroscopy, and this held true for repeatedly sampled plants. Correlations between cardenolides and other foliar-related variables were weak or not significant. Plant modification for latex reduction inhibited an induced cardenolide response. Our findings show that reflectance spectroscopy can characterize rapid phytochemical changes in vivo. We used reflectance spectroscopy to identify the mechanisms behind the production of plant secondary metabolites, simultaneously characterizing multiple foliar constituents. In this case, cardenolide induction appears to be largely driven by enhanced latex delivery to leaves following damage. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
Fischer, Helen; Schütte, Stefanie; Depoux, Anneliese; Amelung, Dorothee; Sauerborn, Rainer
2018-04-27
Graphs are prevalent in the reports of the Intergovernmental Panel on Climate Change (IPCC), often depicting key points and major results. However, the popularity of graphs in the IPCC reports contrasts with a neglect of empirical tests of their understandability. Here we put the understandability of three graphs taken from the Health chapter of the Fifth Assessment Report to an empirical test. We present a pilot study where we evaluate objective understanding (mean accuracy in multiple-choice questions) and subjective understanding (self-assessed confidence in accuracy) in a sample of attendees of the United Nations Climate Change Conference in Marrakesh, 2016 (COP22), and a student sample. Results show a mean objective understanding of M = 0.33 for the COP sample, and M = 0.38 for the student sample. Subjective and objective understanding were unrelated for the COP22 sample, but associated for the student sample. These results suggest that (i) understandability of the IPCC health chapter graphs is insufficient, and that (ii) particularly COP22 attendees lacked insight into which graphs they did, and which they did not understand. Implications for the construction of graphs to communicate health impacts of climate change to decision-makers are discussed.
A Sustainable Architecture for Lunar Resource Prospecting from an EML-based Exploration Platform
NASA Astrophysics Data System (ADS)
Klaus, K.; Post, K.; Lawrence, S. J.
2012-12-01
Introduction - We present a point of departure architecture for prospecting for Lunar Resources from an Exploration Platform at the Earth - Moon Lagrange points. Included in our study are launch vehicle, cis-lunar transportation architecture, habitat requirements and utilization, lander/rover concepts and sample return. Different transfer design techniques can be explored by mission designers, testing various propulsive systems, maneuvers, rendezvous, and other in-space and surface operations. Understanding the availability of high and low energy trajectory transfer options opens up the possibility of exploring the human and logistics support mission design space and deriving solutions never before contemplated. For sample return missions from the lunar surface, low-energy transfers could be utilized between EML platform and the surface as well as return of samples to EML-based spacecraft. Human Habitation at the Exploration Platform - Telerobotic and telepresence capabilities are considered by the agency to be "grand challenges" for space technology. While human visits to the lunar surface provide optimal opportunities for field geologic exploration, on-orbit telerobotics may provide attractive early opportunities for geologic exploration, resource prospecting, and other precursor activities in advance of human exploration campaigns and ISRU processing. The Exploration Platform provides a perfect port for a small lander which could be refueled and used for multiple missions including sample return. The EVA and robotic capabilities of the EML Exploration Platform allow the lander to be serviced both internally and externally, based on operational requirements. The placement of the platform at an EML point allows the lander to access any site on the lunar surface, thus providing the global lunar surface access that is commonly understood to be required in order to enable a robust lunar exploration program. 
Designing the sample return lander for low-energy trajectories would reduce the overall mass and potentially increase the sample return mass. The Initial Lunar Mission - Building upon Apollo sample investigations, the recent results of LRO/LCROSS, international missions such as Chandrayaan-1, and legacy missions including Lunar Prospector and Clementine, among the most important science and exploration goals are surface prospecting for lunar resources and providing ground truth for orbital observations. Being able to constrain resource production potential will allow us to estimate the prospect for reducing the size of payloads launched from Earth required for Solar System exploration. Flight opportunities for something like the NASA RESOLVE instrument suite to areas of high science and exploration interest could be used to refine and improve future Exploration architectures, reducing the outlays required for cis-lunar operations. Summary - EML points are excellent for placement of a semi-permanent human-tended Exploration Platform in the near term, while providing important infrastructure and deep-space experience that can be built upon to gradually increase long-term operational capabilities.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
... FAA aeronautical database as compulsory reporting points. Additionally, this action also requires... aeronautical database. DATES: Effective date 0901 UTC July 10, 2012. The Director of the Federal Register... FAA's aeronautical database as reporting points. The reporting points included five Domestic Reporting...
Geochemical and physical drivers of microbial community structure in hot spring ecosystems
NASA Astrophysics Data System (ADS)
Havig, J. R.; Hamilton, T. L.; Boyd, E. S.; Meyer-Dombard, D. R.; Shock, E.
2012-12-01
Microbial communities in natural systems are typically characterized using samples collected from a single time point, thereby neglecting the temporal dynamics that characterize natural systems. The composition of these communities obtained from single point samples is then related to the geochemistry and physical parameters of the environment. Since most microbial life is adapted to a relatively narrow ecological niche (multiplicity of physical and chemical parameters that characterize a local habitat), these assessments provide only modest insight into the controls on community composition. Temporal variation in temperature or geochemical composition would be expected to add another dimension to the complexity of niche space available to support microbial diversity, with systems that experience greater variation supporting a greater biodiversity until a point where the variability is too extreme. Hot springs often exhibit significant temporal variation, both in physical as well as chemical characteristics. This is a result of subsurface processes including boiling, phase separation, and differential mixing of liquid and vapor phase constituents. These characteristics of geothermal systems, which vary significantly over short periods of time, provide ideal natural laboratories for investigating how i) the extent of microbial community biodiversity and ii) the composition of those communities are shaped by temporal fluctuations in geochemistry. Geochemical and molecular samples were collected from 17 temporally variable hot springs across Yellowstone National Park, Wyoming. Temperature measurements using data-logging thermocouples, allowing accurate determination of temperature maximums, minimums, and ranges for each collection site, were collected in parallel, along with multiple geochemical characterizations as conditions varied.
There were significant variations in temperature maxima (54.5 to 90.5°C), minima (12.5 to 82.5°C), and range (3.5 to 77.5°C) for the hot spring environments that spanned ranges of pH values (2.2 to 9.0) and geochemical compositions. We characterized the abundance, composition, and phylogenetic diversity of bacterial and archaeal 16S rRNA gene assemblages in sediment/biofilm samples collected from each site. 16S data can be used as a proxy for metabolic dissimilarity. We predict that temporally fluctuating environments should provide additional complexity to the system (additional niche space) capable of supporting additional taxa, which should lead to greater 16S rRNA gene diversity. However, systems with too much variability should collapse the diversity. Thus, one would expect an optimal level of variability with respect to 16S phylogenetic diversity. Community ecology tools were then applied to model the relative influence of physical and chemical characteristics (including temperature dynamics) on the local biodiversity. The results reveal unique insight into the role of temporal environmental variation in the development of biodiverse communities and provide a platform for predicting the response of an ecosystem to temperature perturbation.
Centric scan SPRITE for spin density imaging of short relaxation time porous materials.
Chen, Quan; Halse, Meghan; Balcom, Bruce J
2005-02-01
The single-point ramped imaging with T1 enhancement (SPRITE) imaging technique has proven to be a very robust and flexible method for the study of a wide range of systems with short signal lifetimes. As a pure phase encoding technique, SPRITE is largely immune to image distortions generated by susceptibility variations, chemical shift and paramagnetic impurities. In addition, it avoids the line width restrictions on resolution common to time-based sampling, frequency encoding methods. The standard SPRITE technique is however a longitudinal steady-state imaging method; the image intensity is related to the longitudinal steady state, which not only decreases the signal-to-noise ratio, but also introduces many parameters into the image signal equation. A centric scan strategy for SPRITE removes the longitudinal steady state from the image intensity equation and increases the inherent image intensity. Two centric scan SPRITE methods, that is, Spiral-SPRITE and Conical-SPRITE, with fast acquisition and greatly reduced gradient duty cycle, are outlined. Multiple free induction decay (FID) points may be acquired during SPRITE sampling for signal averaging to increase signal-to-noise ratio or for T2* and spin density mapping without an increase in acquisition time. Experimental results show that most porous sedimentary rock and concrete samples have a single exponential T2* decay due to susceptibility difference-induced field distortion. Inhomogeneous broadening thus dominates, which suggests that spin density imaging can be easily obtained by SPRITE.
Dissolved Organic Carbon Degradation in Response to Nutrient Amendments in Southwest Greenland Lakes
NASA Astrophysics Data System (ADS)
Burpee, B. T.; Northington, R.; Simon, K. S.; Saros, J. E.
2014-12-01
Aquatic ecosystems across the Arctic are currently experiencing rapid shifts in biotic, chemical, and physical factors in response to climate change. Preliminary data from multiple lakes in southwestern Greenland indicate decreasing dissolved organic carbon (DOC) concentrations over the past decade. Though several factors may be contributing to this phenomenon, this study attempts to elucidate the potential of heterotrophic bacteria to degrade DOC in the presence of increasing nutrient concentrations. In certain Arctic regions, nutrient subsidies have been released into lakes due to permafrost thaw. If this is occurring in southwestern Greenland, we hypothesized that increased nutrient concentrations will relieve nutrient limitation, thereby allowing heterotrophic bacteria to utilize DOC as an energy source. This prediction was tested using experimental DOC degradation assays from four sample lakes. Four nutrient amendment treatments (control, N, P, and N + P) were used to simulate in situ subsidies. Five time points were sampled during the incubation: days 0, 3, 6, 14, and 60. Total organic carbon (TOC) and parallel factor (PARAFAC) analysis were used to monitor the relative concentrations of different DOC fractions over time. In addition, samples for extracellular enzyme activity (EEA) analysis were collected at every time point. Early analysis of fulvic and humic pools of DOC does not indicate any significant change from days 0 to 14. This could be due to the fact that these DOC fractions are relatively recalcitrant. This study will be important in determining whether bacterial degradation could be a contributing factor to DOC decline in arctic lakes.
Point-of-Care Hemoglobin A1c Testing: An Evidence-Based Analysis
2014-01-01
Background The increasing prevalence of diabetes in Ontario means that there will be growing demand for hemoglobin A1c (HbA1c) testing to monitor glycemic control for the management of this chronic disease. Testing HbA1c where patients receive their diabetes care may improve system efficiency if the results from point-of-care HbA1c testing are comparable to those from laboratory HbA1c measurements. Objectives To review the correlation between point-of-care HbA1c testing and laboratory HbA1c measurement in patients with diabetes in clinical settings. Data Sources The literature search included studies published between January 2003 and June 2013. Search terms included glycohemoglobin, hemoglobin A1c, point of care, and diabetes. Review Methods Studies were included if participants had diabetes; if they compared point-of-care HbA1c devices (licensed by Health Canada and available in Canada) with laboratory HbA1c measurement (reference method); if they performed point-of-care HbA1c testing using capillary blood samples (finger pricks) and laboratory HbA1c measurement using venous blood samples within 7 days; and if they reported a correlation coefficient between point-of-care HbA1c and laboratory HbA1c results. Results Three point-of-care HbA1c devices were reviewed in this analysis: Bayer's A1cNow+, Bio-Rad's In2it, and Siemens’ DCA Vantage. Five observational studies met the inclusion criteria. The pooled results showed a positive correlation between point-of-care HbA1c testing and laboratory HbA1c measurement (correlation coefficient, 0.967; 95% confidence interval, 0.960–0.973). Limitations Outcomes were limited to the correlation coefficient, as this was a commonly reported measure of analytical performance in the literature. Results should be interpreted with caution due to risk of bias related to selection of participants, reference standards, and the multiple steps involved in POC HbA1c testing. 
Conclusions Moderate quality evidence showed a positive correlation between point-of-care HbA1c testing and laboratory HbA1c measurement. Five observational studies compared 3 point-of-care HbA1c devices with laboratory HbA1c assays, and all reported strong correlation between the 2 tests. PMID:26316922
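The pooled analytical-performance measure above is a Pearson correlation with a 95% confidence interval, conventionally obtained via the Fisher z-transformation. The paired HbA1c values below are hypothetical illustrations, not data from the reviewed studies.

```python
import math
import numpy as np
from scipy import stats

# Illustrative paired measurements (hypothetical, not from the review):
# laboratory venous HbA1c vs. point-of-care capillary HbA1c, in %.
lab = np.array([5.6, 6.1, 6.8, 7.4, 8.0, 8.9, 9.5, 10.2])
poc = np.array([5.7, 6.0, 6.9, 7.3, 8.2, 8.8, 9.6, 10.0])

r, _ = stats.pearsonr(poc, lab)

# 95% CI via the Fisher z-transformation, as commonly reported.
n = len(lab)
z = math.atanh(r)
se = 1.0 / math.sqrt(n - 3)
lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
print(f"r = {r:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note that a high correlation alone does not establish agreement; method-comparison studies usually add bias analyses (e.g., Bland-Altman) on top of r.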
Cyanide binding to human plasma heme-hemopexin: A comparative study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ascenzi, Paolo, E-mail: ascenzi@uniroma3.it; Istituto Nazionale di Biostrutture e Biosistemi, Roma; Leboffe, Loris
Highlights: ► Cyanide binding to ferric HHPX-heme-Fe. ► Cyanide binding to ferrous HHPX-heme-Fe. ► Dithionite-mediated reduction of ferric HHPX-heme-Fe-cyanide. ► Cyanide binding to HHPX-heme-Fe is limited by ligand deprotonation. ► Cyanide dissociation from HHPX-heme-Fe-cyanide is limited by ligand protonation. -- Abstract: Hemopexin (HPX) plays a pivotal role in heme scavenging and delivery to the liver. In turn, heme-Fe-hemopexin (HPX-heme-Fe) displays heme-based spectroscopic and reactivity properties. Here, the kinetics and thermodynamics of cyanide binding to ferric and ferrous hexa-coordinate human plasma HPX-heme-Fe (HHPX-heme-Fe(III) and HHPX-heme-Fe(II), respectively), and of the dithionite-mediated reduction of the HHPX-heme-Fe(III)-cyanide complex, at pH 7.4 and 20.0 °C, are reported. Values of the thermodynamic and kinetic parameters for cyanide binding to HHPX-heme-Fe(III) and HHPX-heme-Fe(II) are K = (4.1 ± 0.4) × 10^-6 M, k_on = (6.9 ± 0.5) × 10^1 M^-1 s^-1, and k_off = 2.8 × 10^-4 s^-1; and H = (6 ± 1) × 10^-1 M, h_on = 1.2 × 10^-1 M^-1 s^-1, and h_off = (7.1 ± 0.8) × 10^-2 s^-1, respectively. The value of the rate constant for the dithionite-mediated reduction of the HHPX-heme-Fe(III)-cyanide complex is l = 8.9 ± 0.8 M^-1/2 s^-1. HHPX-heme-Fe reactivity is modulated by proton acceptor/donor amino acid residue(s) (e.g., His236) assisting the deprotonation and protonation of the incoming and outgoing ligand, respectively.
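For a simple one-step binding equilibrium, the dissociation constant should equal k_off/k_on, so the reported kinetic parameters can be cross-checked directly against the reported equilibrium constants. A quick consistency check using the abstract's values:

```python
# For a one-step binding equilibrium, K (dissociation constant) = k_off / k_on.
# Values are taken from the abstract; agreement within the stated errors
# supports a simple one-step binding mechanism.
k_on, k_off = 6.9e1, 2.8e-4     # ferric HHPX-heme-Fe(III): M^-1 s^-1, s^-1
h_on, h_off = 1.2e-1, 7.1e-2    # ferrous HHPX-heme-Fe(II)

K_calc = k_off / k_on           # ~4.1e-6 M, matching the reported K
H_calc = h_off / h_on           # ~0.59 M, matching the reported H of ~0.6 M
```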
NASA Astrophysics Data System (ADS)
Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan
2018-03-01
We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
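The one-point statistics in question are simply the central moments of the map's pixel (brightness temperature) distribution. A minimal sketch, using a Gaussian random field as a stand-in for a 21 cm map:

```python
import numpy as np

def one_point_statistics(field):
    """Variance, skewness, and excess kurtosis of a field's pixel values,
    i.e. its one-point statistics (no spatial information used)."""
    x = np.asarray(field, dtype=float).ravel()
    mu = x.mean()
    m2 = ((x - mu) ** 2).mean()
    m3 = ((x - mu) ** 3).mean()
    m4 = ((x - mu) ** 4).mean()
    return m2, m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# A Gaussian field should show skewness ~ 0 and excess kurtosis ~ 0;
# non-Gaussianity from reionization bubbles would drive both away from zero.
rng = np.random.default_rng(0)
var, skew, kurt = one_point_statistics(rng.normal(0.0, 5.0, size=(128, 128)))
```

This is why the abstract's kurtosis signal is interesting: outlying hot or cold regions in the underlying fluctuations push the fourth moment well above its Gaussian expectation.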
Pure phase encode magnetic field gradient monitor.
Han, Hui; MacGregor, Rodney P; Balcom, Bruce J
2009-12-01
Numerous methods have been developed to measure MRI gradient waveforms and k-space trajectories. The most promising new strategy appears to be magnetic field monitoring with RF microprobes. Multiple RF microprobes may record the magnetic field evolution associated with a wide variety of imaging pulse sequences. The method involves exciting one or more test samples and measuring the time evolution of magnetization through the FIDs. Two critical problems remain. The gradient waveform duration is limited by the sample T2*, while the k-space maxima are limited by gradient dephasing. The method presented is based on pure phase encode FIDs and solves the above two problems in addition to permitting high strength gradient measurement. A small doped water phantom (1-3 mm droplet, T1, T2, T2* < 100 μs) within a microprobe is excited by a series of closely spaced broadband RF pulses, each followed by FID single point acquisition. Two trial gradient waveforms have been chosen to illustrate the technique, neither of which could be measured by the conventional RF microprobe measurement. The first is an extended duration gradient waveform, while the other illustrates the new method's ability to measure gradient waveforms with large net area and/or high amplitude. The new method is a point monitor with simple implementation and low cost hardware requirements.
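The idea behind a phase-based gradient monitor can be caricatured in a few lines: a point sample at offset x accumulates phase γ·x·∫G dτ, so the waveform follows from the time derivative of the unwrapped phase of successive FID points. All numbers below (sample offset, pulse spacing, trial waveform) are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

# Illustrative sketch (not the authors' code): in pure phase encoding, a
# point sample at position x accumulates phase phi(t) = gamma * x * ∫ G dt,
# so G(t) is recovered from the derivative of the unwrapped phase.
gamma = 2 * np.pi * 42.58e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
x = 1.5e-3                    # sample offset from isocentre, m (assumed)
dt = 10e-6                    # spacing of the broadband RF pulses, s (assumed)

t = np.arange(0, 2e-3, dt)
G_true = 0.05 * np.sin(2 * np.pi * 1e3 * t)   # trial waveform, T/m
phase = gamma * x * np.cumsum(G_true) * dt    # phase of successive FID points
G_est = np.gradient(np.unwrap(phase), dt) / (gamma * x)
```

Because each acquisition is a fresh single point, the recoverable waveform duration is no longer bounded by T2*, which is the first of the two problems the abstract describes.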
Optimisation of the zinc sulphate turbidity test for the determination of immune status.
Hogan, I; Doherty, M; Fagan, J; Kennedy, E; Conneely, M; Crowe, B; Lorenz, I
2016-02-13
Failure of passive transfer of maternal immunity occurs in calves that fail to absorb sufficient immunoglobulins from ingested colostrum. The zinc sulphate turbidity test has been developed to test bovine neonates for this failure, but its specificity has been shown to be less than ideal. The objective was to examine how parameters of the zinc sulphate turbidity test may be manipulated in order to improve its diagnostic accuracy. One hundred and five blood samples were taken from calves of dairy cows receiving various rates of colostrum feeding. The zinc sulphate turbidity test was carried out multiple times on each sample, varying the solution strength, reaction time, and wavelength of light used, and the results were compared with those of a radial immunodiffusion test, which is the reference method for measuring immunoglobulin concentration in serum. Reducing the time over which the reaction occurs, or increasing the wavelength of light used to read the turbidity, resulted in decreased specificity without improving sensitivity. Increasing the concentration of the zinc sulphate solution used in the test was shown to improve the specificity without decreasing sensitivity. Examination of the cut-off points suggested that a lower cut-off point would improve the test's performance. British Veterinary Association.
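Evaluating a candidate cut-off against the radial immunodiffusion reference reduces to a sensitivity/specificity calculation. A minimal sketch with hypothetical turbidity readings (the cut-off, units, and data are invented for illustration):

```python
def sensitivity_specificity(test_values, reference_positive, cutoff):
    """Sensitivity and specificity of a turbidity test at a given cut-off.

    `reference_positive` flags samples the reference method (e.g. radial
    immunodiffusion) classed as failure of passive transfer; readings below
    `cutoff` are called test-positive. All values here are hypothetical.
    """
    tp = fn = tn = fp = 0
    for value, positive in zip(test_values, reference_positive):
        test_pos = value < cutoff
        if positive:
            tp += test_pos
            fn += not test_pos
        else:
            fp += test_pos
            tn += not test_pos
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data: turbidity units, with reference status from RID
units = [5, 8, 12, 20, 25, 30, 7, 18]
failed = [True, True, True, False, False, False, True, False]
sens, spec = sensitivity_specificity(units, failed, cutoff=15)
```

Sweeping `cutoff` over the observed range and plotting the two outputs is exactly the cut-off examination the abstract describes.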
Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source, with gamma energies close to those of 67Ga, and a single bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and the 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
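The Bland-Altman analysis mentioned above compares two measurement methods via the bias and 95% limits of agreement of their paired differences. A minimal sketch with invented paired registration errors (mm), not the study's data:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods.

    Returns the mean of the paired differences and the interval
    bias ± 1.96 * SD(differences); values falling inside that interval
    support using the two methods interchangeably.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired registration errors (mm) only, not measured values
ba_133 = [0.42, 0.55, 0.38, 0.61, 0.47]
ga_67 = [0.45, 0.52, 0.41, 0.58, 0.50]
bias, (lo, hi) = bland_altman(ba_133, ga_67)
```

A bias near zero with narrow limits of agreement is the pattern that justifies the "used interchangeably" conclusion.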
Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A
2007-12-01
The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching an EDSS of a certain level (absolute progression) or an increase of one EDSS point (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable to repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulation of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to an increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
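The core manipulation, deriving a time-to-event curve from an estimated transition matrix, can be sketched by making the target state absorbing and propagating the state distribution. The 3-state matrix below is a toy stand-in, not estimated from the MS data:

```python
import numpy as np

def survival_from_transitions(P, start, target, visits=10):
    """Survival curve for time to first entry into `target`, from a Markov
    transition matrix P: make the target state absorbing, then the survival
    probability after each step is the mass not yet absorbed."""
    Q = P.copy()
    Q[target, :] = 0.0
    Q[target, target] = 1.0          # absorb on first hit
    p = np.zeros(P.shape[0])
    p[start] = 1.0
    curve = []
    for _ in range(visits):
        p = p @ Q
        curve.append(1.0 - p[target])
    return curve

# Toy 3-state ordinal scale (e.g. low/mid/high EDSS bands); values assumed
P = np.array([[0.8, 0.15, 0.05],
              [0.1, 0.7, 0.2],
              [0.0, 0.1, 0.9]])
curve = survival_from_transitions(P, start=0, target=2)
```

Covariate-specific curves follow the same recipe once the transition probabilities are expressed as functions of the regression coefficients.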
HerMES: ALMA Imaging of Herschel-selected Dusty Star-forming Galaxies
NASA Astrophysics Data System (ADS)
Bussmann, R. S.; Riechers, D.; Fialkov, A.; Scudder, J.; Hayward, C. C.; Cowley, W. I.; Bock, J.; Calanog, J.; Chapman, S. C.; Cooray, A.; De Bernardis, F.; Farrah, D.; Fu, Hai; Gavazzi, R.; Hopwood, R.; Ivison, R. J.; Jarvis, M.; Lacey, C.; Loeb, A.; Oliver, S. J.; Pérez-Fournon, I.; Rigopoulou, D.; Roseboom, I. G.; Scott, Douglas; Smith, A. J.; Vieira, J. D.; Wang, L.; Wardlow, J.
2015-10-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) has identified large numbers of dusty star-forming galaxies (DSFGs) over a wide range in redshift. A detailed understanding of these DSFGs is hampered by the limited spatial resolution of Herschel. We present 870 μm 0.″45 resolution imaging obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) of a sample of 29 HerMES DSFGs that have far-infrared (FIR) flux densities that lie between the brightest of sources found by Herschel and fainter DSFGs found via ground-based surveys in the submillimeter region. The ALMA imaging reveals that these DSFGs comprise a total of 62 sources (down to the 5σ point-source sensitivity limit in our ALMA sample; σ ≈ 0.2 mJy). Optical or near-infrared imaging indicates that 36 of the ALMA sources experience a significant flux boost from gravitational lensing (μ > 1.1), but only six are strongly lensed and show multiple images. We introduce and make use of uvmcmcfit, a general-purpose and publicly available Markov chain Monte Carlo visibility-plane analysis tool, to analyze the source properties. Combined with our previous work on brighter Herschel sources, the lens models presented here tentatively favor intrinsic number counts for DSFGs with a break near 8 mJy at 880 μm and a steep fall-off at higher flux densities. Nearly 70% of the Herschel sources break down into multiple ALMA counterparts, consistent with previous research indicating that the multiplicity rate is high in bright sources discovered in single-dish submillimeter or FIR surveys. The ALMA counterparts to our Herschel targets are located significantly closer to each other than ALMA counterparts to sources found in the LABOCA ECDFS Submillimeter Survey. Theoretical models underpredict the excess number of sources with small separations seen in our ALMA sample.
The high multiplicity rate and small projected separations between sources seen in our sample argue in favor of interactions and mergers plausibly driving both the prodigious emission from the brightest DSFGs as well as the sharp downturn above S_880 = 8 mJy. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Multiple melt bodies fed the AD 2011 eruption of Puyehue-Cordón Caulle, Chile.
Alloway, B V; Pearce, N J G; Villarosa, G; Outes, V; Moreno, P I
2015-12-02
Within the volcanological community there is a growing awareness that many large- to small-scale, point-source eruptive events can be fed by multiple melt bodies rather than from a single magma reservoir. In this study, glass shard major- and trace-element compositions were determined from tephra systematically sampled from the outset of the Puyehue-Cordón Caulle (PCC) eruption (~1 km³) in southern Chile, which commenced on June 4th, 2011. Three distinct but cogenetic magma bodies were simultaneously tapped during the paroxysmal phase of this eruption. These are readily identified by clear compositional gaps in CaO, and by Sr/Zr and Sr/Y ratios, resulting from dominantly plagioclase extraction at slightly different pressures, with incompatible elements controlled by zircon crystallisation. Our results clearly demonstrate the utility of glass shard major- and trace-element data in defining the contribution of multiple magma bodies to an explosive eruption. The complex spatial association of the PCC fissure zone with the Liquiñe-Ofqui Fault zone was likely an influential factor that impeded the ascent of the parent magma and allowed the formation of discrete melt bodies within the sub-volcanic system that continued to independently fractionate.
Fining of Red Wine Monitored by Multiple Light Scattering.
Ferrentino, Giovanna; Ramezani, Mohsen; Morozova, Ksenia; Hafner, Daniela; Pedri, Ulrich; Pixner, Konrad; Scampicchio, Matteo
2017-07-12
This work describes a new approach based on multiple light scattering to study red wine clarification processes. The whole spectral signal (1933 backscattering points along the length of each sample vial) was fitted by a multivariate kinetic model built on a three-step mechanism: (1) adsorption of wine colloids to fining agents, (2) aggregation into larger particles, and (3) sedimentation. Each step is characterized by a reaction rate constant. Based on the first reaction, the results showed that gelatin was the most efficient fining agent with respect to the main objective, the clarification of the wine and the consequent increase in its limpidity. This trend was also discussed in relation to the results achieved by nephelometry, total phenols, ζ-potential, color, sensory, and electronic nose analyses. Also, higher concentrations of the fining agent (from 5 to 30 g/100 L) or higher temperatures (from 10 to 20 °C) sped up the process. Finally, the advantage of using the whole spectral signal vs classical univariate approaches was demonstrated by comparing the uncertainty associated with the rate constants of the proposed kinetic model. Overall, the multiple light scattering technique showed great potential for studying fining processes compared to classical univariate approaches.
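A three-step sequential kinetic mechanism of this general shape can be integrated numerically with one rate constant per step. The sketch below uses simple first-order steps and invented rate constants; it is a caricature of such a model, not the paper's fitted multivariate version:

```python
# Hedged sketch of a three-step sequential kinetic model (adsorption ->
# aggregation -> sedimentation), each step first order with its own rate
# constant. Rate values are illustrative, not fitted to the paper's data.
def fining_kinetics(k1, k2, k3, t_end=10.0, dt=0.001):
    # Fractions: free colloid, adsorbed, aggregated, sedimented
    c = [1.0, 0.0, 0.0, 0.0]
    t = 0.0
    while t < t_end:
        r1, r2, r3 = k1 * c[0], k2 * c[1], k3 * c[2]
        c[0] -= r1 * dt          # forward Euler step; mass is conserved
        c[1] += (r1 - r2) * dt
        c[2] += (r2 - r3) * dt
        c[3] += r3 * dt
        t += dt
    return c

state = fining_kinetics(k1=2.0, k2=1.0, k3=0.5)
```

Fitting the three rate constants to the full backscattering signal, rather than to a single summary value, is what shrinks their uncertainty in the multivariate approach.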
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for a maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points obtained through deterministic sampling, so that a linearization step is not necessary and the errors caused by linearization in the traditional extended Kalman filter (EKF) can be avoided. Nonlinear filters nevertheless suffer, to some extent, from the same problem as the EKF: uncertainty in the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of the process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in navigation estimation accuracy as compared to relatively conventional approaches such as the UKF and IMMUKF.
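The deterministic sampling step the abstract refers to generates 2n+1 sigma points from the state mean and covariance. A minimal sketch of the classic construction (with κ = 0), not the paper's full FUZZY-IMMUKF pipeline:

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Deterministic sigma-point set of the unscented transform: the mean
    plus symmetric points along the columns of a scaled matrix square root
    (a minimal sketch of the classic Julier-Uhlmann construction)."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [np.array(mean, dtype=float)]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    return np.array(pts)             # 2n + 1 points

mean = np.array([0.0, 1.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts = sigma_points(mean, cov)

# With kappa = 0, the centre point has zero weight and the rest share 1/(2n);
# the weighted points then reproduce the mean and covariance exactly.
weights = np.full(len(pts), 1.0 / (2 * len(mean)))
weights[0] = 0.0
```

Propagating these points through the nonlinear dynamics and re-forming weighted moments is what lets the UKF skip the EKF's Jacobian linearization.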
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions across genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in R and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
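Power for a single gene under a negative binomial count model can also be approximated by plain simulation. The sketch below is a deliberate simplification (a z-test on log counts with assumed mean, dispersion, and fold change), not the algorithm RnaSeqSampleSize implements:

```python
import numpy as np

rng = np.random.default_rng(4)

def nb_counts(mean, dispersion, size):
    """Negative binomial counts with the mean/dispersion parameterization
    used in RNA-Seq models: variance = mean + dispersion * mean^2."""
    n = 1.0 / dispersion
    p = n / (n + mean)
    return rng.negative_binomial(n, p, size)

def estimated_power(n_per_group, mean, dispersion, fold_change,
                    sims=2000):
    """Monte Carlo power for a two-group, one-gene comparison using a
    two-sided z-test on log-transformed counts (illustrative only)."""
    hits = 0
    for _ in range(sims):
        a = np.log1p(nb_counts(mean, dispersion, n_per_group))
        b = np.log1p(nb_counts(mean * fold_change, dispersion, n_per_group))
        se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
        z = abs(a.mean() - b.mean()) / se
        hits += z > 1.96
    return hits / sims

# Assumed design: 10 samples per group, mean count 50, dispersion 0.1,
# two-fold change between groups.
power = estimated_power(n_per_group=10, mean=50, dispersion=0.1, fold_change=2)
```

A genome-wide tool additionally has to integrate this per-gene calculation over the empirical distribution of counts and dispersions and control the false discovery rate, which is the gap RnaSeqSampleSize fills.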
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit resolution after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
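The quoted noise figures are consistent with the 1/√N scaling of averaging uncorrelated noise; the implied number of samplings (inferred here, not stated in the abstract) can be backed out directly:

```python
# Averaging N readings of uncorrelated noise scales the rms noise by
# 1/sqrt(N). The reported drop from 848.3 uV to 270.4 uV therefore implies
# roughly ten samplings (N is inferred here, not stated in the abstract).
noise_single = 848.3e-6   # V rms, single conversion
noise_multi = 270.4e-6    # V rms, with multiple sampling
n_implied = (noise_single / noise_multi) ** 2
```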
Higher Moments of Net-Kaon Multiplicity Distributions at STAR
NASA Astrophysics Data System (ADS)
Xu, Ji;
2017-01-01
Fluctuations of conserved quantities such as baryon number (B), electric charge number (Q), and strangeness number (S) are sensitive to the correlation length and can be used to probe non-Gaussian fluctuations near the critical point. Experimentally, higher moments of the multiplicity distributions have been used to search for the QCD critical point in heavy-ion collisions. In this paper, we report the efficiency-corrected cumulants and their ratios of mid-rapidity (|y| < 0.5) net-kaon multiplicity distributions in Au+Au collisions at √s_NN = 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV collected in 2010, 2011, and 2014 with STAR at RHIC. The centrality and energy dependence of the cumulants and their ratios are presented. Furthermore, comparisons with baseline calculations (Poisson) and non-critical-point models (UrQMD) are also discussed.
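The cumulants and ratios in such analyses are computed event-by-event from the net-kaon multiplicity sample. A sketch using a Skellam (difference of Poissons) toy sample as the Poisson-like baseline; this is the textbook moment arithmetic, not STAR's efficiency-corrected estimator:

```python
import numpy as np

def net_kaon_cumulants(counts):
    """First four cumulants of an event-by-event net-multiplicity sample,
    plus the ratios commonly compared against Poisson baselines."""
    x = np.asarray(counts, dtype=float)
    mu = x.mean()
    d = x - mu
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    c1, c2, c3, c4 = mu, m2, m3, m4 - 3.0 * m2**2
    return {"C1": c1, "C2": c2, "C3": c3, "C4": c4,
            "C3/C2": c3 / c2, "C4/C2": c4 / c2}

# Toy baseline: net kaons as a difference of independent Poisson yields
# (Skellam), for which C1 = C3 = 6 - 4 and C2 = C4 = 6 + 4.
rng = np.random.default_rng(1)
net_k = rng.poisson(6.0, 100000) - rng.poisson(4.0, 100000)
stats = net_kaon_cumulants(net_k)
```

Critical-point signals would show up as deviations of C3/C2 and C4/C2 from these Skellam/Poisson expectations.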
Benign multiple sclerosis: physical and cognitive impairment follow distinct evolutions.
Gajofatto, A; Turatti, M; Bianchi, M R; Forlivesi, S; Gobbin, F; Azzarà, A; Monaco, S; Benedetti, M D
2016-03-01
Benign multiple sclerosis (BMS) definitions rely on physical disability level but do not account sufficiently for cognitive impairment, which, however, is not rare. The aim was to study the evolution of physical disability and cognitive performance in a group of patients with BMS followed at a University Hospital Multiple Sclerosis Center. A consecutive sample of 24 BMS cases (diagnosis according to 2005 McDonald criteria, relapsing-remitting course, disease duration ≥ 10 years, and expanded disability status scale [EDSS] score ≤ 2.0) and 13 sex- and age-matched non-BMS patients, differing from BMS cases in having EDSS scores of 2.5-5.5, were included. Main outcome measures were as follows: (i) baseline and 5-year follow-up cognitive impairment, defined as failure of at least two tests of the administered neuropsychological battery; (ii) EDSS score worsening, defined as a confirmed increase ≥ 1 point (or 0.5 point if baseline EDSS score = 5.5). At inclusion, BMS subjects were 41 ± 8 years old and had a median EDSS score of 1.5 (range 0-2), while non-BMS patients were 46 ± 8 years old and had a median EDSS score of 3.0 (2.5-5.5). At baseline 16% of patients in both groups were cognitively impaired. After 5 years, the EDSS score worsened in 8% of BMS and 46% of non-BMS patients (P = 0.008), while the proportion of cognitively impaired subjects increased to 25% in both groups. Patients with BMS had a better physical disability outcome at 5 years compared to non-BMS cases. However, cognitive impairment frequency and decline over time appeared similar. Neuropsychological assessment is essential in patients with BMS given the distinct pathways followed by disease progression in the cognitive and physical domains. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Perkins, S. J.; Marais, P. C.; Zwart, J. T. L.; Natarajan, I.; Tasse, C.; Smirnov, O.
2015-09-01
We present Montblanc, a GPU implementation of the Radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. χ² values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model. As most of the elements of the RIME and χ² calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple χ² values. Modified model parameters are transferred to the GPU between each iteration. We implemented Montblanc as a Python package based upon NVIDIA's CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc's RIME implementation is performant: on an NVIDIA K40, it is approximately 250 times faster than MEQTREES on a dual hexacore Intel E5-2620v2 CPU. Compared to the OSKAR simulator's GPU-implemented RIME components, it is 7.7 and 12 times faster on the same K40 for single- and double-precision floating point, respectively. However, OSKAR's RIME implementation is more general than Montblanc's BIRO-tailored RIME. Theoretical analysis of Montblanc's dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in L1 and L2 caches.
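The likelihood step that drives the sampler is a noise-weighted χ² between observed and model visibilities. A minimal sketch with synthetic complex visibilities (the array size and noise level are illustrative, and the real computation runs per baseline, channel, and polarization on the GPU):

```python
import numpy as np

def chi_squared(observed, model, sigma):
    """Chi-squared between observed and model complex visibilities,
    each residual weighted by the per-visibility noise variance."""
    r = observed - model
    return float(np.sum((r * np.conj(r)).real / sigma**2))

# Synthetic model visibilities plus complex Gaussian noise of known sigma
rng = np.random.default_rng(2)
model = rng.normal(size=1000) + 1j * rng.normal(size=1000)
observed = model + 0.1 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
chi2 = chi_squared(observed, model, sigma=0.1)
```

For a correct noise model, each visibility contributes about 2 to the sum (one per real and imaginary part), which is a useful sanity check on the weighting.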
Tipping points? Curvilinear associations between activity level and mental development in toddlers.
Flom, Megan; Cohen, Madeleine; Saudino, Kimberly J
2017-05-01
The Theory of Optimal Stimulation (Zentall & Zentall, Psychological Bulletin, 94, 1983, 446) posits that the relation between activity level (AL) and cognitive performance follows an inverted U shape, where midrange AL predicts better cognitive performance than AL at the extremes. We explored this by fitting linear and quadratic models predicting mental development from AL assessed via multiple methods (parent ratings, observations, and actigraphs) and across multiple situations (laboratory play, laboratory test, home) in over 600 twins (2- and 3-year-olds). Only observed AL in the laboratory was curvilinearly related to mental development scores. Results replicated across situations, age, and twin samples, providing strong support for the optimal stimulation model for this measure of AL in early childhood. Different measures of AL provide different information. Observations of AL, which include both qualitative and quantitative aspects of AL within structured situations, are able to capture beneficial aspects of normative AL as well as detriments of both low and high AL. © 2016 Association for Child and Adolescent Mental Health.
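The linear-versus-quadratic comparison can be sketched by fitting both polynomials and checking the sign and contribution of the quadratic term. The data below are synthetic, with a built-in inverted U standing in for the AL/mental-development relation:

```python
import numpy as np

# Synthetic example of the model comparison: an inverted-U relation means
# the quadratic coefficient is negative and the quadratic fit beats the
# linear one. Data are invented, not the twin-study measurements.
rng = np.random.default_rng(3)
al = rng.uniform(-2, 2, 300)                    # standardized activity level
md = 100 - 4 * al**2 + rng.normal(0, 2, 300)    # inverted U plus noise

lin = np.polyfit(al, md, 1)                     # [slope, intercept]
quad = np.polyfit(al, md, 2)                    # [a2, a1, a0]
resid_lin = md - np.polyval(lin, al)
resid_quad = md - np.polyval(quad, al)
```

A significantly negative quadratic coefficient, together with a meaningful drop in residual variance, is the statistical signature of the "tipping point" pattern in the title.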
Cebolla, Ausiàs; Campos, Daniel; Galiana, Laura; Oliver, Amparo; Tomás, Jose Manuel; Feliu-Soler, Albert; Soler, Joaquim; García-Campayo, Javier; Demarzo, Marcelo; Baños, Rosa María
2017-03-01
Several meditation practices are associated with mindfulness-based interventions, but little is known about their specific effects on the development of different mindfulness facets. This study aimed to assess the relations among different practice variables, types of meditation, and mindfulness facets. The final sample was composed of 185 participants who completed an on-line survey, including information on the frequency and duration of each meditation practice, lifetime practice, and the Five Facet Mindfulness Questionnaire. A Multiple Indicators Multiple Causes structural model was specified, estimated, and tested. The model's overall fit was adequate: χ²(1045) = 1542.800, p < 0.001; CFI = 0.902; RMSEA = 0.042. Results revealed that mindfulness facets were uniquely related to the different variables and types of meditation. Our findings showed the importance of specific practices in promoting mindfulness, compared to compassion and informal practices, and they pointed out which one fits each mindfulness facet better. Copyright © 2017 Elsevier Inc. All rights reserved.
Fowler, Patrick J.; Motley, Darnell; Zhang, Jinjin; Rolls-Reutz, Jennifer; Landsverk, John
2018-01-01
In this longitudinal study, we tested whether adolescent maltreatment and out-of-home placement as a response to maltreatment altered developmental patterns of sexual risk behaviors in a nationally representative sample of youth involved in the child welfare system. Participants included adolescents aged 13 to 17 (M=15.5, SD=1.49) at baseline (n=714), followed over 18 months. Computer-assisted interviews were used to collect self-reported sexual practices and experiences of physical and psychological abuse at both time points. Latent transition analyses were used to identify three patterns of sexual risk behaviors: abstainers, safe sex with multiple partners, and unsafe sex with multiple partners. Most adolescents transitioned to safer sexual behavior patterns over time. Adolescents exhibiting the riskiest sexual practices at baseline were most likely to report subsequent abuse and less likely to be placed into out-of-home care. Findings provide a more nuanced understanding of sexual risk among child welfare–involved adolescents and inform practices to promote positive transitions within the system. PMID:25155702
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
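Procedure (c), Monte Carlo-plus-energy minimization, perturbs the current conformation, locally minimizes, and then Metropolis-accepts on the minimized energies (the scheme later popularized as basin hopping). A toy one-dimensional "conformational surface" sketch, not an actual polypeptide force field:

```python
import math
import random

def energy(x):
    """Toy rugged surface: quadratic envelope with sinusoidal wells."""
    return 0.1 * x * x + math.sin(3.0 * x)

def minimize(x, lr=0.01, steps=500):
    """Local energy minimization by gradient descent from x."""
    for _ in range(steps):
        g = 0.2 * x + 3.0 * math.cos(3.0 * x)   # dE/dx
        x -= lr * g
    return x

def basin_hop(x0, hops=300, step=2.5, temp=0.5, seed=5):
    """Monte Carlo-plus-minimization: perturb, minimize, Metropolis-accept
    on the minimized energies, tracking the best minimum found."""
    rng = random.Random(seed)
    x = minimize(x0)
    e = best_e = energy(x)
    best_x = x
    for _ in range(hops):
        trial = minimize(x + rng.uniform(-step, step))
        e_t = energy(trial)
        if e_t < e or rng.random() < math.exp((e - e_t) / temp):
            x, e = trial, e_t
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Start far from the global well (near x ~ -0.53 on this surface)
best_x, best_e = basin_hop(x0=3.0)
```

The minimization step collapses each sampled point onto its basin's floor, so the Metropolis walk effectively moves on a staircase of local minima rather than on the full rugged surface, which is what makes the approach effective on conformational energy landscapes.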
Arakawa, Mototaka; Shikama, Joe; Yoshida, Koki; Nagaoka, Ryo; Kobayashi, Kazuto; Saijo, Yoshifumi
2015-09-01
Biomechanics of the cell has been gathering much attention because it affects the pathological status in atherosclerosis and cancer. In the present study, an ultrasound microscope system combined with optical microscope for characterization of a single cell with multiple ultrasound parameters was developed. The central frequency of the transducer was 375 MHz and the scan area was 80 × 80 μm with up to 200 × 200 sampling points. An inverted optical microscope was incorporated in the design of the system, allowing for simultaneous optical observations of cultured cells. Two-dimensional mapping of multiple ultrasound parameters, such as sound speed, attenuation, and acoustic impedance, as well as the thickness, density, and bulk modulus of specimen/cell under investigation, etc., was realized by the system. Sound speed and thickness of a 3T3-L1 fibroblast cell were successfully obtained by the system. The ultrasound microscope system combined with optical microscope further enhances our understanding of cellular biomechanics.
Strong, James Asa; Elliott, Michael
2017-03-15
The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly introduce significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and are likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Extensive monitoring through multiple blood samples in professional soccer players.
Heisterberg, Mette F; Fahrenkrug, Jan; Krustrup, Peter; Storskov, Anders; Kjær, Michael; Andersen, Jesper L
2013-05-01
The aim of this study was to make a comprehensive gathering of consecutive detailed blood samples from professional soccer players and to analyze different blood parameters in relation to seasonal changes in training and match exposure. Blood samples were collected 5 times during a 6-month period and analyzed for 37 variables in 27 professional soccer players from the best Danish league. Additionally, the players were tested for body composition, V̇O2max, and physical performance by the Yo-Yo intermittent endurance submax test (IE2). Multiple variations in blood parameters occurred during the observation period, including a decrease in hemoglobin and an increase in hematocrit as the competitive season progressed. Iron and transferrin were stable, whereas ferritin showed a decrease at the end of the season. Immunoglobulin A (IgA) and IgM increased in the period with basal physical training and at the end of the season. Leucocytes decreased with increased physical training. Lymphocytes decreased at the end of the season. The V̇O2max decreased toward the end of the season, whereas no significant changes were observed in the IE2 test. The regular blood samples from elite soccer players reveal significant changes that may be related to changes in training pattern, match exposure, or length of the match season. The end of the preparation season and the end of the competitive season, in particular, seem to be time points where the blood-derived values indicate that the players are under excessive physical strain and might be more subject to possible overreaching or overtraining conditions. We suggest that regular analyses of blood samples could be an important initiative to optimize training adaptation, training load, and game participation, but sampling has to be regular, and a database has to be built for each individual player.
Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena
2009-01-30
results are scaled as floating point operations per second, obtained by counting the number of floating point additions and multiplications in the...black horizontal line. Perhaps the most striking feature at first is the fact that the memory bandwidth measured for flux lifting transcends this...theoretical peak performance values. For a suitable CPU-limited workload, this means that a single workstation equipped with multiple GPUs can do work that
Root Kustritz, Margaret V
2014-01-01
Third-year veterinary students in a required theriogenology diagnostics course were allowed to self-select attendance at a lecture in either the evening or the next morning. One group was presented with PowerPoint slides in a traditional format (T group), and the other group was presented with PowerPoint slides in the assertion-evidence format (A-E group), which uses a single sentence and a highly relevant graphic on each slide to ensure attention is drawn to the most important points in the presentation. Students took a multiple-choice pre-test, attended lecture, and then completed a take-home assignment. All students then completed an online multiple-choice post-test and, one month later, a different online multiple-choice test to evaluate retention. Groups did not differ on pre-test, assignment, or post-test scores, and both groups showed significant gains from pre-test to post-test and from pre-test to retention test. However, the T group showed a significant decline from post-test to retention test, while the A-E group did not. Short-term differences between slide designs were most likely masked by the required coursework immediately after lecture, but retention of material was superior with the assertion-evidence slide design.
A Portable, Shock-Proof, Surface-Heated Droplet PCR System for Escherichia coli Detection
Angus, Scott V.; Cho, Soohee; Harshman, Dustin K.; Song, Jae-Young; Yoon, Jeong-Yeol
2015-01-01
A novel polymerase chain reaction (PCR) device was developed that uses wire-guided droplet manipulation (WDM) to guide a droplet over three different heating chambers. After PCR amplification, end-point detection is achieved using a smartphone-based fluorescence microscope. The device was tested for identification of the 16S rRNA gene V3 hypervariable region from Escherichia coli genomic DNA. The lower limit of detection was 10³ genome copies per sample. The device is portable with smartphone-based end-point detection and provides the assay results quickly (15 min for a 30-cycle amplification) and accurately. The system is also shock and vibration resistant, due to the multiple points of contact between the droplet and the thermocouple and the Teflon film on the heater surfaces. The thermocouple also provides real-time droplet temperature feedback to ensure it reaches the set temperature before moving to the next chamber/step in PCR. The device is equipped to use either silicone oil or coconut oil. Coconut oil provides additional portability and ease of transportation by eliminating spilling because its high melting temperature means it is solid at room temperature. PMID:26164008
Point Analysis in Java applied to histological images of the perforant pathway: a user's account.
Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán
2008-01-01
The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (x2 objective) comprised the entire perforant pathway, while the high magnification set (x100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as for visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
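The N-mixture framework above marginalizes a binomial detection model over a Poisson abundance prior, using repeated counts at the same sites. The following is a minimal illustrative sketch (not the authors' code): the per-site marginal likelihood is summed over the latent abundance, and a crude grid search stands in for a proper optimizer.

```python
import math

def site_loglik(counts, lam, p, n_max=50):
    # Marginal log-likelihood of repeated counts at one site:
    # sum over latent abundance N of Poisson(N; lam) * prod_t Binomial(y_t; N, p).
    total = 0.0
    for n in range(max(counts), n_max + 1):
        lik = math.exp(-lam) * lam ** n / math.factorial(n)  # Poisson prior on N
        for y in counts:
            lik *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)  # detection model
        total += lik
    return math.log(total)

def fit_grid(count_sets, lam_grid, p_grid):
    # Crude grid-search MLE over (lam, p); a real analysis would use a numerical optimizer.
    best = None
    for lam in lam_grid:
        for p in p_grid:
            ll = sum(site_loglik(c, lam, p) for c in count_sets)
            if best is None or ll > best[0]:
                best = (ll, lam, p)
    return best[1], best[2]
```

For counts of 6–8 animals per image at a site, the likelihood strongly favors a detection probability near 0.75 over an implausibly low one, which is the information the full model exploits.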
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. 
Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
Positron Emission Mammography with Multiple Angle Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark F. Smith; Stan Majewski; Raymond R. Raylman
2002-11-01
Positron emission mammography (PEM) of F-18 fluorodeoxyglucose (FDG) uptake in breast tumors with dedicated detectors typically has been accomplished with two planar detectors in a fixed position with the breast under compression. The potential use of PEM imaging at two detector positions to guide stereotactic breast biopsy has motivated us to use PEM coincidence data acquired at two or more detector positions together in a single image reconstruction. Multiple angle PEM acquisition and iterative image reconstruction were investigated using point source and compressed breast phantom acquisitions with 5, 9, 12 and 15 mm diameter spheres and a simulated tumor:background activity concentration ratio of 6:1. Image reconstruction was performed with an iterative MLEM algorithm that used coincidence events between any two detector pixels on opposed detector heads at each detector position. The present study compared two acquisition protocols: 2 angle acquisition with detector angular positions of -15 and +15 degrees and 11 angle acquisition with detector positions spaced at 3 degree increments over the range -15 to +15 degrees. Three-dimensional image resolution was assessed for the point source acquisitions, and contrast and signal-to-noise metrics were evaluated for the compressed breast phantom with different simulated tumor sizes. Radial and tangential resolutions were similar for the two protocols, while normal resolution was better for the 2 angle acquisition. Analysis is complicated by the asymmetric point spread functions. Signal-to-noise vs. contrast tradeoffs were better for 11 angle acquisition for the smallest visible 9 mm sphere, while tradeoff results were mixed for the larger and more easily visible 12 mm and 15 mm diameter spheres. Additional study is needed to better understand the performance of limited angle tomography for PEM. PEM tomography experiments with complete angular sampling are planned.
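The iterative MLEM reconstruction mentioned above is built on a simple multiplicative update: forward-project the current estimate, compare with the measured counts, and back-project the ratios. This is a bare-bones sketch on a toy system matrix, not the PEM detector geometry.

```python
def mlem(A, y, iters=100):
    # Classic MLEM multiplicative update for y ≈ A x with nonnegative x.
    # A: list of measurement rows; y: measured counts; returns estimated x.
    m, n = len(A), len(A[0])
    x = [1.0] * n  # uniform positive start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivity image
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # forward projection
        ratio = [y[i] / proj[i] for i in range(m)]                        # measured / estimated
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # back-projection
        x = [x[j] * back[j] / sens[j] for j in range(n)]                  # multiplicative update
    return x
```

On consistent data with an exact solution, the iterates converge to the true activity values, which is why the update is a standard workhorse for emission tomography.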
Boycheva, Elina; Contador, Israel; Fernández-Calvo, Bernardino; Ramos-Campos, Francisco; Puertas-Martín, Verónica; Villarejo-Galende, Alberto; Bermejo-Pareja, Félix
2018-06-01
We aimed to analyse the clinical utility of the Mattis Dementia Rating Scale (MDRS-2) for early detection of Alzheimer's disease (AD) and amnestic mild cognitive impairment (MCI) in a sample of Spanish older adults. A total of 125 participants (age = 75.12 ± 6.83, years of education = 7.08 ± 3.57) were classified into three diagnostic groups: 45 patients with mild AD, 37 with amnestic MCI (single and multiple domain), and 43 cognitively healthy controls (HCs). Reliability, criterion validity and diagnostic accuracy of the MDRS-2 (total and subscales) were analysed. The MDRS-2 scores, adjusted by socio-demographic characteristics, were calculated through hierarchical multiple regression analysis. The global scale had adequate reliability (α = 0.736) and good criterion validity (r = 0.760, p < .001) with the Mini-Mental State Examination. The optimal cut-off point between AD patients and HCs was 124 (sensitivity [Se] = 97% and specificity [Sp] = 95%), whereas 131 (Se = 89%, Sp = 81%) was the optimal cut-off point between MCI and HCs. An optimal cut-off point of 123 had good Se (0.97), but poor Sp (0.56) to differentiate AD and MCI groups. The Memory and Initiation/Perseveration subscales had the highest discriminative capacity between the groups. The MDRS-2 is a reliable and valid instrument for the assessment of cognitive impairment in Spanish older adults. In particular, optimal capacity emerged for the detection of early AD and MCI. Copyright © 2017 John Wiley & Sons, Ltd.
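The sensitivity and specificity figures quoted for each cut-off point follow from a simple 2×2 classification of scores against diagnoses. A minimal sketch (the abstract does not specify how ties at the cut-off are handled; here scores strictly below the cut-off are flagged as impaired, and the data are hypothetical):

```python
def se_sp(scores, labels, cutoff):
    # Sensitivity and specificity of a lower-is-worse screening score.
    # labels: 1 = impaired (e.g. AD), 0 = healthy control.
    tp = sum(1 for s, l in zip(scores, labels) if s < cutoff and l == 1)  # impaired, flagged
    fn = sum(1 for s, l in zip(scores, labels) if s >= cutoff and l == 1)  # impaired, missed
    tn = sum(1 for s, l in zip(scores, labels) if s >= cutoff and l == 0)  # healthy, cleared
    fp = sum(1 for s, l in zip(scores, labels) if s < cutoff and l == 0)  # healthy, flagged
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cut-off over the observed score range and picking the value that best balances the two rates is the usual way such "optimal" cut-offs are chosen.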
Novel laboratory methods for determining the fine scale electrical resistivity structure of core
NASA Astrophysics Data System (ADS)
Haslam, E. P.; Gunn, D. A.; Jackson, P. D.; Lovell, M. A.; Aydin, A.; Prance, R. J.; Watson, P.
2014-12-01
High-resolution electrical resistivity measurements are made on saturated rocks using novel laboratory instrumentation and multiple electrical voltage measurements involving in principle a four-point electrode measurement but with a single, moving electrode. Flat, rectangular core samples are scanned by varying the electrode position over a range of hundreds of millimetres with an accuracy of a tenth of a millimetre. Two approaches are tested involving a contact electrode and a non-contact electrode arrangement. The first galvanic method uses balanced cycle switching of a floating direct current (DC) source to minimise charge polarisation effects masking the resistivity distribution related to fine scale structure. These contacting electrode measurements are made with high common mode noise rejection via differential amplification with respect to a reference point within the current flow path. A computer based multifunction data acquisition system logs the current through the sample and voltages along equipotentials from which the resistivity measurements are derived. Multiple measurements are combined to create images of the surface resistivity structure, with variable spatial resolution controlled by the electrode spacing. Fine scale sedimentary features and open fractures in saturated rocks are interpreted from the measurements with reference to established relationships between electrical resistivity and porosity. Our results successfully characterise grainfall lamination and sandflow cross-stratification in a brine saturated, dune bedded core sample representative of a southern North Sea reservoir sandstone, studied using the system in constant current, variable voltage mode. In contrast, in a low porosity marble, identification of open fracture porosity against a background very low matrix porosity is achieved using the constant voltage, variable current mode. 
This new system is limited by the diameter of the electrode that for practical reasons can only be reduced to between 0.5 and 0.75 mm. Improvements to this resolution may be achieved by further reducing the electrode footprint to 0.1 mm × 0.1 mm using a novel high-impedance, non-contact potential probe. Initial results with this non-contact electric potential sensor indicate the possibility for generating images with grain-scale resolution.
Zeiger, Sean; Hubbart, Jason A
2016-01-15
Suspended sediment (SS) remains the most pervasive water quality problem globally and yet, despite progress, SS process understanding remains relatively poor in watersheds with mixed-land-use practices. The main objective of the current work was to investigate relationships between suspended sediment and land use types at multiple spatial scales (n=5) using four years of suspended sediment data collected in a representative urbanized mixed-land-use (forest, agriculture, urban) watershed. Water samples were analyzed for SS using a nested-scale experimental watershed study design (n=836 samples×5 gauging sites). Kruskal-Wallis and Dunn's post-hoc multiple comparison tests were used to test for significant differences (CI=95%, p<0.05) in SS levels between gauging sites. Climate extremes (high precipitation/drought) were observed during the study period. Annual maximum SS concentrations exceeded 2387.6 mg/L. Median SS concentrations decreased by 60% from the agricultural headwaters to the rural/urban interface, and increased by 98% as urban land use increased. Multiple linear regression analysis results showed significant relationships between SS, annual total precipitation (positive correlate), forested land use (negative correlate), agricultural land use (negative correlate), and urban land use (negative correlate). Estimated annual SS yields ranged from 16.1 to 313.0 t km(-2) year(-1), mainly due to differences in annual total precipitation. Results highlight the need for additional studies and for improved best management practices designed to reduce anthropogenic SS loading in mixed-land-use watersheds. Copyright © 2015 Elsevier B.V. All rights reserved.
Keyes, Katherine M.; Pratt, Charissa; Galea, Sandro; McLaughlin, Katie A.; Koenen, Karestan C.; Shear, M. Katherine
2014-01-01
Background: Unexpected death of a loved one is common and associated with subsequent elevations in symptoms of multiple forms of psychopathology. Determining whether this experience predicts novel onset of psychiatric disorders, and whether these associations vary across the life course, has important clinical implications.
Aims: To examine associations of a loved one's unexpected death with first onset of common anxiety, mood, and substance disorders in a population-based sample.
Methods: The relation between unexpected death and first onset of lifetime DSM-IV disorders was estimated using a structured interview of adults in the US general population (analytic sample size = 27,534). Models controlled for prior occurrence of any disorder, other traumatic event experiences, and demographics.
Results: Unexpected death was the most common traumatic experience and the most likely to be rated as the respondent's worst, regardless of other traumatic experiences. Increased incidence after unexpected death was observed at every point across the life course for major depressive episodes, panic disorder, and post-traumatic stress disorder. Increased incidence was clustered in later adult age groups for manic episodes, phobias, alcohol disorders, and generalized anxiety disorder.
Conclusions: The bereavement period is associated with elevated risk for the onset of multiple psychiatric disorders, consistently across the life course and coincident with the experience of the loved one's death. Novel associations between unexpected death and onset of several disorders, including mania, confirm multiple case reports and small studies, and suggest an important emerging area for clinical research and practice. PMID:24832609
Learmonth, Yvonne C; Dlugonski, Deirdre D; Pilutti, Lara A; Sandroff, Brian M; Motl, Robert W
2013-11-01
Assessing walking impairment in those with multiple sclerosis (MS) is common; however, little is known about the reliability, precision, and clinically important change of walking outcomes. The purpose of this study was to determine the reliability, precision, and clinically important change of the Timed 25-Foot Walk (T25FW), Six-Minute Walk (6MW), Multiple Sclerosis Walking Scale-12 (MSWS-12), and accelerometry. Data were collected from 82 persons with MS at two time points, six months apart. Analyses were undertaken for the whole sample and stratified based on disability level and usage of walking aids. Intraclass correlation coefficient (ICC) analyses established reliability; standard error of measurement (SEM) and coefficient of variation (CV) determined precision; and minimal detectable change (MDC) defined clinically important change. All outcome measures were reliable, with precision and MDC varying between measures in the whole sample: T25FW: ICC=0.991; SEM=1 s; CV=6.2%; MDC=2.7 s (36%); 6MW: ICC=0.959; SEM=32 m; CV=6.2%; MDC=88 m (20%); MSWS-12: ICC=0.927; SEM=8; CV=27%; MDC=22 (53%); accelerometry counts/day: ICC=0.883; SEM=28,450; CV=17%; MDC=78,860 (52%); accelerometry steps/day: ICC=0.907; SEM=726; CV=16%; MDC=2,011 (45%). Variation in these estimates was seen based on disability level and walking aid. The reliability of these outcomes is good and falls within acceptable ranges. Precision and clinically important change estimates provide guidelines for interpreting these outcomes in clinical and research settings.
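The precision and clinically important change statistics reported above follow the standard formulas SEM = SD·√(1−ICC) and MDC95 = 1.96·√2·SEM. A quick sketch (the formulas are standard; the abstract does not give the sample SDs, so any SD passed in is hypothetical):

```python
import math

def sem_from_sd(sd, icc):
    # Standard error of measurement from the sample SD and test-retest ICC.
    return sd * math.sqrt(1 - icc)

def mdc95(sem):
    # Minimal detectable change at the 95% confidence level.
    return 1.96 * math.sqrt(2) * sem
```

Plugging in the reported SEMs reproduces the abstract's MDC values: mdc95(1.0) ≈ 2.77 s for the T25FW (reported as 2.7 s) and mdc95(32) ≈ 88.7 m for the 6MW (reported as 88 m).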
Alghnam, Suliman; Castillo, Renan
2017-04-01
Although opioid abuse is a rising epidemic in the USA, there are no studies to date on the incidence of persistent opioid use following injuries. Therefore, the aims of this study are: (1) to examine the incidence of persistent opioid use among a nationally representative sample of injured and non-injured populations; (2) to evaluate whether an injury is an independent predictor of persistent opioid use. Data from the Medical Expenditure Panel Survey were pooled (years 2009-2012). Adults were followed for about 2 years, during which they were surveyed about injury status and opioid use every 4-5 months. To determine whether injuries are associated with persistent opioid use, weighted multiple logistic regressions were constructed. While 2.3 million injured individuals received any opioid during the follow-up, 371,170 (15.6%) individuals became persistent opioid users (defined as opioid use across multiple time points). In a multiple logistic regression analysis adjusting for sociodemographic characteristics and self-reported health, those who sustained injuries were 1.4 times (95% CI 1.1 to 1.9) more likely to report persistent opioid use than those without injuries. We found injuries to be significantly associated with persistent opioid use in a nationally representative sample. Further investment in injury prevention may facilitate reduction of persistent opioid use and, thus, improve population health and reduce health expenditures. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Knowledge, data and interests: Challenges in participation of diverse stakeholders in HIA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negev, Maya, E-mail: negevm@bgu.ac.il
2012-02-15
Stakeholder participation is considered an integral part of HIA. However, the challenges that participation implies in a multi-disciplinary and multi-ethnic society are less studied. This paper presents the manifestations of the multiplicity of sectors and population groups in HIA and discusses the challenges that such diversity imposes. Specifically, there is no common ground between participants, as their positions entail contradictory knowledge regarding the current situation, reliance on distinct data and conflicting interests. This entails usage of multiple professional and ethnic languages, disagreements regarding the definition of health and prioritizing health issues in HIA, and divergent perceptions of risk. These differences between participants are embedded culturally, socially, individually and, maybe most importantly, professionally. This complex picture of diverse stakeholder attributes is grounded in a case study of stakeholder participation in HIA, regarding zoning of a hazardous industry site in Israel. The implication is that participatory HIAs should address the multiplicity of stakeholders and types of knowledge, data and interests in a more comprehensive way.
Highlights:
- This paper analyses challenges in participation of diverse stakeholders in HIA.
- The multiplicity of disciplines and population groups raises fundamental challenges.
- Stakeholders possess distinct and often contradictory knowledge, data and interests.
- They speak different languages, and differ on approaches to health and risk perceptions.
- Substantial amendments to diverse participation are needed, in HIA and generally.
Murillo, Gabriel H; You, Na; Su, Xiaoquan; Cui, Wei; Reilly, Muredach P; Li, Mingyao; Ning, Kang; Cui, Xinping
2016-05-15
Single nucleotide variant (SNV) detection procedures are being utilized as never before to analyze the recent abundance of high-throughput DNA sequencing data, both on single and multiple sample datasets. Building on previously published work with the single sample SNV caller genotype model selection (GeMS), a multiple sample version of GeMS (MultiGeMS) is introduced. Unlike other popular multiple sample SNV callers, the MultiGeMS statistical model accounts for enzymatic substitution sequencing errors. It also addresses the multiple testing problem endemic to multiple sample SNV calling and utilizes high performance computing (HPC) techniques. A simulation study demonstrates that MultiGeMS ranks highest in precision among a selection of popular multiple sample SNV callers, while showing exceptional recall in calling common SNVs. Further, both simulation studies and real data analyses indicate that MultiGeMS is robust to low-quality data. We also demonstrate that accounting for enzymatic substitution sequencing errors not only improves SNV call precision at low mapping quality regions, but also improves recall at reference allele-dominated sites with high mapping quality. The MultiGeMS package can be downloaded from https://github.com/cui-lab/multigems. Contact: xinping.cui@ucr.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
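The abstract notes that MultiGeMS addresses the multiple testing problem of calling SNVs across many sites, but does not spell out its correction. As a generic illustration of that problem (not necessarily MultiGeMS's own procedure), the classic Benjamini–Hochberg step-up rule controls the false discovery rate across many per-site tests:

```python
def benjamini_hochberg(pvals, q=0.05):
    # BH step-up: find the largest rank k with p_(k) <= k*q/m,
    # then reject the k hypotheses with the smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p-value
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return {order[i] for i in range(k)}  # indices of rejected hypotheses
```

Compared with a Bonferroni bound, this keeps more true positives when many sites genuinely vary, which matters when millions of genomic positions are tested at once.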
Direct sampling from N dimensions to N dimensions applied to porous media
NASA Astrophysics Data System (ADS)
Adler, Pierre; Nguyen, Thang; Coelho, Daniel; Robinet, Jean Charles; Wendling, Jacques
2014-05-01
The reconstruction of porous media starting from some experimental data is still a very challenging problem in terms of random geometry, and a very attractive one because of its innumerable industrial applications. The developments of Computed Microtomography (CMT) have not diminished the need for reconstruction methods, and the availability of three-dimensional data has considerably facilitated the reconstruction of porous media. In the past, several techniques were used, such as thresholded Gaussian fields [1], simulated annealing [2] and Boolean models where polydisperse and penetrable spheres are generated randomly (see [3] for a combination with correlation functions). Recently, [4] developed the Direct Sampling method (DSM) as an alternative to multiple-point simulations. The purpose of the present work is to develop DSM and to apply it to the reconstruction of porous media made of one or several minerals [5]. Application of this method requires only a sample of the medium to be reproduced, called the Training Image (TI). The main feature of DSM can be summarized as follows. Suppose that n points (x1,…,xn) are already known in the Simulated Medium (SM) and that one wants to determine the value of an extra point x; the TI is searched in order to find a configuration (y1,…,yn) where these points have the same colors and relative positions as (x1,…,xn) in the SM; then, the value of the point y in the TI which is in the same relative position with respect to (y1,…,yn) as x with respect to (x1,…,xn) is given to x in the SM. The algorithm and its main features are briefly described. Important advantages of DSM are that it can easily generate media with several phases which are spatially periodic or not. The searching process - i.e. the selected points y in the TI and the corresponding determined points x in the SM - will be illustrated by some short movies.
The properties of the resulting SMs (such as the phase probabilities and the correlation functions) will be qualitatively and quantitatively compared to the ones of the TI. The major numerical parameters which influence the results and the calculation time are the size of the TI, the radius of the selection window and the acceptance threshold. They are studied and recommendations are made for their choice. For instance, the size of the TI should be at least twice the largest correlation length found in it. Some features necessitate a special analysis, such as the number of isolated points of one phase in another phase, the influence of the choice of the initial points, the influence of a modified voxel in the course of the simulation and the generation of phases with a small probability in the TI. For the real TIs that were analysed, the number of isolated points was always smaller than 0.5%; they can be suppressed with a very small influence on the statistical characteristics of the SM. The choice of the initial points has no consequences in a statistical sense. Finally, some initial tests show that the permeabilities of simulated samples and of the TI are close. REFERENCES: [1] Adler, P.M., Jacquin, C.G. & Quiblier, J.A., Int. J. Multiphase Flow, 16 (1990), 691. [2] Hazlett, R.D., Math. Geol., 29 (1997), 801. [3] Thovert, J.-F. & Adler, P.M., Phys. Rev. E, 83 (2011), 031104. [4] Mariethoz, G., Renard, P. & Straubhaar, J., Water Resour. Res., 46, doi:10.1029/2008WR007621 (2010). [5] Nguyen Kim, T., Direct sampling applied to porous media. Ph.D. Thesis, University P. and M. Curie, Paris (2013).
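The core DSM step described above (scan the TI for a location whose neighborhood matches the known points around x, then copy the central value) can be sketched for a binary 2D image as follows. This is a bare-bones illustration with an exact-match criterion (an acceptance threshold of zero) and none of the performance machinery of the real method.

```python
import random

def direct_sample(ti, known, offsets, tries=500):
    """Pick a value for an unknown point given its known neighbors.

    ti      : 2D list of lists (the training image)
    known   : phase values already simulated at the given relative offsets
    offsets : list of (dy, dx) relative positions of those known points
    tries   : number of random TI locations scanned before giving up
    """
    h, w = len(ti), len(ti[0])
    for _ in range(tries):
        y, x = random.randrange(h), random.randrange(w)  # candidate center in the TI
        match = True
        for (dy, dx), v in zip(offsets, known):
            yy, xx = y + dy, x + dx
            if not (0 <= yy < h and 0 <= xx < w) or ti[yy][xx] != v:
                match = False  # neighborhood disagrees with the simulated pattern
                break
        if match:
            return ti[y][x]  # copy the TI's central value into the SM
    return None  # no acceptable configuration found
```

For a TI of vertical stripes, a point whose left neighbor is phase 0 is always assigned phase 1, so the simulation reproduces the TI's structure rather than drawing phases independently.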
Reanalysis of RNA-Sequencing Data Reveals Several Additional Fusion Genes with Multiple Isoforms
Kangaspeska, Sara; Hultsch, Susanne; Edgren, Henrik; Nicorici, Daniel; Murumägi, Astrid; Kallioniemi, Olli
2012-01-01
RNA-sequencing and tailored bioinformatic methodologies have paved the way for identification of expressed fusion genes from the chaotic genomes of solid tumors. We have recently successfully exploited RNA-sequencing for the discovery of 24 novel fusion genes in breast cancer. Here, we demonstrate the importance of continuous optimization of the bioinformatic methodology for this purpose, and report the discovery and experimental validation of 13 additional fusion genes from the same samples. Integration of copy number profiling with the RNA-sequencing results revealed that the majority of the gene fusions were promoter-donating events that occurred at copy number transition points or involved high-level DNA-amplifications. Sequencing of genomic fusion break points confirmed that DNA-level rearrangements underlie selected fusion transcripts. Furthermore, a significant portion (>60%) of the fusion genes were alternatively spliced. This illustrates the importance of reanalyzing sequencing data as gene definitions change and bioinformatic methods improve, and highlights the previously unforeseen isoform diversity among fusion transcripts. PMID:23119097
Eta Carinae: Viewed from Multiple Vantage Points
NASA Technical Reports Server (NTRS)
Gull, Theodore
2007-01-01
The central source of Eta Carinae and its ejecta is a massive binary system buried within a massive interacting wind structure that envelops the two stars. The hot, less massive companion blows a small cavity in the very massive primary wind and ionizes a portion of that wind just beyond the wind-wind boundary. We gain insight into this complex structure by examining the spatially resolved Space Telescope Imaging Spectrograph (STIS) spectra of the central source (0.1") together with the wind structure, which extends out to nearly an arcsecond (2300 AU), the wind-blown boundaries, and the ejecta of the Little Homunculus. Moreover, the spatially resolved Very Large Telescope/UltraViolet Echelle Spectrograph (VLT/UVES) stellar spectrum (one arcsecond) and spatially sampled spectra across the foreground lobe of the Homunculus provide vantage points from different angles relative to the line of sight. Examples of wind line profiles of Fe II, of the highly excited [Fe III], [Ne III], [Ar III] and [S III], and of other lines will be presented.
Rapid non-invasive tests for diagnostics of infectious diseases
NASA Astrophysics Data System (ADS)
Malamud, Daniel
2014-06-01
A rapid test for an infectious disease that can be used at the point of care (a physician's office, a pharmacy, or the field) is critical for prompt and appropriate therapeutic intervention. Ultimately, treating infections early will decrease transmission of the pathogen. In contrast to metabolic diseases or cancer, where multiple biomarkers are required, infectious disease targets (e.g., antigen, antibody, nucleic acid) are simple and specific for the pathogen causing the disease. Our laboratory has focused on three major infectious diseases: HIV, tuberculosis, and malaria. These diseases are pandemic in much of the world, putting natives, tourists, and military personnel at risk of becoming infected and, upon returning to the U.S., of transmitting them to their contacts. Our devices are designed to detect antigens, antibodies, or nucleic acids in blood or saliva samples in less than 30 minutes. An overview describing the current status of each of the three diagnostic platforms is presented. These microfluidic point-of-care devices will be relatively inexpensive, disposable, and user friendly.
A study of model deflection measurement techniques applicable within the national transonic facility
NASA Technical Reports Server (NTRS)
Hildebrand, B. P.; Doty, J. L.
1982-01-01
Moire contouring, scanning interferometry, and holographic contouring were examined to determine their practicality and potential to meet performance requirements for a model deflection sensor. The system envisioned is to be nonintrusive and capable of mapping or contouring the surface of a 1-meter by 1-meter model with a resolution of 50 to 100 points. The available literature was surveyed, and computations and analyses were performed to establish specific performance requirements, as well as the capabilities and limitations of such a sensor within the geometry of the NTF test section. Of the three systems examined, holographic contouring offers the most promise. Unlike Moire contouring, it is not hampered by limited contour spacing and extraneous fringes. Its transverse resolution can far exceed the limited point-sampling resolution of scanning heterodyne interferometry. The availability of the ruby laser as a high-power, pulsed, multiple-wavelength source makes such a system feasible within the NTF.
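For context, the link between a multiple-wavelength source and contour spacing follows from the standard two-wavelength holographic contouring relation (a textbook result, not a formula taken from the report itself): recording at wavelengths $\lambda_1$ and $\lambda_2$ produces depth contours separated by half the synthetic wavelength,

$$\Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert}, \qquad \Delta z = \frac{\Lambda}{2},$$

so two closely spaced laser lines yield a large synthetic wavelength and hence the coarse, well-separated depth contours appropriate for surveying a 1-meter model.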
Cui, Shan; He, Lan -Po; Hong, Xiao -Chen; ...
2016-06-09
It was found that selenium doping can suppress the charge-density-wave (CDW) order and induce bulk superconductivity in ZrTe3. The observed superconducting dome suggests the existence of a CDW quantum critical point (QCP) in ZrTe3-xSex near x ≈ 0.04. To elucidate the superconducting state near the CDW QCP, we measure the thermal conductivity of two ZrTe3-xSex single crystals (x = 0.044 and 0.051) down to 80 mK. For both samples, the residual linear term κ0/T at zero field is negligible, which is clear evidence for a nodeless superconducting gap. Furthermore, the field dependence of κ0/T manifests multigap behavior. These results demonstrate multiple nodeless superconducting gaps in ZrTe3-xSex, which indicates conventional superconductivity despite the existence of a CDW QCP.
Fast, axis-agnostic, dynamically summarized storage and retrieval for mass spectrometry data.
Handy, Kyle; Rosen, Jebediah; Gillan, André; Smith, Rob
2017-01-01
Mass spectrometry (MS), a popular technique for elucidating the molecular contents of experimental samples, creates data sets comprising millions of three-dimensional (m/z, retention time, intensity) data points that correspond to the types and quantities of the analyzed molecules. Open and commercial MS data formats are arranged by retention time, creating latency when accessing data across multiple m/z values. Existing MS storage and retrieval methods have been developed to overcome the limitations of retention-time-based data formats, but do not provide certain features, such as dynamic summarization and storage and retrieval of point metadata (such as signal cluster membership), precluding efficient viewing applications and certain data-processing approaches. This manuscript describes MzTree, a spatial database designed to provide real-time storage and retrieval of dynamically summarized standard and augmented MS data, with fast performance in both the m/z and RT directions. Performance is reported on real data, with comparisons against related published retrieval systems.
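To make the retrieval problem concrete, the sketch below shows a windowed query over (m/z, RT, intensity) points. This is a minimal illustration of axis-agnostic range retrieval, not the MzTree implementation or its API; the class and method names are invented, and a real system would use a spatial tree rather than a sorted list.

```python
# Minimal sketch: index MS points so rectangular (m/z, RT) window
# queries are fast. Hypothetical names; not the MzTree API.
from bisect import bisect_left, bisect_right
from typing import List, Tuple

Point = Tuple[float, float, float]  # (mz, rt, intensity)

class MzRtIndex:
    def __init__(self, points: List[Point]):
        # Sort once by m/z so every m/z range becomes a contiguous slice.
        self.points = sorted(points)
        self.mzs = [p[0] for p in self.points]

    def window(self, mz_lo: float, mz_hi: float,
               rt_lo: float, rt_hi: float) -> List[Point]:
        # Binary-search the m/z bounds, then filter by retention time.
        i = bisect_left(self.mzs, mz_lo)
        j = bisect_right(self.mzs, mz_hi)
        return [p for p in self.points[i:j] if rt_lo <= p[1] <= rt_hi]

idx = MzRtIndex([(100.1, 5.0, 3e4), (100.2, 9.0, 1e5), (250.7, 5.1, 2e4)])
print(idx.window(100.0, 101.0, 4.0, 6.0))  # -> [(100.1, 5.0, 30000.0)]
```

Sorting on one axis makes queries along that axis cheap while the other axis is filtered linearly; a 2D structure (as a spatial database uses) avoids that asymmetry.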
Inspection with Robotic Microscopic Imaging
NASA Technical Reports Server (NTRS)
Pedersen, Liam; Deans, Matthew; Kunz, Clay; Sargent, Randy; Chen, Alan; Mungas, Greg
2005-01-01
Future Mars rover missions will require more advanced onboard autonomy for increased scientific productivity and reduced mission operations cost. One such form of autonomy can be achieved by targeting precise science measurements to be made in a single command uplink cycle. In this paper we present an overview of our solution to the subproblems of navigating a rover into place for microscopic imaging, mapping an instrument target point selected by an operator using far away science camera images to close up hazard camera images, verifying the safety of placing a contact instrument on a sample or finding nearby safe points, and analyzing the data that comes back from the rover. The system developed includes portions used in the Multiple Target Single Cycle Instrument Placement demonstration at NASA Ames in October 2004, and portions of the MI Toolkit delivered to the Athena Microscopic Imager Instrument Team for the MER mission still operating on Mars today. Some of the component technologies are also under consideration for MSL mission infusion.
NASA Astrophysics Data System (ADS)
Blancquaert, Yoann; Dezauzier, Christophe; Depre, Jerome; Miqyass, Mohamed; Beltman, Jan
2013-04-01
Continued tightening of the overlay control budget in semiconductor lithography drives the need for improved metrology capabilities. Aggressive improvements are needed in overlay metrology speed, accuracy, and precision. This paper deals with the on-product metrology results of a scatterometry-based platform, showing excellent production results in resolution, precision, and tool matching for overlay. We will demonstrate point-to-point matching between tool generations as well as between target sizes and types. Nowadays, for advanced process nodes, a great deal of information (higher-order process corrections, reticle fingerprint, wafer-edge effects) is needed to quantify process overlay. For that purpose, various overlay sampling schemes are evaluated: ultra-dense, dense, and production type. We will show DBO results from multiple target types and shapes for on-product overlay control for the current and future nodes, down to at least the 14 nm node. As overlay requirements drive metrology needs, we will evaluate whether the new metrology platform meets the overlay requirements.
Multiple Intelligences for Differentiated Learning
ERIC Educational Resources Information Center
Williams, R. Bruce
2007-01-01
There is an intricate literacy to Gardner's multiple intelligences theory that unlocks key entry points for differentiated learning. Using a well-articulated framework, rich with graphic representations, Williams provides a comprehensive discussion of multiple intelligences. He moves the teacher and students from curiosity, to confidence, to…
Serological evidence of Ebola virus infection in Indonesian orangutans.
Nidom, Chairul A; Nakayama, Eri; Nidom, Reviany V; Alamudi, Mohamad Y; Daulay, Syafril; Dharmayanti, Indi N L P; Dachlan, Yoes P; Amin, Mohamad; Igarashi, Manabu; Miyamoto, Hiroko; Yoshida, Reiko; Takada, Ayato
2012-01-01
Ebola virus (EBOV) and Marburg virus (MARV) belong to the family Filoviridae and cause severe hemorrhagic fever in humans and nonhuman primates. Despite the discovery of EBOV (Reston virus) in nonhuman primates and domestic pigs in the Philippines and the serological evidence for its infection of humans and fruit bats, information on the reservoirs and potential amplifying hosts for filoviruses in Asia is lacking. In this study, serum samples collected from 353 healthy Bornean orangutans (Pongo pygmaeus) in Kalimantan Island, Indonesia, during the period from December 2005 to December 2006 were screened for filovirus-specific IgG antibodies using a highly sensitive enzyme-linked immunosorbent assay (ELISA) with recombinant viral surface glycoprotein (GP) antigens derived from multiple species of filoviruses (5 EBOV and 1 MARV species). Here we show that 18.4% (65/353) and 1.7% (6/353) of the samples were seropositive for EBOV and MARV, respectively, with little cross-reactivity among EBOV and MARV antigens. In these positive samples, IgG antibodies to viral internal proteins were also detected by immunoblotting. Interestingly, while the specificity for Reston virus, which has been recognized as an Asian filovirus, was the highest in only 1.4% (5/353) of the serum samples, the majority of EBOV-positive sera showed specificity to Zaire, Sudan, Cote d'Ivoire, or Bundibugyo viruses, all of which have been found so far only in Africa. These results suggest the existence of multiple species of filoviruses or unknown filovirus-related viruses in Indonesia, some of which are serologically similar to African EBOVs, and transmission of the viruses from yet unidentified reservoir hosts into the orangutan populations. Our findings point to the need for risk assessment and continued surveillance of filovirus infection of human and nonhuman primates, as well as wild and domestic animals, in Asia.