Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)
NASA Astrophysics Data System (ADS)
Aksoy, A.; Yenilmez, F.; Duzgun, S.
2013-12-01
Several factors such as wind action, bathymetry and shape of a lake/reservoir, inflows, outflows, and point and diffuse pollution sources result in spatial and temporal variations in the water quality of lakes and reservoirs. Guides by the United Nations Environment Programme and the World Health Organization for designing and implementing water quality monitoring programs suggest that a single monitoring station near the center or at the deepest part of a lake is sufficient to observe long-term trends, provided there is good horizontal mixing; in stratified water bodies, several samples may be required. According to the sampling and analysis guide under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be used to characterize the water quality of a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) requires that a monitoring program include a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures. Although existing regulations and guidelines include frameworks for determining sampling locations in surface waters, most do not specify a procedure for matching monitoring aims with representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen (DO) and specific conductivity were measured at 81 points, 16 of which were reserved for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the measured parameter distributions, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost below 30 sampling points, a consequence of the varying water quality in the reservoir caused by inflows, point and diffuse inputs, and reservoir hydromorphology. Moreover, hot spots were identified based on kriging and standard error maps, and the locations of the minimum number of sampling points that represent the actual spatial structure of the DO distribution in the Porsuk Dam Reservoir were determined.
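As a rough illustration of the variogram-preservation check described above, the following Python sketch (synthetic coordinates and dissolved-oxygen values, not the study's data; only NumPy assumed) computes an empirical semivariogram for a full 81-point network and for a thinned 30-point subset, the kind of comparison used to judge whether a reduced network still captures the spatial structure.

```python
# Minimal sketch (not the authors' code): compare the empirical semivariogram of a
# full set of DO measurements with that of a candidate subset, to check whether the
# subset preserves the spatial structure. Coordinates and values are hypothetical.
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Classical Matheron estimator, binned by separation distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)
    dist, gamma = d[iu], sq[iu]
    centers, semivar = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            semivar.append(gamma[mask].mean())
    return np.array(centers), np.array(semivar)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 5000, size=(81, 2))               # 81 measurement points (m)
do = 8 + 0.0005 * coords[:, 0] + rng.normal(0, 0.3, 81)   # synthetic DO field (mg/L)
bins = np.linspace(0, 2500, 11)

h_full, g_full = empirical_semivariogram(coords, do, bins)
subset = rng.choice(81, size=30, replace=False)           # candidate reduced network
h_sub, g_sub = empirical_semivariogram(coords[subset], do[subset], bins)
print("full :", np.round(g_full, 3))
print("30 pt:", np.round(g_sub, 3))
```

In practice the subset would be chosen with the kernel density and kriging procedure rather than at random, and the comparison repeated for progressively smaller networks until the variogram structure degrades.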
Diversity of human small intestinal Streptococcus and Veillonella populations.
van den Bogert, Bartholomeus; Erkus, Oylum; Boekhorst, Jos; de Goffau, Marcus; Smid, Eddy J; Zoetendal, Erwin G; Kleerebezem, Michiel
2013-08-01
Molecular and cultivation approaches were employed to study the phylogenetic richness and temporal dynamics of Streptococcus and Veillonella populations in the small intestine. Microbial profiling of human small intestinal samples collected from four ileostomy subjects at four time points displayed abundant populations of Streptococcus spp. most affiliated with S. salivarius, S. thermophilus, and S. parasanguinis, as well as Veillonella spp. affiliated with V. atypica, V. parvula, V. dispar, and V. rogosae. Relative abundances varied per subject and time of sampling. Streptococcus and Veillonella isolates were cultured using selective media from ileostoma effluent samples collected at two time points from a single subject. The richness of the Streptococcus and Veillonella isolates was assessed at species and strain level by 16S rRNA gene sequencing and genetic fingerprinting, respectively. A total of 160 Streptococcus and 37 Veillonella isolates were obtained. Genetic fingerprinting differentiated seven Streptococcus lineages from ileostoma effluent, illustrating the strain richness within this ecosystem. The Veillonella isolates were represented by a single phylotype. Our study demonstrated that the small intestinal Streptococcus populations displayed considerable changes over time at the genetic lineage level because only representative strains of a single Streptococcus lineage could be cultivated from ileostoma effluent at both time points. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
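The core signal-processing idea, cross-correlating two detectors so that the uncorrelated multiple-scattering contribution averages out while the shared single-scattering component survives, can be sketched as follows (synthetic intensity traces, not data from the patented instrument; NumPy assumed).

```python
# Illustrative sketch (not the patented apparatus): cross-correlate intensity traces
# from two spatially separated detectors; the singly scattered signal is common to
# both detectors, while multiple scattering largely decorrelates between them.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
common = rng.normal(size=n)                 # stands in for the single-scattering signal
i1 = common + 0.7 * rng.normal(size=n)      # detector 1: shared part + independent part
i2 = common + 0.7 * rng.normal(size=n)      # detector 2

def normalized_cross_correlation(a, b, max_lag=50):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = a.size
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = [np.mean(a[max(0, -k):n - max(0, k)] * b[max(0, k):n - max(0, -k)]) for k in lags]
    return lags, np.array(ccf)

lags, ccf = normalized_cross_correlation(i1, i2)
print("zero-lag cross-correlation:", round(ccf[lags.tolist().index(0)], 3))
```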
Temporal Variability of Microplastic Concentrations in Freshwater Streams
NASA Astrophysics Data System (ADS)
Watkins, L.; Walter, M. T.
2016-12-01
Plastic pollution, specifically the size fraction less than 5 mm known as microplastics, is an emerging contaminant in waterways worldwide. The ability of microplastics to adsorb and transport contaminants and microbes, as well as to be ingested by organisms, makes them a concern in both freshwater and marine ecosystems. Recent efforts to determine the extent of microplastic pollution are increasingly focused on freshwater systems, but most studies have reported concentrations at a single time point; few have begun to uncover how plastic concentrations in riverine systems may change through time. We hypothesize that the time of day and season of sampling influence the concentrations of microplastics in water samples and, more specifically, that daytime stormflow samples contain the highest microplastic concentrations due to maximized runoff and wastewater discharge. In order to test this hypothesis, we sampled two similar streams in Ithaca, New York using a 333-µm mesh net deployed within the thalweg. Repeat samples were collected to identify diurnal patterns as well as monthly variation. Samples were processed in the laboratory following the NOAA wet peroxide oxidation protocol. This work improves our ability to interpret existing single-time-point survey results by providing information on how microplastic concentrations change over time and whether concentrations in existing stream studies are likely representative of their location. Additionally, these results will inform future studies by providing insight into representative sample timing and capturing temporal trends for the purposes of modeling and of developing regulations for microplastic pollution.
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
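A deliberately simplified Monte Carlo sketch of the underlying statistical point (hypothetical effect sizes and variance components, not the 440 studies analyzed in the paper; NumPy and SciPy assumed): when laboratories differ in their true effect, a single-laboratory confidence interval ignores the between-lab variance and under-covers, whereas even a small multi-laboratory design recovers close to nominal coverage.

```python
# Simplified illustration, not the paper's simulation: labs differ in their true effect
# (between-lab heterogeneity); compare how often a 95% CI from a single-lab study vs.
# a 4-lab study covers the population-average effect. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, between_lab_sd, within_sd = 1.0, 0.5, 1.0
n_per_lab, n_sim = 20, 5000

def coverage(n_labs):
    hits = 0
    for _ in range(n_sim):
        lab_true = true_effect + rng.normal(0, between_lab_sd, n_labs)
        lab_means = np.array([rng.normal(mu, within_sd, n_per_lab).mean() for mu in lab_true])
        if n_labs == 1:
            est, half = lab_means[0], 1.96 * within_sd / np.sqrt(n_per_lab)
        else:
            est = lab_means.mean()
            half = stats.t.ppf(0.975, n_labs - 1) * lab_means.std(ddof=1) / np.sqrt(n_labs)
        hits += (est - half <= true_effect <= est + half)
    return hits / n_sim

print("single-lab coverage:", coverage(1))   # far below the nominal 95%
print("4-lab coverage     :", coverage(4))   # close to 95%
```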
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
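The comparison reported above, a paired t-test between composited samples and the arithmetic average of multi-point samples, can be reproduced in outline with synthetic data (invented concentrations and an example advisory threshold, not the study's measurements; NumPy and SciPy assumed).

```python
# Hedged sketch with synthetic data: compare log10 E. coli concentrations from
# composited samples with the arithmetic mean of multi-point samples collected at
# the same beach and time, using a paired t-test as in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 116                                            # paired beach visits
true_log = rng.normal(2.0, 0.5, n)                 # underlying log10 CFU/100 mL
avg_multipoint = true_log + rng.normal(0, 0.10, n)
composite = true_log + rng.normal(0, 0.10, n)

t_stat, p_val = stats.ttest_rel(avg_multipoint, composite)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")        # non-significant -> methods agree
exceed_avg = 10 ** avg_multipoint > 235            # example advisory threshold (CFU/100 mL)
exceed_comp = 10 ** composite > 235
print("advisory decisions agree:", np.mean(exceed_avg == exceed_comp))
```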
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Relative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
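A minimal sketch of the classical temporal stability analysis that the paper builds on (synthetic soil-moisture records, not the wireless-sensor-network data; NumPy assumed): rank candidate points by their mean relative difference from the grid mean and its temporal standard deviation, and treat the low-bias, low-spread point as representative.

```python
# Classical TSA ranking, not the paper's stratified STSA code; data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_times, n_points = 60, 25
theta = 0.25 + rng.normal(0, 0.03, (n_times, n_points))   # synthetic soil moisture
theta += rng.normal(0, 0.02, n_points)                     # persistent per-site offsets

grid_mean = theta.mean(axis=1, keepdims=True)
rel_diff = (theta - grid_mean) / grid_mean                 # relative difference per time
mrd = rel_diff.mean(axis=0)                                # mean relative difference
sdrd = rel_diff.std(axis=0, ddof=1)                        # its temporal std deviation

# the "representative" point: small |MRD| (low bias) and small SDRD (time-stable)
score = np.hypot(mrd, sdrd)
best = int(np.argmin(score))
print(f"representative point: #{best}, MRD={mrd[best]:+.3f}, SDRD={sdrd[best]:.3f}")
```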
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
NASA Astrophysics Data System (ADS)
Elliott, Emily A.; Monbureau, Elaine; Walters, Glenn W.; Elliott, Mark A.; McKee, Brent A.; Rodriguez, Antonio B.
2017-12-01
Identifying the source and abundance of sediment transported within tidal creeks is essential for studying the connectivity between coastal watersheds and estuaries. The fine-grained suspended sediment load (SSL) makes up a substantial portion of the total sediment load carried within an estuarine system, and efficient sampling of the SSL is critical to our understanding of nutrient and contaminant transport, anthropogenic influence, and the effects of climate. Unfortunately, traditional methods of sampling the SSL, including instantaneous measurements and automatic samplers, can be labor intensive, expensive and often yield insufficient mass for comprehensive geochemical analysis. In estuaries this issue is even more pronounced due to bi-directional tidal flow. This study tests the efficacy of a time-integrated mass sediment sampler (TIMS) design, originally developed for uni-directional flow within the fluvial environment and modified in this work for use in the tidal environment under bi-directional flow conditions. Our new TIMS design utilizes an 'L'-shaped outflow tube to prevent backflow, and when deployed in mirrored pairs, each sampler collects sediment uniquely in one direction of tidal flow. Laboratory flume experiments using dye and particle image velocimetry (PIV) were used to characterize the flow within the sampler, specifically to quantify the settling velocities and identify stagnation points. Further laboratory tests with sediment indicate that bi-directional TIMS capture up to 96% of incoming SSL across a range of flow velocities (0.3-0.6 m s-1). The modified TIMS design was tested in the field at two distinct sampling locations within the tidal zone. Single-time-point suspended sediment samples were collected at high and low tide and compared to time-integrated suspended sediment samples collected by the bi-directional TIMS over the same four-day period. Particle-size compositions from the bi-directional TIMS were representative of the array of single-time-point samples but yielded greater mass, representative of the flow and sediment-concentration conditions at the site throughout the deployment period. This work demonstrates the efficacy of the modified bi-directional TIMS design, offering a novel tool for the collection of suspended sediment in the tidally dominated portion of the watershed.
Noguera, Martín E.; Vazquez, Diego S.; Ferrer-Sueta, Gerardo; Agudelo, William A.; Howard, Eduardo; Rasia, Rodolfo M.; Manta, Bruno; Cousido-Siah, Alexandra; Mitschler, André; Podjarny, Alberto; Santos, Javier
2017-01-01
Thioredoxin is a ubiquitous small protein that catalyzes redox reactions of protein thiols. Additionally, thioredoxin from E. coli (EcTRX) is a widely-used model for structure-function studies. In a previous paper, we characterized several single-point mutants of the C-terminal helix (CTH) that alter the global stability of EcTRX; however, the spectroscopic signatures and enzymatic activity of some of these mutants were essentially unaffected. A comprehensive structural characterization of these near-invariant mutants at the atomic level can provide detailed information about the structural variability of EcTRX. We address this point through the determination of the crystal structures of four point mutants whose mutations occur within or near the CTH, namely L94A, E101G, N106A and L107A. These structures are largely unchanged compared with the wild-type variant. Notably, the E101G mutant presents a large region with two alternative traces for the backbone of the same chain, representing a significant shift in backbone positions. Enzymatic activity measurements and conformational dynamics studies monitored by NMR and molecular dynamics simulations show that the E101G mutation has only a small effect on the structural features of the protein. We hypothesize that these alternative conformations represent samples of the native-state ensemble of EcTRX, specifically of the magnitude and location of its conformational heterogeneity. PMID:28181556
User's manual for the Graphical Constituent Loading Analysis System (GCLAS)
Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.
2006-01-01
This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable proportion of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias. GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
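The basic load computation that GCLAS automates can be illustrated in a few lines of Python (invented discharge and concentration values and a simple rectangular summation, not GCLAS's actual algorithm or file formats; NumPy assumed).

```python
# Illustrative load computation in the spirit of GCLAS (not its actual code or I/O):
# integrate an equal-interval streamflow series against an interpolated concentration
# chemograph to obtain total mass and the flow-weighted mean concentration.
import numpy as np

dt_s = 3600.0                                     # 1-hour streamflow interval
q = np.array([10, 12, 20, 45, 80, 60, 40, 25, 18, 14], float)   # discharge, m^3/s
t_q = np.arange(q.size) * dt_s
t_c = np.array([0, 4, 5, 9]) * dt_s               # unequal-interval sample times
c = np.array([5, 8, 30, 6], float)                # concentration, mg/L (= g/m^3)

c_interp = np.interp(t_q, t_c, c)                 # estimated chemograph at flow times
load_kg = np.sum(q * c_interp * dt_s) / 1000.0    # (m^3/s * g/m^3 * s) -> g -> kg
fw_mean = np.sum(q * c_interp) / np.sum(q)        # flow-weighted mean concentration
print(f"load = {load_kg:.1f} kg, flow-weighted mean = {fw_mean:.1f} mg/L")
```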
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles up to 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
21 CFR 111.80 - What representative samples must you collect?
Code of Federal Regulations, 2010 CFR
2010-04-01
... Process Control System § 111.80 What representative samples must you collect? The representative samples... unique lot within each unique shipment); (b) Representative samples of in-process materials for each manufactured batch at points, steps, or stages, in the manufacturing process as specified in the master...
SPIRAL-SPRITE: a rapid single point MRI technique for application to porous media.
Szomolanyi, P; Goodyear, D; Balcom, B; Matheson, D
2001-01-01
This study presents the application of a new, rapid, single point MRI technique which samples k-space with spiral trajectories. The general principles of the technique are outlined along with applications to porous concrete samples, solid pharmaceutical tablets and gas phase imaging. Each sample was chosen to highlight specific features of the method.
Instance-based learning: integrating sampling and repeated decisions from experience.
Gonzalez, Cleotilde; Dutt, Varun
2011-10-01
In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
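Only the importance-sampling ingredient of such an algorithm is sketched below (a textbook linear limit-state function with a known design point, not the authors' combined IS+RSM procedure or their updating rule; NumPy and SciPy assumed).

```python
# Conceptual sketch of importance sampling for the probability of failure P(g(X) < 0):
# sample around an assumed design point and re-weight by the ratio of the original
# standard-normal density to the shifted sampling density.
import numpy as np
from scipy import stats

def g(x):                                      # example limit-state function (assumed)
    return 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)

rng = np.random.default_rng(5)
design_point = np.array([3.0 / np.sqrt(2), 3.0 / np.sqrt(2)])   # known for this g
n = 5000
samples = rng.normal(design_point, 1.0, size=(n, 2))             # sampling density h
f = stats.norm.pdf(samples).prod(axis=1)                          # original N(0, I) density
h = stats.norm.pdf(samples, loc=design_point).prod(axis=1)
pf = np.mean((g(samples) < 0) * f / h)
print(f"importance-sampling Pf ~ {pf:.2e}  (exact: {stats.norm.cdf(-3.0):.2e})")
```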
Fairbrother, Nichole; Hutton, Eileen K; Stoll, Kathrin; Hall, Wendy; Kluka, Sandy
2008-06-01
Although fatigue is a common experience for pregnant women and new mothers, few measures of fatigue have been validated for use with this population. To address this gap, the authors assessed psychometric properties of the Multidimensional Assessment of Fatigue (MAF) scale, which was used in 2 independent samples of pregnant women. Results indicated that the psychometric properties of the scale were very similar across samples and time points. The scale possesses a high level of internal consistency, has good convergent validity with measures of sleep quality and depression, and discriminates well from a measure of social support. Contrary to previous evaluations of the MAF, data strongly suggest that the scale represents a unidimensional construct best represented by a single factor. Results indicate that the MAF is a useful measure of fatigue among pregnant and postpartum women.
Sampling device for withdrawing a representative sample from single and multi-phase flows
Apley, Walter J.; Cliff, William C.; Creer, James M.
1984-01-01
A fluid stream sampling device has been developed for the purpose of obtaining a representative sample from a single or multi-phase fluid flow. This objective is carried out by means of a probe which may be inserted into the fluid stream. Individual samples are withdrawn from the fluid flow by sampling ports with particular spacings, and the sampling ports are coupled to various analytical systems for characterization of the physical, thermal, and chemical properties of the fluid flow as a whole and also individually.
Electrically generated eddies at an eightfold stagnation point within a nanopore
Sherwood, J. D.; Mao, M.; Ghosal, S.
2014-01-01
Electrically generated flows around a thin dielectric plate pierced by a cylindrical hole are computed numerically. The geometry represents that of a single nanopore in a membrane. When the membrane is uncharged, flow is due solely to induced charge electroosmosis, and eddies are generated by the high fields at the corners of the nanopore. These eddies meet at stagnation points. If the geometry is chosen correctly, the stagnation points merge to form a single stagnation point at which four streamlines cross at a point and eight eddies meet. PMID:25489206
Reconstruction of three-dimensional porous media using a single thin section
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2012-06-01
The purpose of any reconstruction method is to generate realizations of two- or multiphase disordered media that honor limited data for them, with the hope that the realizations provide accurate predictions for those properties of the media for which there are no data available, or their measurement is difficult. An important example of such stochastic systems is porous media for which the reconstruction technique must accurately represent their morphology—the connectivity and geometry—as well as their flow and transport properties. Many of the current reconstruction methods are based on low-order statistical descriptors that fail to provide accurate information on the properties of heterogeneous porous media. On the other hand, due to the availability of high resolution two-dimensional (2D) images of thin sections of a porous medium, and at the same time, the high cost, computational difficulties, and even unavailability of complete 3D images, the problem of reconstructing porous media from 2D thin sections remains an outstanding unsolved problem. We present a method based on multiple-point statistics in which a single 2D thin section of a porous medium, represented by a digitized image, is used to reconstruct the 3D porous medium to which the thin section belongs. The method utilizes a 1D raster path for inspecting the digitized image, and combines it with a cross-correlation function, a grid splitting technique for deciding the resolution of the computational grid used in the reconstruction, and the Shannon entropy as a measure of the heterogeneity of the porous sample, in order to reconstruct the 3D medium. It also utilizes an adaptive technique for identifying the locations and optimal number of hard (quantitative) data points that one can use in the reconstruction process. The method is tested on high resolution images for Berea sandstone and a carbonate rock sample, and the results are compared with the data. To make the comparison quantitative, two sets of statistical tests consisting of the autocorrelation function, histogram matching of the local coordination numbers, the pore and throat size distributions, multiple-points connectivity, and single- and two-phase flow permeabilities are used. The comparison indicates that the proposed method reproduces the long-range connectivity of the porous media, with the computed properties being in good agreement with the data for both porous samples. The computational efficiency of the method is also demonstrated.
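A heavily reduced illustration of the patch-matching idea behind such cross-correlation-based multiple-point reconstruction (a toy 2D binary training image and a sum-of-squared-differences overlap score standing in for the cross-correlation function; not the authors' implementation, raster-path bookkeeping, grid splitting, or entropy-based adaptation; NumPy assumed).

```python
# Toy sketch: given the already-simulated overlap strip along a raster path, find the
# training-image patch whose overlap matches best and paste its interior.
import numpy as np

rng = np.random.default_rng(6)
ti = (rng.random((128, 128)) > 0.6).astype(float)      # toy binary "training image"
patch, overlap = 16, 4                                  # patch size and overlap width

def best_patch(ti, target_overlap):
    """Scan all candidate patches; score = SSD over the left overlap columns."""
    best, best_score = None, np.inf
    for i in range(ti.shape[0] - patch):
        for j in range(ti.shape[1] - patch):
            cand = ti[i:i + patch, j:j + patch]
            score = np.sum((cand[:, :overlap] - target_overlap) ** 2)
            if score < best_score:
                best, best_score = cand, score
    return best

previous = ti[20:20 + patch, 40:40 + patch]             # pretend this was already simulated
nxt = best_patch(ti, previous[:, -overlap:])            # match against its right edge
print("overlap mismatch:", np.sum((nxt[:, :overlap] - previous[:, -overlap:]) ** 2))
```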
Porter, Charlotte A; Bradley, Kevin M; McGowan, Daniel R
2018-05-01
The aim of this study was to verify, with a large dataset of 1394 (51)Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated, to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m2, the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m2.
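The percentage-difference criterion used to pick the optimal sampling time can be written out directly (illustrative GFR numbers only, not values from the 1394-study dataset).

```python
# Small arithmetic sketch with invented example values: percentage difference between
# the slope-intercept GFR and each single-sample GFR, used to pick the best sample time.
slope_intercept_gfr = 62.0                                   # ml/min/1.73 m^2 (example)
single_sample = {90: 70.5, 150: 66.0, 210: 63.4, 270: 59.1}  # minutes -> GFR estimate

pct_diff = {t: 100.0 * abs(g - slope_intercept_gfr) / slope_intercept_gfr
            for t, g in single_sample.items()}
best_time = min(pct_diff, key=pct_diff.get)
for t, d in sorted(pct_diff.items()):
    print(f"{t:3d} min sample: {d:.1f}% difference")
print("optimal single-sample time:", best_time, "min")
```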
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied, respectively, to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under the conditions of different pre-strains, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concave appearance of the measured yield surface, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) for either the single- or multi-sample method, the measured subsequent yield surfaces are remarkably different from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, the test order, or the cumulative effect of residual plastic strain from other test points, whereas these factors strongly affect the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method the concavity can be removed by changing the test sequence, while for the multi-sample method it disappears when a larger offset strain is specified.
Grekin, Rebecca; Brock, Rebecca L; O'Hara, Michael W
2017-08-15
Research suggests that trauma exposure is associated with perinatal depression; however, little is known about the nature of the relation between trauma history and trajectory of depression, as well as the predictive power of trauma history beyond other risk factors. Additionally, more research is needed in at-risk samples that are likely to experience severe traumatic exposure. Secondary data analysis was conducted using demographic and depression data from the Healthy Start and Empowerment Family Support programs in Des Moines, Iowa. Hierarchical linear modeling was used to examine trajectories of perinatal depressive symptoms, from pregnancy to 24 months postpartum, and clarify whether trauma exposure, relationship status, and substance use uniquely contribute to trajectories of symptoms over time. On average, depressive symptoms decreased from pregnancy to 24 months postpartum; however, trajectories varied across women. Single relationship status, substance use, and trauma history were each predictors of higher depression levels at several points in time across the observed perinatal period. Single relationship status was also associated with decline in depressive symptoms followed by a rebound of symptoms at 22 months postpartum. These data were not collected for research purposes and thus did not undergo the rigorous data collection strategies typically implemented in an established research study. History of trauma, substance use and single relationship status represent unique risk factors for perinatal depression. For single women, depressive symptoms rebound late in the postpartum period. Single women are at greater risk for substance use and traumatic exposure and represent a sample with cumulative risk. Eliciting social support may be an important intervention for women presenting with these risk factors. Copyright © 2017 Elsevier B.V. All rights reserved.
[Microbiological quality of the air in "small gastronomy point"].
Wójcik-Stopczyńska, Barbara
2006-01-01
The aim of this work was to estimate the microbial contamination of the air in a "small gastronomy point". The study included three areas, separated on the basis of their function: 1. the subsidiary area, 2. the distribution area (sale and serving of meals), 3. the consumption area. The total numbers of aerobic mesophilic bacteria, yeasts and moulds were determined by the sedimentation method. Taxonomic units of the fungal aerosol were also identified. Air samples were collected at 16 investigation points in the morning (8:00-8:30) and in the afternoon (14:00-14:30). Four series of measurements were carried out, and in total 128 air samples were tested. The results showed that the numbers of bacteria, yeasts and moulds were variable, ranging over 30-3397, 0-254 and 0-138 cfu/m3, respectively. Microbial contamination of the air changed depending on the character of the area (the highest average bacterial count occurred in the air of the consumption area, and the highest fungal count in the subsidiary area), the time of day (contamination of the air increased in the afternoon) and the sampling date. Only in single samples were the numbers of bacteria and fungi higher than the recommended levels. Pigmented bacteria made up a high proportion of the total bacterial count, and the filamentous fungi were represented mostly by Penicillium sp. and Cladosporium sp.
La, Moonwoo; Park, Sang Min; Kim, Dong Sung
2015-01-01
In this study, a multiple-sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of a centrifugal circumferential filling flow. The designed LODs were fabricated with a polystyrene substrate through micromachining and were thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performances, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays which require different volume ratios of sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemical analytes. PMID:25610516
"The Effect of Alternative Representations of Lake ...
Lakes can play a significant role in regional climate, modulating inland extremes in temperature and enhancing precipitation. Representing these effects becomes more important as regional climate modeling (RCM) efforts focus on simulating smaller scales. When using the Weather Research and Forecasting (WRF) model to downscale future global climate model (GCM) projections into RCM simulations, model users typically must rely on the GCM to represent temperatures at all water points. However, GCMs have insufficient resolution to adequately represent even large inland lakes, such as the Great Lakes. Some interpolation methods, such as setting lake surface temperatures (LSTs) equal to the nearest water point, can result in inland lake temperatures being set from sea surface temperatures (SSTs) that are hundreds of km away. In other cases, a single point is tasked with representing multiple large, heterogeneous lakes. Similar consequences can result from interpolating ice from GCMs to inland lake points, resulting in lakes as large as Lake Superior freezing completely in the space of a single timestep. The use of a computationally efficient inland lake model can improve RCM simulations where the input data are too coarse to adequately represent inland lake temperatures and ice (Gula and Peltier 2012). This study examines three scenarios under which ice and LSTs can be set within the WRF model when applied as an RCM to produce 2-year simulations at 12 km grid spacing.
NASA Astrophysics Data System (ADS)
Russo, David; Laufer, Asher; Shapira, Roi H.; Kurtzman, Daniel
2013-02-01
Detailed numerical simulations were used to analyze water flow and transport of nitrate, chloride, and a tracer solute in a 3-D, spatially heterogeneous, variably saturated soil, originating from a citrus orchard irrigated with treated sewage water (TSW) considering realistic features of the soil-water-plant-atmosphere system. Results of this study suggest that under long-term irrigation with TSW, because of nitrate uptake by the tree roots and nitrogen transformations, the vadose zone may provide more capacity for the attenuation of the nitrate load in the groundwater than for the chloride load in the groundwater. Results of the 3-D simulations were used to assess their counterparts based on a simplified, deterministic, 1-D vertical simulation and on limited soil monitoring. Results of the analyses suggest that the information that may be gained from a single sampling point (located close to the area active in water uptake by the tree roots) or from the results of the 1-D simulation is insufficient for a quantitative description of the response of the complicated, 3-D flow system. Both might considerably underestimate the movement and spreading of a pulse of a tracer solute and also the groundwater contamination hazard posed by nitrate and particularly by chloride moving through the vadose zone. This stems mainly from the rain that drove water through the flow system away from the rooted area and could not be represented by the 1-D model or by the single sampling point. It was shown, however, that an additional sampling point, located outside the area active in water uptake, may substantially improve the quantitative description of the response of the complicated, 3-D flow system.
NASA Astrophysics Data System (ADS)
Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.
2017-12-01
Atmospheric methane (CH4) has an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint covers 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we picked two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point with no influx. The cattle feedlot observed in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement to below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We propose that the next-generation instruments for accurate anthropogenic CO2 and CH4 flux estimation have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
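The kind of single-overpass mass-balance estimate described here can be sketched with back-of-envelope numbers (all values below are hypothetical, not GOSAT retrievals; the dry-air column and footprint width are rough assumptions used only to show the arithmetic).

```python
# Back-of-envelope sketch: emission ~ column enhancement x wind speed x cross-wind width.
delta_xch4_ppb = 30.0        # column-averaged CH4 enhancement over the reference point
wind_speed = 3.0             # m/s, from an on-site weather station or WRF (assumed)
footprint_width = 10.5e3     # m, assumed cross-wind extent of the footprint
air_column = 3.57e5          # mol of dry air per m^2 (approx., ~1013 hPa surface pressure)
m_ch4 = 16.04e-3             # kg/mol

enhancement = delta_xch4_ppb * 1e-9 * air_column      # mol CH4 per m^2 above background
flux_kg_per_s = enhancement * m_ch4 * wind_speed * footprint_width
print(f"emission ~ {flux_kg_per_s:.2f} kg CH4/s (~{flux_kg_per_s * 86.4:.0f} t/day)")
```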
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pochan, M.J.; Massey, M.J.
1979-02-01
This report discusses the results of actual raw product gas sampling efforts and includes: rationale for raw product gas sampling efforts; design and operation of the CMU gas sampling train; development and analysis of a sampling train data base; and conclusions and future application of results. The results of sampling activities at the CO2-Acceptor and Hygas pilot plants proved that: the CMU gas sampling train is a valid instrument for characterization of environmental parameters in coal gasification gas-phase process streams; depending on the particular process configuration, the CMU gas sampling train can reduce gasifier effluent characterization activity to a single location in the raw product gas line; and in contrast to the slower operation of the EPA SASS Train, CMU's gas sampling train can collect representative effluent data at a rapid rate (approx. 2 points per hour) consistent with the rate of change of process variables, and thus function as a tool for process engineering-oriented analysis of environmental characteristics.
Tetranuclear cluster-based Pb(II)-MOF: Synthesis, crystal structure and luminescence sensing for CS2
NASA Astrophysics Data System (ADS)
Dong, Yanli
2018-05-01
A new Pb(II) coordination polymer, namely [Pb2(bptc)(DMA)]n (1, H4bptc = biphenyl-3,3',5,5'-tetracarboxylic acid, DMA = N,N'-dimethylacetamide), has been synthesized by the combination of H4bptc with Pb(NO3)2 under solvothermal conditions. Single-crystal X-ray diffraction analysis revealed that compound 1 features a 3D framework based on tetranuclear [Pb4(COO)6] subunits, and topological analysis revealed that the compound represents a binodal (4,8)-connected scu-type topological network with the point symbol {4^16.6^12}{4^4.6^2}2. Luminescence studies indicated that 1 and 1' (1' represents the desolvated sample) showed intense yellow emissions. Significantly, 1' exhibited sensitive luminescence sensing for CS2 solvent molecules at a low concentration.
Reconstruction of three-dimensional porous media using generative adversarial neural networks
NASA Astrophysics Data System (ADS)
Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.
2017-10-01
To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
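A minimal sketch of a 3D convolutional GAN generator of the general kind described (PyTorch assumed; the layer sizes, latent dimension, and 64^3 output are illustrative choices, not the authors' architecture or training code).

```python
# Hedged sketch: a DCGAN-style 3D generator mapping a latent vector to a synthetic
# pore/solid volume. Training (discriminator, adversarial loss) is omitted.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Maps a latent vector to a 64^3 volume with values in [-1, 1] (pore vs. solid)."""
    def __init__(self, latent_dim=128, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(latent_dim, feat * 8, 4, 1, 0), nn.BatchNorm3d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose3d(feat * 8, feat * 4, 4, 2, 1), nn.BatchNorm3d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose3d(feat * 4, feat * 2, 4, 2, 1), nn.BatchNorm3d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose3d(feat * 2, feat, 4, 2, 1), nn.BatchNorm3d(feat), nn.ReLU(True),
            nn.ConvTranspose3d(feat, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

g = Generator3D()
realization = g(torch.randn(2, 128))    # two synthetic 1 x 64 x 64 x 64 realizations
print(realization.shape)                # torch.Size([2, 1, 64, 64, 64])
```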
Culture adaptation of malaria parasites selects for convergent loss-of-function mutants.
Claessens, Antoine; Affara, Muna; Assefa, Samuel A; Kwiatkowski, Dominic P; Conway, David J
2017-01-24
Cultured human pathogens may differ significantly from source populations. To investigate the genetic basis of laboratory adaptation in malaria parasites, clinical Plasmodium falciparum isolates were sampled from patients and cultured in vitro for up to three months. Genome sequence analysis was performed on multiple culture time point samples from six monoclonal isolates, and single nucleotide polymorphism (SNP) variants emerging over time were detected. Out of a total of five positively selected SNPs, four represented nonsense mutations resulting in stop codons, three of these in a single ApiAP2 transcription factor gene, and one in SRPK1. To survey further for nonsense mutants associated with culture, genome sequences of eleven long-term laboratory-adapted parasite strains were examined, revealing four independently acquired nonsense mutations in two other ApiAP2 genes, and five in Epac. No mutants of these genes exist in a large database of parasite sequences from uncultured clinical samples. This implicates putative master regulator genes in which multiple independent stop codon mutations have convergently led to culture adaptation, affecting most laboratory lines of P. falciparum. Understanding the adaptive processes should guide development of experimental models, which could include targeted gene disruption to adapt fastidious malaria parasite species to culture.
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.
Li, Jilong; Cheng, Jianlin
2016-05-10
Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
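A toy version of the sampling step described above (NumPy assumed; the template coordinates, clash penalty, and annealing schedule are invented stand-ins, not the released MTMG software): each residue position is drawn from a per-position 3D Gaussian fitted to superposed template C-alpha coordinates and accepted or rejected with a simulated-annealing rule that penalizes clashes between neighbouring atoms.

```python
# Simplified, hedged sketch of multivariate-normal resampling with simulated annealing.
import numpy as np

rng = np.random.default_rng(7)
n_res, n_templates = 30, 3
base = np.cumsum(rng.normal(0, 1.5, (n_res, 3)), axis=0)         # a common backbone trace
templates = base + rng.normal(0, 0.8, (n_templates, n_res, 3))   # toy "superposed" templates

mu = templates.mean(axis=0)                                       # per-residue mean position
cov = np.array([np.cov(templates[:, i, :].T) + 1e-3 * np.eye(3) for i in range(n_res)])

def clash_energy(xyz, min_dist=3.4):
    d = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
    return np.sum(np.clip(min_dist - d, 0, None) ** 2)            # penalize too-close neighbours

model = mu.copy()
temperature = 1.0
for step in range(2000):
    i = rng.integers(n_res)
    proposal = model.copy()
    proposal[i] = rng.multivariate_normal(mu[i], cov[i])          # resample one position
    dE = clash_energy(proposal) - clash_energy(model)
    if dE < 0 or rng.random() < np.exp(-dE / temperature):        # annealing acceptance rule
        model = proposal
    temperature *= 0.999                                          # cooling schedule
print("final clash energy:", round(clash_energy(model), 3))
```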
Dickson, M.L.; Broster, B.E.; Parkhill, M.A.
2004-01-01
Striations and dispersal patterns for till clasts and matrix geochemistry are used to define flow directions of glacial transport across an area of about 800 km2 in the Charlo-Atholville area of north-central New Brunswick. A total of 170 clast samples and 328 till matrix samples collected across the region for geochemical analysis were analyzed for a total of 39 elements. Major lithologic contacts used here to delineate till clast provenance were based on recent bedrock mapping. Eleven known mineral occurrences and a gossan are used to define point-source targets for matrix geochemical dispersal trains and to estimate the probable distance and direction of transport from unknown sources. Clast trains are traceable for distances of approximately 10 km, whereas till geochemical dispersal patterns are commonly lost within 5 km of transport. Most dispersal patterns reflect more than a single direction of glacial transport. These data indicate that a single till sheet, 1-4 m thick, was deposited as the dominant ice-flow direction fluctuated between southeastward, eastward, and northward over the study area. Directions of early flow represent changes in ice sheet dominance, first from the northwest and then from the west. Locally, eastward and northward flow represent the maximum erosive phases. The last directions of flow are likely due to late glacial ice-sheet drawdown towards the valley outlet at Baie des Chaleurs.
Mayer, Stefan; Twarużek, Magdalena; Błajet-Kosicka, Anna; Grajewski, Jan
2016-03-01
Manual sorting of onions is known to be associated with bioaerosol exposure. The study aimed to gain an initial indication of the extent to which manual sorting of onions is also associated with mycotoxin exposure. Twelve representative samples of outer onion skins from different onion origins were collected and analyzed with a multimycotoxin method comprising 40 mycotoxins, using a single extraction step followed by liquid chromatography with electrospray ionization and triple quadrupole mass spectrometry. Six of the 12 samples were positive for mycotoxins. In those samples, deoxynivalenol, fumonisin B1, and fumonisin B2 were detected in quantifiable amounts: 3940 ng/g for fumonisin B1, 126-587 ng/g for deoxynivalenol, and 55-554 ng/g for fumonisin B2. Although the results point to a comparatively low risk due to mycotoxins, the risk should not be completely neglected and has to be considered in risk assessment.
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
Chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed, using scan statistics that are computationally intensive and designed for detecting either common or rare change-points. In this paper, we propose a novel multiple sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
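For intuition, the single-sample building block of SaRa can be sketched as a local diagnostic that contrasts the means of two flanking windows, followed by screening for local maxima and ranking them by magnitude. The window length, threshold, and toy data below are illustrative assumptions, and the adaptive combination of the statistic across multiple samples is not reproduced here.

```python
import numpy as np

def local_diagnostic(y, h):
    """D(i) = mean of the h points after i minus mean of the h points before i."""
    y = np.asarray(y, dtype=float)
    d = np.full(len(y), np.nan)
    for i in range(h, len(y) - h):
        d[i] = y[i:i + h].mean() - y[i - h:i].mean()
    return d

def screen_and_rank(y, h=10, top_k=5):
    """Screening: keep local maxima of |D|; ranking: order them by magnitude."""
    d = np.abs(local_diagnostic(y, h))
    peaks = [i for i in range(h, len(y) - h)
             if d[i] == np.nanmax(d[i - h:i + h + 1])]
    return sorted(peaks, key=lambda i: -d[i])[:top_k]

# toy example: a single mean shift at index 100
y = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(1.5, 1, 100)])
print(screen_and_rank(y, h=10))
```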
Recruitment for Occupational Research: Using Injured Workers as the Point of Entry into Workplaces
Koehoorn, Mieke; Trask, Catherine M.; Teschke, Kay
2013-01-01
Objective: To investigate the feasibility, costs and sample representativeness of a recruitment method that used workers with back injuries as the point of entry into diverse working environments. Methods: Workers' compensation claims were used to randomly sample workers from five heavy industries and to recruit their employers for ergonomic assessments of the injured worker and up to 2 co-workers. Results: The final study sample included 54 workers from the workers' compensation registry and 72 co-workers. This sample of 126 workers was based on an initial random sample of 822 workers with a compensation claim, or a ratio of 1 recruited worker to approximately 7 sampled workers. The average recruitment cost was CND$262/injured worker and CND$240/participating worksite including co-workers. The sample was representative of the heavy industry workforce, and was successful in recruiting the self-employed (8.2%), workers from small employers (<20 workers, 38.7%), and workers from diverse working environments (49 worksites, 29 worksite types, and 51 occupations). Conclusions: The recruitment rate was low but the cost per participant reasonable and the sample representative of workers in small worksites. Small worksites represent a significant portion of the workforce but are typically underrepresented in occupational research despite having distinct working conditions, exposures and health risks worthy of investigation. PMID:23826387
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
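The greedy incremental idea mentioned above for the discrete spherical code can be sketched as follows: from a pool of candidate directions, repeatedly add the one that maximizes the smallest angular separation from those already chosen, treating antipodal directions as equivalent (as is usual in dMRI). This is illustrative only; the MILP and gradient-descent formulations of the paper, and the multi-shell weighting, are not reproduced here, and the candidate pool and scheme size are placeholders.

```python
import numpy as np

def angular_separation(u, v):
    """Angle between directions u and v, identifying v with -v (antipodal symmetry)."""
    c = abs(np.clip(np.dot(u, v), -1.0, 1.0))
    return np.arccos(c)

def greedy_spherical_code(candidates, k, seed=0):
    """Greedily pick k unit vectors from `candidates` maximizing the minimal angle."""
    candidates = [c / np.linalg.norm(c) for c in candidates]
    chosen = [candidates[seed]]
    remaining = [c for i, c in enumerate(candidates) if i != seed]
    while len(chosen) < k:
        # worst-case (minimum) angle of each remaining candidate to the chosen set
        scores = [min(angular_separation(c, s) for s in chosen) for c in remaining]
        chosen.append(remaining.pop(int(np.argmax(scores))))
    return np.array(chosen)

rng = np.random.default_rng(1)
pool = rng.normal(size=(500, 3))            # random candidate directions
scheme = greedy_spherical_code(pool, k=30)  # an illustrative 30-direction single shell
```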
Westgate, John N; Wania, Frank
2011-10-15
Air mass origin as determined by back trajectories often aids in explaining some of the short-term variability in the atmospheric concentrations of semivolatile organic contaminants. Airsheds, constructed by amalgamating large numbers of back trajectories, capture average air mass origins over longer time periods and thus have found use in interpreting air concentrations obtained by passive air samplers. To explore some of their key characteristics, airsheds for 54 locations on Earth were constructed and compared for roundness, seasonality, and interannual variability. To avoid the so-called "pole problem" and to simplify the calculation of roundness, a "geodesic grid" was used to bin the back-trajectory end points. Departures from roundness were seen to occur at all latitudes and to correlate significantly with local slope but no strong relationship between latitude and roundness was revealed. Seasonality and interannual variability vary widely enough to imply that static models of transport are not sufficient to describe the proximity of an area to potential sources of contaminants. For interpreting an air measurement an airshed should be generated specifically for the deployment time of the sampler, especially when investigating long-term trends. Samples taken in a single season may not represent the average annual atmosphere, and samples taken in linear, as opposed to round, airsheds may not represent the average atmosphere in the area. Simple methods are proposed to ascertain the significance of an airshed or individual cell. It is recommended that when establishing potential contaminant source regions only end points with departure heights of less than ∼700 m be considered.
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
NASA Astrophysics Data System (ADS)
Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.
2018-07-01
We perform an analysis of photometric redshifts estimated using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with r-band cuts between 19.4 < r < 20.8. We obtain slightly better results with GPz on single point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
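One way to read the cumulative-distribution-function plus Monte Carlo step is as a quantile-matching resample: map each galaxy's photometric estimate through empirical CDFs so that the redshifts drawn follow the spectroscopic distribution of the training set. The sketch below illustrates that reading under this assumption; it is not the authors' exact estimator, and the jitter width and function names are placeholders.

```python
import numpy as np

def cdf_matched_redshifts(z_phot, z_spec_train, rng=None):
    """Monte Carlo draw: rank each photometric z within its own sample, jitter the
    quantile, then invert the empirical spectroscopic CDF (a quantile-matching assumption)."""
    rng = np.random.default_rng(rng)
    z_spec_sorted = np.sort(z_spec_train)
    # empirical CDF value of each photometric estimate within the photometric sample
    ranks = np.argsort(np.argsort(z_phot)) / (len(z_phot) - 1)
    # jitter the quantile to emulate a stochastic (Monte Carlo) assignment
    u = np.clip(ranks + rng.normal(0, 0.01, size=len(z_phot)), 0, 1)
    return np.quantile(z_spec_sorted, u)
```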
NASA Astrophysics Data System (ADS)
Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.
2018-04-01
We perform an analysis of photometric redshifts estimated using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift in simulations as well as in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with r-band cuts between 19.4 < r < 20.8. We obtain slightly better results with GPz on single point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
Evaluating adequacy of the representative stream reach used in invertebrate monitoring programs
Rabeni, C.F.; Wang, N.; Sarver, R.J.
1999-01-01
Selection of a representative stream reach is implicitly or explicitly recommended in many biomonitoring protocols using benthic invertebrates. We evaluated the adequacy of sampling a single stream reach selected on the basis of its appearance. We 1st demonstrated the precision of our within-reach sampling. Then we sampled 3 or 4 reaches (each ~20x mean width) within an 8-16 km segment on each of 8 streams in 3 ecoregions and calculated 4 common metrics: 1) total taxa; 2) Ephemeroptera, Plecoptera, and Trichoptera taxa; 3) biotic index; and 4) Shannon's diversity index. In only 6% of possible cases was the coefficient of variation for any of the metrics reduced >10% by sampling additional reaches. Sampling a 2nd reach on a stream improved the ability to detect impairment by an average of only 9.3%. Sampling a 3rd reach on a stream additionally improved ability to detect impairment by only 4.5%. We concluded that a single well-chosen reach, if adequately sampled, can be representative of an entire stream segment, and sampling additional reaches within a segment may not be cost effective.
ADVANCES IN GROUND WATER SAMPLING PROCEDURES
Obtaining representative ground water samples is important for site assessment and remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...
Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E
2017-09-01
Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media
NASA Astrophysics Data System (ADS)
Russian, Anna; Dentz, Marco; Gouze, Philippe
2017-08-01
Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging and their dependence on the disorder distribution are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging for any dimension. The strength of the sample to sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion is weakly ergodicity breaking in any dimension because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in q ≥2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.
Economic and microbiologic evaluation of single-dose vial extension for hazardous drugs.
Rowe, Erinn C; Savage, Scott W; Rutala, William A; Weber, David J; Gergen-Teague, Maria; Eckel, Stephen F
2012-07-01
The update of US Pharmacopeia Chapter <797> in 2008 included guidelines stating that single-dose vials (SDVs) opened and maintained in an International Organization for Standardization Class 5 environment can be used for up to 6 hours after initial puncture. A study was conducted to evaluate the cost of discarding vials after 6 hours and to further test sterility of vials beyond this time point, subsequently defined as the beyond-use date (BUD). Financial determination of SDV waste included 2 months of retrospective review of all doses prescribed. Additionally, actual waste log data were collected. Active and control vials (prepared using sterilized trypticase soy broth) were recovered, instead of discarded, at the defined 6-hour BUD. The institution-specific waste of 19 selected SDV medications discarded at 6 hours was calculated at $766,000 annually, and tracking waste logs for these same medications was recorded at $770,000 annually. Microbiologic testing of vial extension beyond 6 hours showed that 11 (1.86%) of 592 samples had one colony-forming unit on one of two plates. Positive plates were negative at subsequent time points, and all positives were single isolates most likely introduced during the plating process. The cost of discarding vials at 6 hours was significant for hazardous medications in a large academic medical center. On the basis of microbiologic data, vial BUD extension demonstrated a contamination frequency of 1.86%, which likely represented exogenous contamination; vial BUD extension for the tested drugs showed no growth at subsequent time points and could provide an annual cost savings of more than $600,000.
NASA Astrophysics Data System (ADS)
Lazic, V.; De Ninno, A.
2017-11-01
Laser-induced plasma spectroscopy was applied to particles attached to a substrate consisting of a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, here detected by a compact spectrometer array. Variations of the sample mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and initial image calibration for different sample loadings. Consequently, the spectral lines from particles show extreme intensity fluctuations from one sampling point to another, ranging in some cases between the detection threshold and detector saturation. In such conditions, the common calibration approach based on averaged spectra, even when considering ratios of the element lines, i.e. concentrations, produces errors too large for measuring sample compositions. On the other hand, the intensities of an analytical line and a reference line from single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is weakly sensitive to fluctuations of the plasma temperature within the data set. Using the slopes to construct the calibration graphs significantly reduces the error bars but does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying couples of transitions less sensitive to variations of the plasma temperature, which was achieved by simple theoretical simulations. Such selection of the analytical lines minimizes the matrix effect and, together with the chosen calibration approach, allows the relative element concentrations to be measured even in highly unstable laser-induced plasmas.
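The slope-based calibration described above can be sketched as two steps: for each sample, regress single-shot intensities of the analytical line against the simultaneously recorded reference line, then use the per-sample slopes as calibration points against known concentration ratios. The function names, the origin-forced fit, and the linear calibration are simplifying assumptions for illustration.

```python
import numpy as np

def per_sample_slope(i_analyte, i_reference):
    """Slope of analytical vs. reference line intensities across single-shot spectra
    (least squares forced through the origin, a simplifying assumption)."""
    i_analyte = np.asarray(i_analyte, float)
    i_reference = np.asarray(i_reference, float)
    return float(np.sum(i_analyte * i_reference) / np.sum(i_reference ** 2))

def build_calibration(samples, known_ratios):
    """Fit a line relating per-sample slopes to known element concentration ratios.
    `samples` is a list of (analyte_intensities, reference_intensities) pairs."""
    slopes = [per_sample_slope(a, r) for a, r in samples]
    coeffs = np.polyfit(known_ratios, slopes, deg=1)   # slope vs. concentration ratio
    return slopes, coeffs
```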
2010-01-01
Background: Drug use is believed to be an important factor contributing to the poor health and increased mortality risk that has been widely observed among homeless individuals. The objective of this study was to determine the prevalence and characteristics of drug use among a representative sample of homeless individuals and to examine the association between drug problems and physical and mental health status. Methods: Recruitment of 603 single men, 304 single women, and 284 adults with dependent children occurred at homeless shelters and meal programs in Toronto, Canada. Information was collected on demographic characteristics and patterns of drug use. The Addiction Severity Index was used to assess whether participants suffered from drug problems. Associations of drug problems with physical and mental health status (measured by the SF-12 scale) were examined using regression analyses. Results: Forty percent of the study sample had drug problems in the last 30 days. These individuals were more likely to be single men and less educated than those without drug problems. They were also more likely to have become homeless at a younger age (mean 24.8 vs. 30.9 years) and for a longer duration (mean 4.8 vs. 2.9 years). Marijuana and cocaine were the most frequently used drugs in the past two years (40% and 27%, respectively). Drug problems within the last 30 days were associated with significantly poorer mental health status (-4.9 points, 95% CI -6.5 to -3.2) but not with poorer physical health status (-0.03 points, 95% CI -1.3 to 1.3). Conclusions: Drug use is common among homeless individuals in Toronto. Current drug problems are associated with poorer mental health status but not with poorer physical health status. PMID:20181248
Ignjatovic, Anita Rakic; Miljkovic, Branislava; Todorovic, Dejan; Timotijevic, Ivana; Pokrajac, Milena
2011-05-01
Because moclobemide pharmacokinetics vary considerably among individuals, monitoring of plasma concentrations lends insight into its pharmacokinetic behavior and enhances its rational use in clinical practice. The aim of this study was to evaluate whether single concentration-time points could adequately predict moclobemide systemic exposure. Pharmacokinetic data (full 7-point pharmacokinetic profiles), obtained from 21 depressive inpatients receiving moclobemide (150 mg 3 times daily), were randomly split into development (n = 18) and validation (n = 16) sets. Correlations between the single concentration-time points and the area under the concentration-time curve within a 6-hour dosing interval at steady-state (AUC(0-6)) were assessed by linear regression analyses. The predictive performance of single-point sampling strategies was evaluated in the validation set by mean prediction error, mean absolute error, and root mean square error. Plasma concentrations in the absorption phase yielded unsatisfactory predictions of moclobemide AUC(0-6). The best estimation of AUC(0-6) was achieved from concentrations at 4 and 6 hours following dosing. As the most reliable surrogate for moclobemide systemic exposure, concentrations at 4 and 6 hours should be used instead of predose trough concentrations as an indicator of between-patient variability and a guide for dose adjustments in specific clinical situations.
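The single-point strategy evaluated above amounts to regressing AUC(0-6) on concentrations at fixed sampling times in a development set and then scoring predictions in a validation set with mean prediction error, mean absolute error, and root mean square error. A minimal sketch with illustrative variable names, not the authors' exact analysis:

```python
import numpy as np

def fit_single_point_model(conc_t, auc):
    """Linear regression AUC ~ a + b * C(t) fitted on the development set."""
    b, a = np.polyfit(conc_t, auc, deg=1)   # polyfit returns [slope, intercept]
    return a, b

def predictive_performance(a, b, conc_t_val, auc_val):
    """Mean prediction error, mean absolute error, root mean square error."""
    pred = a + b * np.asarray(conc_t_val, float)
    err = pred - np.asarray(auc_val, float)
    return err.mean(), np.abs(err).mean(), np.sqrt((err ** 2).mean())
```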
Anderson, Eric C; Ng, Thomas C
2016-02-01
We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.
Not simply more of the same: distinguishing between patient heterogeneity and parameter uncertainty.
Vemer, Pepijn; Goossens, Lucas M A; Rutten-van Mölken, Maureen P M H
2014-11-01
In cost-effectiveness (CE) Markov models, heterogeneity in the patient population is not automatically taken into account. We aimed to compare methods of dealing with heterogeneity on estimates of CE, using a case study in chronic obstructive pulmonary disease (COPD). We first present a probabilistic sensitivity analysis (PSA) in which we sampled only from distributions representing parameter uncertainty. This ignores any heterogeneity. Next, we explored heterogeneity by presenting results for subgroups, using a method that samples parameter uncertainty simultaneously with heterogeneity in a single-loop PSA. Finally, we distinguished parameter uncertainty from heterogeneity in a double-loop PSA by performing a nested simulation within each PSA iteration. Point estimates and uncertainty differed substantially between methods. The incremental CE ratio (ICER) ranged from € 4900 to € 13,800. The single-loop PSA led to a substantially different shape of the CE plane and an overestimation of the uncertainty compared with the other 3 methods. The CE plane for the double-loop PSA showed substantially less uncertainty and a stronger negative correlation between the difference in costs and the difference in effects compared with the other methods. This came at the cost of higher calculation times. Not accounting for heterogeneity, subgroup analysis and the double-loop PSA can be viable options, depending on the decision makers' information needs. The single-loop PSA should not be used in CE research. It disregards the fundamental differences between heterogeneity and sampling uncertainty and overestimates uncertainty as a result. © The Author(s) 2014.
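The double-loop idea can be sketched as two nested simulations: an outer loop draws one set of model parameters per PSA iteration (parameter uncertainty), and an inner loop runs the model for a sample of heterogeneous patients under those fixed parameters, averaging over patients before recording the iteration. The model function, sampling distributions, and loop sizes below are placeholders, not the case-study COPD model.

```python
import numpy as np

def double_loop_psa(model, draw_parameters, draw_patient,
                    n_outer=1000, n_inner=500, rng=None):
    """Outer loop: parameter uncertainty. Inner loop: patient heterogeneity.
    `model(params, patient)` is assumed to return a (cost, effect) pair."""
    rng = np.random.default_rng(rng)
    results = []
    for _ in range(n_outer):
        params = draw_parameters(rng)                 # one draw of uncertain parameters
        inner = [model(params, draw_patient(rng))     # nested simulation over patients
                 for _ in range(n_inner)]
        cost, effect = np.mean(inner, axis=0)         # average out heterogeneity
        results.append((cost, effect))
    return np.array(results)                          # CE-plane points, one per iteration
```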
Leveraging the rice genome sequence for monocot comparative and translational genomics.
Lohithaswa, H C; Feltus, F A; Singh, H P; Bacon, C D; Bailey, C D; Paterson, A H
2007-07-01
Common genome anchor points across many taxa greatly facilitate translational and comparative genomics and will improve our understanding of the Tree of Life. To add to the repertoire of genomic tools applicable to the study of monocotyledonous plants in general, we aligned Allium and Musa ESTs to Oryza BAC sequences and identified candidate Allium-Oryza and Musa-Oryza conserved intron-scanning primers (CISPs). A random sampling of 96 CISP primer pairs, representing loci from 11 of the 12 chromosomes in rice, were tested on seven members of the order Poales and on representatives of the Arecales, Asparagales, and Zingiberales monocot orders. The single-copy amplification success rates of Allium (31.3%), Cynodon (31.4%), Hordeum (30.2%), Musa (37.5%), Oryza (61.5%), Pennisetum (33.3%), Sorghum (47.9%), Zea (33.3%), Triticum (30.2%), and representatives of the palm family (32.3%) suggest that subsets of these primers will provide DNA markers suitable for comparative and translational genomics in orphan crops, as well as for applications in conservation biology, ecology, invasion biology, population biology, systematic biology, and related fields.
Comparison of chain sampling plans with single and double sampling plans
NASA Technical Reports Server (NTRS)
Stephens, K. S.; Dodge, H. F.
1976-01-01
The efficiency of chain sampling is examined through matching of operating characteristic (OC) curves of chain sampling plans (ChSP) with single and double sampling plans. In particular, the operating characteristics of some ChSP-0,3 and ChSP-1,3 plans, as well as ChSP-0,4 and ChSP-1,4 plans, are presented, where the number pairs represent the first and the second cumulative acceptance numbers. The fact that the ChSP procedure uses cumulative results from two or more samples and that the parameters can be varied to produce a wide variety of operating characteristics raises the question of whether it may be possible for such plans to provide a given protection with less inspection than with single or double sampling plans. The operating ratio values reported illustrate the possibilities of matching single and double sampling plans with ChSP. It is shown that chain sampling plans provide improved efficiency over single and double sampling plans having substantially the same operating characteristics.
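For reference when matching OC curves, the operating characteristic of a single sampling plan with sample size n and acceptance number c is simply the binomial probability of observing c or fewer defectives at incoming quality level p. The sketch below computes that baseline curve; the cumulative-acceptance logic of the chain plans themselves would be layered on top and is not reproduced here.

```python
from math import comb

def oc_single(n, c, p):
    """Probability of acceptance for a single sampling plan (n, c) at quality level p."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# example: probability of accepting a lot with 2% defectives under n=50, c=1
print(round(oc_single(50, 1, 0.02), 3))
```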
Leaps and lulls in the developmental transcriptome of Dictyostelium discoideum.
Rosengarten, Rafael David; Santhanam, Balaji; Fuller, Danny; Katoh-Kurasawa, Mariko; Loomis, William F; Zupan, Blaz; Shaulsky, Gad
2015-04-13
Development of the soil amoeba Dictyostelium discoideum is triggered by starvation. When placed on a solid substrate, the starving solitary amoebae cease growth, communicate via extracellular cAMP, aggregate by tens of thousands and develop into multicellular organisms. Early phases of the developmental program are often studied in cells starved in suspension while cAMP is provided exogenously. Previous studies revealed massive shifts in the transcriptome under both developmental conditions and a close relationship between gene expression and morphogenesis, but were limited by the sampling frequency and the resolution of the methods. Here, we combine the superior depth and specificity of RNA-seq-based analysis of mRNA abundance with high frequency sampling during filter development and cAMP pulsing in suspension. We found that the developmental transcriptome exhibits mostly gradual changes interspersed by a few instances of large shifts. For each time point we treated the entire transcriptome as a single phenotype, and were able to characterize development as groups of similar time points separated by gaps. The grouped time points represented gradual changes in mRNA abundance, or molecular phenotype, and the gaps represented times during which many genes are differentially expressed rapidly, and thus the phenotype changes dramatically. Comparing developmental experiments revealed that gene expression in filter developed cells lagged behind those treated with exogenous cAMP in suspension. The high sampling frequency revealed many genes whose regulation is reproducibly more complex than indicated by previous studies. Gene Ontology enrichment analysis suggested that the transition to multicellularity coincided with rapid accumulation of transcripts associated with DNA processes and mitosis. Later development included the up-regulation of organic signaling molecules and co-factor biosynthesis. Our analysis also demonstrated a high level of synchrony among the developing structures throughout development. Our data describe D. discoideum development as a series of coordinated cellular and multicellular activities. Coordination occurred within fields of aggregating cells and among multicellular bodies, such as mounds or migratory slugs that experience both cell-cell contact and various soluble signaling regimes. These time courses, sampled at the highest temporal resolution to date in this system, provide a comprehensive resource for studies of developmental gene expression.
Gettings, M.E.; Showail, Abdullah
1982-01-01
Heat-flow measurements were made at five onland shot points of the 1978 Saudi Arabian seismic deep-refraction line, which sample major tectonic elements of the Arabian Shield along a profile from Ar Riyad to the Farasan Islands. Because of the pattern drilling at each shot point, several holes (60 m deep) could be logged for temperature at each site and thus allow a better estimate of the geothermal gradient. Each site was mapped and sampled in detail, and modal and chemical analyses of representative specimens were made in the laboratory. Thermal conductivities were computed from the modal analyses and single-mineral conductivity data. The resulting heat-flow values, combined with published values for the Red Sea and coastal plain, indicate a three-level pattern, with a heat flow of about 4.5 heat-flow units (HFU) over the Red Sea axial trough, about 3.0 HFU over the shelf and coastal plain, and an essentially constant 1.0 HFU over the Arabian Shield at points well away from the suture zone with the oceanic crust. At three sites where the rocks are granitic, gamma-ray spectrometry techniques were employed to estimate thorium, potassium, and uranium concentrations. The resulting plot of heat generation versus heat flow suggests that in the Arabian Shield the relationship between heat flow and heat production is not linear. More heat-flow data are essential to establish or reject this conclusion.
An Increase of Intelligence in Saudi Arabia, 1977-2010
ERIC Educational Resources Information Center
Batterjee, Adel A.; Khaleefa, Omar; Ali, Khalil; Lynn, Richard
2013-01-01
Normative data for 8-15 year olds for the Standard Progressive Matrices in Saudi Arabia were obtained in 1977 and 2010. The 2010 sample obtained higher average scores than the 1977 sample by 0.78d, equivalent to 11.7 IQ points. This represents a gain of 3.55 IQ points a decade over the 33 year period. (Contains 1 table.)
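The reported figures follow from the conventional IQ standard deviation of 15 points (an assumption made explicit here), applied to the 33-year interval between the two standardizations:

```latex
\[
0.78\,d \times 15 \approx 11.7 \text{ IQ points}, \qquad
\frac{11.7}{33\ \text{years}} \times 10 \approx 3.55 \text{ IQ points per decade}.
\]
```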
Volatiles from a rare Acer spp. honey sample from Croatia.
Jerković, Igor; Marijanović, Zvonimir; Malenica-Staver, Mladenka; Lusić, Drazen
2010-06-24
A rare sample of maple (Acer spp.) honey from Croatia was analysed. Ultrasonic solvent extraction (USE) using: 1) pentane, 2) diethyl ether, 3) a mixture of pentane and diethyl ether (1:2 v/v) and 4) dichloromethane as solvents was applied. All the extracts were analysed by GC and GC/MS. The most representative extracts were 3) and 4). Syringaldehyde was the most striking compound, being dominant in the extracts 2), 3) and 4) with percentages 34.5%, 33.1% and 35.9%, respectively. In comparison to USE results of other single Croatian tree honey samples (Robinia pseudoacacia L. nectar honey, Salix spp. nectar and honeydew honeys, Quercus frainetto Ten. honeydew as well as Abies alba Mill. and Picea abies L. honeydew) and literature data the presence of syringaldehyde, previously identified in maple sap and syrup, can be pointed out as a distinct characteristic of the Acer spp. honey sample. Headspace solid-phase microextraction (HS-SPME) combined with GC and GC/MS identified benzaldehyde (16.5%), trans-linalool oxide (20.5%) and 2-phenylethanol (14.9%) as the major compounds that are common in different honey headspace compositions.
Luka, George; Ahmadi, Ali; Najjaran, Homayoun; Alocilja, Evangelyn; DeRosa, Maria; Wolthers, Kirsten; Malki, Ahmed; Aziz, Hassan; Althani, Asmaa; Hoorfar, Mina
2015-01-01
A biosensor can be defined as a compact analytical device or unit incorporating a biological or biologically derived sensitive recognition element immobilized on a physicochemical transducer to measure one or more analytes. Microfluidic systems, on the other hand, provide throughput processing, enhance transport for controlling the flow conditions, increase the mixing rate of different reagents, reduce sample and reagents volume (down to nanoliter), increase sensitivity of detection, and utilize the same platform for both sample preparation and detection. In view of these advantages, the integration of microfluidic and biosensor technologies provides the ability to merge chemical and biological components into a single platform and offers new opportunities for future biosensing applications including portability, disposability, real-time detection, unprecedented accuracies, and simultaneous analysis of different analytes in a single device. This review aims at representing advances and achievements in the field of microfluidic-based biosensing. The review also presents examples extracted from the literature to demonstrate the advantages of merging microfluidic and biosensing technologies and illustrate the versatility that such integration promises in the future biosensing for emerging areas of biological engineering, biomedical studies, point-of-care diagnostics, environmental monitoring, and precision agriculture. PMID:26633409
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with corresponding thresholds of ~0, 20 and 25 μg C. For sucrose, however, such a discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol compared with the other two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction was to use multi-point-corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction was supported by correlation with optical data.
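The proposed correction reduces to a piecewise choice between the two baseline treatments, keyed to the protocol-specific threshold carbon load. A minimal sketch with illustrative variable names (the thresholds in the comment are those quoted for the IMPROVE-based protocols):

```python
def corrected_tc(tc_multi_point, tc_single_point, carbon_load, threshold):
    """Use multi-point-corrected TC below the protocol-specific threshold,
    single-point-corrected TC at or above it (illustrative variable names)."""
    return tc_multi_point if carbon_load < threshold else tc_single_point

# example: thresholds of roughly 20 and 25 ug C were reported for two protocols
print(corrected_tc(18.0, 19.5, carbon_load=22.0, threshold=20.0))  # -> 19.5
```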
[The NIR spectra based variety discrimination for single soybean seed].
Zhu, Da-Zhou; Wang, Kun; Zhou, Guang-Hua; Hou, Rui-Feng; Wang, Cheng
2010-12-01
With the development of soybean production and processing, quality breeding is becoming more and more important for soybean breeders. Traditional sampling-based detection methods for soybean quality destroy the seed and do not satisfy the requirement of screening early-generation materials in breeding. Near infrared (NIR) spectroscopy has been widely used for soybean quality detection. However, these applications have involved bulk samples and are not suitable for detection of small quantities or single seeds in a breeding program. In the present study, acousto-optic tunable filter (AOTF) NIR spectroscopy was used to measure single soybean seeds. Two varieties of soybean were measured, comprising 60 KENJIANDOU43 seeds and 60 ZHONGHUANG13 seeds. The results showed that NIR spectra combined with soft independent modeling of class analogy (SIMCA) could accurately discriminate the soybean varieties. The classification accuracy for KENJIANDOU43 and ZHONGHUANG13 seeds was 100%. The spectra of single soybean seeds were measured at different positions, showing that seed shape has a significant influence on the measured spectra; therefore, the key point for single-seed measurement is how to accurately acquire the spectra and keep them representative. The spectra of soybeans with a glossy surface had high repeatability, while the spectra of seeds with external defects differed significantly between repeated measurements. For fast screening of early-generation materials in breeding, one could first eliminate the seeds with external defects and then apply NIR spectra for internal quality detection, thereby reducing the influence of seed shape and external defects.
Chen, Rui; Wang, Haotian; Shi, Jun; Hu, Pei
2016-05-01
CYP2D6 is a highly polymorphic enzyme. Determining its phenotype before CYP2D6 substrate treatment can avoid dose-dependent adverse events or therapeutic failures. Alternative phenotyping methods of CYP2D6 were compared to evaluate the appropriate and precise time points for phenotyping after single-dose and multiple-dose administration of 30-mg controlled-release (CR) dextromethorphan (DM) and to explore the antimodes for potential sampling methods. This was an open-label, single and multiple-dose study. Twenty-one subjects were assigned to receive a single dose of CR DM 30 mg orally, followed by a 3-day washout period prior to oral administration of CR DM 30 mg every 12 hours for 6 days. Metabolic ratios (MRs) from AUC∞ after single dosing and from AUC0-12h at steady state were taken as the gold standard. The correlations of metabolic ratios of DM to dextrorphan (MRDM/DX) values based on different phenotyping methods were assessed. Linear regression formulas were derived to calculate the antimodes for potential sample methods. In the single-dose part of the study, statistically significant correlations were found between MRDM/DX from AUC∞ and from serial plasma points from 1 to 30 hours or from urine (all p-values < 0.001). In the multiple-dose part, statistically significant correlations were found between MRDM/DX from AUC0-12h on day 6 and MRDM/DX from serial plasma points from 0 to 36 hours after the last dosing (all p-values < 0.001). Based on the reported urinary antimode and linear regression analysis, the antimodes of AUC and plasma points were derived to profile the trend of antimodes as the drug concentrations changed. MRDM/DX from plasma points had good correlations with MRDM/DX from AUC. Plasma points from 1 to 30 hours after a single dose of 30-mg CR DM and any plasma point at steady state after multiple doses of CR DM could potentially be used for phenotyping of CYP2D6.
Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples
NASA Astrophysics Data System (ADS)
Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.
2014-12-01
Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine if traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies that scale with channel width/mean velocity and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability that is larger scale than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5 minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
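The quoted period follows directly from the spectral break frequency:

```latex
\[
T = \frac{1}{f} = \frac{1}{0.003\ \text{Hz}} \approx 333\ \text{s} \approx 5.5\ \text{min}.
\]
```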
USDA-ARS?s Scientific Manuscript database
The ionome, or elemental profile, of a maize kernel represents at least two distinct ideas. First, the collection of elements within the kernel are food, feed and feedstocks for people, animals and industrial processes. Second, the ionome of the kernel represents a developmental end point that can s...
Study on high-resolution representation of terraces in Shanxi Loess Plateau area
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ma, Lei
2008-10-01
A new elevation-point sampling method, the TIN-based Sampling Method (TSM), and a new visualization method, the Elevation Addition Method (EAM), are put forth for representing the typical terraces of the Shanxi loess plateau area. The DEM Feature Points and Lines Classification (DEPLC), put forth by the authors in 2007, is refined for depicting the main path in the study area. The EAM is used to visualize the terraces and the path in the study area. A total of 406 key elevation points and 15 feature-constrained lines sampled by this method are used to construct CD-TINs, which depict the terraces and path correctly and effectively. Our case study shows that the new TSM sampling method is reasonable and feasible. Complicated micro-terrains such as terraces and paths can be represented successfully with high resolution and high efficiency by use of the refined DEPLC, TSM and CD-TINs, and both the terraces and the main path are visualized well with the EAM even when the terrace height is no more than 1 m.
Economic and Microbiologic Evaluation of Single-Dose Vial Extension for Hazardous Drugs
Rowe, Erinn C.; Savage, Scott W.; Rutala, William A.; Weber, David J.; Gergen-Teague, Maria; Eckel, Stephen F.
2012-01-01
Purpose: The update of US Pharmacopeia Chapter <797> in 2008 included guidelines stating that single-dose vials (SDVs) opened and maintained in an International Organization for Standardization Class 5 environment can be used for up to 6 hours after initial puncture. A study was conducted to evaluate the cost of discarding vials after 6 hours and to further test sterility of vials beyond this time point, subsequently defined as the beyond-use date (BUD). Methods: Financial determination of SDV waste included 2 months of retrospective review of all doses prescribed. Additionally, actual waste log data were collected. Active and control vials (prepared using sterilized trypticase soy broth) were recovered, instead of discarded, at the defined 6-hour BUD. Results: The institution-specific waste of 19 selected SDV medications discarded at 6 hours was calculated at $766,000 annually, and tracking waste logs for these same medications was recorded at $770,000 annually. Microbiologic testing of vial extension beyond 6 hours showed that 11 (1.86%) of 592 samples had one colony-forming unit on one of two plates. Positive plates were negative at subsequent time points, and all positives were single isolates most likely introduced during the plating process. Conclusion: The cost of discarding vials at 6 hours was significant for hazardous medications in a large academic medical center. On the basis of microbiologic data, vial BUD extension demonstrated a contamination frequency of 1.86%, which likely represented exogenous contamination; vial BUD extension for the tested drugs showed no growth at subsequent time points and could provide an annual cost savings of more than $600,000. PMID:23180998
NASA Astrophysics Data System (ADS)
Estrany, Joan; Martinez-Carreras, Nuria
2013-04-01
Tracers have been acknowledged as a useful tool to identify sediment sources, based upon a variety of techniques and chemical and physical sediment properties. Sediment fingerprinting supports the notion that changes in sedimentation rates are not just related to increased/reduced erosion and transport in the same areas, but also to the establishment of different pathways increasing sediment connectivity. The Na Borges is a Mediterranean lowland agricultural river basin (319 km2) where traditional soil and water conservation practices have been applied over millennia to provide effective protection of cultivated land. During the twentieth century, industrialisation and pressure from tourism activities have increased urbanised surfaces, which have impacts on the processes that control streamflow. Within this context, source material sampling in Na Borges focused on obtaining representative samples from potential sediment sources (comprising topsoil, i.e., 0-2 cm) susceptible to mobilisation by water and subsequent routing to the river channel network, while those representing channel bank sources were collected from actively eroding channel margins and ditches. Samples of road dust and of solids from sewage treatment plants were also collected. During two hydrological years (2004-2006), representative suspended sediment samples for use in source fingerprinting studies were collected at four flow gauging stations and at eight secondary sampling points using time-integrating samplers. Likewise, representative bed-channel sediment samples were obtained using the resuspension approach at eight sampling points in the main stem of the Na Borges River. These deposits represent the fine sediment temporarily stored in the bed-channel and were also used for tracing source contributions. A total of 102 individual time-integrated sediment samples, 40 bulk samples and 48 bed-sediment samples were collected. Upon return to the laboratory, source material samples were oven-dried at 40° C, disaggregated using a pestle and mortar, and dry sieved to
Microfluidic Remote Loading for Rapid Single-Step Liposomal Drug Preparation
Hood, R.R.; Vreeland, W. N.; DeVoe, D.L.
2014-01-01
Microfluidic-directed formation of liposomes is combined with in-line sample purification and remote drug loading for single step, continuous-flow synthesis of nanoscale vesicles containing high concentrations of stably loaded drug compounds. Using an on-chip microdialysis element, the system enables rapid formation of large transmembrane pH and ion gradients, followed by immediate introduction of amphipathic drug for real-time remote loading into the liposomes. The microfluidic process enables in-line formation of drug-laden liposomes with drug:lipid molar ratios of up to 1.3, and a total on-chip residence time of approximately 3 min, representing a significant improvement over conventional bulk-scale methods which require hours to days for combined liposome synthesis and remote drug loading. The microfluidic platform may be further optimized to support real-time generation of purified liposomal drug formulations with high concentrations of drugs and minimal reagent waste for effective liposomal drug preparation at or near the point of care. PMID:25003823
NASA Technical Reports Server (NTRS)
Moustafa, Samiah E.; Rennermalm, Asa K.; Roman, Miguel O.; Wang, Zhuosen; Schaaf, Crystal B.; Smith, Laurence C.; Koenig, Lora S.; Erb, Angela
2017-01-01
MODerate resolution Imaging Spectroradiometer (MODIS) albedo products have been validated over spatially uniform, snow-covered areas of the Greenland ice sheet (GrIS) using the so-called single 'point-to-pixel' method. This study expands on this methodology by applying a 'multiple-point-to-pixel' method and examination of spatial autocorrelation (here using semivariogram analysis) by using in situ observations, high-resolution WorldView-2 (WV-2) surface reflectances, and MODIS Collection V006 daily blue-sky albedo over spatially heterogeneous surfaces in the lower ablation zone in southwest Greenland. Our results using 232 ground-based samples within two MODIS pixels, one being more spatially heterogeneous than the other, show little difference in accuracy among narrow and broad band albedos (except for Band 2). Within the more homogeneous pixel area, in situ and MODIS albedos were very close (error varied from -4% to +7%) and within the range of ASD standard errors. The semivariogram analysis revealed that the minimum observational footprint needed for a spatially representative sample is 30 m. In contrast, over the more spatially heterogeneous surface pixel, a minimum footprint size was not quantifiable due to spatial autocorrelation, and far exceeds the effective resolution of the MODIS retrievals. Over the high spatial heterogeneity surface pixel, MODIS is lower than ground measurements by 4-7%, partly due to a known in situ undersampling of darker surfaces that often are impassable by foot (e.g., meltwater features and shadowing effects over crevasses). Despite the sampling issue, our analysis errors are very close to the stated general accuracy of the MODIS product of 5%. Thus, our study suggests that the MODIS albedo product performs well in a very heterogeneous, low-albedo, area of the ice sheet ablation zone. Furthermore, we demonstrate that single 'point-to-pixel' methods alone are insufficient for characterizing and validating the variation of surface albedo displayed in the lower ablation area. This is true because the distribution of in situ data deviations from MODIS albedo shows a substantial range, with the average values for the 10th and 90th percentiles being -0.30 and 0.43 across all bands. Thus, if only a single point is taken for ground validation, and is randomly selected from either distribution tail, the error would appear to be considerable. Given the need for multiple in-situ points, concurrent albedo measurements derived from existing AWSs, low-flying vehicles (airborne or unmanned), and high-resolution imagery (WV-2) are needed to resolve high sub-pixel variability in the ablation zone, and thus further improve our characterization of Greenland's surface albedo.
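The semivariogram analysis used above to judge the representative footprint can be sketched with the standard empirical estimator over point pairs binned by separation distance; the coordinates, albedo values, and bin widths below are placeholders rather than the study's data.

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose separation falls in each bin."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        mask = (dist >= bin_edges[b]) & (dist < bin_edges[b + 1])
        if mask.any():
            gamma[b] = sqdiff[mask].mean()   # average semivariance in this lag bin
    return gamma
```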
Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes
Hernández-Montes, Maria del Socorro; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza
2009-01-01
Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. This paper presents advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full-field-of-view characterization of nanometer scale sound-induced displacements of the surface of the TM at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical-research environment to address basic-science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad-frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and cadaveric human samples are shown, and their potential utility discussed. PMID:19566316
Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes
NASA Astrophysics Data System (ADS)
Del Socorro Hernández-Montes, Maria; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza
2009-05-01
Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. We present advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full field-of-view characterization of nanometer-scale sound-induced displacements of the TM surface at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical research environment to address basic science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and human cadaver samples are shown, and their potential utility is discussed.
Point-Sampling and Line-Sampling Probability Theory, Geometric Implications, Synthesis
L.R. Grosenbaugh
1958-01-01
Foresters concerned with measuring tree populations on definite areas have long employed two well-known methods of representative sampling. In list or enumerative sampling the entire tree population is tallied with a known proportion being randomly selected and measured for volume or other variables. In area sampling all trees on randomly located plots or strips...
NASA Astrophysics Data System (ADS)
Cathey, Henrietta E.; Nash, Barbara P.
2009-11-01
The Bruneau-Jarbidge eruptive center of the central Snake River Plain in southern Idaho, USA produced multiple rhyolite lava flows with volumes of <10 km³ to 200 km³ each from ~11.2 to 8.1 Ma, most of which follow its climactic phase of large-volume explosive volcanism, represented by the Cougar Point Tuff, from 12.7 to 10.5 Ma. These lavas represent the waning stages of silicic volcanism at a major eruptive center of the Yellowstone hotspot track. Here we provide pyroxene compositions and thermometry results from several lavas that demonstrate that the demise of the silicic volcanic system was characterized by sustained, high pre-eruptive magma temperatures (mostly ≥950 °C) prior to the onset of exclusively basaltic volcanism at the eruptive center. Pyroxenes display a variety of textures in single samples, including solitary euhedral crystals as well as glomerocrysts, crystal clots and annealed microgranular inclusions of pyroxene ± magnetite ± plagioclase. Pigeonite and augite crystals are unzoned, and there are no detectable differences in major and minor element compositions according to textural variety: mineral compositions in the microgranular inclusions and crystal clots are identical to those of phenocrysts in the host lavas. In contrast to members of the preceding Cougar Point Tuff that host polymodal glass and mineral populations, pyroxene compositions in each of the lavas are characterized by single rather than multiple discrete compositional modes. Collectively, the lavas reproduce and extend the range of Fe-Mg pyroxene compositional modes observed in the Cougar Point Tuff to more Mg-rich varieties. The compositionally homogeneous populations of pyroxene in each of the lavas, as well as the lack of core-to-rim zonation in individual crystals suggest that individual eruptions each were fed by compositionally homogeneous magma reservoirs, and similarities with the Cougar Point Tuff suggest consanguinity of such reservoirs to those that supplied the polymodal Cougar Point Tuff. Pyroxene thermometry results obtained using QUILF equilibria yield pre-eruptive magma temperatures of 905 to 980 °C, and individual modes consistently record higher Ca content and higher temperatures than pyroxenes with equivalent Fe-Mg ratios in the preceding Cougar Point Tuff. As is the case with the Cougar Point Tuff, evidence for up-temperature zonation within single crystals that would be consistent with recycling of sub- or near-solidus material from antecedent magma reservoirs by rapid reheating is extremely rare. Also, the absence of intra-crystal zonation, particularly at crystal rims, is not easily reconciled with cannibalization of caldera fill that subsided into pre-eruptive reservoirs. The textural, compositional and thermometric results rather are consistent with minor re-equilibration to higher temperatures of the unerupted crystalline residue from the explosive phase of volcanism, or perhaps with newly generated magmas from source materials very similar to those for the Cougar Point Tuff. Collectively, the data suggest that most of the pyroxene compositional diversity that is represented by the tuffs and lavas was produced early in the history of the eruptive center and that compositions across this range were preserved or duplicated through much of its lifetime.
Mineral compositions and thermometry of the multiple lavas suggest that unerupted magmas residual to the explosive phase of volcanism may have been stored at sustained, high temperatures subsequent to the explosive phase of volcanism. If so, such persistent high temperatures and large eruptive magma volumes likewise require an abundant and persistent supply of basalt magmas to the lower and/or mid-crust, consistent with the tectonic setting of a continental hotspot.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
ERIC Educational Resources Information Center
Leonardi, Fabio; Spazzafumo, Liana; Marcellini, Fiorella
2005-01-01
Based on the constructionist point of view applied to Subjective Well-Being (SWB), five hypotheses were advanced about the predictive power of top-down effects and bottom-up processes over a five-year period. The sample consisted of 297 respondents, who represent the Italian sample of a European longitudinal survey; the first phase was…
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit-radius hyper-sphere, (ii) relocating the N points onto a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields.
In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629–635 (2000).
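As a rough illustration of the geometric idea behind the hypersphere-based stratified sampling outlined in steps (i)-(iii) above, the code below draws N directions on an M-dimensional unit sphere, assigns radii from equal-probability quantiles of the chi distribution, and maps the resulting points onto correlated lognormal conductivity realizations through a Cholesky factor. The exponential covariance model, parameter values, and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import chi

def stratified_gaussian_realizations(n_real, coords, corr_len=10.0, seed=0):
    """N stratified realizations of a zero-mean, unit-variance Gaussian field at M locations."""
    rng = np.random.default_rng(seed)
    m = len(coords)
    # assumed exponential covariance between the M simulation locations
    dist = np.abs(coords[:, None] - coords[None, :])
    cov = np.exp(-dist / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(m))
    # (i) directions spread over the M-dimensional unit sphere
    u = rng.standard_normal((n_real, m))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    # (ii) stratified radii: one chi(M) quantile per realization (equal-probability strata)
    probs = (np.arange(n_real) + 0.5) / n_real
    radii = chi(df=m).ppf(probs)
    # (iii) map the hyper-sphere points onto hyper-ellipsoids of the correlated Gaussian field
    return (radii[:, None] * u) @ L.T

coords = np.linspace(0.0, 100.0, 50)        # 1-D transect of conductivity cells, metres
y = stratified_gaussian_realizations(n_real=20, coords=coords)
ln_k = np.exp(-5.0 + 1.0 * y)               # lognormal hydraulic conductivity, arbitrary log-mean and log-sd
print(ln_k.shape)                           # (20 realizations, 50 locations)
```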
40 CFR 141.22 - Turbidity sampling and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...
40 CFR 141.22 - Turbidity sampling and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...
40 CFR 141.22 - Turbidity sampling and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...
40 CFR 141.22 - Turbidity sampling and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...
40 CFR 141.22 - Turbidity sampling and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...
Sampled control stability of the ESA instrument pointing system
NASA Astrophysics Data System (ADS)
Thieme, G.; Rogers, P.; Sciacovelli, D.
Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.
Neurosensory Deficits Vary as a Function of Point of Care in Pediatric Mild Traumatic Brain Injury.
Mayer, Andrew R; Wertz, Christopher; Ryman, Sephira G; Storey, Eileen P; Park, Grace; Phillips, John; Dodd, Andrew B; Oglesbee, Scott; Campbell, Richard; Yeo, Ronald A; Wasserott, Benjamin; Shaff, Nicholas A; Leddy, John J; Mannix, Rebekah; Arbogast, Kristy B; Meier, Timothy B; Grady, Matthew F; Master, Christina L
2018-05-15
Neurosensory abnormalities are frequently observed following pediatric mild traumatic brain injury (pmTBI) and may underlie the expression of several common concussion symptoms and delay recovery. Importantly, active evaluation of neurosensory functioning more closely approximates real-world (e.g., physical and academic) environments that provoke symptom worsening. The current study determined whether symptom provocation (i.e., during neurosensory examination) improved classification accuracy relative to pre-examination symptom levels and whether symptoms varied as a function of point of care. Eighty-one pmTBI were recruited from the pediatric emergency department (PED; n = 40) or outpatient concussion clinic (n = 41), along with matched (age, sex, and education) healthy controls (HC; n = 40). All participants completed a brief (∼ 12 min) standardized neurosensory examination and clinical questionnaires. The magnitude of symptom provocation upon neurosensory examination was significantly higher for concussion clinic than for PED patients. Symptom provocation significantly improved diagnostic classification accuracy relative to pre-examination symptom levels, although the magnitude of improvement was modest, and was greater in the concussion clinic. In contrast, PED patients exhibited worse performance on measures of balance, vision, and oculomotor functioning than the concussion clinic patients, with no differences observed between both samples and HC. Despite modest sample sizes, current findings suggest that point of care represents a critical but highly under-studied variable that may influence outcomes following pmTBI. Studies that rely on recruitment from a single point of care may not generalize to the entire pmTBI population in terms of how neurosensory deficits affect recovery.
Um, Ji-Yong; Kim, Yoon-Jee; Cho, Seong-Eun; Chae, Min-Kyun; Kim, Byungsub; Sim, Jae-Yoon; Park, Hong-June
2015-02-01
A single-chip 32-channel analog beamformer is proposed. It achieves a delay resolution of 4 ns and a maximum delay range of 768 ns. It has a focal-point based architecture, which consists of 7 sub-analog beamformers (sub-ABF). Each sub-ABF performs an RX focusing operation for a single focal point. Seven sub-ABFs perform a time-interleaving operation to achieve the maximum delay range of 768 ns. Phase interpolators are used in sub-ABFs to generate sampling clocks with the delay resolution of 4 ns from a low-frequency system clock of 5 MHz. Each sub-ABF samples 32 echo signals at different times into sampling capacitors, which work as analog memory cells. The sampled 32 echo signals of each sub-ABF originate from one target focal point at one instant. They are summed at one instant in a sub-ABF to perform the RX focusing for the target focal point. The proposed ABF chip has been fabricated in a 0.13-μm CMOS process with an active area of 16 mm². The total power consumption is 287 mW. In measurement, the digital echo signals from a commercial ultrasound medical imaging machine were applied to the fabricated chip through commercial DAC chips. Due to the speed limitation of the DAC chips, the delay resolution was relaxed to 10 ns for the real-time measurement. A linear array transducer with no steering operation is used in this work.
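The receive-focusing step described above amounts to delaying each channel by the difference in acoustic path length from the focal point and then summing, with the delays quantized to the 4 ns resolution of the phase interpolators. A minimal sketch of that delay calculation is shown below; the array geometry, sound speed, and focal depth are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

C = 1540.0           # assumed speed of sound in tissue, m/s
DELAY_STEP = 4e-9    # beamformer delay resolution, s
N_CH = 32

# assumed geometry: 32-element linear array, 0.3 mm pitch, centred on x = 0, no steering
elem_x = (np.arange(N_CH) - (N_CH - 1) / 2) * 0.3e-3
focus_x, focus_z = 0.0, 20e-3               # focal point 20 mm straight ahead

# time of flight from the focal point to each element
arrival = np.hypot(elem_x - focus_x, focus_z) / C
# delay the earlier arrivals so all 32 channels line up before summation
delay = arrival.max() - arrival
taps = np.round(delay / DELAY_STEP).astype(int)   # integer 4 ns delay taps per channel
print("delay taps:", taps)
print(f"largest programmed delay: {taps.max() * DELAY_STEP * 1e9:.0f} ns (must stay below 768 ns)")
```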
The effects of spatial sampling choices on MR temperature measurements.
Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L
2011-02-01
The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
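The curvature argument can be illustrated by averaging a sharply peaked temperature distribution over voxels of different sizes: the larger the voxel relative to the peak width, the more partial-volume averaging underestimates the maximum, and shifting the grid by half a voxel changes the answer again. The sketch below uses an assumed Gaussian hot spot and in-plane voxel widths; it is a toy calculation, not the authors' simulation.

```python
import numpy as np

def measured_peak(voxel_mm, sigma_mm=1.5, peak_c=20.0, offset_mm=0.0, n=1201):
    """Peak temperature rise reported after averaging a Gaussian hot spot over one square voxel."""
    x = np.linspace(-15.0, 15.0, n)                                  # mm, fine reference grid
    xx, yy = np.meshgrid(x, x)
    t = peak_c * np.exp(-(xx**2 + yy**2) / (2 * sigma_mm**2))        # true temperature rise, deg C
    half = voxel_mm / 2
    # average the true field over the footprint of a voxel centred at (offset, offset)
    m = (np.abs(xx - offset_mm) <= half) & (np.abs(yy - offset_mm) <= half)
    return t[m].mean()

for v in (1.0, 2.0, 3.0):
    centred = measured_peak(v)                     # voxel centred on the hot spot
    shifted = measured_peak(v, offset_mm=v / 2)    # worst-case half-voxel shift of the grid
    print(f"{v:.0f} mm voxel: peak {centred:.1f} C centred, {shifted:.1f} C after grid shift")
```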
Risk factors for domestic violence in Curacao.
van Wijk, N Ph L; de Bruijn, J G M
2012-10-01
One out of three people (25% of men, 38% of women) in Curacao have experienced some form of domestic violence at some point in their adult lives. The most significant risk factors for domestic violence in Curacao are the female gender, a young age, low education, and experiencing domestic violence victimization in childhood. Divorce, single parenthood, and unemployment increase the risk for women, but not for men. These findings are consistent with current literature on the subject. Further research on the context, nature, and severity of domestic violence in the Caribbean is necessary. Studies should preferably combine the strengths of national crime surveys and family conflict studies: nationally representative samples (including men and women) and questionnaires that include all possible experiences of psychological, physical, and sexual assaults by current and former partners, family, and friends.
10 CFR 75.8 - IAEA inspections.
Code of Federal Regulations, 2011 CFR
2011-01-01
... exports) or § 75.43(c) (pertaining to imports) at any place where nuclear material may be located; (3... nuclear material at key measurement points for material balance accounting are representative; (3) Verify... samples at key measurement points for material balance accounting are taken in accordance with procedures...
10 CFR 75.8 - IAEA inspections.
Code of Federal Regulations, 2010 CFR
2010-01-01
... inspection at a facility, to: (1) Examine records kept under § 75.21; (2) Observe that the measurements of nuclear material at key measurement points for material balance accounting are representative; (3) Verify... samples at key measurement points for material balance accounting are taken in accordance with procedures...
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
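A stripped-down version of that inverse problem, in an infinite homogeneous conductor rather than a bounded medium, can be written in a few lines: sum the potentials of the N belt dipoles at the electrode positions, then search for the single dipole (position and moment) that minimizes the misfit. The electrode layout, conductivity, source geometry, and optimizer below are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.optimize import least_squares

SIGMA = 0.2   # assumed medium conductivity, S/m (infinite homogeneous conductor)

def dipole_potential(r_e, r_d, p):
    """Potential of a point dipole with moment p at r_d, evaluated at electrode positions r_e."""
    rvec = r_e - r_d
    r = np.linalg.norm(rvec, axis=1)
    return (rvec @ p) / (4 * np.pi * SIGMA * r**3)

rng = np.random.default_rng(1)
e = rng.standard_normal((64, 3))
electrodes = 0.1 * e / np.linalg.norm(e, axis=1, keepdims=True)   # 64 electrodes on a 10 cm sphere

# distributed belt source: 40 small dipoles on a 15 mm circle, displaced from the origin
n = 40
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
belt = np.c_[0.015 * np.cos(theta), 0.015 * np.sin(theta), np.full(n, 0.02)]
moment = np.array([1e-6, 0.0, 0.0])                               # total moment, arbitrary units
v_meas = sum(dipole_potential(electrodes, rd, moment / n) for rd in belt)

def residual(x):
    """Misfit between the single-dipole model (position x[:3], moment x[3:]) and the data."""
    return dipole_potential(electrodes, x[:3], x[3:]) - v_meas

fit = least_squares(residual, x0=np.r_[0.0, 0.0, 0.03, 1e-7, 0.0, 0.0])
print("equivalent dipole position (m):", np.round(fit.x[:3], 4))
print("belt geometric centre      (m):", np.round(belt.mean(axis=0), 4))
```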
Point detection of bacterial and viral pathogens using oral samples
NASA Astrophysics Data System (ADS)
Malamud, Daniel
2008-04-01
Oral samples, including saliva, offer an attractive alternative to serum or urine for diagnostic testing. This is particularly true for point-of-use detection systems. The various types of oral samples that have been reported in the literature are presented here along with the wide variety of analytes that have been measured in saliva and other oral samples. The paper focuses on utilizing point-detection of infectious disease agents, and presents work from our group on a rapid test for multiple bacterial and viral pathogens by monitoring a series of targets. It is thus possible in a single oral sample to identify multiple pathogens based on specific antigens, nucleic acids, and host antibodies to those pathogens. The value of such a technology for detecting agents of bioterrorism at remote sites is discussed.
The Assessment of Selectivity in Different Quadrupole-Orbitrap Mass Spectrometry Acquisition Modes
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Wegh, Robin S.; Meijer, Thijs; Nielen, Michel W. F.
2015-02-01
Selectivity of the confirmation of identity in liquid chromatography (tandem) mass spectrometry using Q-Orbitrap instrumentation was assessed using different acquisition modes based on a representative experimental data set constructed from 108 samples, including six different matrix extracts and containing over 100 analytes each. Single-stage full scan, all-ion fragmentation, and product ion scanning were applied. By generating reconstructed ion chromatograms using a unit mass window in targeted MS2, selected reaction monitoring (SRM), regularly applied using triple-quadrupole instruments, was mimicked. This facilitated the comparison of single-stage full scan, all-ion fragmentation, (mimicked) SRM, and product ion scanning applying a mass window down to 1 ppm. Single-factor Analysis of Variance was carried out on the variance (s²) of the mass error to determine which factors and interactions are significant parameters with respect to selectivity. We conclude that selectivity is related to the target compound (mainly the mass defect), the matrix, sample clean-up, concentration, and mass resolution. Selectivity of the different instrumental configurations was quantified by counting the number of interfering peaks observed in the chromatograms. We conclude that precursor ion selection significantly contributes to selectivity: monitoring of a single product ion at high mass accuracy with a 1 Da precursor ion window proved to be as selective as, or more selective than, monitoring two transition products in mimicked SRM. In contrast, monitoring a single fragment in all-ion fragmentation mode results in significantly lower selectivity versus mimicked SRM. After a thorough inter-laboratory evaluation study, the results of this study can be used for a critical reassessment of the current identification points system and contribute to the next generation of evidence-based and robust performance criteria in residue analysis and sports doping.
Writing for Distance Education. Samples Booklet.
ERIC Educational Resources Information Center
International Extension Coll., Cambridge (England).
Approaches to the format, design, and layout of printed instructional materials for distance education are illustrated in 36 samples designed to accompany the manual, "Writing for Distance Education." Each sample is presented on a single page with a note pointing out its key features. Features illustrated include use of typescript layout, a comic…
Wang, B; Brueni, L G; Isensee, C; Meyer, T; Bock, N; Ravens-Sieberer, U; Klasen, F; Schlack, R; Becker, A; Rothenberger, A
2018-06-01
We examined whether there are certain dysregulation profile trajectories in childhood that may predict an elevated risk for mental disorders in later adolescence. Participants (N = 554) were drawn from a representative community sample of German children, 7-11 years old, who were followed over four measurement points (baseline, 1, 2 and 6 years later). Dysregulation profile, derived from the parent report of the Strengths and Difficulties Questionnaire, was measured at the first three measurement points, while symptoms of attention deficit hyperactivity disorder (ADHD), anxiety and depression were assessed at the fourth measurement point. We used latent class growth analysis to investigate developmental trajectories in the development of the dysregulation profile. The predictive value of dysregulation profile trajectories for later ADHD, anxiety and depression was examined by linear regression. For descriptive comparison, the predictive value of a single measurement (baseline) was calculated. Dysregulation profile was a stable trait during childhood. Boys and girls had similar levels of dysregulation profile over time. Two developmental subgroups were identified, namely the low dysregulation profile and the high dysregulation profile trajectory. The group membership in the high dysregulation profile trajectory (n = 102) was best predictive of later ADHD, regardless of an individual's gender and age. It explained 11% of the behavioural variance. For anxiety this was 8.7% and for depression 5.6%, including some gender effects. The single-point measurement was less predictive. An enduring high dysregulation profile in childhood showed some predictive value for psychological functioning 4 years later. Hence, it might be helpful in the preventive monitoring of children at risk.
Spatiotemporal variability of carbon dioxide and methane in a eutrophic lake
NASA Astrophysics Data System (ADS)
Loken, Luke; Crawford, John; Schramm, Paul; Stadler, Philipp; Stanley, Emily
2017-04-01
Lakes are important regulators of global carbon cycling and conduits of greenhouse gases to the atmosphere; however, most efflux estimates for individual lakes are based on extrapolations from a single location. Within-lake variability in carbon dioxide (CO2) and methane (CH4) arises from differences in water sources, physical mixing, and local transformations, all of which can be influenced by anthropogenic disturbances and vary at multiple temporal and spatial scales. During the 2016 open-water season (March - December), we mapped surface water concentrations of CO2 and CH4 weekly in a eutrophic lake (Lake Mendota, WI, USA), which has a predominately agricultural and urban watershed. In total we produced 26 maps of each gas based on 10,000 point measurements distributed across the lake surface. Both gases displayed relatively consistent spatial patterns over the stratified period but exhibited remarkable heterogeneity on each sample date. CO2 was generally undersaturated (global mean: 0.84X atmospheric saturation) throughout the lake's pelagic zone and often differed near river inlets and shorelines. The lake was routinely extremely supersaturated with CH4 (global mean: 105X atmospheric saturation), with greater concentrations in littoral areas that contained organic-rich sediments. During fall mixis, both CO2 and CH4 increased substantially, and concentrations were not uniform across the lake surface. CO2 and CH4 were higher on the upwind side of the lake due to upwelling of enriched hypolimnetic water. While the lake acted as a modest sink for atmospheric CO2 during the stratified period, the lake released substantial amounts of CO2 during turnover and continually emitted CH4, offsetting any reduction in atmospheric warming potential from summertime CO2 uptake. These data-rich maps illustrate how lake-wide surface concentrations and lake-scale efflux estimates based on single-point measurements diverge from spatially weighted calculations. Neither gas is well represented by a sample collected at the lake's central buoy, and thus extrapolations from a single sampling location may not be adequate to assess lake-wide CO2 and CH4 dynamics in human-dominated landscapes.
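The divergence between buoy-based and spatially weighted estimates can be made concrete with a small calculation: bin the mapped point measurements onto an equal-area grid, average the cell means for a lake-wide value, and compare it with the mean of samples near the lake centre. The sketch below does this on synthetic concentrations with an assumed littoral enrichment; it is illustrative only and uses none of the study's data.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(2)
# ~10,000 mapped surface measurements on a unit-square "lake" (synthetic values)
x, y = rng.uniform(-1, 1, 10_000), rng.uniform(-1, 1, 10_000)
dist_to_shore = 1 - np.maximum(np.abs(x), np.abs(y))
ch4 = 0.5 + 3.0 * np.exp(-dist_to_shore / 0.15) + rng.normal(0.0, 0.05, x.size)  # littoral enrichment

# spatially weighted lake-wide mean: average of equal-area grid-cell means
cell_means, *_ = binned_statistic_2d(x, y, ch4, statistic="mean", bins=20)
weighted_mean = np.nanmean(cell_means)

# "central buoy" estimate: mean of the samples within 0.1 of the lake centre
central_mean = ch4[np.hypot(x, y) < 0.1].mean()
print(f"spatially weighted mean: {weighted_mean:.2f}, central-point mean: {central_mean:.2f}")
```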
Pointing and Jitter Control for the USNA Multi-Beam Combining System
2013-05-10
previous work, an adaptive H-infinity optimal controller has been developed to control a single beam using a beam position detector for feedback... turbulence and airborne particles, platform jitter, lack of feedback from the target, and current laser technology represent just a few of these... lasers. Solid state lasers, however, cannot currently provide high enough power levels to destroy a target using a single beam. On solid-state
Maddalena, Damian; Hoffman, Forrest; Kumar, Jitendra; Hargrove, William
2014-08-01
Sampling networks rarely conform to spatial and temporal ideals, often comprising sampling points that are unevenly distributed and located in less-than-ideal locations due to access constraints, budget limitations, or political conflict. Quantifying the global, regional, and temporal representativeness of these networks by quantifying the coverage of network infrastructure highlights the capabilities and limitations of the data collected, facilitates upscaling and downscaling for modeling purposes, and improves the planning efforts for future infrastructure investment under current conditions and future modeled scenarios. The work presented here utilizes multivariate spatiotemporal clustering analysis and representativeness analysis for quantitative landscape characterization and assessment of the Fluxnet, RAINFOR, and ForestGEO networks. Results include ecoregions that highlight patterns of bioclimatic, topographic, and edaphic variables and quantitative representativeness maps of individual and combined networks.
Pairing call-response surveys and distance sampling for a mammalian carnivore
Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.
2015-01-01
Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (be it a single or group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km² (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than a true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
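Under a standard point-transect approximation, the detection-corrected index follows directly from the numbers reported above: with k survey points, n detections, truncation distance w, and estimated detection probability p within w, density is roughly n / (k · π · w² · p). The back-of-the-envelope version below uses the values in the abstract and ignores the fitted detection-function details, so it reproduces the reported index only approximately.

```python
import math

k = 524        # survey points
n = 75         # detections (each call, single or group, counted as one territorial pair)
w_km = 1.8     # detectability threshold used as the truncation distance, km
p_hat = 0.17   # estimated probability of detecting a calling coyote within w

effective_area = k * math.pi * w_km**2 * p_hat      # km^2 effectively surveyed
density = n / effective_area                        # pairs per km^2, detection-corrected
print(f"index: {density * 10:.2f} pairs per 10 km^2")   # ~0.8, same range as the reported 0.75
```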
Adjusted variable plots for Cox's proportional hazards regression model.
Hall, C B; Zeger, S L; Bandeen-Roche, K J
1996-01-01
Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
A line-scan hyperspectral Raman system for spatially offset Raman spectroscopy
USDA-ARS?s Scientific Manuscript database
Conventional methods of spatially offset Raman spectroscopy (SORS) typically use single-fiber optical measurement probes to slowly and incrementally collect a series of spatially offset point measurements moving away from the laser excitation point on the sample surface, or arrays of multiple fiber ...
Point-contact Andreev reflection spectroscopy on Bi2Se3 single crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granstrom, C. R.; Fridman, I.; Lei, H. -C.
In order to study how Andreev reflection (AR) occurs between a superconductor and a three-dimensional topological insulator (TI), we use superconducting Nb tips to perform point-contact AR spectroscopy at 4.2 K on as-grown single crystals of Bi2Se3. Scanning tunneling spectroscopy and scanning tunneling microscopy are also used to characterize the superconducting tip and both the doping level and surface condition of the TI sample. Furthermore, the point-contact measurements show clear spectral signatures of AR, as well as a depression of zero-bias conductance with decreasing junction impedance. The latter observation can be attributed to interfacial Rashba spin-orbit coupling, and the presence of bulk bands at the Fermi level in our samples suggests that bulk states of Bi2Se3 are involved in the observed AR.
Point-contact Andreev reflection spectroscopy on Bi2Se3 single crystals
Granstrom, C. R.; Fridman, I.; Lei, H. -C.; ...
2016-04-27
In order to study how Andreev reflection (AR) occurs between a superconductor and a three-dimensional topological insulator (TI), we use superconducting Nb tips to perform point-contact AR spectroscopy at 4.2 K on as-grown single crystals of Bi2Se3. Scanning tunneling spectroscopy and scanning tunneling microscopy are also used to characterize the superconducting tip and both the doping level and surface condition of the TI sample. Furthermore, the point-contact measurements show clear spectral signatures of AR, as well as a depression of zero-bias conductance with decreasing junction impedance. The latter observation can be attributed to interfacial Rashba spin-orbit coupling, and the presence of bulk bands at the Fermi level in our samples suggests that bulk states of Bi2Se3 are involved in the observed AR.
Single-mode fiber systems for deep space communication network
NASA Technical Reports Server (NTRS)
Lutes, G.
1982-01-01
The present investigation is concerned with the development of single-mode optical fiber distribution systems. It is pointed out that single-mode fibers represent potentially a superior medium for the distribution of frequency and timing reference signals and wideband (400 MHz) IF signals. In this connection, single-mode fibers have the potential to improve the capability and precision of NASA's Deep Space Network (DSN). Attention is given to problems related to precise time synchronization throughout the DSN, questions regarding the selection of a transmission medium, and the function of the distribution systems, taking into account specific improvements possible by an employment of single-mode fibers.
Sapra, K. Tanuj; Balasubramanian, G. Prakash; Labudde, Dirk; Bowie, James U.; Muller, Daniel J.
2009-01-01
Using single-molecule force spectroscopy, we investigated the effect of single point mutations on the energy landscape and unfolding pathways of the transmembrane protein bacteriorhodopsin. We show that the unfolding energy barriers in the energy landscape of the membrane protein followed a simple two-state behavior and represent a manifestation of many converging unfolding pathways. Although the unfolding pathways of wild-type and mutant bacteriorhodopsin did not change, indicating the presence of same ensemble of structural unfolding intermediates, the free energies of the rate-limiting transition states of the bacteriorhodopsin mutants decreased as the distance of those transition states to the folded intermediate states decreased. Thus, all mutants exhibited Hammond behavior and a change in the free energies of the intermediates along the unfolding reaction coordinate and, consequently, their relative occupancies. This is the first experimental proof showing that point mutations can reshape the free energy landscape of a membrane protein and force single proteins to populate certain unfolding pathways over others. PMID:18191146
Effect of β-catenin alterations in the prognosis of patients with sporadic colorectal cancer.
Rafael, Sara; Veganzones, Silvia; Vidaurreta, Marta; de la Orden, Virginia; Maestro, Maria Luisa
2014-01-01
Wnt pathway activation represents a critical step in the etiology of most colorectal cancer (CRC) cases and is commonly due to mutations in the APC gene, which result in the loss of β-catenin regulatory function. It has been suggested that APC inactivation or β-catenin alteration have similar effects on tumor progression in CRC tumorigenesis. The aim of this study was to analyze the frequency of β-catenin gene mutation in patients with sporadic CRC and to determine its effect on prognosis. This was a prospective cohort study, which included 345 patients with sporadic CRC. β-Catenin gene mutations in exon 3 were detected by single-strand conformation polymorphism (SSCP). Exon 3 deletion was studied by identifying differences in fragment length of specific amplification products. All the altered samples were confirmed by direct sequencing. In our population, point mutations were detected in 1.8% of the samples and 4.9% of the samples showed deletion. We observed an association between exon 3 mutations and increased levels of Carcinoembryonic Antigen (CEA). In these patients, a clinically relevant improvement in overall survival was also observed. The frequency of point mutations in the exon 3 β-catenin gene is low in our population. It would be interesting to increase the population size to confirm the clinically relevant influence on prognosis found here, and to test the relation of these events with the Microsatellite Instability (MSI) pathway. If these findings were confirmed, β-catenin determination would help in the selection of patients with different prognosis.
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial cover of the study area, and lower sampling resolution at small scales, which results in local data uncertainties but better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. First, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). The field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The model results clearly show that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points obtain, on average, an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing numbers of sampling points per field, we averaged the measured abundance within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97 for explained deviance and correlation coefficient, respectively. The relationship between the number of samplings and the performance criteria can be described with a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design, assessment of model results, and ecological inferences.
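The saturation behaviour described above can be reproduced qualitatively with a toy simulation: assign each field a true mean abundance plus large small-scale noise, average n virtual samples per field, and track how well the field averages correlate with the truth as n grows. The variance parameters below are arbitrary assumptions, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_fields = 65
true_mean = rng.gamma(shape=2.0, scale=5.0, size=n_fields)   # true field-scale abundance per field
small_scale_sd = 6.0                                         # within-field (nugget-like) variability

for n_samples in (1, 2, 5, 10, 50):
    # draw n virtual samplings per field, truncate negatives, and average within each field
    draws = rng.normal(true_mean[:, None], small_scale_sd, size=(n_fields, n_samples))
    field_avg = np.clip(draws, 0.0, None).mean(axis=1)
    r = np.corrcoef(true_mean, field_avg)[0, 1]
    print(f"{n_samples:>2} samples/field: correlation with true abundance = {r:.2f}")
```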
The Relationship between Education and Work Credentials. Data Point. NCES 2015-556
ERIC Educational Resources Information Center
Hudson, Lisa; Ewert, Stephanie
2015-01-01
This Data Point uses data from the U.S. Census Bureau's Survey of Income and Program Participation (SIPP), a nationally representative sample survey of households. The SIPP provides information on many topics, including income, participation in government programs, family dynamics, and education. This report uses new SIPP data on professional…
DaCosta, Ralph S.; Kulbatski, Iris; Lindvere-Teene, Liis; Starr, Danielle; Blackmore, Kristina; Silver, Jason I.; Opoku, Julie; Wu, Yichao Charlie; Medeiros, Philip J.; Xu, Wei; Xu, Lizhen; Wilson, Brian C.; Rosen, Cheryl; Linden, Ron
2015-01-01
Background Traditionally, chronic wound infection is diagnosed by visual inspection under white light and microbiological sampling, which are subjective and suboptimal, respectively, thereby delaying diagnosis and treatment. To address this, we developed a novel handheld, fluorescence imaging device (PRODIGI) that enables non-contact, real-time, high-resolution visualization and differentiation of key pathogenic bacteria through their endogenous autofluorescence, as well as connective tissues in wounds. Methods and Findings This was a two-part Phase I, single center, non-randomized trial of chronic wound patients (male and female, ≥18 years; UHN REB #09-0015-A for part 1; UHN REB #12-5003 for part 2; clinicaltrials.gov Identifier: NCT01378728 for part 1 and NCT01651845 for part 2). Part 1 (28 patients; 54% diabetic foot ulcers, 46% non-diabetic wounds) established the feasibility of autofluorescence imaging to accurately guide wound sampling, validated against blinded, gold standard swab-based microbiology. Part 2 (12 patients; 83.3% diabetic foot ulcers, 16.7% non-diabetic wounds) established the feasibility of autofluorescence imaging to guide wound treatment and quantitatively assess treatment response. We showed that PRODIGI can be used to guide and improve microbiological sampling and debridement of wounds in situ, enabling diagnosis, treatment guidance and response assessment in patients with chronic wounds. PRODIGI is safe, easy to use and integrates into the clinical workflow. Clinically significant bacterial burden can be detected in seconds, quantitatively tracked over days-to-months and their biodistribution mapped within the wound bed, periphery, and other remote areas. Conclusions PRODIGI represents a technological advancement in wound sampling and treatment guidance for clinical wound care at the point-of-care. Trial Registration ClinicalTrials.gov NCT01651845; ClinicalTrials.gov NCT01378728 PMID:25790480
Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.
2001-01-01
Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352
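At its core, a limited-sampling strategy is an ordinary multiple linear regression: the reference AUC0-∞ (trapezoidal AUC to the last sample plus an extrapolated terminal tail) is regressed on the concentrations at the retained time points (here 1, 2, and 5 h), and the fitted coefficients are then applied to new subjects. The sketch below shows that structure on simulated one-compartment profiles; the kinetic parameters and the simulation itself are assumptions, not the study data.

```python
import numpy as np

rng = np.random.default_rng(4)
t_full = np.array([0.25, 0.5, 1, 1.5, 2, 3, 4, 5, 6, 8])   # h, full sampling schedule
t_lss = [1.0, 2.0, 5.0]                                     # h, retained LSS time points

def one_compartment(dose, ka, ke, v, t):
    """Oral one-compartment concentration-time profile (no lag time)."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# simulate 48 subjects with between-subject variability (assumed kinetic parameters)
n = 48
ka = rng.lognormal(np.log(1.2), 0.2, n)      # 1/h
ke = rng.lognormal(np.log(0.6), 0.2, n)      # 1/h
v = rng.lognormal(np.log(25.0), 0.2, n)      # L
conc = np.array([one_compartment(500.0, ka[i], ke[i], v[i], t_full) for i in range(n)])

# reference AUC0-inf: trapezoidal AUC to the last sample plus the extrapolated tail C_last/ke
auc = np.trapz(conc, t_full, axis=1) + conc[:, -1] / ke

# three-point LSS model: AUC ~ b0 + b1*C(1 h) + b2*C(2 h) + b3*C(5 h)
idx = [int(np.argmin(np.abs(t_full - t))) for t in t_lss]
X = np.column_stack([np.ones(n), conc[:, idx]])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ beta
r2 = 1 - ((auc - pred) ** 2).sum() / ((auc - auc.mean()) ** 2).sum()
print(f"three-point LSS model R^2 = {r2:.3f}")
```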
Baran, Timothy M; Foster, Thomas H
2013-10-01
We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar
2018-01-01
We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded up robust feature (SURF) algorithm to the depth representation of shape index map, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoints descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented by the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach that uses the SURF algorithm on the shape index map for face identification/authentication is checked through an experimental investigation conducted on Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves an overall rank one recognition rate of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh
2013-01-01
This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417
Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh
2013-01-01
This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem, which we define as a configuration and a query of results, exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.
Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J
2014-09-01
The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. Time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
ERIC Educational Resources Information Center
Yalouris, Nicolaos
1996-01-01
Provides a brief but concise overview of the historical development of the Olympic Games in ancient Greece. Originally conceived as a one day, single-race event, the Games grew to the point where they represented an apotheosis of Greek culture. Discusses the role played by conflict within the city-states. (MJP)
PCC Framework for Program-Generators
NASA Technical Reports Server (NTRS)
Kong, Soonho; Choi, Wontae; Yi, Kwangkeun
2009-01-01
In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
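A hedged sketch of the single-pass validation idea: the code consumer does not recompute the analysis, it only re-applies the abstract transfer function once at every program point and checks that the supplied solution maps to itself. The function F and the solution representation are placeholders, not the paper's abstract-parsing machinery.

```python
# Single-pass certificate validation: accept only if the shipped solution is a fixed point.
def validate_certificate(program_points, F, claimed_solution):
    """F(p, solution) returns the abstract value the analysis would assign at point p
    given the claimed solution; the certificate is valid iff nothing changes."""
    for p in program_points:
        if F(p, claimed_solution) != claimed_solution[p]:
            return False          # not a fixed point: reject the generated code
    return True                   # fixed point confirmed in one pass
```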
Ultrahigh resolution multicolor colocalization of single fluorescent probes
Weiss, Shimon; Michalet, Xavier; Lacoste, Thilo D.
2005-01-18
A novel optical ruler based on ultrahigh-resolution colocalization of single fluorescent probes is described. Two unique families of fluorophores are used, namely energy-transfer fluorescent beads and semiconductor nanocrystal (NC) quantum dots, that can be excited by a single laser wavelength but emit at different wavelengths. A novel multicolor sample-scanning confocal microscope was constructed which allows one to image each fluorescent light emitter, free of chromatic aberrations, by scanning the sample with nanometer scale steps using a piezo-scanner. The resulting spots are accurately localized by fitting them to the known shape of the excitation point-spread-function of the microscope.
An increase of intelligence measured by the WPPSI in China, 1984–2006
Liu, Jianghong; Yang, Hua; Li, Linda; Chen, Tunong; Lynn, Richard
2017-01-01
Normative data for 5–6 year olds on the Chinese Preschool and Primary Scale of Intelligence (WPPSI) are reported for samples tested in 1984 and 2006. There was a significant increase in Full Scale IQ of 4.53 points over the 22 year period, representing a gain of 2.06 IQ points per decade. There were also significant increases in Verbal IQ of 4.27 points and in Performance IQ of 4.08 points. PMID:29416189
2010-06-01
Multi-Increment Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (Trademarked...) • Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous... into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenger, Drake C., E-mail: drake.stenger@ars.usda.
Population structure of Homalodisca coagulata Virus-1 (HoCV-1) among and within field-collected insects sampled from a single point in space and time was examined. Polymorphism in complete consensus sequences among single-insect isolates was dominated by synonymous substitutions. The mutant spectrum of the C2 helicase region within each single-insect isolate was unique and dominated by nonsynonymous singletons. Bootstrapping was used to correct the within-isolate nonsynonymous:synonymous arithmetic ratio (N:S) for RT-PCR error, yielding an N:S value ~one log-unit greater than that of consensus sequences. Probability of all possible single-base substitutions for the C2 region predicted N:S values within 95% confidence limits of the corrected within-isolate N:S when the only constraint imposed was viral polymerase error bias for transitions over transversions. These results indicate that bottlenecks coupled with strong negative/purifying selection drive consensus sequences toward neutral sequence space, and that most polymorphism within single-insect isolates is composed of newly-minted mutations sampled prior to selection. -- Highlights: •Sampling protocol minimized differential selection/history among isolates. •Polymorphism among consensus sequences dominated by negative/purifying selection. •Within-isolate N:S ratio corrected for RT-PCR error by bootstrapping. •Within-isolate mutant spectrum dominated by new mutations yet to undergo selection.
This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
Gabriel: Gateway to Europe's National Libraries
ERIC Educational Resources Information Center
Jefcoate, Graham
2006-01-01
Purpose: This paper seeks to look into Gabriel--the Worldwide web server for those European national libraries represented in the Conference of European National Librarians (CENL), providing a single point of access on the internet for the retrieval of information about their functions, services and collections. Above all, it serves as a gateway…
Discrimination of microbiological samples using femtosecond laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Baudelet, Matthieu; Yu, Jin; Bossu, Myriam; Jovelet, Julien; Wolf, Jean-Pierre; Amodeo, Tanguy; Fréjafon, Emeric; Laloi, Patrick
2006-10-01
Using femtosecond laser-induced breakdown spectroscopy, the authors have analyzed five different species of bacteria. Line emissions from six trace mineral elements, Na, Mg, P, K, Ca, and Fe, have been clearly detected. Their intensities correspond to the relative concentrations of these elements in the analyzed samples. The authors demonstrate that the concentration profile of trace elements allows unambiguous discrimination of different bacteria. Quantitative differentiation has been made by representing bacteria in a six-dimensional hyperspace, with each of its axes representing a detected trace element. In such a hyperspace, representative points of the different bacterial species are gathered in distinct, well-separated volumes.
Evidence for a Population of High-Redshift Submillimeter Galaxies from Interferometric Imaging
NASA Astrophysics Data System (ADS)
Younger, Joshua D.; Fazio, Giovanni G.; Huang, Jia-Sheng; Yun, Min S.; Wilson, Grant W.; Ashby, Matthew L. N.; Gurwell, Mark A.; Lai, Kamson; Peck, Alison B.; Petitpas, Glen R.; Wilner, David J.; Iono, Daisuke; Kohno, Kotaro; Kawabe, Ryohei; Hughes, David H.; Aretxaga, Itziar; Webb, Tracy; Martínez-Sansigre, Alejo; Kim, Sungeun; Scott, Kimberly S.; Austermann, Jason; Perera, Thushara; Lowenthal, James D.; Schinnerer, Eva; Smolčić, Vernesa
2007-12-01
We have used the Submillimeter Array to image a flux-limited sample of seven submillimeter galaxies, selected by the AzTEC camera on the JCMT at 1.1 mm, in the COSMOS field at 890 μm with ~2" resolution. All of the sources-two radio-bright and five radio-dim-are detected as single point sources at high significance (>6 σ), with positions accurate to ~0.2" that enable counterpart identification at other wavelengths observed with similarly high angular resolution. All seven have IRAC counterparts, but only two have secure counterparts in deep HST ACS imaging. As compared to the two radio-bright sources in the sample, and those in previous studies, the five radio-dim sources in the sample (1) have systematically higher submillimeter-to-radio flux ratios, (2) have lower IRAC 3.6-8.0 μm fluxes, and (3) are not detected at 24 μm. These properties, combined with size constraints at 890 μm (θ<~1.2''), suggest that the radio-dim submillimeter galaxies represent a population of very dusty starbursts, with physical scales similar to local ultraluminous infrared galaxies, with an average redshift higher than radio-bright sources.
Latz, Simone; Wahida, Adam; Arif, Assuda; Häfner, Helga; Hoß, Mareike; Ritter, Klaus; Horz, Hans-Peter
2016-10-01
Bacteriophages (phages) represent a potential alternative for combating multi-drug resistant bacteria. Because of their narrow host range and the continual emergence of novel pathogen variants, the continued search for phages is a prerequisite for optimal treatment of bacterial infections. Here we performed an ad hoc survey in the surroundings of a University hospital for the presence of phages with therapeutic potential. To this end, 16 aquatic samples of different origins and locations were tested simultaneously for the presence of phages with lytic activity against five current, but distinct, strains each from the ESKAPE group (i.e., Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter cloacae). Phages could be isolated for 70% of strains, covering all bacterial species except S. aureus. Apart from samples from two lakes, freshwater samples were largely devoid of phages. By contrast, one liter of hospital effluent collected at a single time point already contained phages active against two-thirds of the tested strains. In conclusion, phages with lytic activity against nosocomial pathogens are unevenly distributed across environments, with the prime source being the immediate hospital vicinity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often limited number of available animals of these models, the establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for the standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for the generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
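As an illustration of the point-counting step mentioned above, a Cavalieri-type estimate multiplies the slab thickness, the grid area associated with one test point, and the total number of points hitting the tissue. The sketch below uses invented numbers and is not taken from the cited guidelines.

```python
# Cavalieri-type point-counting volume estimate (illustrative values only).
def cavalieri_volume(points_hit_per_slab, slab_thickness_cm, area_per_point_cm2):
    """V ~ t * a(p) * sum(P): slab thickness times grid area per test point
    times the total number of grid points hitting the tissue."""
    return slab_thickness_cm * area_per_point_cm2 * sum(points_hit_per_slab)

volume_cm3 = cavalieri_volume([12, 18, 21, 17, 9],
                              slab_thickness_cm=1.0,
                              area_per_point_cm2=0.5)
print(volume_cm3)
```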
NASA Technical Reports Server (NTRS)
Polanco, Michael A.; Kellas, Sotiris; Jackson, Karen
2009-01-01
The performance of material models to simulate a novel composite honeycomb Deployable Energy Absorber (DEA) was evaluated using the nonlinear explicit dynamic finite element code LS-DYNA(Registered TradeMark). Prototypes of the DEA concept were manufactured using a Kevlar/Epoxy composite material in which the fibers are oriented at +/-45 degrees with respect to the loading axis. The development of the DEA has included laboratory tests at subcomponent and component levels such as three-point bend testing of single hexagonal cells, dynamic crush testing of single multi-cell components, and impact testing of a full-scale fuselage section fitted with a system of DEA components onto multi-terrain environments. Due to the thin nature of the cell walls, the DEA was modeled using shell elements. In an attempt to simulate the dynamic response of the DEA, it was first represented using *MAT_LAMINATED_COMPOSITE_FABRIC, or *MAT_58, in LS-DYNA. Values for each parameter within the material model were generated such that an in-plane isotropic configuration for the DEA material was assumed. Analytical predictions showed that the load-deflection behavior of a single-cell during three-point bending was within the range of test data, but predicted the DEA crush response to be very stiff. In addition, a *MAT_PIECEWISE_LINEAR_PLASTICITY, or *MAT_24, material model in LS-DYNA was developed, which represented the Kevlar/Epoxy composite as an isotropic elastic-plastic material with input from +/-45 degrees tensile coupon data. The predicted crush response matched that of the test and localized folding patterns of the DEA were captured under compression, but the model failed to predict the single-cell three-point bending response.
Targeted Capture and High-Throughput Sequencing Using Molecular Inversion Probes (MIPs).
Cantsilieris, Stuart; Stessman, Holly A; Shendure, Jay; Eichler, Evan E
2017-01-01
Molecular inversion probes (MIPs) in combination with massively parallel DNA sequencing represent a versatile, yet economical tool for targeted sequencing of genomic DNA. Several thousand genomic targets can be selectively captured using long oligonucleotides containing unique targeting arms and universal linkers. The ability to append sequencing adaptors and sample-specific barcodes allows large-scale pooling and subsequent high-throughput sequencing at relatively low cost per sample. Here, we describe a "wet bench" protocol detailing the capture and subsequent sequencing of >2000 genomic targets from 192 samples, representative of a single lane on the Illumina HiSeq 2000 platform.
Hennig, Bianca P.; Velten, Lars; Racke, Ines; Tu, Chelsea Szu; Thoms, Matthias; Rybin, Vladimir; Besir, Hüseyin; Remans, Kim; Steinmetz, Lars M.
2017-01-01
Efficient preparation of high-quality sequencing libraries that well represent the biological sample is a key step for using next-generation sequencing in research. Tn5 enables fast, robust, and highly efficient processing of limited input material while scaling to the parallel processing of hundreds of samples. Here, we present a robust Tn5 transposase purification strategy based on an N-terminal His6-Sumo3 tag. We demonstrate that libraries prepared with our in-house Tn5 are of the same quality as those processed with a commercially available kit (Nextera XT), while they dramatically reduce the cost of large-scale experiments. We introduce improved purification strategies for two versions of the Tn5 enzyme. The first version carries the previously reported point mutations E54K and L372P, and stably produces libraries of constant fragment size distribution, even if the Tn5-to-input molecule ratio varies. The second Tn5 construct carries an additional point mutation (R27S) in the DNA-binding domain. This construct allows for adjustment of the fragment size distribution based on enzyme concentration during tagmentation, a feature that opens new opportunities for use of Tn5 in customized experimental designs. We demonstrate the versatility of our Tn5 enzymes in different experimental settings, including a novel single-cell polyadenylation site mapping protocol as well as ultralow input DNA sequencing. PMID:29118030
Further improvement of hydrostatic pressure sample injection for microchip electrophoresis.
Luo, Yong; Zhang, Qingquan; Qin, Jianhua; Lin, Bingcheng
2007-12-01
The hydrostatic pressure sample injection method is able to minimize the number of electrodes needed for a microchip electrophoresis process; however, it can neither be applied to electrophoretic DNA sizing nor be implemented on the widely used single-cross microchip. This paper presents an injector design that makes the hydrostatic pressure sample injection method suitable for DNA sizing. By introducing an assistant channel into the normal double-cross injector, a rugged DNA sample plug suitable for sizing can be successfully formed within the cross area during sample loading. This paper also demonstrates that hydrostatic pressure sample injection can be performed in the single-cross microchip by controlling the radial position of the detection point in the separation channel. Rhodamine 123 and its derivative, used as a model sample, were successfully separated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, V.E.
Naturally occurring radioactivity was measured in the atmospheric emissions and process materials of a thermal phosphate (elemental phosphorus) plant. Representative exhaust stack samples were collected from each process in the plant. The phosphate ore contained 12 to 20 parts per million uranium. Processes, emission points, and emission controls are described. Radioactivity concentrations and emission rates from the sources sampled are given.
The relevance of time series in molecular ecology and conservation biology.
Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E
2014-05-01
The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.
Houzet, Laurent; Deleage, Claire; Satie, Anne-Pascale; Merlande, Laetitia; Mahe, Dominique; Dejucq-Rainsford, Nathalie
2015-01-01
PCR is the most widely applied technique for large scale screening of bacterial clones, mouse genotypes, virus genomes etc. A drawback of large PCR screening is that amplicon analysis is usually performed using gel electrophoresis, a step that is very labor intensive, tedious and chemical waste generating. Single genome amplification (SGA) is used to characterize the diversity and evolutionary dynamics of virus populations within infected hosts. SGA is based on the isolation of single template molecule using limiting dilution followed by nested PCR amplification and requires the analysis of hundreds of reactions per sample, making large scale SGA studies very challenging. Here we present a novel approach entitled Long Amplicon Melt Profiling (LAMP) based on the analysis of the melting profile of the PCR reactions using SYBR Green and/or EvaGreen fluorescent dyes. The LAMP method represents an attractive alternative to gel electrophoresis and enables the quick discrimination of positive reactions. We validate LAMP for SIV and HIV env-SGA, in 96- and 384-well plate formats. Because the melt profiling allows the screening of several thousands of PCR reactions in a cost-effective, rapid and robust way, we believe it will greatly facilitate any large scale PCR screening. PMID:26053379
Approach to Spacelab Payload mission management
NASA Technical Reports Server (NTRS)
Craft, H. G.; Lester, R. C.
1978-01-01
The nucleus of the approach to Spacelab Payload mission management is the establishment of a single point of authority for the entire payload on a given mission. This single point mission manager will serve as a 'broker' between the individual experiments and the STS, negotiating agreements by two-part interaction. The payload mission manager, along with a small support team, will represent the users in negotiating use of STS accommodations. He will provide the support needed by each individual experimenter to meet the scientific, technological, and applications objectives of the mission with minimum cost and maximum efficiency. The investigator will assume complete responsibility for his experiment hardware definition and development and will take an active role in the integration and operation of his experiment.
Probabilistic 3D data fusion for multiresolution surface generation
NASA Technical Reports Server (NTRS)
Manduchi, R.; Johnson, A. E.
2002-01-01
In this paper we present an algorithm for adaptive resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space with samples of large covariance are stored at a coarser level than regions of space containing samples with smaller covariance. The algorithm outputs an adaptive resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the algorithm. We present results from 3D data generated by scanning lidar and structure from motion.
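A simplified, hypothetical sketch of the covariance-adaptive storage idea: each 3D sample is binned at an octree level chosen from its measurement uncertainty, so high-covariance points land in coarse cells and low-covariance points in fine cells. The full surface-probability function and ridge extraction of the paper are not reproduced; cell sizes, weights and names below are illustrative assumptions.

```python
# Covariance-adaptive binning of 3D surface samples (illustrative sketch only).
import numpy as np

def octree_level(sigma, finest_cell=0.01, max_level=8):
    """Pick the deepest level whose cell size still exceeds ~2*sigma."""
    level, cell = max_level, finest_cell
    while cell < 2.0 * sigma and level > 0:
        cell *= 2.0
        level -= 1
    return level, cell

def accumulate(points, sigmas, origin=np.zeros(3)):
    """Accumulate inverse-variance weight per (level, cell) occupied by a sample."""
    cells = {}                                     # (level, i, j, k) -> weight
    for p, s in zip(points, sigmas):
        level, cell = octree_level(s)
        idx = tuple(np.floor((np.asarray(p) - origin) / cell).astype(int))
        key = (level, *idx)
        cells[key] = cells.get(key, 0.0) + 1.0 / s ** 2
    return cells
```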
Characterisation of turbulence downstream of a linear compressor cascade
NASA Astrophysics Data System (ADS)
di Mare, Luca; Jelly, Thomas; Day, Ivor
2014-11-01
Characterisation of turbulence in turbomachinery remains one of the most complex tasks in fluid mechanics. In addition, current closure models required for Reynolds-averaged Navier-Stokes computations do not accurately represent the action of turbulent forces against the mean flow. Therefore, the statistical properties of turbulence in turbomachinery are of significant interest. In the current work, single- and two-point hot-wire measurements have been acquired downstream of a linear compressor cascade in order to examine the properties of large-scale turbulent structures and to assess how they affect turbulent momentum and energy transfer in compressor passages. The cascade has seven controlled-diffusion blades, which are representative of high-pressure stator blades found in turbofan engines. Blade chord, thickness and camber are 0.1515 m, 9.3% and 42 degrees, respectively. Measurements were acquired at a chord Reynolds number of 6.92 × 10⁵. Single-point statistics highlight differences in turbulence structure when comparing mid-span and end-wall regions. Evaluation of two-point correlations and their corresponding spectra reveals the length-scales of the energy-bearing eddies in the cascade. Ultimately, these measurements can be used to calibrate future computational models. The authors gratefully acknowledge Rolls-Royce plc for funding this work and granting permission for its publication.
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r² = 0.84), the two-sampling point model (0.5 and 8 h; r² = 0.93), the three-sampling point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90% confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict midazolam AUC accurately.
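A minimal sketch of how such a limited-sampling model can be built and checked: ordinary multiple linear regression of the observed AUC on the concentrations at the selected time points (here 0.5, 1, 2 and 8 h), followed by the bias and precision summaries named above (MPE, MAE, RMSE) on a validation set. The regression form and array layouts are assumptions, not the study's exact stepwise procedure.

```python
# Limited-sampling-strategy sketch: regress AUC on concentrations at fixed time points.
import numpy as np

def fit_lss(conc_train, auc_train):
    """conc_train: (n, 4) concentrations at 0.5, 1, 2, 8 h; auc_train: (n,) observed AUC."""
    X = np.column_stack([conc_train, np.ones(len(conc_train))])   # add intercept
    beta, *_ = np.linalg.lstsq(X, auc_train, rcond=None)
    return beta

def validate(beta, conc_val, auc_val):
    X = np.column_stack([conc_val, np.ones(len(conc_val))])
    pred = X @ beta
    err = pred - auc_val
    mpe = np.mean(err / auc_val) * 100            # bias, %
    mae = np.mean(np.abs(err / auc_val)) * 100    # precision, %
    rmse = np.sqrt(np.mean(err ** 2))
    return mpe, mae, rmse
```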
Fractional compartmental models and multi-term Mittag-Leffler response functions.
Verotta, Davide
2010-04-01
Systems of fractional differential equations (SFDE) have been increasingly used to represent physical and control systems, and have recently been proposed for use in pharmacokinetics (PK) by (J Pharmacokinet Pharmacodyn 36:165-178, 2009) and (J Pharmacokinet Pharmacodyn, 2010). We contribute to the development of a theory for the use of SFDE in PK by, first, further clarifying the nature of systems of FDE, and in particular pointing out the distinction and properties of commensurate versus non-commensurate ones. The second purpose is to show that, for both types of systems, relatively simple response functions can be derived which satisfy the requirements to represent single-input/single-output PK experiments. The response functions are composed of sums of single-parameter (for commensurate) or two-parameter (for non-commensurate) Mittag-Leffler functions, and establish a direct correspondence with the familiar sums of exponentials used in PK.
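For reference, the two-parameter Mittag-Leffler function underlying these response functions is the standard series below; the single-parameter case is E_α(z) = E_{α,1}(z), and α = β = 1 recovers the exponential, which is why sums of Mittag-Leffler terms generalize the familiar sums of exponentials. The generic response form shown on the right is only illustrative; the paper's exact parameterization may differ.

```latex
E_{\alpha,\beta}(z) \;=\; \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)},
\qquad
C(t) \;=\; \sum_{i} A_i \, t^{\beta_i - 1}\, E_{\alpha_i,\beta_i}\!\left(-\lambda_i\, t^{\alpha_i}\right)
```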
Arab, L; Ang, A
2015-03-01
To examine the association between walnut consumption and measures of cognitive function in the US population. Nationally representative cross-sectional study using 24-hour dietary recalls of intakes to assess walnut and other nut consumption as compared to the group reporting no nut consumption. 1988-1994 and 1999-2002 rounds of the National Health and Nutrition Examination Survey (NHANES). Representative weighted sample of US adults 20 to 90 years of age. The Neurobehavioral Evaluation System 2 (NES2), consisting of simple reaction time (SRTT), symbol digit substitution (SDST), single digit learning (SDLT), Story Recall (SRT) and digit-symbol substitution (DSST) tests. Adults 20-59 years old reporting walnut consumption averaging 10.3 g/d required 16.4 ms less time to respond on the SRTT (P=0.03) and 0.39 s less on the SDST (P=0.01). SDLT scores were also significantly lower, by 2.38 s (P=0.05). Similar results were obtained when tertiles of walnut consumption were examined in trend analyses. Significantly better outcomes were noted in all cognitive test scores among those with higher walnut consumption (P < 0.01). Among adults 60 years and older, walnut consumers averaged 13.1 g/d and scored 7.1 percentile points higher on the SRT (P=0.03) and 7.3 percentile points higher on the DSST (P=0.05). Here also, trend analyses indicate significant improvements in all cognitive test scores (P < 0.01) except for the SRTT (P = 0.06) in the fully adjusted models. These significant, positive associations between walnut consumption and cognitive functions among all adults, regardless of age, gender or ethnicity, suggest that daily walnut intake may be a simple beneficial dietary behavior.
Genetic Mapping of Fixed Phenotypes: Disease Frequency as a Breed Characteristic
Jones, Paul; Martin, Alan; Ostrander, Elaine A.; Lark, Karl G.
2009-01-01
Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis. PMID:19321632
Hobfoll, Stevan E.; Palmieri, Patrick A.; Johnson, Robert J.; Canetti-Nisim, Daphna; Hall, Brian J.; Galea, Sandro
2010-01-01
This is the 1st longitudinal examination of trajectories of resilience and resistance (rather than ill-being) among a national sample under ongoing threat of mass casualty. The authors interviewed a nationally representative sample of Jews and Arabs in Israel (N = 709) at 2 times during a period of terrorist and rocket attacks (2004–2005). The resistance trajectory, exhibiting few or no symptoms of traumatic stress and depression at both time points, was substantially less common (22.1%) than has previously been documented in studies following single mass casualty events. The resilience trajectory, exhibiting initial symptoms and becoming relatively nonsymptomatic, was evidenced by 13.5% of interviewees. The chronic distress trajectory was documented among a majority of participants (54.0%), and a small proportion of persons were initially relatively symptom-free but became distressed (termed delayed distress trajectory; 10.3%). Less psychosocial resource loss and majority status (Jewish) were the most consistent predictors of resistance and resilience trajectories, followed by greater socioeconomic status, greater support from friends, and less report of posttraumatic growth. PMID:19170460
Genetic mapping of fixed phenotypes: disease frequency as a breed characteristic.
Chase, Kevin; Jones, Paul; Martin, Alan; Ostrander, Elaine A; Lark, Karl G
2009-01-01
Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis.
NASA Technical Reports Server (NTRS)
Jurns, John M.; McQuillen, John B.; Gaby, Joseph D., Jr.; Sinacore, Steven A., Jr.
2009-01-01
Liquid acquisition devices (LADs) can be utilized within a propellant tank in space to deliver single-phase liquid to the engine in low gravity. One type of liquid acquisition device is a screened gallery whereby a fine mesh screen acts as a 'bubble filter' and prevents the gas bubbles from passing through until a crucial pressure differential condition across the screen, called the bubble point, is reached. This paper presents data for LAD bubble point data in liquid methane (LCH4) for stainless steel Dutch twill screens with mesh sizes of 325 by 2300. These tests represent the first known nonproprietary effort to collect bubble point data for LCH4.
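For context, the bubble point of a wetted screen is commonly related to the liquid surface tension and the effective pore diameter by the standard capillary relation below; this is the textbook form, not a value or correlation taken from the reported LCH4 tests.

```latex
\Delta P_{\mathrm{bp}} \;=\; \frac{4\,\sigma \cos\theta}{D_{p}}
```

Here σ is the liquid surface tension, θ the contact angle (approximately zero for a fully wetting cryogen such as liquid methane on stainless steel), and D_p the effective pore diameter of the Dutch twill screen.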
Single-Molecule Counting of Point Mutations by Transient DNA Binding
NASA Astrophysics Data System (ADS)
Su, Xin; Li, Lidan; Wang, Shanshan; Hao, Dandan; Wang, Lei; Yu, Changyuan
2017-03-01
High-confidence detection of point mutations is important for disease diagnosis and clinical practice. Hybridization probes are extensively used, but are hindered by their poor single-nucleotide selectivity. Shortening the length of DNA hybridization probes weakens the stability of the probe-target duplex, leading to transient binding between complementary sequences. The kinetics of probe-target binding events are highly dependent on the number of complementary base pairs. Here, we present a single-molecule assay for point mutation detection based on transient DNA binding and use of total internal reflection fluorescence microscopy. Statistical analysis of single-molecule kinetics enabled us to effectively discriminate between wild type DNA sequences and single-nucleotide variants at the single-molecule level. A higher single-nucleotide discrimination is achieved than in our previous work by optimizing the assay conditions, which is guided by statistical modeling of kinetics with a gamma distribution. The KRAS c.34 A mutation can be clearly differentiated from the wild type sequence (KRAS c.34 G) at a relative abundance as low as 0.01% mutant to WT. To demonstrate the feasibility of this method for analysis of clinically relevant biological samples, we used this technology to detect mutations in single-stranded DNA generated from asymmetric RT-PCR of mRNA from two cancer cell lines.
ERIC Educational Resources Information Center
Quirk, Abigail; Spiegelman, Maura
2018-01-01
The Principal Questionnaire was administered as part of the 2015-16 National Teacher and Principal Survey (NTPS), which is a nationally representative sample survey of public K-12 schools, principals, and teachers in the 50 states and the District of Columbia. This Data Point examines the relationship between public school principals' perceived…
Isoelectric Focusing of Cassava Protoplasts
Santana, María Angélica; Villegas, Leopoldo
1991-01-01
Cassava (Manihot esculenta Crantz) protoplasts were analyzed using isoelectric focusing techniques. Two populations, representing 68 and 32% of the total sample, with mean isoelectric points of 4.48 and 4.60, were obtained using mesophyll protoplasts. The use of this technique allows demonstration of a discontinuous distribution of protoplast isoelectric points within one species according to their surface potential. PMID:16667975
Multi-Level Information Systems. AIR Forum Paper 1978.
ERIC Educational Resources Information Center
Jones, Leighton D.; Trautman, DeForest L.
To support informational needs of day-to-day and long-range decision-making, many universities have developed their own data collection devices and institutional reporting systems. Often these models only represent a single point in time and do not effectively support needs at college and departmental levels. This paper identifies some of the more…
Baran, Timothy M.; Foster, Thomas H.
2014-01-01
Background and Objective We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Materials and Methods Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. Results We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. Conclusion This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. PMID:24037853
Growth, and magnetic study of Sm0.4Er0.6FeO3 single crystal grown by optical floating zone technique
NASA Astrophysics Data System (ADS)
Wu, Anhua; Zhao, Xiangyang; Man, Peiwen; Su, Liangbi; Kalashnikova, A. M.; Pisarev, R. V.
2018-03-01
Sm0.4Er0.6FeO3 single crystals were successfully grown by the optical floating zone method, and high-quality samples with various orientations were manufactured. Based on these samples, the magnetic properties of Sm0.4Er0.6FeO3 single crystals were investigated systematically by means of the temperature dependence of magnetization. The results indicate that compositional variations alter not only the spin reorientation temperature but also the compensation temperature of the orthoferrites. Unlike single rare-earth orthoferrites, the reversal transition temperature of Sm0.4Er0.6FeO3 increases as the magnetic field increases, which is favorable for designing novel spin-switching or magnetic sensor devices.
Exploring variation-aware contig graphs for (comparative) metagenomics using MaryGold
Nijkamp, Jurgen F.; Pop, Mihai; Reinders, Marcel J. T.; de Ridder, Dick
2013-01-01
Motivation: Although many tools are available to study variation and its impact in single genomes, there is a lack of algorithms for finding such variation in metagenomes. This hampers the interpretation of metagenomics sequencing datasets, which are increasingly acquired in research on the (human) microbiome, in environmental studies and in the study of processes in the production of foods and beverages. Existing algorithms often depend on the use of reference genomes, which pose a problem when a metagenome of a priori unknown strain composition is studied. In this article, we develop a method to perform reference-free detection and visual exploration of genomic variation, both within a single metagenome and between metagenomes. Results: We present the MaryGold algorithm and its implementation, which efficiently detects bubble structures in contig graphs using graph decomposition. These bubbles represent variable genomic regions in closely related strains in metagenomic samples. The variation found is presented in a condensed Circos-based visualization, which allows for easy exploration and interpretation of the found variation. We validated the algorithm on two simulated datasets containing three and seven Escherichia coli genomes, respectively, and showed that finding allelic variation in these genomes improves assemblies. Additionally, we applied MaryGold to publicly available real metagenomic datasets, enabling us to find within-sample genomic variation in the metagenomes of a kimchi fermentation process, the microbiome of a premature infant and in microbial communities living on acid mine drainage. Moreover, we used MaryGold for between-sample variation detection and exploration by comparing sequencing data sampled at different time points for both of these datasets. Availability: MaryGold has been written in C++ and Python and can be downloaded from http://bioinformatics.tudelft.nl/software Contact: d.deridder@tudelft.nl PMID:24058058
Gracia, Ana; Arsuaga, Juan Luis; Martínez, Ignacio; Lorenzo, Carlos; Carretero, José Miguel; Bermúdez de Castro, José María; Carbonell, Eudald
2009-04-21
We report here a previously undescribed human Middle Pleistocene immature specimen, Cranium 14, recovered at the Sima de los Huesos (SH) site (Atapuerca, Spain), that constitutes the oldest evidence in human evolution of a very rare pathology in our own species, lambdoid single suture craniosynostosis (SSC). Both the ecto- and endo-cranial deformities observed in this specimen are severe. All of the evidence points out that this severity implies that the SSC occurred before birth, and that facial asymmetries, as well as motor/cognitive disorders, were likely to be associated with this condition. The analysis of the present etiological data of this specimen lead us to consider that Cranium 14 is a case of isolated SSC, probably of traumatic origin. The existence of this pathological individual among the SH sample represents also a fact to take into account when referring to sociobiological behavior in Middle Pleistocene humans.
Gracia, Ana; Arsuaga, Juan Luis; Martínez, Ignacio; Lorenzo, Carlos; Carretero, José Miguel; Bermúdez de Castro, José María; Carbonell, Eudald
2009-01-01
We report here a previously undescribed human Middle Pleistocene immature specimen, Cranium 14, recovered at the Sima de los Huesos (SH) site (Atapuerca, Spain), that constitutes the oldest evidence in human evolution of a very rare pathology in our own species, lambdoid single suture craniosynostosis (SSC). Both the ecto- and endo-cranial deformities observed in this specimen are severe. All of the evidence points out that this severity implies that the SSC occurred before birth, and that facial asymmetries, as well as motor/cognitive disorders, were likely to be associated with this condition. The analysis of the present etiological data of this specimen lead us to consider that Cranium 14 is a case of isolated SSC, probably of traumatic origin. The existence of this pathological individual among the SH sample represents also a fact to take into account when referring to sociobiological behavior in Middle Pleistocene humans. PMID:19332773
Global dynamics of selective attention and its lapses in primary auditory cortex.
Lakatos, Peter; Barczak, Annamaria; Neymotin, Samuel A; McGinnis, Tammy; Ross, Deborah; Javitt, Daniel C; O'Connell, Monica Noelle
2016-12-01
Previous research demonstrated that while selectively attending to relevant aspects of the external world, the brain extracts pertinent information by aligning its neuronal oscillations to key time points of stimuli or their sampling by sensory organs. This alignment mechanism is termed oscillatory entrainment. We investigated the global, long-timescale dynamics of this mechanism in the primary auditory cortex of nonhuman primates, and hypothesized that lapses of entrainment would correspond to lapses of attention. By examining electrophysiological and behavioral measures, we observed that besides the lack of entrainment by external stimuli, attentional lapses were also characterized by high-amplitude alpha oscillations, with alpha frequency structuring of neuronal ensemble and single-unit operations. Entrainment and alpha-oscillation-dominated periods were strongly anticorrelated and fluctuated rhythmically at an ultra-slow rate. Our results indicate that these two distinct brain states represent externally versus internally oriented computational resources engaged by large-scale task-positive and task-negative functional networks.
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
Integrated electrochemical microsystems for genetic detection of pathogens at the point of care.
Hsieh, Kuangwen; Ferguson, B Scott; Eisenstein, Michael; Plaxco, Kevin W; Soh, H Tom
2015-04-21
The capacity to achieve rapid, sensitive, specific, quantitative, and multiplexed genetic detection of pathogens via a robust, portable, point-of-care platform could transform many diagnostic applications. And while contemporary technologies have yet to effectively achieve this goal, the advent of microfluidics provides a potentially viable approach to this end by enabling the integration of sophisticated multistep biochemical assays (e.g., sample preparation, genetic amplification, and quantitative detection) in a monolithic, portable device from relatively small biological samples. Integrated electrochemical sensors offer a particularly promising solution to genetic detection because they do not require optical instrumentation and are readily compatible with both integrated circuit and microfluidic technologies. Nevertheless, the development of generalizable microfluidic electrochemical platforms that integrate sample preparation and amplification as well as quantitative and multiplexed detection remains a challenging and unsolved technical problem. Recognizing this unmet need, we have developed a series of microfluidic electrochemical DNA sensors that have progressively evolved to encompass each of these critical functionalities. For DNA detection, our platforms employ label-free, single-step, and sequence-specific electrochemical DNA (E-DNA) sensors, in which an electrode-bound, redox-reporter-modified DNA "probe" generates a current change after undergoing a hybridization-induced conformational change. After successfully integrating E-DNA sensors into a microfluidic chip format, we subsequently incorporated on-chip genetic amplification techniques including polymerase chain reaction (PCR) and loop-mediated isothermal amplification (LAMP) to enable genetic detection at clinically relevant target concentrations. To maximize the potential point-of-care utility of our platforms, we have further integrated sample preparation via immunomagnetic separation, which allowed the detection of influenza virus directly from throat swabs and developed strategies for the multiplexed detection of related bacterial strains from the blood of septic mice. Finally, we developed an alternative electrochemical detection platform based on real-time LAMP, which not is only capable of detecting across a broad dynamic range of target concentrations, but also greatly simplifies quantitative measurement of nucleic acids. These efforts represent considerable progress toward the development of a true sample-in-answer-out platform for genetic detection of pathogens at the point of care. Given the many advantages of these systems, and the growing interest and innovative contributions from researchers in this field, we are optimistic that iterations of these systems will arrive in clinical settings in the foreseeable future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.
In this study, we report initial demonstrations of the use of single crystals in indirect x-ray imaging with a benchtop implementation of propagation-based (PB) x-ray phase contrast imaging. Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point-spread function (PSF) with the 50-μm thick single crystal scintillators than with the reference polycrystalline phosphor/scintillator. Fiber-optic plate depth-of-focus and Al reflective-coating aspects are also elucidated. Guided by the results from the 25-mm diameter crystal samples, we report additionally the first results with a unique 88-mm diameter single crystal bonded to a fiber optic plate and coupled to the large format CCD. Both PSF and x-ray phase contrast imaging data are quantified and presented.
A Weak Embrace: Popular and Scholarly Depictions of Single-Parent Families, 1900-1998
ERIC Educational Resources Information Center
Usdansky, Margaret L.
2009-01-01
The growth of single-parent families constitutes one of the most dramatic and most studied social changes of the 20th century. Evolving attitudes toward these families have received less attention. This paper explores depictions of these families in representative samples of popular magazine (N = 474) and social science journal (N = 202) articles.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.
Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.; ...
2016-03-01
Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
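A minimal sketch of the single-step merging idea with correct statistical weighting: repeated observations of the same reciprocal-space voxel are combined as an inverse-variance weighted mean, so well-measured orientations dominate. Normalization, absorption and the other instrument corrections of the actual TOPAZ reduction are omitted, and the function names are placeholders.

```python
# Inverse-variance (counting-statistics) weighted merge of repeated voxel measurements.
import numpy as np

def merge(values, variances):
    """values, variances: repeated measurements of one reciprocal-space voxel."""
    w = 1.0 / np.asarray(variances, dtype=float)
    merged = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    merged_var = 1.0 / np.sum(w)          # variance of the weighted mean
    return merged, merged_var
```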
Multiscale study on stochastic reconstructions of shale samples
NASA Astrophysics Data System (ADS)
Lili, J.; Lin, M.; Jiang, W. B.
2016-12-01
Shales are known to have multiscale pore systems composed of macroscale fractures, micropores, and nanoscale pores within gas- or oil-producing organic material. Shales are also fissile and laminated, and their horizontal heterogeneity is quite different from the vertical heterogeneity. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time consuming to obtain. The purpose of our paper is therefore to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi Formation are obtained by X-ray microtomography, and nanoscale images are obtained by scanning electron microscopy. Each image is representative of its scale and phases. In particular, the macroscale image is four times coarser than the microscale image, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, so that the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images are brought to the same resolution through downscaling and upscaling by interpolation, and the multiscale categorical spatial data are then merged into a single 3D image with a predefined resolution (that of the microscale image). Thirty realizations were generated using the given images and the proposed method. The results reveal that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distributions of the original 3D sample and the generated 3D realizations were compared, and the agreement between them is excellent. This work is supported by the "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).
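A minimal sketch of the resolution-matching step described above (illustrative names; nearest-neighbour resampling stands in for whichever interpolation the authors used, so the categorical pore/matrix labels are preserved):

```python
import numpy as np
from scipy.ndimage import zoom

def match_resolution(image, factor):
    """Resample a segmented (0 = matrix, 1 = pore) image by `factor`.

    factor > 1 refines the grid (upscaling in resolution),
    factor < 1 coarsens it; order=0 keeps the data categorical.
    """
    return zoom(image, factor, order=0)

# toy example with a 32^3 microscale target grid:
# the macroscale image is 4x coarser, the nanoscale image 4x finer
rng = np.random.default_rng(1)
macro = (rng.random((8, 8, 8)) > 0.7).astype(np.uint8)        # 4x coarser
nano = (rng.random((128, 128, 128)) > 0.5).astype(np.uint8)   # 4x finer
macro_on_micro = match_resolution(macro, 4.0)    # -> 32^3
nano_on_micro = match_resolution(nano, 0.25)     # -> 32^3
```

Once all three volumes share the microscale grid, the categorical data can be merged into the single predefined-resolution 3D image the abstract describes.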
An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents.
Ackerman, Marc B; Brensinger, Colleen; Landis, J Richard
2004-02-01
This retrospective study was conducted to measure lip-tooth characteristics of adolescents. Pretreatment video clips of 1242 consecutive patients were screened for Class I skeletal and dental patterns. After all inclusion criteria were applied, the final sample consisted of 50 patients (27 boys, 23 girls) with a mean age of 12.5 years. The raw digital video stream of each patient was edited to select a single image frame representing the patient saying the syllable "chee" and a second single image representing the patient's posed social smile, and these were saved as part of a 12-frame image sequence. Each animation image was analyzed using a SmileMesh computer application to measure the smile index (the intercommissure width divided by the interlabial gap), intercommissure width (mm), interlabial gap (mm), percent incisor below the intercommissure line, and maximum incisor exposure (mm). The data were analyzed using SAS (version 8.1). All recorded differences in linear measures had to be ≥ 2 mm. The results suggest that anterior tooth display at speech and smile should be recorded independently but evaluated as part of a dynamic range. Asking patients to say "cheese" and then smile is no longer a valid method to elicit the parameters of anterior tooth display. When planning the vertical positions of incisors during orthodontic treatment, the orthodontist should view the dynamics of anterior tooth display as a continuum delineated by the time points of rest, speech, posed social smile, and a Duchenne smile.
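A small worked example of the smile index defined above, with illustrative numbers rather than study data:

```python
def smile_index(intercommissure_width_mm, interlabial_gap_mm):
    """Smile index = intercommissure width / interlabial gap."""
    return intercommissure_width_mm / interlabial_gap_mm

# e.g. a 60 mm commissure-to-commissure width with a 10 mm interlabial gap
print(smile_index(60.0, 10.0))  # -> 6.0
```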
ERIC Educational Resources Information Center
Collishaw, Stephan; Maughan, Barbara; Natarajan, Lucy; Pickles, Andrew
2010-01-01
Background: Evidence about trends in adolescent emotional problems (depression and anxiety) is inconclusive, because few studies have used comparable measures and samples at different points in time. We compared rates of adolescent emotional problems in two nationally representative English samples of youth 20 years apart using identical symptom…
Tumor Heterogeneity, Single-Cell Sequencing, and Drug Resistance.
Schmidt, Felix; Efferth, Thomas
2016-06-16
Tumor heterogeneity has been compared with Darwinian evolution and survival of the fittest. The evolutionary ecosystem of tumors consisting of heterogeneous tumor cell populations represents a considerable challenge to tumor therapy, since all genetically and phenotypically different subpopulations have to be efficiently killed by therapy. Otherwise, even small surviving subpopulations may cause repopulation and refractory tumors. Single-cell sequencing allows for a better understanding of the genomic principles of tumor heterogeneity and represents the basis for more successful tumor treatments. The isolation and sequencing of single tumor cells still represents a considerable technical challenge and consists of three major steps: (1) single-cell isolation (e.g., by laser-capture microdissection, fluorescence-activated cell sorting, or micromanipulation), (2) whole genome amplification (e.g., with the help of Phi29 DNA polymerase), and (3) transcriptome-wide next-generation sequencing (e.g., 454 pyrosequencing, Illumina sequencing, and other systems). Data demonstrating the feasibility of single-cell sequencing for monitoring the emergence of drug-resistant cell clones in patient samples are discussed herein. It is envisioned that single-cell sequencing will be a valuable asset to assist the design of regimens for personalized tumor therapies based on tumor subpopulation-specific genetic alterations in individual patients.
Stack Characterization in CryoSat Level1b SAR/SARin Baseline C
NASA Astrophysics Data System (ADS)
Scagliola, Michele; Fornari, Marco; Di Giacinto, Andrea; Bouffard, Jerome; Féménias, Pierre; Parrinello, Tommaso
2015-04-01
CryoSat was launched on 8 April 2010 and is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice. CryoSat is the first altimetry mission operating in SAR mode, and it carries an innovative radar altimeter called the Synthetic Aperture Interferometric Altimeter (SIRAL), which transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. The current CryoSat IPF (Instrument Processing Facility), Baseline B, was released into operations in February 2012. After more than 2 years of development, the release into operations of Baseline C is expected in the first half of 2015. It is worth recalling here that the CryoSat SAR/SARin IPF1 generates 20 Hz waveforms at an approximately equally spaced set of ground locations on the Earth surface, i.e., surface samples, and that a surface sample gathers a collection of single-look echoes coming from the processed bursts during the time of visibility. Thus, for a given surface sample, the stack can be defined as the collection of all the single-look echoes pointing to the current surface sample, after applying all the necessary range corrections. The L1B product contains the power average of all the single-look echoes in the stack: the multi-looked L1B waveform. This reduces the data volume, while removing some information contained in the single looks that is useful for characterizing the surface and modelling the L1B waveform. To recover such information, a set of parameters has been added to the L1B product: the stack characterization, or beam behaviour parameters. The stack characterization, already included in previous Baselines, has been reviewed and expanded in Baseline C. This poster describes all the stack characterization parameters, detailing what they represent and how they are computed. In detail, these parameters can be summarized as: - Stack statistical parameters, such as skewness and kurtosis. - Look angle (i.e., the angle at which the surface sample is seen with respect to the nadir direction of the satellite) and Doppler angle (i.e., the angle at which the surface sample is seen with respect to the normal to the velocity vector) for the first and the last single-look echoes in the stack. - Number of single looks averaged in the stack (in Baseline C a stack weighting has been applied that reduces the number of looks). With the correct use of these parameters, users will be able to retrieve some of the 'lost' information contained within the stack and fully exploit the L1B product.
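A minimal numerical sketch of the stack quantities described above (hypothetical array layout; the operational Baseline C definitions may differ in detail): the multi-looked L1B waveform is the power average over looks, and skewness and kurtosis summarize how the single-look powers are distributed within the stack.

```python
import numpy as np

def stack_parameters(stack):
    """stack : 2-D array (n_looks, n_range_bins) of single-look echo powers."""
    multilooked = stack.mean(axis=0)        # multi-looked L1B waveform
    # integrated power of each single look, used here for the stack statistics
    look_power = stack.sum(axis=1)
    mu, sigma = look_power.mean(), look_power.std()
    skewness = np.mean(((look_power - mu) / sigma) ** 3)
    kurtosis = np.mean(((look_power - mu) / sigma) ** 4)
    n_looks = stack.shape[0]                # number of looks averaged
    return multilooked, skewness, kurtosis, n_looks

rng = np.random.default_rng(2)
stack = rng.exponential(1.0, size=(220, 128))   # 220 looks, 128 range bins
waveform, skew, kurt, n_looks = stack_parameters(stack)
```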
Impact of sampling techniques on measured stormwater quality data for small streams
Harmel, R.D.; Slade, R.M.; Haney, R.L.
2010-01-01
Science-based sampling methodologies are needed to enhance water quality characterization for setting appropriate water quality standards, developing Total Maximum Daily Loads, and managing nonpoint source pollution. Storm event sampling, which is vital for adequate assessment of water quality in small (wadeable) streams, is typically conducted by manual grab or integrated sampling or with an automated sampler. Although it is typically assumed that samples from a single point adequately represent mean cross-sectional concentrations, especially for dissolved constituents, this assumption of well-mixed conditions has received limited evaluation. Similarly, the impact of temporal (within-storm) concentration variability is rarely considered. Therefore, this study evaluated differences in stormwater quality measured in small streams with several common sampling techniques, which in essence evaluated within-channel and within-storm concentration variability. Constituent concentrations from manual grab samples and from integrated samples were compared for 31 events, then concentrations were also compared for seven events with automated sample collection. Comparison of sampling techniques indicated varying degrees of concentration variability within channel cross sections for both dissolved and particulate constituents, which is contrary to common assumptions of substantial variability in particulate concentrations and of minimal variability in dissolved concentrations. Results also indicated the potential for substantial within-storm (temporal) concentration variability for both dissolved and particulate constituents. Thus, failing to account for potential cross-sectional and temporal concentration variability in stormwater monitoring projects can introduce additional uncertainty in measured water quality data. Copyright © 2010 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. It turns out that SRC needs only a few training samples from each class and can still achieve good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and the feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes. Each index is called a mode: three spatial modes in directions X, Y, Z and one feature mode. The SRC method proposed in this paper finds the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from the training samples are arranged as the initial elements of the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary is built along each mode; the overall structured dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by a tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. It is expected that the original tensor will be well recovered by the sub-dictionary associated with the relevant class, while the entries of the sparse tensor associated with other classes will be nearly zero. Therefore, SRC uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is used and classified into 6 classes: ground, roofs, vegetation, covered ground, walls and other points. Only 6 training samples from each class are taken. For the final classification result, ground and covered ground are merged into a single class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls, and 20.39% for other objects.
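A simplified sketch of SRC on vectorized samples (plain OMP with matrix dictionaries, not the tensor/Tucker formulation of the paper; all names are illustrative): each test vector is coded against class-specific dictionaries of training samples, and the class whose atoms give the smallest reconstruction error wins.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy Orthogonal Matching Pursuit: y ~ D @ x with a sparse x.
    Atoms (columns of D) are assumed to be roughly L2-normalized."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the support
    x[support] = coef
    return x

def src_classify(y, dictionaries, n_nonzero=3):
    """dictionaries : {label: (n_features, n_atoms) array of training samples}."""
    errors = {}
    for label, D in dictionaries.items():
        x = omp(D, y, min(n_nonzero, D.shape[1]))
        errors[label] = np.linalg.norm(y - D @ x)    # class-wise reconstruction error
    return min(errors, key=errors.get)               # smallest residual wins
```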
Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?
NASA Astrophysics Data System (ADS)
Bertoni, Bridget; Hooper, Dan; Linden, Tim
2016-05-01
In a previous paper, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10⁻²⁶ cm³/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.
Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?
Bertoni, Bridget; Hooper, Dan; Linden, Tim
2016-05-23
In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18–33 GeV and an annihilation cross section on the order of σv ~ 10⁻²⁶ cm³/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.
Ion photon emission microscope
Doyle, Barney L.
2003-04-22
An ion beam analysis system that creates microscopic multidimensional image maps of the effects of high-energy ions from an unfocussed source upon a sample. It does so by correlating the exact entry point of an ion into the sample, determined by projection imaging of the ion-induced photons emitted at that point, with a signal from a detector that measures the interaction of that ion within the sample. The emitted photons are collected in the lens system of a conventional optical microscope and projected onto the image plane of a high-resolution, single-photon, position-sensitive detector. Position signals from this photon detector are then correlated in time with electrical effects detected within the sample, including the malfunction of digital circuits, that were caused by the individual ion that created these photons initially.
Is It Attachment Style or Socio-Demography: Singlehood in a Representative Sample.
Petrowski, Katja; Schurig, Susan; Schmutzer, Gabriele; Brähler, Elmar; Stöbel-Richter, Yve
2015-01-01
Since the percentage of single adults is steadily increasing, the reasons for this development have become a matter of growing interest. In particular, an individual's attachment style may be connected to partnership status. In the following analysis, attachment style, gender, age, education, and income were compared with regard to partnership status. Furthermore, an analysis of variance was computed to compare attachment style within different groups. In 2012, a representative sample of 1,676 participants was used. The participants were aged 18 to 60 (M = 41.0, SD = 12.3); 54% of the sample were female, and 40% were single. Attachment-related attitudes were assessed with the German version of the adult attachment scale (AAS). Single adult males did not show a more anxious attachment style than single adult females or females in relationships. Younger (i.e., 18 to 30 years old) paired individuals showed greater attachment anxiety than single individuals, whereas single individuals between the ages of 31 and 45 showed greater attachment anxiety than individuals in relationships. In addition, single individuals more frequently had obtained their high school diploma in contrast to individuals in relationships. Concerning attachment style, individuals who had not completed their high school diploma showed less faith in others, independent of singlehood or being in a relationship. Concerning age, older single individuals (i.e., 46 to 60 years) felt less comfortable with closeness and showed less faith in others compared to paired individuals. Logistic regression showed that individuals were not single if they did not mind depending on others, showed high attachment anxiety, were older, and had lower education. An income below € 2000/month was linked to a nearly 13-fold increase in the likelihood of being single. In sum, attachment style had a differential, age-dependent association with singlehood versus being in a relationship. Education also played a role, exclusively concerning faith in others.
NASA Astrophysics Data System (ADS)
Chen, Jinlei; Wen, Jun; Tian, Hui
2016-02-01
Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using soil moisture data derived from the "Maqu soil moisture observational network" (101°40′-102°40′E, 33°30′-35°45′N), which is supported by the Chinese Academy of Sciences. Furthermore, the up-scaling method best suited to this network has been identified by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement (R ⩾ 0.99, RMSD ⩽ 0.02 m³/m³), the NRS at both 5 and 10 cm depth is 10. (2) The representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and the optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination of sites at 5 cm depth is NST01, NST02, and NST07; NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established accordingly. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) The linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, a period during which a large number of observed soil moisture data were lost.
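A minimal sketch of the linear-fitting up-scaling step (illustrative data layout, not the authors' processing scripts): the network average stands in for regional soil moisture, and a linear regression maps the representative single-site series onto it.

```python
import numpy as np

def upscale_linear(single_site, network):
    """single_site : 1-D series from the representative site (m^3/m^3).
    network     : 2-D array (n_times, n_sites) from all network sites.
    Returns slope a and intercept b of regional ~ a*single_site + b, plus RMSD."""
    regional = network.mean(axis=1)                    # areal mean as reference
    a, b = np.polyfit(single_site, regional, deg=1)    # linear fitting
    rmsd = np.sqrt(np.mean((a * single_site + b - regional) ** 2))
    return a, b, rmsd
```

Once a and b are fixed from the calibration period, the representative site alone can be used to estimate the regional series, which is the practical appeal of the single-site approach.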
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.
2014-07-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When a certain threshold carbon load is exceeded, multipoint correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated TC by 22, 36 and 12%, respectively, with the corresponding thresholds being ~0, 20 and 25 μgC. For sucrose, however, such a discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method is to use multipoint-corrected data when the carbon load is below the determined threshold, and single-point results beyond it. The effectiveness of this correction method was supported by correlation with optical data.
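The proposed correction reduces to a simple threshold rule, sketched below with illustrative argument names (the thresholds are the protocol-specific values reported above):

```python
def corrected_tc(tc_multipoint, tc_singlepoint, carbon_load_ug, threshold_ug):
    """Use the multipoint-corrected TC below the threshold carbon load,
    and the single-point result at or above it."""
    return tc_multipoint if carbon_load_ug < threshold_ug else tc_singlepoint
```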
Cuozzo, Frank P; Rasoazanabary, Emilienne; Godfrey, Laurie R; Sauther, Michelle L; Youssouf, Ibrahim Antho; LaFleur, Marni M
2013-01-01
A thorough knowledge of biological variation in extant primates is imperative for interpreting variation, and for delineating species in primate biology and paleobiology. This is especially the case given the recent, rapid taxonomic expansion in many primate groups, notably among small-bodied nocturnal forms. Here we present data on dental, cranial, and pelage variation in a single-locality museum sample of mouse lemurs from Amboasary, Madagascar. To interpret these data, we include comparative information from other museum samples, and from a newly collected mouse lemur skeletal sample from the Beza Mahafaly Special Reserve (BMSR), Madagascar. We scored forty dental traits (n = 126) and three pelage variants (n = 19), and collected 21 cranial/dental measures. Most dental traits exhibit variable frequencies, with some only rarely present. Individual dental variants include misshapen and supernumerary teeth. All Amboasary pelage specimens display a "reversed V" on the cap, and a distinct dorsal median stripe on the back. All but two displayed the dominant gray-brown pelage coloration typical of Microcebus griseorufus. Cranial and dental metric variability are each quite low, and craniometric variation does not illustrate heteroscedasticity. To assess whether this sample represents a single species, we compared dental and pelage variation to a documented, single-species M. griseorufus sample from BMSR. As at Amboasary, BMSR mouse lemurs display limited odontometric variation and wide variation in non-metric dental traits. In contrast, BMSR mouse lemurs display diverse pelage, despite reported genetic homogeneity. Ranges of dental and pelage variation at BMSR and Amboasary overlap. Thus, we conclude that the Amboasary mouse lemurs represent a single species - most likely (in the absence of genetic data to the contrary) M. griseorufus, and we reject their previous allocation to Microcebus murinus. Patterns of variation in the Amboasary sample provide a comparative template for recognizing the degree of variation manifested in a single primate population, and by implication, they provide minimum values for this species' intraspecific variation. Finally, discordance between different biological systems in our mouse lemur samples illustrates the need to examine multiple systems when conducting taxonomic analyses among living or fossil primates. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single-objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The multicriteria optimization algorithm's running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
Correlation of Spatially Filtered Dynamic Speckles in Distance Measurement Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, Dmitry V.; Nippolainen, Ervin; Kamshilin, Alexei A.
2008-04-15
In this paper, the statistical properties of spatially filtered dynamic speckles are considered. This phenomenon has not yet been sufficiently studied, although spatial filtering is an important instrument for speckle velocity measurements. In the case of spatial filtering, speckle velocity information is derived from the modulation frequency of the filtered light power, which is measured by a photodetector. A typical photodetector output is a narrow-band random noise signal that includes non-informative intervals. Therefore, a reasonably precise frequency measurement requires averaging, and averaging in turn implies uncorrelated samples. However, in the course of this research we found that correlation is a typical property not only of dynamic speckle patterns but also of spatially filtered speckles. With spatial filtering, the correlation is observed between measurements made on the same part of the object surface, or when several adjacent photodetectors are used simultaneously. The correlations found cannot be explained using only the properties of unfiltered dynamic speckles. As we demonstrate, the subject of this paper is important not only from a purely theoretical point of view but also from the point of view of applied speckle metrology; for example, using a single spatial filter and an array of photodetectors can greatly improve the accuracy of speckle velocity measurements.
Li, Yiyan; Yang, Xing; Zhao, Weian
2018-01-01
Rapid bacterial identification (ID) and antibiotic susceptibility testing (AST) are in great demand due to the rise of drug-resistant bacteria. Conventional culture-based AST methods suffer from a long turnaround time. By necessity, physicians often have to treat patients empirically with antibiotics, which has led to an inappropriate use of antibiotics, an elevated mortality rate and healthcare costs, and antibiotic resistance. Recent advances in miniaturization and automation provide promising solutions for rapid bacterial ID/AST profiling, which will potentially make a significant impact in the clinical management of infectious diseases and antibiotic stewardship in the coming years. In this review, we summarize and analyze representative emerging micro- and nanotechnologies, as well as automated systems for bacterial ID/AST, including both phenotypic (e.g., microfluidic-based bacterial culture, and digital imaging of single cells) and molecular (e.g., multiplex PCR, hybridization probes, nanoparticles, synthetic biology tools, mass spectrometry, and sequencing technologies) methods. We also discuss representative point-of-care (POC) systems that integrate sample processing, fluid handling, and detection for rapid bacterial ID/AST. Finally, we highlight major remaining challenges and discuss potential future endeavors toward improving clinical outcomes with rapid bacterial ID/AST technologies. PMID:28850804
Performance of the Cell processor for biomolecular simulations
NASA Astrophysics Data System (ADS)
De Fabritiis, G.
2007-06-01
The new Cell processor represents a turning point for computing intensive applications. Here, I show that for molecular dynamics it is possible to reach an impressive sustained performance in excess of 30 Gflops with a peak of 45 Gflops for the non-bonded force calculations, over one order of magnitude faster than a single core standard processor.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420...) Cold rolling mills—(1) Recirculation—single stand. Subpart J Pollutant or pollutant property BCT...) (1) 1 Within the range of 6.0 to 9.0. (b) Cold worked pipe and tube—(1) Using water. Subpart J...
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420...) Cold rolling mills—(1) Recirculation—single stand. Subpart J Pollutant or pollutant property BCT...) (1) 1 Within the range of 6.0 to 9.0. (b) Cold worked pipe and tube—(1) Using water. Subpart J...
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420...) Cold rolling mills—(1) Recirculation—single stand. Subpart J Pollutant or pollutant property BCT...) (1) 1 Within the range of 6.0 to 9.0. (b) Cold worked pipe and tube—(1) Using water. Subpart J...
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420...) Cold rolling mills—(1) Recirculation—single stand. Subpart J Pollutant or pollutant property BCT...) (1) 1 Within the range of 6.0 to 9.0. (b) Cold worked pipe and tube—(1) Using water. Subpart J...
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420...) Cold rolling mills—(1) Recirculation—single stand. Subpart J Pollutant or pollutant property BCT...) (1) 1 Within the range of 6.0 to 9.0. (b) Cold worked pipe and tube—(1) Using water. Subpart J...
ERIC Educational Resources Information Center
Liao, Yuan
2011-01-01
The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…
Nuclear thermionic power plants in the 50-300 kWe range.
NASA Technical Reports Server (NTRS)
Van Hoomissen, J. E.; Sawyer, C. D.; Prickett, W. Z.
1972-01-01
This paper reviews the results of recent studies performed by General Electric on in-core thermionic reactor power plants in the 50-300 kWe range. In particular, a 100 kWe manned Space Base mission and a 240 kWe unmanned electric propulsion mission are singled out as representative design points for this concept.
Computed Tomography to Estimate the Representative Elementary Area for Soil Porosity Measurements
Borges, Jaqueline Aparecida Ribaski; Pires, Luiz Fernando; Belmont Pereira, André
2012-01-01
Computed tomography (CT) is a technique that provides images of different solid and porous materials. CT could be an ideal tool to study representative sizes of soil samples because of the noninvasive character of the technique. The scrutiny of such representative elementary sizes (RESs) has been the target of attention of many researchers in the soil physics field owing to the strong relationship between physical properties and the size of the soil sample. In the current work, data from gamma-ray CT were used to assess the RES for measurements of soil porosity (ϕ). For the statistical analysis, we propose a study of the full width at half maximum (FWHM) of the fitted distribution of ϕ over different areas (1.2 to 1162.8 mm²) selected inside the tomographic images. The results point out that samples with a cross-sectional area of at least 882.1 mm² were the ones that provided representative values of ϕ for the studied Brazilian tropical soil. PMID:22666133
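A hedged sketch of the FWHM criterion (an illustrative implementation, not the authors' code; a Gaussian approximation FWHM = 2.355·sigma stands in for fitting the distribution): porosity is evaluated over sub-areas of increasing size, and the representative area is the smallest one whose distribution width falls below a chosen tolerance.

```python
import numpy as np

def porosity_fwhm(binary_image, window):
    """FWHM of the porosity distribution over non-overlapping windows.

    binary_image : 2-D array, 1 = pore, 0 = solid.
    window       : edge length (pixels) of the square sampling area.
    """
    ny, nx = binary_image.shape
    phi = [binary_image[i:i + window, j:j + window].mean()
           for i in range(0, ny - window + 1, window)
           for j in range(0, nx - window + 1, window)]
    return 2.355 * np.std(phi)      # Gaussian FWHM = 2*sqrt(2 ln 2)*sigma

def representative_area(binary_image, windows, tol):
    """Smallest window size whose porosity FWHM is below `tol`."""
    for w in sorted(windows):
        if porosity_fwhm(binary_image, w) <= tol:
            return w
    return None
```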
ERIC Educational Resources Information Center
Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.
2011-01-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
NASA Astrophysics Data System (ADS)
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.
1980-09-01
where φ_BD represents the instantaneous effect of the body, while φ_FS represents the free surface disturbance generated by the body over all previous…acceleration boundary condition. This determines the time-derivative of the body-induced component of the flow, φ_BD (as well as φ_BD itself through integration)…a panel with uniform density σ_i acting over a surface of area A_i is replaced by a single point source with strength s_i(t) = A_i(σ_i(t_n) + (t − t_n)…
Kennedy, Laura; Vass, J. Keith; Haggart, D. Ross; Moore, Steve; Burczynski, Michael E.; Crowther, Dan; Miele, Gino
2008-01-01
Peripheral blood as a surrogate tissue for transcriptome profiling holds great promise for the discovery of diagnostic and prognostic disease biomarkers, particularly when target tissues of disease are not readily available. To maximize the reliability of gene expression data generated from clinical blood samples, both the sample collection and the microarray probe generation methods should be optimized to provide stabilized, reproducible and representative gene expression profiles faithfully representing the transcriptional profiles of the constituent blood cell types present in the circulation. Given the increasing innovation in this field in recent years, we investigated a combination of methodological advances in both RNA stabilisation and microarray probe generation with the goal of achieving robust, reliable and representative transcriptional profiles from whole blood. To assess the whole blood profiles, the transcriptomes of purified blood cell types were measured and compared with the global transcriptomes measured in whole blood. The results demonstrate that a combination of PAXgene™ RNA stabilising technology and single-stranded cDNA probe generation afforded by the NuGEN Ovation RNA amplification system V2™ enables an approach that yields faithful representation of specific hematopoietic cell lineage transcriptomes in whole blood without the necessity for prior sample fractionation, cell enrichment or globin reduction. Storage stability assessments of the PAXgene™ blood samples also advocate a short, fixed room temperature storage time for all PAXgene™ blood samples collected for the purposes of global transcriptional profiling in clinical studies. PMID:19578521
ERIC Educational Resources Information Center
Kansi, Juliska; Wichstrom, Lars; Bergman, Lars R.
2005-01-01
The longitudinal stability of eating problems and their relationships to risk factors were investigated in a representative population sample of 623 Norwegian girls aged 13-14 followed over 7 years (3 time points). Three eating problem symptoms were measured: Restriction, Bulimia-food preoccupation, and Diet, all taken from the 12-item Eating…
Comparison of efficacy of pulverization and sterile paper point techniques for sampling root canals.
Tran, Kenny T; Torabinejad, Mahmoud; Shabahang, Shahrokh; Retamozo, Bonnie; Aprecio, Raydolfo M; Chen, Jung-Wei
2013-08-01
The purpose of this study was to compare the efficacy of the pulverization and sterile paper point techniques for sampling root canals using 5.25% NaOCl/17% EDTA and 1.3% NaOCl/MTAD (Dentsply, Tulsa, OK) as irrigation regimens. Single-canal extracted human teeth were decoronated and infected with Enterococcus faecalis. Roots were randomly assigned to 2 irrigation regimens: group A with 5.25% NaOCl/17% EDTA (n = 30) and group B with 1.3% NaOCl/MTAD (n = 30). After chemomechanical debridement, bacterial samples were taken using sterile paper points and pulverized powder of the apical 5 mm of the root ends. The sterile paper point technique did not show growth in any samples. The pulverization technique showed growth in 24 of the 60 samples. The Fisher exact test showed significant differences between sampling techniques (P < .001). The sterile paper point technique showed no difference between irrigation regimens. However, 17 of the 30 roots in group A and 7 of the 30 roots in group B resulted in growth as detected by the pulverization technique. The data showed a significant difference between irrigation regimens (P = .03) with the pulverization technique. The pulverization technique was more efficacious in detecting viable bacteria. Furthermore, this technique showed that the 1.3% NaOCl/MTAD regimen was more effective in disinfecting root canals. Published by Elsevier Inc.
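The between-regimen comparison reported for the pulverization technique can be reproduced in outline from the counts in the abstract (assuming a standard two-sided Fisher exact test; the exact p-value depends on the test options the authors used):

```python
from scipy.stats import fisher_exact

# growth / no-growth counts detected by the pulverization technique
table = [[17, 30 - 17],   # group A: 5.25% NaOCl / 17% EDTA
         [7, 30 - 7]]     # group B: 1.3% NaOCl / MTAD
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
```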
Single-qubit unitary gates by graph scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumer, Benjamin A.; Underwood, Michael S.; Feder, David L.
2011-12-15
We consider the effects of plane-wave states scattering off finite graphs as an approach to implementing single-qubit unitary operations within the continuous-time quantum walk framework of universal quantum computation. Four semi-infinite tails are attached at arbitrary points of a given graph, representing the input and output registers of a single qubit. For a range of momentum eigenstates, we enumerate all of the graphs with up to n=9 vertices for which the scattering implements a single-qubit gate. As n increases, the number of new unitary operations increases exponentially, and for n>6 the majority correspond to rotations about axes distributed roughly uniformly across the Bloch sphere. Rotations by both rational and irrational multiples of π are found.
Electrical conductivity of high-purity germanium crystals at low temperature
NASA Astrophysics Data System (ADS)
Yang, Gang; Kooi, Kyler; Wang, Guojian; Mei, Hao; Li, Yangyang; Mei, Dongming
2018-05-01
The temperature dependence of the electrical conductivity of single-crystal and polycrystalline high-purity germanium (HPGe) samples has been investigated in the temperature range from 7 to 100 K. The conductivity versus inverse temperature curves for three single-crystal samples consist of two distinct temperature ranges: a high-temperature range where the conductivity increases to a maximum with decreasing temperature, and a low-temperature range where the conductivity decreases slowly with decreasing temperature. In contrast, the conductivity versus inverse temperature curves for three polycrystalline samples, in addition to a high- and a low-temperature range where similar conductive behavior is shown, have a medium-temperature range where the conductivity decreases dramatically with decreasing temperature. The turning-point temperatures (T_m), which correspond to the maximum values of the conductivity on the conductivity versus inverse temperature curves, are higher for the polycrystalline samples than for the single-crystal samples. Additionally, the net carrier concentrations of all samples have been calculated based on the measured conductivity over the whole measurement temperature range. The calculated results show that the ionized carrier concentration increases with increasing temperature due to thermal excitation, but it reaches saturation around 40 K for the single-crystal samples and 70 K for the polycrystalline samples. All these differences between the single-crystal samples and the polycrystalline samples could be attributed to trapping and scattering effects of the grain boundaries on the charge carriers. Relevant physical models have been proposed to explain these differences in conductive behavior between the two kinds of samples.
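A hedged sketch of the carrier-concentration step (a simple single-carrier estimate n = σ/(qμ) with an assumed mobility; the paper's calculation may use a temperature-dependent mobility model):

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C

def carrier_concentration(sigma_S_per_cm, mobility_cm2_per_Vs):
    """n = sigma / (q * mu) for a single dominant carrier type.

    Returns the net carrier concentration in cm^-3.
    """
    return sigma_S_per_cm / (E_CHARGE * mobility_cm2_per_Vs)

# e.g. sigma = 1e-2 S/cm with an assumed mobility of 4e4 cm^2/(V s) at low T
print(carrier_concentration(1e-2, 4e4))   # ~1.6e12 cm^-3
```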
Letter Report: Stable Hydrogen and Oxygen Isotope Analysis of B-Complex Perched Water Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Brady D.; Moran, James J.; Nims, Megan K.
Fine-grained sediments associated with the Cold Creek Unit at Hanford have caused the formation of a perched water aquifer in the deep vadose zone at the B Complex area, which includes waste sites in the 200-DV-1 Operable Unit and the single-shell tank farms in Waste Management Area B-BX-BY. High levels of contaminants, such as uranium, technetium-99, and nitrate, make this aquifer a continuing source of contamination for the groundwater located a few meters below the perched zone. Analysis of deuterium (²H) and oxygen-18 (¹⁸O) of nine perched water samples from three different wells was performed. Samples represent time points from hydraulic tests performed on the perched aquifer using the three wells. The isotope analyses showed that the perched water had δ²H and δ¹⁸O ratios consistent with the regional meteoric water line, indicating that local precipitation events at the Hanford site likely account for recharge of the perched water aquifer. Data from the isotope analysis can be used along with pumping and recovery data to help understand the perched water dynamics related to aquifer size and hydraulic control of the aquifer in the future.
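A minimal sketch of the meteoric-water-line check (the global line δ²H = 8·δ¹⁸O + 10 is used here for illustration; the study compares against the regional line, whose slope and intercept differ):

```python
def deuterium_excess(d2h_permil, d18o_permil):
    """d-excess = delta2H - 8*delta18O; ~ +10 permil on the global meteoric line."""
    return d2h_permil - 8.0 * d18o_permil

# illustrative values typical of mid-latitude precipitation (not study data)
print(deuterium_excess(-110.0, -14.5))   # -> 6.0 permil, close to the meteoric line
```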
High-pressure high-temperature phase diagram of gadolinium studied using a boron-doped heater anvil
NASA Astrophysics Data System (ADS)
Montgomery, J. M.; Samudrala, G. K.; Velisavljevic, N.; Vohra, Y. K.
2016-04-01
A boron-doped designer heater anvil is used in conjunction with powder x-ray diffraction to collect structural information on a sample of quasi-hydrostatically loaded gadolinium metal up to pressures above 8 GPa and 600 K. The heater anvil consists of a natural diamond anvil that has been surface modified with a homoepitaxially grown chemical-vapor-deposited layer of conducting boron-doped diamond, and is used as a DC heating element. Internally insulating both diamond anvils with sapphire support seats allows for heating and cooling of the high-pressure area on the order of a few tens of seconds. This device is then used to scan the phase diagram of the sample by oscillating the temperature while continuously increasing the externally applied pressure and collecting in situ time-resolved powder diffraction images. In the pressure-temperature range covered in this experiment, the gadolinium sample is observed in its hcp, αSm, and dhcp phases. Under this temperature cycling, the hcp → αSm transition proceeds in discontinuous steps at points along the expected phase boundary. From these measurements (representing only one hour of synchrotron x-ray collection time), a single-experiment equation of state and phase diagram of each phase of gadolinium is presented for the range of 0-10 GPa and 300-650 K.
An Increase of Intelligence Measured by the WPPSI in China, 1984-2006
ERIC Educational Resources Information Center
Liu, Jianghong; Yang, Hua; Li, Linda; Chen, Tunong; Lynn, Richard
2012-01-01
Normative data for 5-6 year olds on the Chinese Preschool and Primary Scale of Intelligence (WPPSI) are reported for samples tested in 1984 and 2006. There was a significant increase in Full Scale IQ of 4.53 points over the 22 year period, representing a gain of 2.06 IQ points per decade. There were also significant increases in Verbal IQ of 4.27…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koivisto, U.M.; Viikari, J.S.; Kontula, K.
Two deletions of the low-density lipoprotein (LDL) receptor gene were previously shown to account for about two thirds of all mutations causing familial hypercholesterolemia (FH) in Finland. We screened the DNA samples from a cohort representing the remaining 30% of Finnish heterozygous FH patients by amplifying all 18 exons of the receptor gene by PCR and searching for DNA variations with the SSCP technique. Ten novel mutations were identified, comprising two nonsense and seven missense mutations as well as one frameshift mutation caused by a 13-bp deletion. A single nucleotide change, substituting adenine for guanine at position 2533 and resulting in an amino acid change of glycine to aspartic acid at codon 823, was found in DNA samples from 14 unrelated FH probands. This mutation (FH-Turku) affects the sequence encoding the putative basolateral sorting signal of the LDL receptor protein; however, the exact functional consequences of this mutation are yet to be examined. The FH-Turku gene and another point mutation (Leu380→His, or FH-Pori) together account for ~8% of the FH-causing genes in Finland and are particularly common among FH patients from the southwestern part of the country (combined, 30%). Primer-introduced restriction analysis was applied for convenient assay of the FH-Turku and FH-Pori point mutations. In conclusion, this paper demonstrates the unique genetic background of FH in Finland and describes a commonly occurring FH gene with a missense mutation closest to the C terminus thus far reported. 32 refs., 5 figs., 2 tabs.
Vardhan Reddy, Puram Vishnu; Shiva Nageswara Rao, Singireesu Soma; Pratibha, Mambatta Shankaranarayanan; Sailaja, Buddhi; Kavya, Bakka; Manorama, Ravoori Ruth; Singh, Shiv Mohan; Radha Srinivas, Tanuku Naga; Shivaji, Sisinthy
2009-10-01
The culturable bacterial diversity of Midtre Lovenbreen glacier, an Arctic glacier, was studied using 12 sediment samples collected from different points along a transect, from the snout of Midtre Lovenbreen glacier up to the convergence point of the melt water stream with the sea. Bacterial abundance appeared to be higher closer to the convergence point of the glacial melt water stream with the sea than at the snout of the glacier. A total of 117 bacterial strains were isolated from the sediment samples. Based on 16S rRNA gene sequence analyses, the isolates (n=117) could be categorised into 32 groups, with each group representing a different taxon belonging to one of 4 phyla (Actinobacteria, Bacilli, Flavobacteria and Proteobacteria). Representatives of the 32 groups varied in their growth temperature range (4-37 degrees C), in their tolerance to NaCl (0.1-1 M NaCl) and in their growth pH range (2-13). Only 14 of the 32 representative strains exhibited amylase, lipase and/or protease activity, and only one isolate (AsdM4-6) showed all three enzyme activities at 5 and 20 degrees C, respectively. More than half of the isolates were pigmented. Fatty acid profile studies indicated that short-chain fatty acids, unsaturated fatty acids, branched fatty acids, and cyclic and cis fatty acids are predominant in the psychrophilic bacteria.
On singular and highly oscillatory properties of the Green function for ship motions
NASA Astrophysics Data System (ADS)
Chen, Xiao-Bo; Xiong Wu, Guo
2001-10-01
The Green function used for analysing ship motions in waves is the velocity potential due to a point source pulsating and advancing at a uniform forward speed. The behaviour of this function is investigated, in particular for the case when the source is located at or close to the free surface. In the far field, the Green function is represented by a single integral along one closed dispersion curve and two open dispersion curves. The single integral along the open dispersion curves is analysed based on the asymptotic expansion of a complex error function. The singular and highly oscillatory behaviour of the Green function is captured, which shows that the Green function oscillates with indefinitely increasing amplitude and indefinitely decreasing wavelength, when a field point approaches the track of the source point at the free surface. This sheds some light on the nature of the difficulties in the numerical methods used for predicting the motion of a ship advancing in waves.
Microwave Induced Direct Bonding of Single Crystal Silicon Wafers
NASA Technical Reports Server (NTRS)
Budraa, N. K.; Jackson, H. W.; Barmatz, M.
1999-01-01
We have heated polished doped single-crystal silicon wafers in a single mode microwave cavity to temperatures where surface to surface bonding occurred. The absorption of microwaves and heating of the wafers is attributed to the inclusion of n-type or p-type impurities into these substrates. A cylindrical cavity TM010 standing wave mode was used to irradiate samples of various geometries at positions of high magnetic field. This process was conducted in vacuum to exclude plasma effects. This initial study suggests that the inclusion of impurities in single crystal silicon significantly improved its microwave absorption (loss factor) to a point where heating silicon wafers directly can be accomplished in minimal time. Bonding of these substrates, however, occurs only at points of intimate surface to surface contact. The inclusion of a thin metallic layer on the surfaces enhances the bonding process.
Non-invasive cortisol measurements as indicators of physiological stress responses in guinea pigs
Pschernig, Elisabeth; Wallner, Bernard; Millesi, Eva
2016-01-01
Non-invasive measurements of glucocorticoid (GC) concentrations, including cortisol and corticosterone, serve as reliable indicators of adrenocortical activities and physiological stress loads in a variety of species. As an alternative to invasive analyses based on plasma, GC concentrations in saliva still represent single-point-of-time measurements, suitable for studying short-term or acute stress responses, whereas fecal GC metabolites (FGMs) reflect overall stress loads and stress responses after a species-specific time frame in the long-term. In our study species, the domestic guinea pig, GC measurements are commonly used to indicate stress responses to different environmental conditions, but the biological relevance of non-invasive measurements is widely unknown. We therefore established an experimental protocol based on the animals’ natural stress responses to different environmental conditions and compared GC levels in plasma, saliva, and fecal samples during non-stressful social isolations and stressful two-hour social confrontations with unfamiliar individuals. Plasma and saliva cortisol concentrations were significantly increased directly after the social confrontations, and plasma and saliva cortisol levels were strongly correlated. This demonstrates a high biological relevance of GC measurements in saliva. FGM levels measured 20 h afterwards, representing the reported mean gut passage time based on physiological validations, revealed that the overall stress load was not affected by the confrontations, but also no relations to plasma cortisol levels were detected. We therefore measured FGMs in two-hour intervals for 24 h after another social confrontation and detected significantly increased levels after four to twelve hours, reaching peak concentrations already after six hours. Our findings confirm that non-invasive GC measurements in guinea pigs are highly biologically relevant in indicating physiological stress responses compared to circulating levels in plasma in the short- and long-term. Our approach also underlines the importance of detailed investigations on how to use and interpret non-invasive measurements, including the determination of appropriate time points for sample collections. PMID:26839750
[Dissociative phenomena in a sample of outpatients].
Cantone, Daniela; Sperandeo, Raffaele; Maldonato, Mauro Nelson; Cozzolino, Pasquale; Perris, Francesco
2012-01-01
The study describes the frequency and quality of dissociative phenomena and their relationship with axis I disorders and psychopathological severity in outpatients. The sample (N=383) was administered the MINI diagnostic interview and the DES and SCL-90 self-assessment scales. The data were analysed using SPSS. Of the subjects, 11.0% had a score ≥20 on the DES, while 5.2% reported no dissociative symptoms. Absorption/imaginative involvement was the most frequent dissociative phenomenon and dissociative amnesia the least common. A relationship was found between dissociative phenomena and unemployment, marital separation and single status, together with an inverse relationship with age. Dissociative phenomena were more frequent in participants diagnosed with at least one axis I disorder, and their severity correlated positively with the number of diagnosed disorders and with scores on the General Symptomatic Index. Our results point towards the existence of three types of dissociative experiences. The first type, represented by the factor absorption/imaginative involvement, is expressed along a continuum from normal to pathological; a second type, represented by the factor depersonalization/derealization, occurs in a significantly more intense and specific form among subjects with axis I disorders; the last type, described by dissociative amnesia, seems to have a predominantly typological character that qualifies it as an experience not commonly distributed in the general population. Identifying dissociative symptoms is necessary for psychopathological evaluation and to improve the effectiveness of treatment programs.
Arabidopsis myrosinases link the glucosinolate-myrosinase system and the cuticle
Ahuja, Ishita; de Vos, Ric C. H.; Rohloff, Jens; Stoopen, Geert M.; Halle, Kari K.; Ahmad, Samina Jam Nazeer; Hoang, Linh; Hall, Robert D.; Bones, Atle M.
2016-01-01
Physical barriers and reactive phytochemicals represent two important components of a plant’s defence system against environmental stress. However, these two defence systems have generally been studied independently. Here, we have taken an exclusive opportunity to investigate the connection between a chemical-based plant defence system, represented by the glucosinolate-myrosinase system, and a physical barrier, represented by the cuticle, using Arabidopsis myrosinase (thioglucosidase; TGG) mutants. The tgg1 single and tgg1 tgg2 double mutants showed morphological changes compared to wild-type plants, visible as changes in pavement cells, stomatal cells and the ultrastructure of the cuticle. Extensive metabolite analyses of leaves from tgg mutants and wild-type Arabidopsis plants showed altered levels of cuticular fatty acids, fatty acid phytyl esters, glucosinolates, and indole compounds in tgg single and double mutants as compared to wild-type plants. These results point to a close and novel association between chemical defence systems and physical defence barriers. PMID:27976683
Concrete/mortar water phase transition studied by single-point MRI methods.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W; Grattan-Bellew, P E
1998-01-01
A series of magnetic resonance imaging (MRI) water density and T2* profiles in hardened concrete and mortar samples has been obtained during freezing conditions (-50 degrees C < T < 11 degrees C). The single-point ramped imaging with T1 enhancement (SPRITE) sequence is optimal for this study given the characteristic short relaxation times of water in these porous media (T2* < 200 microseconds and T1 < 3.6 ms). The frozen and evaporable water distribution was quantified through a position-based study of the profile magnitude. Submillimetric resolution of proton-density and T2*-relaxation parameters as a function of temperature has been achieved.
Fat Content and Composition in Retail Samples of Australian Beef Mince
Fayet-Moore, Flavia; Cunningham, Judy; Stobaus, Tim; Droulez, Veronique
2014-01-01
Nutrient composition data, representative of the retail supply, are required to support labelling and dietetic practice. Because beef mince represents approximately 30% of all beef dishes prepared in Australian households, a national survey of the different types of mince available for purchase in representative retail outlets was conducted. Sixty-one samples of beef mince from 24 retail outlets in New South Wales, Queensland, Victoria and Western Australia were collected in 2010 and analysed for moisture, protein, total fat and fatty acid profile. A variety of 18 different descriptors were used at point of sale, with “Premium” (n = 15) and “Regular” (n = 8) the most commonly used terms. The analysed fat content of “Premium” samples varied from 2.2 g/100 g to 8.0 g/100 g. Forty-eight percent (n = 29) of the samples were categorised as low fat (<5 g/100 g; mean 4.1 g/100 g), 21% as medium fat (5–10 g/100 g; mean 8.9 g/100 g) and 31% as high fat (>10 g/100 g; mean 10.4 g/100 g). There was no significant difference between the types of mince available for purchase in low versus high socio-economic suburbs (Chi-square, p > 0.05). In conclusion, the fat content of the majority of retail beef mince in Australia is <10 g/100 g, and a variety of descriptors are used at point of sale, not all of which necessarily reflect the analysed fat content. PMID:24922174
NASA Astrophysics Data System (ADS)
Serafini, John; Hossain, A.; James, R. B.; Guziewicz, M.; Kruszka, R.; Słysz, W.; Kochanowska, D.; Domagala, J. Z.; Mycielski, A.; Sobolewski, Roman
2017-07-01
We present our studies on both photoconductive (PC) and electro-optic (EO) responses of (Cd,Mg)Te single crystals. In an In-doped Cd0.92Mg0.08Te single crystal, subpicosecond electrical pulses were optically generated via a PC effect, coupled into a transmission line, and, subsequently, detected using an internal EO sampling scheme, all in the same (Cd,Mg)Te material. For photo-excitation and EO sampling, we used femtosecond optical pulses generated by the same Ti:sapphire laser with the wavelengths of 410 and 820 nm, respectively. The shortest transmission line distance between the optical excitation and EO sampling points was 75 μm. By measuring the transient waveforms at different distances from the excitation point, we calculated the transmission-line complex propagation factor, as well as the THz frequency attenuation factor and the propagation velocity, all of which allowed us to reconstruct the electromagnetic transient generated directly at the excitation point, showing that the original PC transient was subpicosecond in duration with a fall time of ˜500 fs. Finally, the measured EO retardation, together with the amount of the electric-field penetration, allowed us to determine the magnitude of the internal EO effect in our (Cd,Mg)Te crystal. The obtained THz-frequency EO coefficient was equal to 0.4 pm/V, which is at the lower end among the values reported for CdTe-based ternaries, apparently, due to the disorientation of the tested crystal that resulted in the non-optimal EO measurement condition.
Zidaric, Valerija; Pardon, Bart; dos Vultos, Tiago; Deprez, Piet; Brouwer, Michael Sebastiaan Maria; Roberts, Adam P.; Henriques, Adriano O.
2012-01-01
Clostridium difficile strains were sampled periodically from 50 animals at a single veal calf farm over a period of 6 months. At arrival, 10% of animals were C. difficile positive, and the peak incidence was determined to occur at the age of 18 days (16%). The prevalence then decreased, and at slaughter, C. difficile could not be isolated. Six different PCR ribotypes were detected, and strains within a single PCR ribotype could be differentiated further by pulsed-field gel electrophoresis (PFGE). The PCR ribotype diversity was high up to the animal age of 18 days, but at later sampling points, PCR ribotype 078 and the highly related PCR ribotype 126 predominated. Resistance to tetracycline, doxycycline, and erythromycin was detected, while all strains were susceptible to amoxicillin and metronidazole. Multiple variations of the resistance gene tet(M) were present at the same sampling point, and these changed over time. We have shown that PCR ribotypes often associated with cattle (ribotypes 078, 126, and 033) were not clonal but differed in PFGE type, sporulation properties, antibiotic sensitivities, and tetracycline resistance determinants, suggesting that multiple strains of the same PCR ribotype infected the calves and that calves were likely to be infected prior to arrival at the farm. Importantly, strains isolated at later time points were more likely to be resistant to tetracycline and erythromycin and showed higher early sporulation efficiencies in vitro, suggesting that these two properties converge to promote the persistence of C. difficile in the environment or in hosts. PMID:23001653
Serafini, John; Hossain, A.; James, R. B.; ...
2017-07-03
We present our studies on both photoconductive (PC) and electro-optic (EO) responses of (Cd,Mg)Te single crystals. In an In-doped Cd0.92Mg0.08Te single crystal, subpicosecond electrical pulses were optically generated via a PC effect, coupled into a transmission line, and, subsequently, detected using an internal EO sampling scheme, all in the same (Cd,Mg)Te material. For photo-excitation and EO sampling, we used femtosecond optical pulses generated by the same Ti:sapphire laser with the wavelengths of 410 and 820 nm, respectively. The shortest transmission line distance between the optical excitation and EO sampling points was 75 μm. By measuring the transient waveforms at different distances from the excitation point, we calculated the transmission-line complex propagation factor, as well as the THz frequency attenuation factor and the propagation velocity, all of which allowed us to reconstruct the electromagnetic transient generated directly at the excitation point, showing that the original PC transient was subpicosecond in duration with a fall time of ~500 fs. Finally, the measured EO retardation, together with the amount of the electric-field penetration, allowed us to determine the magnitude of the internal EO effect in our (Cd,Mg)Te crystal. The obtained THz-frequency EO coefficient was equal to 0.4 pm/V, which is at the lower end among the values reported for CdTe-based ternaries, due to a twinned structure and misalignment of the tested (Cd,Mg)Te crystal.
Recommendations for representative ballast water sampling
NASA Astrophysics Data System (ADS)
Gollasch, Stephan; David, Matej
2017-05-01
Until now, the purpose of ballast water sampling studies has been predominantly limited to the general scientific interest of determining the variety of species arriving in ballast water in a recipient port. Knowing the variety of species arriving in ballast water also contributes to the assessment of the relative importance of species introduction vectors. Further, some sampling campaigns addressed awareness raising or the determination of organism numbers per water volume to evaluate the species introduction risk by analysing the propagule pressure of species. A new aspect of ballast water sampling, which this contribution addresses, is compliance monitoring and enforcement of ballast water management standards as set by, e.g., the IMO Ballast Water Management Convention. To achieve this, sampling methods which result in representative ballast water samples are essential. We recommend such methods based on practical tests conducted on two commercial vessels, also considering results from our previous studies. The results show that different sampling approaches influence the results regarding viable organism concentrations in ballast water samples. It was observed that the sampling duration (i.e., length of the sampling process), timing (i.e., at which point during the discharge the sample is taken), the number of samples and the sampled water quantity are the main factors influencing the concentrations of viable organisms in a ballast water sample. Based on our findings we provide recommendations for representative ballast water sampling.
Bravini, Elisabetta; Franchignoni, Franco; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano; Foti, Calogero
2015-01-01
To perform a comprehensive analysis of the psychometric properties and dimensionality of the Upper Limb Functional Index (ULFI) using both classical test theory and Rasch analysis (RA). Prospective, single-group observational design. Freestanding rehabilitation center. Convenience sample of Italian-speaking subjects with upper limb musculoskeletal disorders (N=174). Not applicable. The Italian version of the ULFI. Data were analyzed using parallel analysis, exploratory factor analysis, and RA for evaluating dimensionality, functioning of rating scale categories, item fit, hierarchy of item difficulties, and reliability indices. Parallel analysis revealed 2 factors explaining 32.5% and 10.7% of the response variance. RA confirmed the failure of the unidimensionality assumption, and 6 items out of the 25 misfitted the Rasch model. When the analysis was rerun excluding the misfitting items, the scale showed acceptable fit values, loading meaningfully to a single factor. Item separation reliability and person separation reliability were .98 and .89, respectively. Cronbach alpha was .92. RA revealed weakness of the scale concerning dimensionality and internal construct validity. However, a set of 19 ULFI items defined through the statistical process demonstrated a unidimensional structure, good psychometric properties, and clinical meaningfulness. These findings represent a useful starting point for further analyses of the tool (based on modern psychometric approaches and confirmatory factor analysis) in larger samples, including different patient populations and nationalities. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Spectrophotometric Properties of E+A Galaxies in SDSS-IV MaNGA
NASA Astrophysics Data System (ADS)
Marinelli, Mariarosa; Dudley, Raymond; Edwards, Kay; Gonzalez, Andrea; Johnson, Amalya; Kerrison, Nicole; Melchert, Nancy; Ojanen, Winonah; Weaver, Olivia; Liu, Charles; SDSS-IV MaNGA
2018-01-01
Quenched post-starburst galaxies, or E+A galaxies, represent a unique and informative phase in the evolution of galaxies. We used a qualitative rubric-based methodology, informed by the literature, to manually select galaxies from the SDSS-IV IFU survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) using the single-fiber spectra from the Sloan Digital Sky Survey Data Release 8. Of the 2,812 galaxies observed so far in MaNGA, we found 39 galaxies meeting our criteria for E+A classification. Spectral energy distributions of these 39 galaxies from the far-UV to the mid-infrared demonstrate a heterogeneity in our sample emerging in the infrared, indicating many distinct paths to visually similar optical spectra. We used SDSS-IV MaNGA Pipe3D data products to analyze stellar population ages, and found that 34 galaxies exhibited stellar populations that were older at 1 effective radius than at the center of the galaxy. Given that our sample was manually chosen based on E+A markers in the single-fiber spectra aimed at the center of each galaxy, our E+A galaxies may have only experienced their significant starbursts in the central region, with a disk of quenched or quenching material further outward. This work was supported by grants AST-1460860 from the National Science Foundation and SDSS FAST/SSP-483 from the Alfred P. Sloan Foundation to the CUNY College of Staten Island.
Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.
Kappel, S; Boulyga, S F; Dorta, L; Günther, D; Hattendorf, B; Koffler, D; Laaha, G; Leisch, F; Prohaska, T
2013-03-01
Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e. 'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated. The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U isotope ratios of particles deposited on the NUSIMEP-7 test samples.
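As an illustration of how the three evaluation strategies differ, the sketch below applies them to a simulated single-particle transient. The signal shape, noise level and the 0.00725 ratio are invented placeholders for the example, not the study's data, and the functions are a generic reading of the strategy names rather than the authors' exact implementation.

```python
import numpy as np

def point_by_point(minor, major, threshold=0.0):
    """'Point-by-point': ratio at each time slice above a signal threshold, then the mean."""
    m = major > threshold
    return np.mean(minor[m] / major[m])

def integration(minor, major):
    """'Integration': integrate (sum) each transient first, then take the ratio."""
    return minor.sum() / major.sum()

def regression_slope(minor, major):
    """'Linear regression slope': slope of minor-isotope vs. major-isotope intensities."""
    slope, _intercept = np.polyfit(major, minor, 1)
    return slope

# Hypothetical single-particle transients (arbitrary intensity units)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
profile = np.exp(-((t - 0.5) / 0.3) ** 2)          # short ablation burst
true_ratio = 0.00725                                # assumed 235U/238U value
u238 = 1e6 * profile + rng.normal(0, 300, t.size)
u235 = true_ratio * 1e6 * profile + rng.normal(0, 300, t.size)

for name, value in [("point-by-point", point_by_point(u235, u238, threshold=1e4)),
                    ("integration", integration(u235, u238)),
                    ("regression slope", regression_slope(u235, u238))]:
    print(f"{name:>17s}: {value:.5f}")
```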
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
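The "full field" reduction described above (raster the measurement point over a rectangular area, sample a time series at each point, and keep only its rms value) can be sketched as follows. The grid size, sampling rate and the stand-in vibrometer function are assumptions made for illustration; they are not the VPI hardware or software.

```python
import numpy as np

def rms(x):
    """Root-mean-square of a sampled time series."""
    return np.sqrt(np.mean(np.square(x)))

def full_field_scan(measure, nx, ny, fs=10_000.0, n_samples=1024):
    """Raster a rectangular grid of measurement points and reduce each
    point's sampled time series to a single rms velocity value."""
    t = np.arange(n_samples) / fs
    field = np.empty((ny, nx))
    for j in range(ny):            # mirror y position (normalised 0..1)
        for i in range(nx):        # mirror x position (normalised 0..1)
            field[j, i] = rms(measure(i / (nx - 1), j / (ny - 1), t))
    return field

# Hypothetical stand-in for the laser-vibrometer output: a (1,2) plate mode at 120 Hz
def fake_vibrometer(x, y, t):
    return np.sin(np.pi * x) * np.sin(2 * np.pi * y) * np.sin(2 * np.pi * 120.0 * t)

print(full_field_scan(fake_vibrometer, nx=5, ny=5).round(3))
```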
Magnetic phase composition of strontium titanate implanted with iron ions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dulov, E.N., E-mail: evgeny.dulov@ksu.ru; Ivoilov, N.G.; Strebkov, O.A.
2011-12-15
Highlights: • The origin of room-temperature ferromagnetism in iron-implanted strontium titanate. • Metallic iron nanoclusters form during implantation and define the magnetic behaviour. • Iron-substituted strontium titanate, paramagnetic at room temperature, is identified. -- Abstract: Thin magnetic films were synthesized by means of implantation of iron ions into single-crystalline (1 0 0) substrates of strontium titanate. Depth-selective conversion electron Mössbauer spectroscopy (DCEMS) indicates that the origin of the samples' magnetism is α-Fe nanoparticles. Iron-substituted strontium titanate was also identified, but with paramagnetic behaviour at room temperature. Surface magneto-optical Kerr effect (SMOKE) measurements confirm that the films reveal superparamagnetism (the low-fluence sample) or ferromagnetism (the high-fluence sample), and demonstrate the absence of magnetic in-plane anisotropy. These findings highlight iron-implanted strontium titanate as a promising candidate for composite multiferroic materials and also for gas sensing applications.
NASA Technical Reports Server (NTRS)
Jolliff, B. L.; Haskin, L. A.; Gillis, J. J.; Korotev, R. L.; Zeigler, R. A.
2002-01-01
Diversity of rock fragments in individual regolith samples from Apollo sites and inferred regolith stratigraphy from large craters and basins in the South Pole-Aitken region are used to assess the value of a single-point sample from the SPA basin. Additional information is contained in the original extended abstract.
The Simple Map for a Single-null Divertor Tokamak: How to Find the Footprint of Field lines
NASA Astrophysics Data System (ADS)
Figgins, Montoya; Ali, Halima; Punjabi, Alkesh
2000-10-01
We are working with the Simple Map^1 to find the footprint of field lines on the divertor plate in a single-null tokamak. The footprint of a field line is the position of the line when it escapes across the divertor plate. The Simple Map represents the magnetic field in a single-null divertor tokamak. The path of a field line is given by the equations: X_{n+1} = X_n - kY_n(1-Y_n) and Y_{n+1} = Y_n + kX_{n+1}. In order to find the footprint, we must first find the last good surface, which is Y=0.997135768 at X=0. The value of k is fixed at 0.6, and the starting value of X is fixed at X_0=0. We use 10,000 points between the last good surface and the X-point, which is located at (0,1). We also use the Continuous Analog of the Simple Map given by the equations: X(φ) = X_0 - kY_0(1-Y_0)φ and Y(φ) = Y_0 + kX(φ)φ. This gives the (φ, X) values at which field lines cross the divertor plate, which is located at Y=1. When graphed, the footprint of field lines looks like the rings of Saturn. This work is supported by US DOE OFES. Ms. Montoya Figgins is an HU CFRT Summer Fusion High School Scholar from E. E. Smith High School in North Carolina. She is supported by NASA under its NASA SHARP Plus Program. 1. Punjabi A, Verma A, and Boozer A, Phys Rev Lett, 69, 3322 (1992) and J Plasma Phys, 52, 91 (1994)
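A minimal numerical sketch of the footprint calculation implied by the abstract is given below: it iterates the Simple Map from starting points between the last good surface and the X-point and records X when a line crosses the divertor plate at Y = 1. The iteration cap and the reduced number of starting points (200 instead of 10,000) are choices made only to keep the sketch fast.

```python
import numpy as np

K = 0.6
Y_LAST_GOOD = 0.997135768   # last good surface (at X = 0)
Y_PLATE = 1.0               # divertor plate and X-point height

def footprint(y0, x0=0.0, max_iter=20_000):
    """Iterate the Simple Map until the field line crosses the divertor plate
    (Y >= 1); return (iteration count, X at escape) or None if still confined."""
    x, y = x0, y0
    for n in range(max_iter):
        x = x - K * y * (1.0 - y)
        y = y + K * x
        if y >= Y_PLATE:
            return n, x
    return None

# The abstract launches 10,000 field lines between the last good surface and
# the X-point; 200 are used here only to keep the sketch fast.
starts = np.linspace(Y_LAST_GOOD, Y_PLATE, 200, endpoint=False)
hits = [f for f in (footprint(y0) for y0 in starts) if f is not None]
if hits:
    xs = [x for _, x in hits]
    print(f"{len(hits)}/{len(starts)} lines escaped; "
          f"footprint X spans [{min(xs):.4f}, {max(xs):.4f}]")
```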
ERIC Educational Resources Information Center
Parker, Amy T.
2009-01-01
Persons who are deaf-blind represent a heterogeneous, low-incidence population of children and adults who, at some point in life, regardless of the presence of additional disabilities, may benefit from formal orientation and mobility (O&M) instruction. Current national policies, such as the No Child Left Behind Act, which emphasize that…
Monitoring Birds in a Regional Landscape: Lessons from the Nicolet National Forest Bird Survey
Robert W. Howe; Amy T. Wolf; Tony Rinaldi
1995-01-01
The Nicolet National Forest Bird Survey represents one of the first systematic bird monitoring programs in a USDA National Forest. Volunteers visit approximately 500 permanently marked points biennially (250 each year) during a single weekend of mid-June. Results from the first 6 years provide a general inventory of the Forest's avifauna, documentation of...
Intended and Unintended Meanings of Validity: Some Clarifying Comments
ERIC Educational Resources Information Center
Geisinger, Kurt F.
2016-01-01
The six primary papers in this issue of "Assessment in Education" emphasise a single primary point: the concept of validity is a complex one. Essentially, validity is a collective noun. That is, just as a group of players may be called a team and a group of geese a flock, so too does validity represent a variety of processes and…
Pohl, Lydia; Kölbl, Angelika; Werner, Florian; Mueller, Carsten W; Höschen, Carmen; Häusler, Werner; Kögel-Knabner, Ingrid
2018-04-30
Aluminium (Al)-substituted goethite is ubiquitous in soils and sediments. The extent of Al-substitution affects the physicochemical properties of the mineral and influences its macroscale properties. Bulk analysis only provides total Al/Fe ratios without providing information with respect to the Al-substitution of single minerals. Here, we demonstrate that nanoscale secondary ion mass spectrometry (NanoSIMS) enables the precise determination of Al-content in single minerals, while simultaneously visualising the variation of the Al/Fe ratio. Al-substituted goethite samples were synthesized with increasing Al concentrations of 0.1, 3, and 7 % and analysed by NanoSIMS in combination with established bulk spectroscopic methods (XRD, FTIR, Mössbauer spectroscopy). The high spatial resolution (50-150 nm) of NanoSIMS is accompanied by a high number of single-point measurements. We statistically evaluated the Al/Fe ratios derived from NanoSIMS, while maintaining the spatial information and reassigning it to its original localization. XRD analyses confirmed increasing concentration of incorporated Al within the goethite structure. Mössbauer spectroscopy revealed 11 % of the goethite samples generated at high Al concentrations consisted of hematite. The NanoSIMS data show that the Al/Fe ratios are in agreement with bulk data derived from total digestion and demonstrated small spatial variability between single-point measurements. More advantageously, statistical analysis and reassignment of single-point measurements allowed us to identify distinct spots with significantly higher or lower Al/Fe ratios. NanoSIMS measurements confirmed the capacity to produce images, which indicated the uniform increase in Al-concentrations in goethite. Using a combination of statistical analysis with information from complementary spectroscopic techniques (XRD, FTIR and Mössbauer spectroscopy) we were further able to reveal spots with lower Al/Fe ratios as hematite. Copyright © 2018 John Wiley & Sons, Ltd.
Evaluating the implementation of RxNorm in ambulatory electronic prescriptions
Ward-Charlerie, Stacy; Rupp, Michael T; Kilbourne, John; Amin, Vishal P; Ruiz, Joshua
2016-01-01
Objective: RxNorm is a standardized drug nomenclature maintained by the National Library of Medicine that has been recommended as an alternative to the National Drug Code (NDC) terminology for use in electronic prescribing. The objective of this study was to evaluate the implementation of RxNorm in ambulatory care electronic prescriptions (e-prescriptions). Methods: We analyzed a random sample of 49 997 e-prescriptions that were received by 7391 locations of a national retail pharmacy chain during a single day in April 2014. The e-prescriptions in the sample were generated by 37 801 ambulatory care prescribers using 519 different e-prescribing software applications. Results: We found that 97.9% of e-prescriptions in the study sample could be accurately represented by an RxNorm identifier. However, RxNorm identifiers were actually used as drug identifiers in only 16 433 (33.0%) e-prescriptions. Another 431 (2.5%) e-prescriptions that used RxNorm identifiers had a discrepancy in the corresponding Drug Database Code qualifier field or did not have a qualifier (Term Type) at all. In 10 e-prescriptions (0.06%), the free-text drug description and the RxNorm concept unique identifier pointed to completely different drug concepts, and in 7 e-prescriptions (0.04%), the NDC and RxNorm drug identifiers pointed to completely different drug concepts. Discussion: The National Library of Medicine continues to enhance the RxNorm terminology and expand its scope. This study illustrates the need for technology vendors to improve their implementation of RxNorm; doing so will accelerate the adoption of RxNorm as the preferred alternative to using the NDC terminology in e-prescribing. PMID:26510879
[A new kinematic method for determining the elbow rotation axis and evaluation of its feasibility].
Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y
2016-04-18
To study a new method for locating the rotation axis for elbow external fixation, and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in the experiment. Kinematic data from five elbow flexions were collected with an optical positioning system. The rotation axes of the elbow joints were fitted by the least squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, giving the rotation axes of the new kinematic method. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed to represent the rotation axes obtained with the traditional positioning method. The entrance point deviation, exit point deviation and angle deviation between the two kinds of located rotation axes were then compared. For the volunteers, the indicators representing the circularity and coplanarity of each elbow flexion trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes of the five volunteers were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance point deviation, average exit point deviation and average angle deviation between the rotation axes determined by the two methods were 1.6972 mm, 1.8383 mm and 1.3217°, respectively. All deviations were small and within an acceptable range for clinical practice. The values representing the circularity and coplanarity of the volunteers' single-curvature elbow flexion trajectories were very small, showing that single-curvature elbow movement can be regarded as approximately fixed-axis movement. The new method can match the accuracy of the traditional method and can make up for the deficiencies of the traditional fixed-axis method.
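The abstract does not spell out the least-squares fit, but one common way to fit a fixed rotation axis to optical marker data is sketched below: fit a plane to the (ideally circular) marker trajectory to obtain the axis direction, then fit a circle in that plane to obtain a point the axis passes through. The synthetic arc at the end is only a self-check for the sketch, not the study's data.

```python
import numpy as np

def fit_rotation_axis(points):
    """Least-squares fit of a fixed rotation axis to a marker trajectory
    (N x 3 array) that ideally moves on a circular arc: the best-fit plane
    normal gives the axis direction, and a circle fit in that plane gives
    a point that the axis passes through."""
    centroid = points.mean(axis=0)
    q = points - centroid
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    e1, e2, normal = vt[0], vt[1], vt[2]          # normal = smallest-variance direction
    u, v = q @ e1, q @ e2                         # in-plane coordinates
    A = np.column_stack([2 * u, 2 * v, np.ones_like(u)])
    (uc, vc, _), *_ = np.linalg.lstsq(A, u**2 + v**2, rcond=None)
    axis_point = centroid + uc * e1 + vc * e2
    return axis_point, normal

# Synthetic noisy arc about a known axis (z), used only to check the sketch
rng = np.random.default_rng(1)
theta = np.linspace(0.2, 2.0, 60)
arc = np.column_stack([30 * np.cos(theta), 30 * np.sin(theta), np.full_like(theta, 5.0)])
arc += rng.normal(0.0, 0.3, arc.shape)
point, direction = fit_rotation_axis(arc)
print("axis point ~", point.round(2), "; axis direction ~", np.abs(direction).round(3))
```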
Partin, Melissa R; Powell, Adam A; Burgess, Diana J; Haggstrom, David A; Gravely, Amy A; Halek, Krysten; Bangerter, Ann; Shaukat, Aasma; Nelson, David B
2015-09-01
This study assessed whether postal follow-up to a web-based physician survey improves response rates, response quality, and representativeness. We recruited primary care and gastroenterology chiefs at 125 Veterans Affairs medical facilities to complete a 10-min web-based survey on colorectal cancer screening and diagnostic practices in 2010. We compared response rates, response errors, and representativeness in the primary care and gastroenterology samples before and after adding postal follow-up. Adding postal follow-up increased response rates by 20-25 percentage points; markedly greater increases than predicted from a third e-mail reminder. In the gastroenterology sample, the mean number of response errors made by web responders (0.25) was significantly smaller than the mean number made by postal responders (2.18), and web responders provided significantly longer responses to open-ended questions. There were no significant differences in these outcomes in the primary care sample. Adequate representativeness was achieved before postal follow-up in both samples, as indicated by the lack of significant differences between web responders and the recruitment population on facility characteristics. We conclude adding postal follow-up to this web-based physician leader survey improved response rates but not response quality or representativeness. © The Author(s) 2013.
Measurement of the Water Relaxation Time of ɛ-Polylysine Aqueous Solutions
NASA Astrophysics Data System (ADS)
Shirakashi, Ryo; Amano, Yuki; Yamada, Jun
2017-05-01
ɛ-Polylysine is an effective food preservative. In this paper, the β-relaxation time of ɛ-polylysine aqueous solutions, which represents the rotational speed of a single water molecule, was measured by broadband dielectric spectroscopy at various temperatures and concentrations. The broadband dielectric spectrum of each sample, with water contents ranging from 35 wt% to 75 wt% and temperatures ranging from 0°C to 25°C, was measured using a co-axial semirigid cable probe. The measured dielectric spectra were composed of several Debye relaxation peaks, including a β-relaxation time, the shortest single-molecule rotational relaxation time of water, that was longer than that of pure water. This result indicates that ɛ-polylysine suppresses the molecular mobility of water. It was also found that the β-relaxation time of ɛ-polylysine solutions containing more than 35 wt% water followed typical Arrhenius behaviour in the temperature range from 0°C to 25°C, with an activation energy that depends on the water content of the sample. As indicated by the long β-relaxation time, ɛ-polylysine is expected to possess a high ability to suppress freezing and ice coarsening.
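A typical Arrhenius analysis of relaxation times, ln τ = ln τ0 + Ea/(RT), can be sketched as below. The temperature points and τ values are invented, order-of-magnitude placeholders (roughly the tens-of-picoseconds scale of single-molecule water rotation), not the measured data from this study.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(temps_c, tau_s):
    """Fit ln(tau) = ln(tau0) + Ea/(R*T); return Ea in kJ/mol and tau0 in s."""
    inv_T = 1.0 / (np.asarray(temps_c, float) + 273.15)
    slope, intercept = np.polyfit(inv_T, np.log(tau_s), 1)
    return slope * R / 1000.0, np.exp(intercept)

# Hypothetical beta-relaxation times (seconds) between 0 and 25 degC
temps = [0, 5, 10, 15, 20, 25]
tau = [3.0e-11, 2.6e-11, 2.3e-11, 2.0e-11, 1.8e-11, 1.6e-11]
ea, tau0 = activation_energy(temps, tau)
print(f"activation energy ~ {ea:.1f} kJ/mol, pre-exponential ~ {tau0:.2e} s")
```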
Estimating systemic exposure to levonorgestrel from an oral contraceptive.
Basaraba, Cale N; Westhoff, Carolyn L; Pike, Malcolm C; Nandakumar, Renu; Cremers, Serge
2017-04-01
The gold standard for measuring oral contraceptive (OC) pharmacokinetics is the 24-h steady-state area under the curve (AUC). We conducted this study to assess whether limited sampling at steady state or measurements following use of one or two OCs could provide an adequate proxy in epidemiological studies for the progestin 24-h steady-state AUC of a particular OC. We conducted a 13-sample, 24-h pharmacokinetic study on both day 1 and day 21 of the first cycle of a monophasic OC containing 30-mcg ethinyl estradiol and 150-mcg levonorgestrel (LNG) in 17 normal-weight healthy White women and a single-dose 9-sample study of the same OC after a 1-month washout. We compared the 13-sample steady-state results with several steady-state and single-dose results calculated using parsimonious sampling schemes. The 13-sample steady-state 24-h LNG AUC was highly correlated with the steady-state 24-h trough value [r=0.95; 95% confidence interval (0.85, 0.98)] and with the steady-state 6-, 8-, 12- and 16-h values (0.92≤r≤0.95). The trough values after one or two doses were moderately correlated with the steady-state 24-h AUC value [r=0.70; 95% CI (0.27, 0.90) and 0.77; 95% CI (0.40, 0.92), respectively]. Single time-point concentrations at steady state and after administration of one or two OCs gave highly to moderately correlated estimates of steady-state LNG AUC. Using such measures could facilitate prospective pharmaco-epidemiologic studies of the OC and its side effects. A single time-point LNG concentration at steady state is an excellent proxy for complete and resource-intensive steady-state AUC measurement. The trough level after two single doses is a fair proxy for steady-state AUC. These results provide practical tools to facilitate large studies to investigate the relationship between systemic LNG exposure and side effects in a real-life setting. Copyright © 2017 Elsevier Inc. All rights reserved.
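The comparison underlying these correlations, a trapezoidal 24-h AUC set against a single time-point concentration, can be sketched as follows. The simulated concentration profiles are hypothetical stand-ins for the measured LNG data, and the sampling times are placeholders rather than the study's 13-sample schedule.

```python
import numpy as np

def auc_24h(times_h, conc):
    """24-h area under the concentration-time curve (trapezoidal rule)."""
    t, c = np.asarray(times_h, float), np.asarray(conc, float)
    return float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

# Hypothetical steady-state LNG profiles (ng/mL) for 17 subjects
rng = np.random.default_rng(2)
times = np.array([0, 1, 2, 4, 6, 8, 12, 16, 24], dtype=float)
profiles = []
for _ in range(17):
    c0 = rng.uniform(4.0, 10.0)                  # peak-ish level
    ke = rng.uniform(0.02, 0.05)                 # apparent decline rate (1/h)
    profiles.append(c0 * np.exp(-ke * times) + rng.normal(0.0, 0.2, times.size))
profiles = np.array(profiles)

aucs = np.array([auc_24h(times, c) for c in profiles])
troughs = profiles[:, -1]                        # single 24-h time-point values
print(f"Pearson r (24-h trough vs 24-h AUC): {np.corrcoef(troughs, aucs)[0, 1]:.2f}")
```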
Problem of the thermodynamic status of the mixed-layer minerals
Zen, E.-A.
1962-01-01
Minerals that show mixed layering, particularly with the component layers in random sequence, pose problems because they may behave thermodynamically as single phases or as polyphase aggregates. Two operational criteria are proposed for their distinction. The first scheme requires two samples of mixed-layer material which differ only in the proportions of the layers. If each of these two samples is allowed to equilibrate with the same suitably chosen monitoring solution, then the intensive parameters of the solution will be invariant if the mixed-layer sample is a polyphase aggregate, but not otherwise. The second scheme makes use of the fact that portions of many titration curves of clay minerals show constancy of the chemical activities of the components in the equilibrating solutions, suggesting phase separation. If such phase separation occurs for a mixed-layer material, then, knowing the number of independent components in the system, it should be possible to decide on the number of phases the mixed-layer material represents. Knowledge of the phase status of mixed-layer material is essential to the study of the equilibrium relations of mineral assemblages involving such material, because a given mixed-layer mineral will be plotted and treated differently on a phase diagram, depending on whether it is a single phase or a polyphase aggregate. Extension of the titration technique to minerals other than the mixed-layer type is possible. In particular, this method may be used to determine if cryptoperthites and peristerites are polyphase aggregates. In general, for any high-order phase separation, the method may be used to decide just at what point in this continuous process the system must be regarded operationally as a polyphase aggregate. © 1962.
Functional Multi-Locus QTL Mapping of Temporal Trends in Scots Pine Wood Traits
Li, Zitong; Hallingbäck, Henrik R.; Abrahamsson, Sara; Fries, Anders; Gull, Bengt Andersson; Sillanpää, Mikko J.; García-Gil, M. Rosario
2014-01-01
Quantitative trait loci (QTL) mapping of wood properties in conifer species has focused on single time point measurements or on trait means based on heterogeneous wood samples (e.g., increment cores), thus ignoring systematic within-tree trends. In this study, functional QTL mapping was performed for a set of important wood properties in increment cores from a 17-yr-old Scots pine (Pinus sylvestris L.) full-sib family with the aim of detecting wood trait QTL for general intercepts (means) and for linear slopes by increasing cambial age. Two multi-locus functional QTL analysis approaches were proposed and their performances were compared on trait datasets comprising 2 to 9 time points, 91 to 455 individual tree measurements and genotype datasets of amplified length polymorphisms (AFLP), and single nucleotide polymorphism (SNP) markers. The first method was a multilevel LASSO analysis whereby trend parameter estimation and QTL mapping were conducted consecutively; the second method was our Bayesian linear mixed model whereby trends and underlying genetic effects were estimated simultaneously. We also compared several different hypothesis testing methods under either the LASSO or the Bayesian framework to perform QTL inference. In total, five and four significant QTL were observed for the intercepts and slopes, respectively, across wood traits such as earlywood percentage, wood density, radial fiberwidth, and spiral grain angle. Four of these QTL were represented by candidate gene SNPs, thus providing promising targets for future research in QTL mapping and molecular function. Bayesian and LASSO methods both detected similar sets of QTL given datasets that comprised large numbers of individuals. PMID:25305041
Functional multi-locus QTL mapping of temporal trends in Scots pine wood traits.
Li, Zitong; Hallingbäck, Henrik R; Abrahamsson, Sara; Fries, Anders; Gull, Bengt Andersson; Sillanpää, Mikko J; García-Gil, M Rosario
2014-10-09
Quantitative trait loci (QTL) mapping of wood properties in conifer species has focused on single time point measurements or on trait means based on heterogeneous wood samples (e.g., increment cores), thus ignoring systematic within-tree trends. In this study, functional QTL mapping was performed for a set of important wood properties in increment cores from a 17-yr-old Scots pine (Pinus sylvestris L.) full-sib family with the aim of detecting wood trait QTL for general intercepts (means) and for linear slopes by increasing cambial age. Two multi-locus functional QTL analysis approaches were proposed and their performances were compared on trait datasets comprising 2 to 9 time points, 91 to 455 individual tree measurements and genotype datasets of amplified length polymorphisms (AFLP), and single nucleotide polymorphism (SNP) markers. The first method was a multilevel LASSO analysis whereby trend parameter estimation and QTL mapping were conducted consecutively; the second method was our Bayesian linear mixed model whereby trends and underlying genetic effects were estimated simultaneously. We also compared several different hypothesis testing methods under either the LASSO or the Bayesian framework to perform QTL inference. In total, five and four significant QTL were observed for the intercepts and slopes, respectively, across wood traits such as earlywood percentage, wood density, radial fiberwidth, and spiral grain angle. Four of these QTL were represented by candidate gene SNPs, thus providing promising targets for future research in QTL mapping and molecular function. Bayesian and LASSO methods both detected similar sets of QTL given datasets that comprised large numbers of individuals. Copyright © 2014 Li et al.
Micro Unmanned Surface Vehicle for Shallow Littoral Data Sampling
NASA Astrophysics Data System (ADS)
Murphy, R. R.; Wilde, G.
2016-02-01
This paper describes the creation of an autonomous air boat that can be carried by one person, called a micro unmanned surface vehicle (USV), for sensor sampling in shallow littoral areas such as inlets and creeks. A USV offers advantages over other types of unmanned marine vehicles. Unlike an autonomous underwater vehicle, the Challenge 1.0 air boat can operate in shallow water of less than 15 cm depth and maintain network connectivity for control and data sampling. Unlike a remotely operated marine vehicle (ROV), a USV does not require a tether, which would limit its range and mobility. However, a USV operating in shallow littoral areas poses several challenges. Navigation is a challenge since rivers and bays may have semi-submerged obstacles and there may be no depth maps; the approach taken in the Challenge 1.0 project is to let the operator specify a safe area of the water by visual inspection, after which the USV autonomously creates a path to optimally sample the collision-free area. Navigation is also a challenge because of platform dynamics: the USV we describe is a non-holonomic vehicle, so this paper explores spiral paths rather than boustrophedon paths. Another challenge is the quality of sensing. Water-based sensing is noisy, and thus a reading at a single point may not reflect the overall value. In practice, areas are sampled rather than a single point, but the noise in the point values within the sampled area produces a survey with widely varying numbers that is difficult for humans to interpret. This paper implements an inverse distance weighting interpolation algorithm to produce a visual "heatmap" that reliably portrays the smoothed data.
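A minimal sketch of inverse distance weighting, the interpolation used here to turn scattered readings into a smoothed heatmap, is shown below; the sample coordinates, readings, power parameter and grid are invented for illustration and are not the paper's survey data.

```python
import numpy as np

def idw_grid(sample_xy, values, grid_x, grid_y, power=2.0, eps=1e-12):
    """Inverse distance weighting: interpolate scattered sensor readings onto a
    regular grid to produce a smoothed 'heatmap' of the surveyed area."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(gx)
    den = np.zeros_like(gx)
    for (x, y), v in zip(sample_xy, values):
        w = 1.0 / (np.hypot(gx - x, gy - y) ** power + eps)
        num += w * v
        den += w
    return num / den

# Hypothetical water-quality readings (e.g. dissolved oxygen, mg/L) along a creek
samples = np.array([[2.0, 1.0], [5.0, 3.0], [8.0, 2.0], [4.0, 6.0], [9.0, 7.0]])
readings = np.array([6.1, 5.4, 7.0, 4.8, 6.6])
heatmap = idw_grid(samples, readings, np.linspace(0, 10, 6), np.linspace(0, 8, 5))
print(np.round(heatmap, 2))
```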
A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials
NASA Technical Reports Server (NTRS)
Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.
1994-01-01
Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time-consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor, a portable hand-held instrument was developed. The device makes single-point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thicknesses that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses typically agree with actual values to within 2-3% of the calibration range (the difference between the thin and thick samples). In this paper, the design, operational, and performance characteristics of the instrument, along with a detailed description of the thickness gauging algorithm used in the device, are presented.
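Assuming a simple linear interpolation between the two calibration standards (the abstract does not give the actual gauging algorithm), the calibration step can be sketched as follows; the voltages, thicknesses and quoted uncertainty below are placeholders.

```python
def calibrate_two_point(v_thin, t_thin, v_thick, t_thick):
    """Build a thickness-from-voltage function from two calibration samples of
    known thickness spanning the range of interest (linear interpolation)."""
    slope = (t_thick - t_thin) / (v_thick - v_thin)
    return lambda v: t_thin + slope * (v - v_thin)

# Hypothetical sensor voltages for 1.0 mm and 3.0 mm aluminium standards
thickness_of = calibrate_two_point(v_thin=0.82, t_thin=1.0, v_thick=0.31, t_thick=3.0)
reading = 0.55                       # measured voltage on an unknown panel
cal_range = 3.0 - 1.0                # mm; quoted accuracy is ~2-3% of this range
print(f"estimated thickness: {thickness_of(reading):.2f} mm "
      f"(+/- {0.03 * cal_range:.2f} mm)")
```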
Supernova neutrinos and antineutrinos: ternary luminosity diagram and spectral split patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogli, Gianluigi; Marrone, Antonio; Tamborra, Irene
2009-10-01
In core-collapse supernovae, the ν_e and ν̄_e species may experience collective flavor swaps to non-electron species ν_x, within energy intervals limited by relatively sharp boundaries ("splits"). These phenomena appear to depend sensitively upon the initial energy spectra and luminosities. We investigate the effect of generic variations of the fractional luminosities (l_e, l_ē, l_x) with respect to the usual "energy equipartition" case (1/6, 1/6, 1/6), within an early-time supernova scenario with fixed thermal spectra and total luminosity. We represent the constraint l_e + l_ē + 4l_x = 1 in a ternary diagram, which is explored via numerical experiments (in single-angle approximation) over an evenly-spaced grid of points. In inverted hierarchy, single splits arise in most cases, but an abrupt transition to double splits is observed for a few points surrounding the equipartition one. In normal hierarchy, collective effects turn out to be unobservable at all grid points but one, where single splits occur. Admissible deviations from equipartition may thus induce dramatic changes in the shape of supernova (anti)neutrino spectra. The observed patterns are interpreted in terms of initial flavor polarization vectors (defining boundaries for the single/double split transitions), lepton number conservation, and minimization of potential energy.
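The luminosity constraint can be made concrete with a small sketch that enumerates an evenly spaced grid of admissible (l_e, l_ē, l_x) points; the grid step is an arbitrary choice, and the sketch says nothing about the flavor-evolution calculation itself.

```python
import numpy as np

def luminosity_grid(step=0.05):
    """Evenly spaced grid of fractional luminosities (l_e, l_ebar, l_x)
    satisfying l_e + l_ebar + 4*l_x = 1, with all three non-negative."""
    pts = []
    for l_e in np.arange(0.0, 1.0 + 1e-9, step):
        for l_ebar in np.arange(0.0, 1.0 - l_e + 1e-9, step):
            pts.append((l_e, l_ebar, (1.0 - l_e - l_ebar) / 4.0))
    return np.array(pts)

grid = luminosity_grid()
print(f"{len(grid)} admissible grid points")
# the usual "equipartition" point (1/6, 1/6, 1/6) satisfies the constraint
print(np.isclose(1/6 + 1/6 + 4 * (1/6), 1.0))
```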
NASA Astrophysics Data System (ADS)
De Rosa, R.
This paper illustrates some problems involved in the quantitative compositional study of pyroclastic deposits and proposes criteria for selecting the main petrographic and textural classes for modal analysis. The relative proportions of the different classes are obtained using a point-counting procedure applied to medium-coarse ash samples that reduces the dependence of the modal composition on grain size and avoids tedious counting of different grain-size fractions. The major purposes of a quantified measure of component distributions are to: (a) document the nature of the fragmenting magma; (b) define the eruptive dynamics of the eruptions on a detailed scale; and (c) ensure accuracy in classifying pyroclastic deposits. Compositional modes of the ash fraction of pyroclastic deposits vary systematically, and their graphical representation defines the compositional and textural characteristics of pyroclastic fragments associated with different eruptive styles. Textural features of the glass component can be very helpful for inferring aspects of eruptive dynamics. Four major parameters can be used to represent the component composition of pyroclastic ash deposits: (a) juvenile index (JI); (b) crystallinity index (CrI); (c) juvenile vesicularity index (JVI); and (d) free crystal index (FCrI). The FCrI is defined as the ratio between single and total crystal fragments in the juvenile component (single crystals+crystals in juvenile glass). This parameter may provide an effective estimate of the mechanical energy of eruptions. Variations in FCrI vs JVI discriminate among pyroclastic deposits of different origin and define compositional fields that represent ash derived from different fragmentation styles.
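Of the four parameters, only the FCrI is defined explicitly in the abstract; the sketch below computes it from hypothetical point counts. The class names and numbers are placeholders, and the other indices (JI, CrI, JVI) would require definitions not reproduced here.

```python
def free_crystal_index(single_crystals, crystals_in_glass):
    """FCrI: single crystal fragments over total crystals in the juvenile
    component (single crystals + crystals enclosed in juvenile glass)."""
    return single_crystals / (single_crystals + crystals_in_glass)

# Hypothetical point counts (grains per class) for one medium-coarse ash sample
counts = {"single_crystals": 95, "crystals_in_glass": 140,
          "juvenile_glass_shards": 310, "lithics": 55}
fcri = free_crystal_index(counts["single_crystals"], counts["crystals_in_glass"])
print(f"FCrI = {fcri:.2f}")
```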
Accelerated 2D magnetic resonance spectroscopy of single spins using matrix completion
NASA Astrophysics Data System (ADS)
Scheuer, Jochen; Stark, Alexander; Kost, Matthias; Plenio, Martin B.; Naydenov, Boris; Jelezko, Fedor
2015-12-01
Two dimensional nuclear magnetic resonance (NMR) spectroscopy is one of the major tools for analysing the chemical structure of organic molecules and proteins. Despite its power, this technique requires long measurement times, which, particularly in the recently emerging diamond-based single-molecule NMR, limits its application to stable samples. Here we demonstrate a method which allows the spectrum to be obtained by collecting only a small fraction of the experimental data. Our method is based on matrix completion, which can recover the full spectral information from randomly sampled data points. We confirm experimentally the applicability of this technique by performing two dimensional electron spin echo envelope modulation (ESEEM) experiments on a two-spin system consisting of a single nitrogen-vacancy (NV) centre in diamond coupled to a single 13C nuclear spin. The signal-to-noise ratio of the recovered 2D spectrum is compared to the Fourier transform of randomly subsampled data, where we observe a strong suppression of the noise when the matrix completion algorithm is applied. We show that the peaks in the spectrum can be obtained with only 10% of the total number of data points. We believe that our results reported here can find an application in all types of two dimensional spectroscopy, as long as the measured matrices have a low rank.
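The reconstruction relies on low-rank matrix completion; one standard formulation (iterative singular-value soft-thresholding, often called soft-impute) is sketched below on a synthetic rank-2 matrix. The 30% sampling fraction, matrix size and penalty are choices for the toy example only, and this is not the algorithm implementation used in the paper, which reports recovery of the ESEEM spectrum from about 10% of the points.

```python
import numpy as np

def soft_impute(observed, mask, lam=0.5, n_iter=300):
    """Low-rank matrix completion by iterative singular-value soft-thresholding
    ('soft-impute'): fill the unsampled entries with the current estimate, then
    shrink the singular values, and repeat."""
    z = np.zeros_like(observed)
    for _ in range(n_iter):
        filled = np.where(mask, observed, z)
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        z = (u * np.maximum(s - lam, 0.0)) @ vt
    return z

# Synthetic rank-2 "2D spectrum" with 30% of entries sampled at random
rng = np.random.default_rng(3)
truth = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 60))
mask = rng.random(truth.shape) < 0.30
recovered = soft_impute(truth, mask)
rel_err = np.linalg.norm(recovered[~mask] - truth[~mask]) / np.linalg.norm(truth[~mask])
print(f"relative error on unsampled entries: {rel_err:.3f}")
```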
Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis
NASA Technical Reports Server (NTRS)
Arkin, C.; Gillespie, Stacey; Ratzel, Christopher
2010-01-01
A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.
Berglund, E. Carina; Kuklinski, Nicholas J.; Karagündüz, Ekin; Ucar, Kubra; Hanrieder, Jörg; Ewing, Andrew G.
2013-01-01
Micellar electrokinetic capillary chromatography with electrochemical detection has been used to quantify biogenic amines in freeze-dried Drosophila melanogaster brains. Freeze drying samples offers a way to preserve the biological sample while making dissection of these tiny samples easier and faster. Fly samples were extracted in cold acetone and dried in a rotary evaporator. Extraction and drying times were optimized in order to avoid contamination by red pigment from the fly eyes and still have intact brain structures. Single freeze-dried fly-brain samples were found to produce electropherograms representative of those from a single hand-dissected brain sample. Utilizing the faster dissection time that freeze drying affords, the number of brains in a fixed homogenate volume can be increased to concentrate the sample. Thus, concentrated brain samples containing five or fifteen preserved brains were analyzed for their neurotransmitter content, and five analytes (dopamine, N-acetyloctopamine, N-acetylserotonin, N-acetyltyramine, and N-acetyldopamine) were found to correspond well with previously reported values. PMID:23387977
Device for modular input high-speed multi-channel digitizing of electrical data
VanDeusen, Alan L.; Crist, Charles E.
1995-09-26
A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.
Inorganic, Radioisotopic, and Organic Analysis of 241-AP-101 Tank Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiskum, S.K.; Bredt, P.R.; Campbell, J.A.
2000-10-17
Battelle received five samples from Hanford waste tank 241-AP-101, taken at five different depths within the tank. No visible solids or organic layer were observed in the individual samples. Individual sample densities were measured, then the five samples were mixed together to provide a single composite. The composite was homogenized and representative sub-samples taken for inorganic, radioisotopic, and organic analysis. All analyses were performed on triplicate sub-samples of the composite material. The sample composite did not contain visible solids or an organic layer. A subsample held at 10 °C for seven days formed no visible solids.
Novel approaches to analysis by flow injection gradient titration.
Wójtowicz, Marzena; Kozak, Joanna; Kościelniak, Paweł
2007-09-26
Two novel procedures for flow injection gradient titration with the use of a single stock standard solution are proposed. In the multi-point single-line (MP-SL) method the calibration graph is constructed on the basis of a set of standard solutions, which are generated in a standard reservoir and subsequently injected into the titrant. According to the single-point multi-line (SP-ML) procedure the standard solution and a sample are injected into the titrant stream from four loops of different capacities, hence four calibration graphs can be constructed and the analytical result is calculated on the basis of a generalized slope of these graphs. Both approaches have been tested on the example of spectrophotometric acid-base titration of hydrochloric and acetic acids, using bromothymol blue and phenolphthalein as indicators, respectively, and sodium hydroxide as a titrant. Under optimized experimental conditions, analytical results with precision (RSD) below 1.8 and 2.5% and accuracy (relative error, RE) below 3.0 and 5.4% were obtained for the MP-SL and SP-ML procedures, respectively, in ranges of 0.0031-0.0631 mol L(-1) for samples of hydrochloric acid and of 0.1680-1.7600 mol L(-1) for samples of acetic acid. The feasibility of both methods was illustrated by applying them to the total acidity determination in vinegar samples, with precision (RSD) below 0.5 and 2.9% for the MP-SL and SP-ML procedures, respectively.
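As a minimal sketch of the calibration-graph step common to both procedures, the snippet below fits a linear calibration line to a set of standards and inverts it to estimate a sample concentration. The concentrations and responses are hypothetical placeholders, not data from the paper, and the single-line fit is only the simplest illustration of the idea.

```python
import numpy as np

# Hypothetical standard concentrations (mol/L) and measured responses.
c_std = np.array([0.005, 0.010, 0.020, 0.040, 0.060])
resp_std = np.array([0.042, 0.081, 0.165, 0.330, 0.492])

slope, intercept = np.polyfit(c_std, resp_std, 1)   # linear calibration graph

def predict_concentration(response):
    """Invert the calibration line to estimate the sample concentration."""
    return (response - intercept) / slope

sample_response = 0.210
print(f"estimated concentration: {predict_concentration(sample_response):.4f} mol/L")
```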
Janes, Holly; Herbeck, Joshua T; Tovanabutra, Sodsai; Thomas, Rasmi; Frahm, Nicole; Duerr, Ann; Hural, John; Corey, Lawrence; Self, Steve G; Buchbinder, Susan P; McElrath, M Juliana; O'Connell, Robert J; Paris, Robert M; Rerks-Ngarm, Supachai; Nitayaphan, Sorachai; Pitisuttihum, Punnee; Kaewkungwal, Jaranit; Robb, Merlin L; Michael, Nelson L; Mullins, James I; Kim, Jerome H; Gilbert, Peter B; Rolland, Morgane
2015-10-01
Given the variation in the HIV-1 viral load (VL) set point across subjects, as opposed to a fairly stable VL over time within an infected individual, it is important to identify the characteristics of the host and virus that affect the VL set point. Although recently infected individuals with multiple phylogenetically linked HIV-1 founder variants represent a minority of HIV-1 infections, we found, in two different cohorts, that more diverse HIV-1 populations in early infection were associated with significantly higher VL 1 year after HIV-1 diagnosis.
NASA Astrophysics Data System (ADS)
Wan, L. G.; Lin, Q.; Bian, D. J.; Ren, Q. K.; Xiao, Y. B.; Lu, W. X.
2018-02-01
In order to reveal spatial differences in bacterial community structure in the Micro-pressure Air-lift Loop Reactor, the activated sludge bacteria at five representative sites in the reactor were studied by denaturing gradient gel electrophoresis (DGGE). The DGGE results showed that differences in environmental conditions (such as substrate concentration, dissolved oxygen and pH) resulted in different diversity and similarity of the microbial flora at different spatial locations. The Shannon-Wiener diversity index of the total bacteria in the five sludge samples varied from 0.92 to 1.28; the biodiversity index was smallest at point 5 and highest at point 2. The similarity of the flora among points 2, 3 and 4 was 80% or more. The similarity of the flora between point 5 and the other samples was below 70%, and that between points 5 and 2 was only 59.2%. Because different strains contribute differently to the removal of pollutants, exploiting this spatial structure can give full play to the synergistic effect of bacterial degradation of pollutants and further improve the efficiency of sewage treatment.
Beck, Jennifer A.; Paschke, Suzanne S.; Arnold, L. Rick
2011-01-01
This report describes results from a groundwater data-collection program completed in 2003-2004 by the U.S. Geological Survey in support of the South Platte Decision Support System and in cooperation with the Colorado Water Conservation Board. Two monitoring wells were installed adjacent to existing water-table monitoring wells; these well pairs were used to characterize the hydraulic properties of the alluvial aquifer and shallow Denver Formation sandstone aquifer in and near the Lost Creek Designated Ground Water Basin. Single-well tests were performed in the 2 newly installed wells and 12 selected existing monitoring wells. Sediment particle size was analyzed for samples collected from the screened-interval depths of each of the 14 wells. Hydraulic-conductivity and transmissivity values were calculated after the completion of single-well tests on each of the selected wells. Recovering water-level data from the single-well tests were analyzed using the Bouwer and Rice method because the test data most closely resembled those obtained from traditional slug tests. Results from the single-well test analyses for the alluvial aquifer indicate a median hydraulic-conductivity value of 3.8 × 10⁻⁵ feet per second and a geometric mean hydraulic-conductivity value of 3.4 × 10⁻⁵ feet per second. Median and geometric mean transmissivity values in the alluvial aquifer were 8.6 × 10⁻⁴ feet squared per second and 4.9 × 10⁻⁴ feet squared per second, respectively. Single-well test results for the shallow Denver Formation sandstone aquifer indicate a median hydraulic-conductivity value of 5.4 × 10⁻⁶ feet per second and a geometric mean value of 4.9 × 10⁻⁶ feet per second. Median and geometric mean transmissivity values for the shallow Denver Formation sandstone aquifer were 4.0 × 10⁻⁵ feet squared per second and 5.9 × 10⁻⁵ feet squared per second, respectively. Hydraulic-conductivity values for the alluvial aquifer in and near the Lost Creek Designated Ground Water Basin generally were greater than hydraulic-conductivity values for the Denver Formation sandstone aquifer and less than hydraulic-conductivity values for the alluvial aquifer along the main stem of the South Platte River Basin reported by previous studies. Particle sizes were analyzed for a total of 14 samples of material representative of the screened interval in each of the 14 wells tested in this study. Of the 14 samples collected, 8 represent the alluvial aquifer and 6 represent the Denver Formation sandstone aquifer in and near the Lost Creek Designated Ground Water Basin. The sampled alluvial aquifer material generally contained a greater percentage of large particles (larger than 0.5 mm) than the sampled sandstone aquifer material. Conversely, the sampled sandstone aquifer material generally contained a greater percentage of fine particles (smaller than 0.5 mm) than the sampled alluvial aquifer material, consistent with the finding that the alluvial aquifer is more conductive than the sandstone aquifer in the vicinity of the Lost Creek Designated Ground Water Basin.
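For readers unfamiliar with the Bouwer and Rice analysis mentioned above, the sketch below shows the basic calculation: the slope of log residual drawdown versus time is combined with the well geometry to give hydraulic conductivity. The recovery record, radii, screen length, and the ln(Re/rw) term (normally read from the Bouwer-Rice empirical curves) are illustrative assumptions, not values from this study.

```python
import numpy as np

def bouwer_rice_K(times_s, drawdown_ft, r_c, L_e, ln_Re_over_rw):
    """Estimate hydraulic conductivity (ft/s) from single-well recovery data
    using the Bouwer and Rice relation K = r_c^2 ln(Re/rw) / (2 L_e) * (1/t) ln(y0/yt).
    All geometry values here are illustrative placeholders."""
    # Slope of ln(drawdown) versus time equals -(1/t)·ln(y0/yt).
    slope, _ = np.polyfit(times_s, np.log(drawdown_ft), 1)
    return (r_c**2 * ln_Re_over_rw) / (2.0 * L_e) * (-slope)

# Hypothetical recovery record: residual drawdown decaying toward static level.
t = np.array([10.0, 30.0, 60.0, 120.0, 240.0, 480.0])   # seconds
y = 1.5 * np.exp(-2.0e-3 * t)                            # residual drawdown, ft

K = bouwer_rice_K(t, y, r_c=0.083, L_e=10.0, ln_Re_over_rw=2.3)
print(f"K ≈ {K:.1e} ft/s")
```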
NASA Astrophysics Data System (ADS)
Dirilgen, Tara; Juceviča, Edite; Melecis, Viesturs; Querner, Pascal; Bolger, Thomas
2018-01-01
The relative importance of niche separation, non-equilibrial and neutral models of community assembly has been a theme in community ecology for many decades with none appearing to be applicable under all circumstances. In this study, Collembola species abundances were recorded over eleven consecutive years in a spatially explicit grid and used to examine (i) whether observed beta diversity differed from that expected under conditions of neutrality, (ii) whether sampling points differed in their relative contributions to overall beta diversity, and (iii) the number of samples required to provide comparable estimates of species richness across three forest sites. Neutrality could not be rejected for 26 of the forest by year combinations. However, there is a trend toward greater structure in the oldest forest, where beta diversity was greater than predicted by neutrality on five of the eleven sampling dates. The lack of difference in individual- and sample-based rarefaction curves also suggests randomness in the system at this particular scale of investigation. It seems that Collembola communities are not spatially aggregated and assembly is driven primarily by neutral processes particularly in the younger two sites. Whether this finding is due to small sample size or unaccounted for environmental variables cannot be determined. Variability between dates and sites illustrates the potential of drawing incorrect conclusions if data are collected at a single site and a single point in time.
Study on the stability of adrenaline and on the determination of its acidity constants
NASA Astrophysics Data System (ADS)
Corona-Avendaño, S.; Alarcón-Angeles, G.; Rojas-Hernández, A.; Romero-Romo, M. A.; Ramírez-Silva, M. T.
2005-01-01
In this work, results are presented concerning the influence of time on the spectral behaviour of adrenaline (C9H13NO3) (AD) and the determination of its acidity constants by means of spectrophotometric titrations and point-by-point analysis, using for the latter freshly prepared samples for each analysis at every single pH. As catecholamines are sensitive to light, all samples were protected from it during the course of the experiments. Each method yielded four acidity constants, corresponding to the four acidic protons of the functional groups present in the molecule; for the point-by-point analysis the values found were: log β1 = 38.25 ± 0.21, log β2 = 29.65 ± 0.17, log β3 = 21.01 ± 0.14, log β4 = 11.34 ± 0.071.
Rapid habitability assessment of Mars samples by pyrolysis-FTIR
NASA Astrophysics Data System (ADS)
Gordon, Peter R.; Sephton, Mark A.
2016-02-01
Pyrolysis Fourier transform infrared spectroscopy (pyrolysis FTIR) is a potential sample selection method for Mars Sample Return missions. FTIR spectroscopy can be performed on solid and liquid samples but also on gases following preliminary thermal extraction, pyrolysis or gasification steps. The detection of hydrocarbon and non-hydrocarbon gases can reveal information on sample mineralogy and past habitability of the environment in which the sample was created. The absorption of IR radiation at specific wavenumbers by organic functional groups can indicate the presence and type of any organic matter present. Here we assess the utility of pyrolysis-FTIR to release water, carbon dioxide, sulfur dioxide and organic matter from Mars relevant materials to enable a rapid habitability assessment of target rocks for sample return. For our assessment a range of minerals were analyzed by attenuated total reflectance FTIR. Subsequently, the mineral samples were subjected to single step pyrolysis and multi step pyrolysis and the products characterised by gas phase FTIR. Data from both single step and multi step pyrolysis-FTIR provide the ability to identify minerals that reflect habitable environments through their water and carbon dioxide responses. Multi step pyrolysis-FTIR can be used to gain more detailed information on the sources of the liberated water and carbon dioxide owing to the characteristic decomposition temperatures of different mineral phases. Habitation can be suggested when pyrolysis-FTIR indicates the presence of organic matter within the sample. Pyrolysis-FTIR, therefore, represents an effective method to assess whether Mars Sample Return target rocks represent habitable conditions and potential records of habitation and can play an important role in sample triage operations.
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm², and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm² (focal activation), 1.05 ± 0.32 points/cm² (macro-re-entry) and 1.23 ± 0.26 points/cm² (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm²; re-entry 1.44 ± 0.49 points/cm²; spiral-wave 1.50 ± 0.34 points/cm², P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm², P = 0.0008). More complex chamber geometry was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm² vs. 1.0 ± 0.34 points/cm², P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm². Published on behalf of the European Society of Cardiology
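The in silico part of the workflow (sample a known activation map at a given density, re-interpolate, and measure the error) can be sketched as follows. The synthetic LAT field, the interpolation method, and the densities tested are assumptions for illustration only, not the study's simulation setup.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Synthetic LAT map on a 4 x 4 cm sheet: a planar wave plus a local perturbation.
grid_x, grid_y = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))
lat_true = 20 * grid_x + 15 * np.exp(-((grid_x - 2)**2 + (grid_y - 2)**2))

def interpolation_error(points_per_cm2):
    """RMSE of a LAT map re-interpolated from randomly placed samples."""
    n = int(points_per_cm2 * 16)                     # sheet area is 16 cm^2
    xs, ys = rng.uniform(0, 4, n), rng.uniform(0, 4, n)
    samples = 20 * xs + 15 * np.exp(-((xs - 2)**2 + (ys - 2)**2))
    lat_interp = griddata((xs, ys), samples, (grid_x, grid_y), method="linear")
    valid = ~np.isnan(lat_interp)                    # ignore the unfilled convex-hull edge
    return np.sqrt(np.mean((lat_interp[valid] - lat_true[valid])**2))

for density in [0.25, 0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"{density:5.2f} points/cm^2 -> RMSE {interpolation_error(density):5.2f} ms")
```

Plotting error against density and locating where further gains flatten out mirrors how an "optimal" density could be read off.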
Effect of black point on accuracy of LCD displays colorimetric characterization
NASA Astrophysics Data System (ADS)
Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan
2018-03-01
The black point is the point at which the digital drive values of the RGB channels are all 0. Because of light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, and the effect is larger for samples driven at low luminance values. This paper describes the accuracy of the polynomial-model characterization method and the effect of the black point on that accuracy, and reports the resulting color differences. When the black point is taken into account in the characterization equations, the maximum color difference is 3.246, which is 2.36 lower than the maximum color difference obtained without considering the black point. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.
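A minimal sketch of a polynomial characterization with black-point correction is shown below: the measured black-point XYZ is subtracted from the training data before fitting and added back at prediction time. The RGB/XYZ training values, the second-order term set, and the least-squares fit are all illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Hypothetical training data: RGB drive values and measured CIE XYZ for an LCD.
rgb = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255],
                [128, 128, 128], [255, 255, 255], [64, 64, 64], [192, 192, 192]], float)
xyz = np.array([[0.40, 0.42, 0.55],      # black point: non-zero because of light leakage
                [41.2, 21.3, 1.9], [35.8, 71.5, 11.9], [18.1, 7.2, 95.0],
                [20.5, 21.6, 23.4], [95.0, 100.0, 108.9],
                [5.3, 5.6, 6.1], [52.4, 55.1, 59.8]])

def design(rgb_scaled):
    """Second-order polynomial terms of the normalized drive values."""
    r, g, b = rgb_scaled.T
    return np.column_stack([r, g, b, r*g, r*b, g*b, r**2, g**2, b**2, np.ones_like(r)])

black = xyz[0]                                    # measured black point
A, *_ = np.linalg.lstsq(design(rgb / 255.0), xyz - black, rcond=None)

def characterize(rgb_in):
    """Predict XYZ for a drive triplet, adding the black point back in."""
    return design(np.atleast_2d(rgb_in) / 255.0) @ A + black

print(characterize([128, 128, 128]))
```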
NASA Astrophysics Data System (ADS)
Mohadjer, Solmaz; Ehlers, Todd; Bendick, Rebecca; Mutz, Sebastian
2016-04-01
Previous studies related to the kinematics of deformation within the India-Asia collision zone have relied on slip rate data for major active faults to test kinematic models that explain the deformation of the region. The slip rate data, however, are generally disputed for many of the first-order faults in the region (e.g., Altyn Tagh and Karakorum faults). Several studies have also challenged the common assumption that geodetic slip rates are representative of Quaternary slip rates. What has received little attention is the degree to which geodetic slip rates relate to Quaternary slip rates for active faults in the India-Asia collision zone. In this study, we utilize slip rate data from a new Quaternary fault database for Central Asia to determine the overall relationship between Quaternary and GPS-derived slip rates for 18 faults. The preliminary analysis investigating this relationship uses weighted least squares and a re-sampling analysis to test the sensitivity of this relationship to different data point attributes (e.g., faults associated with data points and dating methods used for estimating Quaternary slip rates). The resulting sample subsets of data points yield a maximum possible Pearson correlation coefficient of ~0.6, suggesting moderate correlation between Quaternary and GPS-derived slip rates for some faults (e.g., Kunlun and Longmen Shan faults). Faults with poorly correlated Quaternary and GPS-derived slip rates were identified and dating methods used for the Quaternary slip rates were examined. Results indicate that a poor correlation between Quaternary and GPS-derived slip rates exists for the Karakorum and Chaman faults. Large differences between Quaternary and GPS slip rates for these faults appear to be connected to qualitative dating of landforms used in the estimation of the Quaternary slip rates and errors in the geomorphic and structural reconstruction of offset landforms (e.g., offset terrace riser reconstructions for Altyn Tagh fault). Other factors such as a low density in the GPS network (e.g., GPS rate based on data from a single station for the Karakorum fault) appear to also contribute to the mismatch observed between the slip rates. Taken together, these results suggest that GPS-derived slip rates are often (but not always) representative of Quaternary slip rates and that the dating methods and sampling approaches used to identify transients in a fault slip rate history should be heavily scrutinized before interpreting the seismic hazards for a region.
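The core comparison (a weighted least-squares fit between the two rate estimates plus a re-sampling check on the correlation) can be sketched as below. The paired slip rates and uncertainties are placeholder numbers, not values from the fault database, and the bootstrap is only one possible re-sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired slip rates (mm/yr) and 1-sigma uncertainties for a few faults.
quat = np.array([10.0, 7.0, 11.0, 2.5, 4.0, 18.0, 1.2, 6.0])
quat_sig = np.array([2.0, 1.5, 3.0, 0.8, 1.0, 4.0, 0.5, 1.5])
gps = np.array([9.0, 6.5, 8.0, 3.0, 4.5, 14.0, 2.0, 5.5])

# Weighted least squares of GPS rate against Quaternary rate (inverse-variance weights).
w = 1.0 / quat_sig**2
X = np.column_stack([np.ones_like(quat), quat])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * gps))
print("WLS intercept, slope:", beta)

# Bootstrap re-sampling of fault pairs to gauge the stability of the Pearson correlation.
r_boot = []
for _ in range(2000):
    idx = rng.integers(0, len(quat), len(quat))
    r_boot.append(np.corrcoef(quat[idx], gps[idx])[0, 1])
print("Pearson r: %.2f (2.5-97.5%%: %.2f to %.2f)"
      % (np.corrcoef(quat, gps)[0, 1], *np.percentile(r_boot, [2.5, 97.5])))
```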
ERIC Educational Resources Information Center
Ruddock, Ivan S.
2009-01-01
The derivation and description of the modes in optical waveguides and fibres are reviewed. The version frequently found in undergraduate textbooks is shown to be incorrect and misleading due to the assumption of an axial ray of light corresponding to the lowest order mode. It is pointed out that even the lowest order must still be represented in…
Tellurium content of marine manganese oxides and other manganese oxides
Lakin, H.W.; Thompson, C.E.; Davidson, D.F.
1963-01-01
Tellurium in amounts ranging from 5 to 125 parts per million was present in all of 12 samples of manganese oxide nodules from the floor of the Pacific and Indian oceans. These samples represent the first recognized points of high tellurium concentration in a sedimentary cycle. The analyses may lend support to the theory that the minor-element content of seafloor manganese nodules is derived from volcanic emanations.
Crocker, A D; Cronshaw, J; Holmes, W N
1975-01-01
Ducklings given hypertonic saline drinking water show significant increases in the rates of Na+ and water transfer across the intestinal mucosa. These increased rates of transfer are maintained as long as the birds are fed hypertonic saline. Oral administration of a single small dose of crude oil had no effect on the basal rate of mucosal transfer in freshwater-maintained ducklings, but the adaptive response of the mucosa is suppressed in birds given hypertonic saline. When crude oils from eight different geographical locations were tested, the degree of inhibition varied between them; the greatest and smallest degrees of inhibition were observed following administration of Kuwait and North Slope, Alaska, crude oils respectively. The effects of distillation fractions derived from two chemically different crude oils were also examined. The volume of each distillation fraction administered corresponded to its relative abundance in the crude oil from which it was derived. The inhibitory effect was not associated exclusively with the same distillation fractions from each oil. A highly naphthenic crude oil from the San Joaquin Valley, California, showed the greatest inhibitory activity in the least abundant (2%), low boiling point (below 245 degrees C) fraction and the least inhibitory activity in the highest boiling point (above 482 degrees C), most abundant (47%) fraction. In contrast, a highly paraffinic crude oil from Paradox Basin, Utah, showed the greatest inhibitory effect with the highest boiling point fraction and a minimal effect with the lowest boiling point fraction; the relative abundances of these two fractions in the crude oil were 27 and 28% respectively. Water-soluble extracts of both crude oils also had inhibitory effects on mucosal transfer rates, and these were roughly proportional to the inhibitory potency of the low boiling point fraction of each oil. Weathered samples of the San Joaquin Valley, California, and Paradox Basin, Utah, oils showed greater effects than corresponding samples of unweathered oils, even though most of the low molecular weight material from both oils was either evaporated or solubilized in the underlying water during the 36-h weathering period.
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
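A minimal sketch of the evaluation loop described above (generate points as class means plus independent noise, cluster them, and count points assigned contrary to their generating process) follows. The two profile templates, noise level, and the use of k-means are illustrative assumptions rather than the toolbox's actual configuration.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)

# Two "expression profile" templates (model means) plus independent noise.
means = np.array([[0.0, 1.0, 2.0, 3.0, 2.0, 1.0],
                  [2.0, 1.5, 1.0, 0.5, 1.0, 1.5]])
labels_true = np.repeat([0, 1], 50)
data = means[labels_true] + 0.6 * rng.standard_normal((100, means.shape[1]))

# Cluster and count points assigned contrary to their generating process,
# allowing for the arbitrary permutation of cluster labels.
_, labels_pred = kmeans2(data, k=2, seed=3, minit="++")
err = min(np.sum(labels_pred != labels_true),
          np.sum(labels_pred != 1 - labels_true))
print(f"clustering error: {err} of {len(labels_true)} points")
```

Repeating this over many noise replications, and over different variances, gives the kind of error curves the toolbox reports.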
Ji, Hong-Mei; Zhang, Wen-Qian; Wang, Xu; Li, Xiao-Wu
2015-01-01
The three-point bending strength and fracture behavior of single oriented crossed-lamellar structure in Scapharca broughtonii shell were investigated. The samples for bending tests were prepared with two different orientations perpendicular and parallel to the radial ribs of the shell, which corresponds to the tiled and stacked directions of the first-order lamellae, respectively. The bending strength in the tiled direction is approximately 60% higher than that in the stacked direction, primarily because the regularly staggered arrangement of the second-order lamellae in the tiled direction can effectively hinder the crack propagation, whereas the cracks can easily propagate along the interfaces between lamellae in the stacked direction. PMID:28793557
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
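The underlying combinatorial problem (pick a small set of time points from which the full profiles can be reconstructed) can be illustrated with a simple greedy heuristic: repeatedly drop the interior time point whose removal increases the interpolation error the least. This is not the TPS algorithm itself, and the dense profiles, budget of 6 points, and linear interpolation are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dense profiles: 40 genes measured at 13 time points.
t_all = np.linspace(0, 12, 13)
profiles = np.array([np.sin(0.3 * a * t_all + p) + 0.05 * rng.standard_normal(t_all.size)
                     for a, p in zip(rng.uniform(0.5, 2, 40), rng.uniform(0, np.pi, 40))])

def reconstruction_error(keep_idx):
    """MSE when non-selected points are linearly interpolated from kept ones."""
    recon = np.array([np.interp(t_all, t_all[keep_idx], g[keep_idx]) for g in profiles])
    return np.mean((recon - profiles) ** 2)

# Greedy backward elimination down to a budget of 6 time points.
keep = list(range(len(t_all)))
while len(keep) > 6:
    candidates = [i for i in keep if i not in (0, len(t_all) - 1)]   # keep the endpoints
    errs = [reconstruction_error(sorted(set(keep) - {i})) for i in candidates]
    keep.remove(candidates[int(np.argmin(errs))])

print("selected time points:", t_all[sorted(keep)])
print("reconstruction MSE:", reconstruction_error(sorted(keep)))
```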
Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül
2015-01-01
In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations for measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that a smaller number of sampling points resulted in loss of information with regard to the spatial correlation structure in DO. The minimum number of representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
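The KDE-plus-kriging idea can be sketched as follows: score sampling locations by their kernel density, thin the network by retaining points in the least densely sampled areas, and krige DO on the reduced network while inspecting the kriging standard error. The synthetic DO field, the thinning rule, and the use of the third-party pykrige package are assumptions made only for this sketch.

```python
import numpy as np
from scipy.stats import gaussian_kde
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(5)

# Hypothetical DO measurements (mg/L) at 65 sampling locations in a reservoir.
x, y = rng.uniform(0, 10, 65), rng.uniform(0, 4, 65)
do = 7.5 - 0.2 * x + 0.3 * np.sin(y) + 0.2 * rng.standard_normal(65)

# Kernel density of the sampling locations: points in the densest areas are
# the most redundant candidates for removal when thinning the network.
density = gaussian_kde(np.vstack([x, y]))(np.vstack([x, y]))
keep = np.argsort(density)[:35]          # retain 35 points from the sparser areas

# Ordinary kriging of DO on the reduced network.
ok = OrdinaryKriging(x[keep], y[keep], do[keep], variogram_model="spherical")
grid_x, grid_y = np.linspace(0, 10, 50), np.linspace(0, 4, 20)
do_map, variance = ok.execute("grid", grid_x, grid_y)
print("kriging standard error range:",
      float(np.sqrt(variance).min()), float(np.sqrt(variance).max()))
```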
The Vineyard Yeast Microbiome, a Mixed Model Microbial Map
Setati, Mathabatha Evodia; Jacobson, Daniel; Andong, Ursula-Claire; Bauer, Florian
2012-01-01
Vineyards harbour a wide variety of microorganisms that play a pivotal role in pre- and post-harvest grape quality and will contribute significantly to the final aromatic properties of wine. The aim of the current study was to investigate the spatial distribution of microbial communities within and between individual vineyard management units. For the first time in such a study, we applied the Theory of Sampling (TOS) to sample grapes from adjacent and well-established commercial vineyards within the same terroir unit and from several sampling points within each individual vineyard. Cultivation-based and molecular data sets were generated to capture the spatial heterogeneity in microbial populations within and between vineyards and analysed with novel mixed-model networks, which combine sample correlations and microbial community distribution probabilities. The data demonstrate that farming systems have a significant impact on fungal diversity but more importantly that there is significant species heterogeneity between samples in the same vineyard. Cultivation-based methods confirmed that while the same oxidative yeast species dominated in all vineyards, the least treated vineyard displayed significantly higher species richness, including many yeasts with biocontrol potential. The cultivatable yeast population was not fully representative of the more complex populations seen with molecular methods, and only the molecular data allowed discrimination amongst farming practices with multivariate and network analysis methods. Importantly, yeast species distribution is subject to significant intra-vineyard spatial fluctuations and the frequently reported heterogeneity of tank samples of grapes harvested from single vineyards at the same stage of ripeness might therefore, at least in part, be due to the differing microbiota in different sections of the vineyard. PMID:23300721
Enhancement of MS2D Bartington point measurement of soil magnetic susceptibility
NASA Astrophysics Data System (ADS)
Fabijańczyk, Piotr; Zawadzki, Jarosław
2015-04-01
Field magnetometry is a fast method used to assess potential soil pollution. The most popular device used to measure soil magnetic susceptibility at the soil surface is the MS2D Bartington sensor. A single MS2D reading of soil magnetic susceptibility takes little time but is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Typically, in order to calculate a reliable average value of soil magnetic susceptibility, a series of MS2D readings is performed at each sample point. As shown previously, such a methodology makes it possible to significantly reduce the nugget effect of the variograms of soil magnetic susceptibility, which is related to micro-scale variance and measurement errors. The goal of this study was to optimize the process of taking a series of MS2D readings, whose average value constitutes a single measurement, in order to take into account micro-scale variations of soil magnetic susceptibility in the proper determination of this parameter. This was done using statistical and geostatistical analyses. The analyses were performed using field MS2D measurements carried out in a study area located in the direct vicinity of the Katowice agglomeration. At 150 sample points, 10 MS2D readings of soil magnetic susceptibility were taken. Using this data set, series of experimental variograms were calculated and modeled, first using a single random MS2D reading for each sample point, and then using data sets increased by adding one more MS2D reading at a time, until their number reached 10. The variogram parameters (nugget effect, sill and range of correlation) were used to determine the most suitable number of MS2D readings at a sample point. The distributions of soil magnetic susceptibility at each sample point were also analyzed in order to determine the number of readings adequate to calculate a reliable average soil magnetic susceptibility. The research leading to these results received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.
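The effect of averaging more readings per point on the variogram nugget can be illustrated with a classical (Matheron) semivariance estimate. The synthetic susceptibility field, the noise level per reading, and the lag bins below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)

def empirical_variogram(coords, values, lags):
    """Classical semivariance estimate in a set of distance (lag) bins."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        m = (d > lo) & (d <= hi)
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)

# Hypothetical survey: 150 sample points, 10 MS2D readings each; every reading
# is the smooth field value plus micro-scale noise and instrument error.
coords = rng.uniform(0, 1000, (150, 2))
field = 50 + 20 * np.sin(coords[:, 0] / 200) * np.cos(coords[:, 1] / 250)
readings = field[:, None] + 8 * rng.standard_normal((150, 10))

lags = np.linspace(0, 400, 9)
for n in (1, 3, 10):                  # average n readings per point before analysis
    gamma = empirical_variogram(coords, readings[:, :n].mean(axis=1), lags)
    print(f"n={n:2d}  near-origin semivariance (nugget proxy): {gamma[0]:.1f}")
```

The near-origin semivariance shrinks as more readings are averaged, which is the nugget-reduction effect the abstract refers to.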
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2011-08-01
In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS), a hybrid metrology system comprising both optical and touch-probe devices has been assembled. A unique requirement must be met: to identify the interface, typically obscured in samples of concern, between the "external surface area upper" (ESAU) and the sole without physically destroying the sample. The sample outer surface is determined by discrete point-cloud coordinates obtained using laser scanner optical measurements. Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM). That surface similarly is defined by point-cloud data. Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame. Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer-surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional-area calculations to determine the percentage of materials present. With a draft method in place, and first-level method validation underway, we examine the transformation of the two dissimilar data sets into the single, common reference frame. We also consider the six previously identified potential error factors with respect to the method. This paper reports our ongoing work and discusses our findings to date.
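A rough sketch of the surface-fitting step is given below: a bivariate quadratic is fit to insole (CMM) points by least squares, and outer-surface mesh vertices lying near the extended surface are flagged as candidates for the ESAU/sole boundary. The point clouds, the quadratic term set, and the 0.2 mm tolerance are hypothetical; the actual software and its intersection procedure are not described in this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical insole point cloud from the CMM (x, y, z in mm).
xi, yi = rng.uniform(0, 250, 400), rng.uniform(0, 90, 400)
zi = 2.0 + 0.01 * xi + 0.02 * yi + 1e-4 * xi * yi + 0.1 * rng.standard_normal(400)

def poly_terms(x, y):
    """Bivariate quadratic terms used for the fitted insole surface."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

coef, *_ = np.linalg.lstsq(poly_terms(xi, yi), zi, rcond=None)

# Hypothetical outer-surface mesh vertices from the laser scan (same frame).
xm, ym = rng.uniform(0, 250, 2000), rng.uniform(0, 90, 2000)
zm = 2.0 + 0.01 * xm + 0.02 * ym + 1e-4 * xm * ym + rng.uniform(-3, 3, 2000)

# Signed distance of each mesh vertex to the extended insole surface; the
# ESAU boundary lies near the zero crossing of this distance.
signed = zm - poly_terms(xm, ym) @ coef
boundary = np.abs(signed) < 0.2
print(f"{boundary.sum()} mesh vertices lie within 0.2 mm of the fitted surface")
```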
Flexibility in data interpretation: effects of representational format.
Braithwaite, David W; Goldstone, Robert L
2013-01-01
Graphs and tables differentially support performance on specific tasks. For tasks requiring reading off single data points, tables are as good as or better than graphs, while for tasks involving relationships among data points, graphs often yield better performance. However, the degree to which graphs and tables support flexibility across a range of tasks is not well-understood. In two experiments, participants detected main and interaction effects in line graphs and tables of bivariate data. Graphs led to more efficient performance, but also lower flexibility, as indicated by a larger discrepancy in performance across tasks. In particular, detection of main effects of variables represented in the graph legend was facilitated relative to detection of main effects of variables represented in the x-axis. Graphs may be a preferable representational format when the desired task or analytical perspective is known in advance, but may also induce greater interpretive bias than tables, necessitating greater care in their use and design.
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential for driving-network identification in complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving-network prediction: a time-course cellular differentiation study often contains only one sample at each time point, while driving-network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of the candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
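To make the autocorrelation idea concrete, the sketch below computes a mean lag-1 autocorrelation across candidate genes in a sliding window of time points; a rise in this indicator is a generic early-warning signature of an approaching bifurcation. This is a textbook-style indicator, not the authors' specific procedure, and the synthetic expression matrix (with fluctuations that become more persistent near an assumed critical point at t = 8) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

n_genes, n_t = 30, 12
# AR(1)-style fluctuations whose persistence peaks near the assumed critical point.
phi = 0.1 + 0.8 * np.exp(-0.5 * ((np.arange(n_t) - 8.0) / 2.0) ** 2)
expr = np.zeros((n_genes, n_t))
for t in range(1, n_t):
    expr[:, t] = phi[t] * expr[:, t - 1] + rng.standard_normal(n_genes)

def lag1_autocorrelation(window):
    """Average lag-1 autocorrelation across genes within a time window."""
    a, b = window[:, :-1], window[:, 1:]
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    r = (a * b).sum(axis=1) / np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))
    return np.nanmean(r)

# Slide a 4-point window along the time course; a peak flags the candidate
# critical transition point.
for start in range(n_t - 3):
    score = lag1_autocorrelation(expr[:, start:start + 4])
    print(f"window t={start}..{start + 3}: mean lag-1 autocorrelation {score:+.2f}")
```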
Spear-anvil point-contact spectroscopy in pulsed magnetic fields
NASA Astrophysics Data System (ADS)
Arnold, F.; Yager, B.; Kampert, E.; Putzke, C.; Nyéki, J.; Saunders, J.
2013-11-01
We describe a new design and experimental technique for point-contact spectroscopy in non-destructive pulsed magnetic fields up to 70 T. Point-contact spectroscopy uses a quasi-dc four-point measurement of the current and voltage across a spear-anvil point contact. The contact resistance could be adjusted over three orders of magnitude by a built-in fine-pitch threaded screw. The first measurements using this set-up were performed on both single-crystalline and exfoliated graphite samples in a 70 T coil with 150 ms pulse length at 4.2 K; they reproduced the well-known point-contact spectrum of graphite and showed evidence for a developing high-field excitation above 35 T, the onset field of the charge-density-wave instability in graphite.
Lens-free imaging of magnetic particles in DNA assays.
Colle, Frederik; Vercruysse, Dries; Peeters, Sara; Liu, Chengxun; Stakenborg, Tim; Lagae, Liesbet; Del-Favero, Jurgen
2013-11-07
We present a novel opto-magnetic system for the fast and sensitive detection of nucleic acids. The system is based on a lens-free imaging approach resulting in a compact and cheap optical readout of surface hybridized DNA fragments. In our system magnetic particles are attracted towards the detection surface thereby completing the labeling step in less than 1 min. An optimized surface functionalization combined with magnetic manipulation was used to remove all nonspecifically bound magnetic particles from the detection surface. A lens-free image of the specifically bound magnetic particles on the detection surface was recorded by a CMOS imager. This recorded interference pattern was reconstructed in software, to represent the particle image at the focal distance, using little computational power. As a result we were able to detect DNA concentrations down to 10 pM with single particle sensitivity. The possibility of integrated sample preparation by manipulation of magnetic particles, combined with the cheap and highly compact lens-free detection makes our system an ideal candidate for point-of-care diagnostic applications.
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
Efficacy of adrenal venous sampling is increased by point of care cortisol analysis
Viste, Kristin; Grytaas, Marianne A; Jørstad, Melissa D; Jøssang, Dag E; Høyden, Eivind N; Fotland, Solveig S; Jensen, Dag K; Løvås, Kristian; Thordarson, Hrafnkell; Almås, Bjørg; Mellgren, Gunnar
2013-01-01
Primary aldosteronism (PA) is a common cause of secondary hypertension and is caused by unilateral or bilateral adrenal disease. Treatment options depend on whether the disease is lateralized or not, which is preferably evaluated with selective adrenal venous sampling (AVS). This procedure is technically challenging, and obtaining representative samples from the adrenal veins can prove difficult. Unsuccessful AVS procedures often require reexamination. Analysis of cortisol during the procedure may enhance the success rate. We invited 21 consecutive patients to participate in a study with intra-procedural point-of-care cortisol analysis. When this assay showed non-representative sampling, new samples were drawn after redirection of the catheter. The study patients were compared with the 21 preceding procedures as a historical cohort. The intra-procedural cortisol assay increased the success rate from 10/21 patients in the historical cohort to 17/21 patients in the study group. In four of the 17 successful procedures, repeated samples needed to be drawn. The rate of successful sampling at the first attempt improved from the first seven to the last seven study patients. Point-of-care cortisol analysis during AVS improves the success rate and reduces the need for reexamination, in accordance with previous studies. Successful AVS is crucial when deciding which patients with PA will benefit from surgical treatment. PMID:24169597
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
ERIC Educational Resources Information Center
Cole, John R.
1981-01-01
This paper points out that creationists have developed a skill unique to their trade, namely, that of misquotation and quotation out of context from the works of leading evolutionists. This tactic not only frustrates scientists but it misleads school board members, legislators, and the public. A representative sampling of scientists' responses to…
Intelligence and Physical Attractiveness
ERIC Educational Resources Information Center
Kanazawa, Satoshi
2011-01-01
This brief research note aims to estimate the magnitude of the association between general intelligence and physical attractiveness with large nationally representative samples from two nations. In the United Kingdom, attractive children are more intelligent by 12.4 IQ points (r=0.381), whereas in the United States, the correlation between…
Quasi-Solid-State Single-Atom Transistors.
Xie, Fangqing; Peukert, Andreas; Bender, Thorsten; Obermair, Christian; Wertz, Florian; Schmieder, Philipp; Schimmel, Thomas
2018-06-21
The single-atom transistor represents a quantum electronic device at room temperature, allowing the switching of an electric current by the controlled and reversible relocation of one single atom within a metallic quantum point contact. So far, the device operates by applying a small voltage to a control electrode or "gate" within the aqueous electrolyte. Here, the operation of the atomic device in the quasi-solid state is demonstrated. Gelation of pyrogenic silica transforms the electrolyte into the quasi-solid state, exhibiting the cohesive properties of a solid and the diffusive properties of a liquid, preventing the leakage problem and avoiding the handling of a liquid system. The electrolyte is characterized by cyclic voltammetry, conductivity measurements, and rotation viscometry. Thus, a first demonstration of the single-atom transistor operating in the quasi-solid state is given. The silver single-atom and atomic-scale transistors in the quasi-solid state allow bistable switching between zero and quantized conductance levels, which are integer multiples of the conductance quantum G0 = 2e²/h. Source-drain currents ranging from 1 to 8 µA are applied in these experiments. No obvious influence of the gelation of the aqueous electrolyte on the electron transport within the quantum point contact is observed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Influence of wave-front sampling in adaptive optics retinal imaging
Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K.; Pircher, Michael
2017-01-01
A wide range of wave-front sampling densities, relative to the number of corrector elements, has been used in retinal adaptive optics (AO) instruments. We developed a model in order to characterize the link between the number of actuators, the number of wave-front sampling points and AO correction performance. Based on available data from aberration measurements in the human eye, 1000 wave-fronts were generated for the simulations. The AO correction performance in the presence of these representative aberrations was simulated for different combinations of deformable mirror and Shack-Hartmann wave-front sensor. Predictions of the model were experimentally tested through in vivo measurements in 10 eyes, including retinal imaging with an AO scanning laser ophthalmoscope. According to our study, a ratio between wave-front sampling points and actuator elements of 2 is sufficient to achieve high-resolution in vivo images of photoreceptors. PMID:28271004
ImpulseDE: detection of differentially expressed genes in time series data using impulse models.
Sander, Jil; Schultze, Joachim L; Yosef, Nir
2017-03-01
Perturbations in the environment lead to distinctive gene expression changes within a cell. Observed over time, those variations can be characterized by single impulse-like progression patterns. ImpulseDE is an R package suited to capture these patterns in high throughput time series datasets. By fitting a representative impulse model to each gene, it reports differentially expressed genes across time points from a single or between two time courses from two experiments. To optimize running time, the code uses clustering and multi-threading. By applying ImpulseDE , we demonstrate its power to represent underlying biology of gene expression in microarray and RNA-Seq data. ImpulseDE is available on Bioconductor ( https://bioconductor.org/packages/ImpulseDE/ ). niryosef@berkeley.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Liu, Shiwei; Wu, Xiaoling; Lopez, Alan D; Wang, Lijun; Cai, Yue; Page, Andrew; Yin, Peng; Liu, Yunning; Li, Yichong; Liu, Jiangmei; You, Jinling; Zhou, Maigeng
2016-01-01
In China, sample-based mortality surveillance systems, such as the Chinese Center for Disease Control and Prevention's disease surveillance points system and the Ministry of Health's vital registration system, have been used for decades to provide nationally representative data on health status for health-care decision-making and performance evaluation. However, neither system provided representative mortality and cause-of-death data at the provincial level to inform regional health service needs and policy priorities. Moreover, the systems overlapped to a considerable extent, thereby entailing a duplication of effort. In 2013, the Chinese Government combined these two systems into an integrated national mortality surveillance system to provide a provincially representative picture of total and cause-specific mortality and to accelerate the development of a comprehensive vital registration and mortality surveillance system for the whole country. This new system increased the surveillance population from 6 to 24% of the Chinese population. The number of surveillance points, each of which covered a district or county, increased from 161 to 605. To ensure representativeness at the provincial level, the 605 surveillance points were selected to cover China's 31 provinces using an iterative method involving multistage stratification that took into account the sociodemographic characteristics of the population. This paper describes the development and operation of the new national mortality surveillance system, which is expected to yield representative provincial estimates of mortality in China for the first time.
Kronenberger, William G; Pisoni, David B; Harris, Michael S; Hoen, Helena M; Xu, Huiping; Miyamoto, Richard T
2013-06-01
Verbal short-term memory (STM) and working memory (WM) skills predict speech and language outcomes in children with cochlear implants (CIs) even after conventional demographic, device, and medical factors are taken into account. However, prior research has focused on single end point outcomes as opposed to the longitudinal process of development of verbal STM/WM and speech-language skills. In this study, the authors investigated relations between profiles of verbal STM/WM development and speech-language development over time. Profiles of verbal STM/WM development were identified through the use of group-based trajectory analysis of repeated digit span measures over at least a 2-year time period in a sample of 66 children (ages 6-16 years) with CIs. Subjects also completed repeated assessments of speech and language skills during the same time period. Clusters representing different patterns of development of verbal STM (digit span forward scores) were related to the growth rate of vocabulary and language comprehension skills over time. Clusters representing different patterns of development of verbal WM (digit span backward scores) were related to the growth rate of vocabulary and spoken word recognition skills over time. Different patterns of development of verbal STM/WM capacity predict the dynamic process of development of speech and language skills in this clinical population.
Raman spectroscopy as a PAT for pharmaceutical blending: Advantages and disadvantages.
Riolo, Daniela; Piazza, Alessandro; Cottini, Ciro; Serafini, Margherita; Lutero, Emilio; Cuoghi, Erika; Gasparini, Lorena; Botturi, Debora; Marino, Iari Gabriel; Aliatis, Irene; Bersani, Danilo; Lottici, Pier Paolo
2018-02-05
Raman spectroscopy has been positively evaluated as a tool for the in-line and real-time monitoring of powder blending processes, and it has been proved effective in determining the endpoint of mixing, showing its potential role as a process analytical technology (PAT). The aim of this study is to show the advantages and disadvantages of Raman spectroscopy with respect to the more traditional HPLC analysis. The spectroscopic results, obtained directly on raw powders sampled from a two-axis blender under real-case conditions, were compared with the chromatographic data obtained on the same samples. The formulation blend used for the experiment consists of active pharmaceutical ingredient (API, concentrations 6.0% and 0.5%), lactose and magnesium stearate (as excipients). The first step of the monitoring process was selecting the appropriate wavenumber region where the Raman signal of the API is maximal and interference from the spectral features of the excipients is minimal. Blend profiles were created by plotting the ratios of the area of the Raman peak of the API (A_API) at 1598 cm⁻¹ to the area of the Raman bands of the excipients (A_EXC), in the spectral range between 1560 and 1630 cm⁻¹, as a function of mixing time: the API content can be considered homogeneous when the time-dependent dispersion of the area ratio is minimized. In order to achieve representative sampling with Raman spectroscopy, each sample was mapped in a motorized XY stage by a defocused laser beam of a micro-Raman apparatus. Good correlation between the two techniques was found only for the composition at 6.0% (w/w). However, standard deviation analysis, applied to both HPLC and Raman data, showed that the Raman results are more substantial than the HPLC ones, since Raman spectroscopy enables the generation of data-rich blend profiles. In addition, the relative standard deviation calculated from a single map (30 points) turned out to be representative of the degree of homogeneity for that blend time. Copyright © 2017 Elsevier B.V. All rights reserved.
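The blend-profile logic (compute the relative standard deviation of the A_API/A_EXC ratios over the mapped points at each blending time and declare the endpoint once it drops below a criterion) can be sketched as follows. The area ratios, their spread, and the 5% RSD threshold are assumed values for illustration, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical Raman area ratios A_API / A_EXC: 30 mapped points per sample,
# with the spread shrinking as the blend homogenizes.
blend_times_min = [2, 5, 10, 15, 20, 30]
spreads = [0.30, 0.22, 0.12, 0.06, 0.035, 0.03]
ratios = {t: 0.85 + s * rng.standard_normal(30) for t, s in zip(blend_times_min, spreads)}

THRESHOLD_RSD = 5.0   # % RSD taken here as the homogeneity criterion (assumed)
for t in blend_times_min:
    r = ratios[t]
    rsd = 100 * r.std(ddof=1) / r.mean()
    flag = "endpoint reached" if rsd < THRESHOLD_RSD else ""
    print(f"t = {t:2d} min  RSD = {rsd:5.1f}%  {flag}")
```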
A rapid single-tube protocol for HAV detection by nested real-time PCR.
Hu, Yuan; Arsov, Ivica
2014-09-01
Infections by food-borne viruses such as hepatitis A virus (HAV) and norovirus are significant public health concerns worldwide. Since food-borne viruses are rarely confirmed through direct isolation from contaminated samples, highly sensitive molecular techniques remain the methods of choice for the detection of viral genetic material. Our group has previously developed a specific nested real-time PCR (NRT-PCR) assay for HAV detection that improved overall sensitivity. Furthermore in this study, we have developed a single-tube NRT-PCR approach for HAV detection in food samples that reduces the likelihood of cross contamination between tubes during sample manipulation. HAV RNA was isolated from HAV-spiked food samples and HAV-infected cell cultures. All reactions following HAV RNA isolation, including conventional reverse transcriptase PCR, nested-PCR, and RT-PCR were performed in a single tube. Our results demonstrated that all the samples tested positive by RT-PCR and nested-PCR were also positive by a single-tube NRT-PCR. The detection limits observed for HAV-infected cell cultures and HAV-spiked green onions were 0.1 and 1 PFU, respectively. This novel method retained the specificity and robustness of the original NRT-PCR method, while greatly reducing sample manipulation, turnaround time, and the risk of carry-over contamination. Single-tube NRT-PCR thus represents a promising new tool that can potentially facilitate the detection of HAV in foods thereby improving food safety and public health.
Kamali, Tschackad; Považay, Boris; Kumar, Sunil; Silberberg, Yaron; Hermann, Boris; Werkmeister, René; Drexler, Wolfgang; Unterhuber, Angelika
2014-10-01
We demonstrate a multimodal optical coherence tomography (OCT) and online Fourier transform coherent anti-Stokes Raman scattering (FTCARS) platform using a single sub-12 femtosecond (fs) Ti:sapphire laser enabling simultaneous extraction of structural and chemical ("morphomolecular") information of biological samples. Spectral domain OCT prescreens the specimen providing a fast ultrahigh (4×12 μm axial and transverse) resolution wide field morphologic overview. Additional complementary intrinsic molecular information is obtained by zooming into regions of interest for fast label-free chemical mapping with online FTCARS spectroscopy. Background-free CARS is based on a Michelson interferometer in combination with a highly linear piezo stage, which allows for quick point-to-point extraction of CARS spectra in the fingerprint region in less than 125 ms with a resolution better than 4 cm(-1) without the need for averaging. OCT morphology and CARS spectral maps indicating phosphate and carbonate bond vibrations from human bone samples are extracted to demonstrate the performance of this hybrid imaging platform.
Near-Infrared Spatially Resolved Spectroscopy for Tablet Quality Determination.
Igne, Benoît; Talwar, Sameer; Feng, Hanzhou; Drennen, James K; Anderson, Carl A
2015-12-01
Near-infrared (NIR) spectroscopy has become a well-established tool for the characterization of solid oral dosage form manufacturing processes and finished products. In this work, the utility of a traditional single-point NIR measurement was compared with that of a spatially resolved spectroscopic (SRS) measurement for the determination of tablet assay. Experimental designs were used to create samples that allowed calibration models to be developed and tested on both instruments. Samples possessing a poor distribution of ingredients (highly heterogeneous) were prepared by under-blending constituents prior to compaction to compare the analytical capabilities of the two NIR methods. The results indicate that SRS can provide spatial information that is usually obtainable only through imaging experiments for the determination of local heterogeneity and detection of abnormal tablets that would not be detected with single-point spectroscopy, thus complementing traditional NIR measurement systems for in-line, real-time tablet analysis. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Evaluating Nighttime CALIOP 0.532 micron Aerosol Optical Depth and Extinction Coefficient Retrievals
NASA Technical Reports Server (NTRS)
Campbell, J. R.; Tackett, J. L.; Reid, J. S.; Zhang, J.; Curtis, C. A.; Hyer, E. J.; Sessions, W. R.; Westphal, D. L.; Prospero, J. M.; Welton, E. J.;
2012-01-01
NASA Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) Version 3.01 5-km nighttime 0.532 micron aerosol optical depth (AOD) datasets from 2007 are screened, averaged and evaluated at 1 deg X 1 deg resolution versus corresponding/co-incident 0.550 micron AOD derived using the US Navy Aerosol Analysis and Prediction System (NAAPS), featuring two-dimensional variational assimilation of quality-assured NASA Moderate Resolution Imaging Spectroradiometer (MODIS) and Multi-angle Imaging Spectroradiometer (MISR) AOD. In the absence of sunlight, since passive radiometric AOD retrievals rely overwhelmingly on scattered radiances, the model represents one of the few practical global estimates available from which to attempt such a validation. Daytime comparisons, though, provide useful context. Regional-mean CALIOP vertical profiles of night/day 0.532 micron extinction coefficient are compared with 0.523/0.532 micron ground-based lidar measurements to investigate representativeness and diurnal variability. In this analysis, mean nighttime CALIOP AOD are mostly lower than daytime (0.121 vs. 0.126 for all aggregated data points, and 0.099 vs. 0.102 when averaged globally per normalised 1 deg. X 1 deg. bin), though the relationship is reversed over land and coastal regions when the data are averaged per normalised bin (0.134/0.108 vs. 0.140/0.112, respectively). Offsets assessed within single bins alone approach +/- 20%. CALIOP AOD, both day and night, are higher than NAAPS over land (0.137 vs. 0.124) and equal over water (0.082 vs. 0.083) when averaged globally per normalised bin. However, for all data points inclusive, NAAPS exceeds CALIOP over land, coast and ocean, both day and night. Again, differences assessed within single bins approach 50% in extreme cases. Correlation between CALIOP and NAAPS AOD is comparable during both day and night. Higher correlation is found nearest the equator, as a function of both sample size and the relative signal magnitudes inherent at these latitudes. Root mean square deviation between CALIOP and NAAPS varies between 0.1 and 0.3 globally during both day/night. Averaging of CALIOP along-track AOD data points within a single NAAPS grid bin improves correlation and RMSD, though day/night and land/ocean biases persist and are believed systematic. Vertical profiles of extinction coefficient derived in the Caribbean compare well with ground-based lidar observations, though potentially anomalous selection of a priori lidar ratios for CALIOP retrievals is likely inducing some discrepancies. Mean effective aerosol layer top heights are stable between day and night, indicating consistent layer-identification diurnally, which is noteworthy considering the potential limiting effects of ambient solar noise during the day.
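A minimal sketch of the kind of gridded comparison described above is given below; the helper names (`grid_average`, `compare`) and the simple arithmetic binning are assumptions for illustration, not the screening and averaging rules actually applied to the CALIOP and NAAPS products.

```python
import numpy as np

def grid_average(lat, lon, aod, res=1.0):
    """Average along-track AOD retrievals onto a res x res degree grid.
    Returns a dict {(ilat, ilon): mean AOD} for occupied bins."""
    ilat = np.floor((np.asarray(lat) + 90.0) / res).astype(int)
    ilon = np.floor((np.asarray(lon) + 180.0) / res).astype(int)
    bins = {}
    for i, j, a in zip(ilat, ilon, aod):
        bins.setdefault((i, j), []).append(a)
    return {k: float(np.mean(v)) for k, v in bins.items()}

def compare(caliop_bins, naaps_bins):
    """Correlation and root-mean-square deviation over bins present in both datasets."""
    common = sorted(set(caliop_bins) & set(naaps_bins))
    c = np.array([caliop_bins[k] for k in common])
    n = np.array([naaps_bins[k] for k in common])
    r = np.corrcoef(c, n)[0, 1]
    rmsd = np.sqrt(np.mean((c - n) ** 2))
    return r, rmsd
```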
Percolation characteristics of solvent invasion in rough fractures under miscible conditions
NASA Astrophysics Data System (ADS)
Korfanta, M.; Babadagli, T.; Develi, K.
2017-10-01
Surface roughness and flow rate effects on solvent transport under miscible conditions in a single fracture are studied. Surface replicas of seven different rocks (marble, granite, and limestone) are used to represent different surface roughness characteristics, each described by different mathematical models including three fractal dimensions. The distribution of dyed solvent is investigated at various flow rate conditions to clarify the effect of roughness on convective and diffusive mixing. After a qualitative analysis using comparative images of different rocks, the area covered by solvent with respect to time is determined to conduct a semi-quantitative analysis. In this exercise, two distinct zones are identified, namely the straight lines obtained for convective (early times) and diffusive (late times) flow. The bending point between these two lines is used to mark the transition between the two zones. Finally, the slopes of the straight lines and the bending points are correlated to five different roughness parameters and the rate (Peclet number). It is observed that both surface roughness and flow rate have a significant effect on the solvent's spatial distribution. The largest area covered is obtained at moderate flow rates; hence not only the average surface roughness characteristic but also the total fracture surface area needs to be considered when evaluating fluid distribution. It is also noted that the rate effect is critically different for the fracture samples of large grain size (marbles and granite) compared with smaller grain sizes (limestones). The variogram fractal dimension exhibits the strongest correlation with the maximum area covered by solvent, and displays an increasing trend at moderate flow rates. Equations combining the variogram surface fractal dimension with any other surface fractal parameter, coupled with the Peclet number, can be used to predict the maximum area covered by solvent in a single fracture, which in turn can be utilized to model oil recovery, waste disposal, and groundwater contamination processes in the presence of fractures.
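The bending-point analysis can be illustrated with a simple two-segment straight-line fit; the sketch below (hypothetical `bending_point` helper) picks the breakpoint that minimizes the combined squared error, which is only one of several ways such a transition could be located.

```python
import numpy as np

def bending_point(t, area):
    """Fit two straight lines to area-vs-time data and return the break time that
    minimises the total squared error; the early-time slope stands for the
    convective regime and the late-time slope for the diffusive regime."""
    t, area = np.asarray(t, float), np.asarray(area, float)
    if len(t) < 5:
        raise ValueError("need at least 5 samples to fit two segments")
    best = None
    for k in range(2, len(t) - 2):              # candidate breakpoint index
        p1 = np.polyfit(t[:k], area[:k], 1)
        p2 = np.polyfit(t[k:], area[k:], 1)
        sse = (np.sum((area[:k] - np.polyval(p1, t[:k])) ** 2)
               + np.sum((area[k:] - np.polyval(p2, t[k:])) ** 2))
        if best is None or sse < best[0]:
            best = (sse, t[k], p1[0], p2[0])
    _, t_bend, slope_conv, slope_diff = best
    return t_bend, slope_conv, slope_diff
```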
Wickremsinhe, Enaksha R; Perkins, Everett J
2015-03-01
Traditional pharmacokinetic analysis in nonclinical studies is based on the concentration of a test compound in plasma and requires approximately 100 to 200 μL blood collected per time point. However, the total blood volume of mice limits the number of samples that can be collected from an individual animal, often to a single collection per mouse, thus necessitating dosing multiple mice to generate a pharmacokinetic profile in a sparse-sampling design. Compared with traditional methods, dried blood spot (DBS) analysis requires smaller volumes of blood (15 to 20 μL), thus supporting serial blood sampling and the generation of a complete pharmacokinetic profile from a single mouse. Here we compare plasma-derived data with DBS-derived data, explain how to adopt DBS sampling to support discovery mouse studies, and describe how to generate pharmacokinetic and pharmacodynamic data from a single mouse. Executing novel study designs that use DBS enhances the ability to identify and streamline better drug candidates during drug discovery. Implementing DBS sampling can reduce the number of mice needed in a drug discovery program. In addition, the simplicity of DBS sampling and the smaller numbers of mice needed translate to decreased study costs. Overall, DBS sampling is consistent with 3Rs principles by achieving reductions in the number of animals used, decreased restraint-associated stress, improved data quality, direct comparison of interanimal variability, and the generation of multiple endpoints from a single study.
Anomalous weak values and the violation of a multiple-measurement Leggett-Garg inequality
NASA Astrophysics Data System (ADS)
Avella, Alessio; Piacentini, Fabrizio; Borsarelli, Michelangelo; Barbieri, Marco; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Degiovanni, Ivo Pietro; Genovese, Marco
2017-11-01
Quantum mechanics presents peculiar properties that, on the one hand, have been the subject of several theoretical and experimental studies of its very foundations and, on the other hand, provide tools for developing new technologies, the so-called quantum technologies. The nonclassicality highlighted by Leggett-Garg inequalities has been, together with Bell inequalities, one of the most investigated subjects. In this article we study the connection of Leggett-Garg inequalities with an emerging field of quantum measurement, weak values, in the case of a series of sequential measurements on a single object. In detail, we perform an experimental study of the four-time-correlator Leggett-Garg test by exploiting single and sequential weak measurements performed on heralded single photons.
Device for modular input high-speed multi-channel digitizing of electrical data
VanDeusen, A.L.; Crist, C.E.
1995-09-26
A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages. 1 fig.
Beaulieu, Karen M.; Bell, Amanda H.; Coles, James F.
2012-01-01
Beginning in 1999, the U.S. Geological Survey National Water Quality Assessment Program investigated the effects of urban development on stream ecosystems in nine metropolitan study areas across the United States. In seven of these study areas, stream-chemistry samples were collected every other month for 1 year at 6 to 10 sites. Within a study area, the sites collectively represented a gradient of urban development from minimally to highly developed watersheds, based on the percentage of urban land cover; depending on study area, the land cover before urban development was either forested or agricultural. The stream-chemistry factors measured in the samples were total nitrogen, total phosphorus, chloride, and pesticide toxicity. These data were used to characterize the stream-chemistry factors in four ways (hereafter referred to as characterizations): seasonal high-flow value, seasonal low-flow value, the median value (representing a single integrated value of the factor over the year), and the standard deviation of values (representing the variation of the factor over the year). Aquatic macroinvertebrate communities were sampled at each site to infer the biological condition of the stream based on the relative sensitivity of the community to environmental stressors. A Spearman correlation analysis was used to evaluate relations between (1) urban development and each characterization of the stream-chemistry factors and (2) the biological condition of a stream and the different characterizations of chloride and pesticide toxicity. Overall, the study areas where the land cover before urban development was primarily forested had a greater number of moderate and strong relations compared with the study areas where the land cover before urban development was primarily agriculture; this was true when urban development was correlated with the stream-chemistry factors (except chloride) and when chloride and pesticide toxicity were correlated with the biological condition. Except primarily for phosphorus in two study areas, stream-chemistry factors generally increased with urban development, and among the different characterizations, the median value typically indicated the strongest relations. The variation in stream-chemistry factors throughout the year generally increased with urban development, indicating that water quality became less consistent as watersheds were developed. In study areas with high annual snowfall, the variation in chloride concentrations throughout the year was particularly strongly related to urban development, likely a result of road salt applications during the winter. The relations of the biological condition to chloride and pesticide toxicity were calculated irrespective of urban development, but the overall results indicated that the relations were still stronger in the study areas that had been forested before urban development. The weaker relations in the study areas that had been agricultural before urban development were likely the result of biological communities having been degraded by agricultural practices in the watersheds. Collectively, these results indicated that, compared with sampling a stream at a single point in time, sampling at regular intervals during a year may provide a more representative measure of water quality, especially in the areas of high urban development where water quality fluctuated more widely between samples.
Furthermore, the use of "integrated" values of stream chemistry factors may be more appropriate when assessing relations to the biological condition of a stream because the taxa composition of a biological community typically reflects the water-quality conditions over time.
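A minimal sketch of the Spearman correlation step, using SciPy and an entirely hypothetical site-level table, is shown below; the column names and values are illustrative only, not data from the study.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical site-level table: percent urban land cover plus one characterization
# (e.g. the annual median, or the annual standard deviation) of two factors.
sites = pd.DataFrame({
    "urban_pct":   [3, 12, 25, 38, 52, 67, 74, 88],
    "tn_median":   [0.4, 0.6, 0.8, 0.9, 1.2, 1.4, 1.6, 1.9],
    "chloride_sd": [2.0, 3.5, 6.0, 7.5, 11.0, 14.0, 18.0, 22.0],
})

for factor in ("tn_median", "chloride_sd"):
    rho, p = spearmanr(sites["urban_pct"], sites[factor])
    print(f"{factor}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```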
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
NASA Astrophysics Data System (ADS)
Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre
2010-03-01
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points as one goes along while preserving the topology of the shape. On the other hand, the maxima of the local distance transform identify the medial axis and provide an equivalent way to calculate the skeleton. However, with this method, the point set obtained is disconnected, so all the points of the medial axis must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization in a single scan, thus allowing the processing of unbounded images. This method is applied, in our study, to micro-scanner images of trabecular bone. We wish to characterize the bone micro-architecture in order to quantify bone integrity.
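For readers unfamiliar with distance-driven skeletonization, the sketch below shows the classical multi-scan baseline (distance transform plus medial axis) using SciPy and scikit-image; it is not the single-scan algorithm introduced in the paper, only the approach it improves upon.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import medial_axis

# Binary image of the shape (True = object). The medial axis of the distance
# transform gives a one-pixel-wide, topology-preserving skeleton; the paper's
# contribution is obtaining an equivalent result in a single image scan.
shape = np.zeros((60, 60), dtype=bool)
shape[10:50, 20:40] = True

dist = ndimage.distance_transform_edt(shape)            # distance to background
skeleton, dist_on_skel = medial_axis(shape, return_distance=True)

print("skeleton pixels:", int(skeleton.sum()),
      "max inscribed radius:", float(dist_on_skel.max()))
```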
Exploring revictimization risk in a community sample of sexual assault survivors.
Chu, Ann T; Deprince, Anne P; Mauss, Iris B
2014-01-01
Previous research points to links between risk detection (the ability to detect danger cues in various situations) and sexual revictimization in college women. Given important differences between college and community samples that may be relevant to revictimization risk (e.g., the complexity of trauma histories), the current study explored the link between risk detection and revictimization in a community sample of women. Community-recruited women (N = 94) reported on their trauma histories in a semistructured interview. In a laboratory session, participants listened to a dating scenario involving a woman and a man that culminated in sexual assault. Participants were instructed to press a button "when the man had gone too far." Unlike in college samples, revictimized community women (n = 47) did not differ in terms of risk detection response times from women with histories of no victimization (n = 10) or single victimization (n = 15). Data from this study point to the importance of examining revictimization in heterogeneous community samples where risk mechanisms may differ from college samples.
Kauppinen, Ari; Toiviainen, Maunu; Korhonen, Ossi; Aaltonen, Jaakko; Järvinen, Kristiina; Paaso, Janne; Juuti, Mikko; Ketolainen, Jarkko
2013-02-19
During the past decade, near-infrared (NIR) spectroscopy has been applied for in-line moisture content quantification during a freeze-drying process. However, NIR has been used as a single-vial technique and thus is not representative of the entire batch. This has been considered one of the main barriers to NIR spectroscopy becoming widely used as a process analytical technology (PAT) for freeze-drying. Clearly it would be essential to monitor samples that reliably represent the whole batch. The present study evaluated multipoint NIR spectroscopy for in-line moisture content quantification during a freeze-drying process. Aqueous sucrose solutions were used as model formulations. NIR data were calibrated to predict the moisture content using partial least-squares (PLS) regression, with Karl Fischer titration used as the reference method. PLS calibrations resulted in root-mean-square error of prediction (RMSEP) values lower than 0.13%. Three noncontact, diffuse reflectance NIR probe heads were positioned on the freeze-dryer shelf to measure the moisture content in a noninvasive manner, through the side of the glass vials. The results showed that the detection of unequal sublimation rates within a freeze-dryer shelf was possible with the multipoint NIR system in use. Furthermore, in-line moisture content quantification was reliable, especially toward the end of the process. These findings indicate that multipoint NIR spectroscopy can achieve representative quantification of moisture content and hence determination of the drying end point at a desired residual moisture level.
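A minimal sketch of a PLS moisture calibration with an RMSEP check is given below, assuming scikit-learn and hypothetical spectral matrices; the preprocessing and component selection used in the study are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def calibrate_moisture(spectra_cal, kf_cal, spectra_test, kf_test, n_components=3):
    """Calibrate NIR spectra against Karl Fischer moisture values with PLS and
    report the root-mean-square error of prediction on held-out vials.
    spectra_*: 2D arrays (vials x wavelengths); kf_*: moisture content in percent."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(spectra_cal, kf_cal)
    pred = pls.predict(spectra_test).ravel()
    rmsep = np.sqrt(np.mean((pred - np.asarray(kf_test)) ** 2))
    return pls, rmsep
```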
Kłyszejko, Adriana; Kubus, Zaneta; Zakowska, Zofia
2005-01-01
Filamentous fungi are cosmopolitan microorganisms found in almost all environments. It should be pointed out that the occurrence of moulds on food or feed may cause health disorders in humans and animals. Mycoflora is a source of toxic metabolites, mycotoxins, whose hepatotoxic, genotoxic, nephrotoxic and carcinogenic properties have already been proven in several studies. Hence, mycological analysis of cereal grains is an important element in the evaluation of food and feed health quality. Among the most frequent cereal contaminants, Alternaria, Aspergillus, Fusarium and Penicillium strains are mentioned. Due to their ability to grow on cereals both in the field and during storage, Fusarium moulds are an important contamination factor in the food and feed industry. In this study, Fusarium strains isolated from wheat and maize were examined in order to determine their ability to produce two toxins: zearalenone (ZEA) and deoxynivalenol (DON). Mycological analysis showed differentiation within the fungal microflora occurring in samples stored under different conditions, with Fusarium strains representing approximately 20-70% of all mould species present. For species-level evaluation of the Fusarium strains, the isolates were analysed mycologically. In the second step of the project, toxicological screening of the isolates was performed using thin-layer chromatography (TLC) to evaluate the toxigenic potential of single strains for ZEA and DON production. These data make it possible to identify the most toxigenic strains and also show differences in their occurrence in cereals. This paper presents introductory research data, which can be useful in recognizing cereal contamination with moulds and their toxic metabolites.
A scalable self-priming fractal branching microchannel net chip for digital PCR.
Zhu, Qiangyuan; Xu, Yanan; Qiu, Lin; Ma, Congcong; Yu, Bingwen; Song, Qi; Jin, Wei; Jin, Qinhan; Liu, Jinyu; Mu, Ying
2017-05-02
As an absolute quantification method at the single-molecule level, digital PCR has been widely used in many bioresearch fields, such as next generation sequencing, single cell analysis, and gene editing detection. However, existing digital PCR methods still have some disadvantages, including high cost, sample loss, and complicated operation. In this work, we develop a scalable self-priming fractal branching microchannel net digital PCR chip. This chip, with a special design inspired by natural fractal-tree systems, achieves even distribution and 100% compartmentalization of the sample without any sample loss, which is not available in existing chip-based digital PCR methods. A special 10 nm nano-waterproof layer was created to prevent the solution from evaporating. A vacuum pre-packaging method called self-priming reagent introduction is used to passively drive the reagent flow into the microchannel nets, so that this chip can realize sequential reagent loading and isolation within a couple of minutes, which is very suitable for point-of-care detection. When the number of positive microwells stays in the range of 100 to 4000, the relative uncertainty is below 5%, which means that one panel can detect an average of 101 to 15,374 molecules based on the Poisson distribution. The chip is shown to have an excellent ability for single-molecule detection and for quantification of low-expression hHF-MSC stem cell markers. Due to its potential for high throughput, high density, low cost, absence of sample and reagent loss, self-priming even compartmentalization and simple operation, we envision that this device will significantly expand and extend the application range of digital PCR to rare samples, liquid biopsy detection and point-of-care detection with higher sensitivity and accuracy.
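The Poisson quantification step can be illustrated with a short sketch: given the fraction p of positive partitions, the mean copies per well follow from λ = -ln(1 - p). The helper name and the delta-method uncertainty estimate below are assumptions for illustration, not the chip's actual analysis software.

```python
import math

def dpcr_quantify(n_positive, n_total):
    """Estimate the mean copies per well (lambda) and total copies loaded on the
    panel from the fraction of positive partitions, assuming Poisson loading."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)                   # mean copies per well
    copies = lam * n_total                     # total copies on the panel
    # Approximate relative uncertainty of lambda (delta method on the binomial p)
    rel_u = math.sqrt(p / (n_total * (1.0 - p))) / lam
    return lam, copies, rel_u

lam, copies, rel_u = dpcr_quantify(n_positive=1000, n_total=10000)
print(f"lambda = {lam:.4f}, copies ~ {copies:.0f}, relative uncertainty ~ {100*rel_u:.1f}%")
```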
Transport phenomena in helical edge state interferometers: A Green's function approach
NASA Astrophysics Data System (ADS)
Rizzo, Bruno; Arrachea, Liliana; Moskalets, Michael
2013-10-01
We analyze the current and the shot noise of an electron interferometer made of the helical edge states of a two-dimensional topological insulator within the framework of the nonequilibrium Green's functions formalism. We study, in detail, setups with a single and with two quantum point contacts inducing scattering between the different edge states. We consider processes preserving the spin as well as the effect of spin-flip scattering. In the case of a single quantum point contact, a simple test based on the shot-noise measurement is proposed to quantify the strength of the spin-flip scattering. In the case of two quantum point contacts with the additional ingredient of gate voltages applied within a finite-size region at the top and bottom edges of the sample, we identify two types of interference processes in the behavior of the currents and the noise. One such process is analogous to that taking place in a Fabry-Pérot interferometer, while the second corresponds to a configuration similar to a Mach-Zehnder interferometer. In the helical interferometer, these two processes compete.
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on a Lidar point cloud is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the absolute heights of the ground objects, is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at the different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916
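As an illustration of the height-distribution analysis described above, the sketch below detects candidate canopy layers from a histogram of normalized point heights within one raster cell; the bin width and density threshold are arbitrary placeholders, not the statistical criteria used by the authors.

```python
import numpy as np

def canopy_layers(z_normalized, bin_width=0.5, min_density=0.02):
    """Detect candidate canopy layers from the height distribution of normalized
    lidar returns in one cell: contiguous height bins whose relative point
    density exceeds a threshold are reported as (bottom, top) height ranges."""
    z = np.asarray(z_normalized, dtype=float)
    edges = np.arange(0.0, max(z.max(), bin_width) + bin_width, bin_width)
    hist, _ = np.histogram(z, bins=edges)
    density = hist / hist.sum()
    layers, start = [], None
    for i, d in enumerate(density):
        if d >= min_density and start is None:
            start = edges[i]
        elif d < min_density and start is not None:
            layers.append((start, edges[i]))
            start = None
    if start is not None:
        layers.append((start, edges[-1]))
    return layers
```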
Method of identifying clusters representing statistical dependencies in multivariate data
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
The approach is first to cluster the data and then to compute spatial boundaries for the resulting clusters. The next step is to compute, from a set of Monte Carlo samples obtained from scrambled data, estimates of the probability of obtaining at least as many points within the boundaries as were actually observed in the original data.
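A minimal sketch of this scrambling test is shown below; the `hull_contains` callable standing in for the computed cluster boundary and the column-wise permutation scheme are illustrative assumptions rather than the method's exact implementation.

```python
import numpy as np

def cluster_significance(data, hull_contains, n_observed, n_trials=1000, seed=0):
    """Estimate, by Monte Carlo, the probability of finding at least `n_observed`
    points inside a cluster boundary when the variables are independently
    scrambled (column-wise permutation destroys multivariate dependence)."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_trials):
        scrambled = np.column_stack([rng.permutation(col) for col in data.T])
        inside = sum(hull_contains(point) for point in scrambled)
        count += inside >= n_observed
    return count / n_trials   # small values indicate a statistically meaningful cluster
```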
Gap Analysis. Student Satisfaction Survey, Spring 1995.
ERIC Educational Resources Information Center
Breindel, Matthew
In spring 1995, College of the Desert, in California, undertook a study to determine the perceptions of students at both its Copper Mountain and Palm Desert campuses regarding college services. A representative sample of students was administered a 7-point attitude scale (Student Satisfaction Survey developed by Noel-Levitz Centers, Inc.) both…
ERIC Educational Resources Information Center
American Association of Community Colleges, Washington, DC.
This document, presented in the form of PowerPoint print outs, indicates a total of 420 (nearly 60%) associate degree nursing (ADN) programs responded to a survey conducted by the American Association of Community Colleges' (AACC) Nursing and Allied Health Initiative (NAHI) for 2003. The sample is representative based on urbanicity and region.…
40 CFR 141.71 - Criteria for avoiding filtration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Filtration and Disinfection § 141.71... developed under section 1428 of the Safe Drinking Water Act may be used, if the State deems it appropriate... representative sample of the source water immediately prior to the first or only point of disinfection...
NASA Astrophysics Data System (ADS)
Rossing, Thomas
This brief introduction may help to persuade the reader that acoustics covers a wide range of interesting topics. It is impossible to cover all these topics in a single handbook, but we have attempted to include a sampling of hot topics that represent current acoustical research, both fundamental and applied.
2010-04-13
HORACE STORNG (AEROSPACE ENGINEER, ER31 PROPULSION TURBOMACHINERY DESIGN & DEVELOPMENT BRANCH) ADJUSTS A UNIQUE MECHANICAL TEST SETUP THAT MEASURES STRAIN ON A SINGLE SAMPLE, USING TWO DIFFERENT TECHNIQUES AT THE SAME TIME. THE TEST FIXTURE HOLDS A SPECIMEN THAT REPRESENTS A LIQUID OXYGEN (LOX) BEARING FROM THE J2-X ENGINE
2010-04-13
TATHAN COFFEE (EM10 MATERIALS TEST ENGINEER, JACOBS ESTS GROUP/JTI) ADJUSTS A UNIQUE MECHANICAL TEST SETUP THAT MEASURES STRAIN ON A SINGLE SAMPLE, USING TWO DIFFERENT TECHNIQUES AT THE SAME TIME. THE TEST FIXTURE HOLDS A SPECIMEN THAT REPRESENTS A LIQUID OXYGEN (LOX) BEARING FROM THE J2-X ENGINE
High-pressure high-temperature phase diagram of gadolinium studied using a boron-doped heater anvil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montgomery, J. M.; Samudrala, G. K.; Vohra, Y. K.
A boron-doped designer heater anvil is used in conjunction with powder x-ray diffraction to collect structural information on a sample of quasi-hydrostatically loaded gadolinium metal up to pressures above 8 GPa and 600 K. The heater anvil consists of a natural diamond anvil that has been surface modified with a homoepitaxially grown chemical-vapor-deposited layer of conducting boron-doped diamond, and is used as a DC heating element. Internally insulating both diamond anvils with sapphire support seats allows for heating and cooling of the high-pressure area on the order of a few tens of seconds. This device is then used to scan the phase diagram of the sample by oscillating the temperature while continuously increasing the externally applied pressure and collecting in situ time-resolved powder diffraction images. In the pressure-temperature range covered in this experiment, the gadolinium sample is observed in its hcp, αSm, and dhcp phases. Under this temperature cycling, the hcp → αSm transition proceeds in discontinuous steps at points along the expected phase boundary. From these measurements (representing only one hour of synchrotron x-ray collection time), a single-experiment equation of state and phase diagram of each phase of gadolinium is presented for the range of 0–10 GPa and 300–650 K.
Mapping mountain pine beetle mortality through growth trend analysis of time-series landsat data
Liang, Lu; Chen, Yanlei; Hawbaker, Todd J.; Zhu, Zhi-Liang; Gong, Peng
2014-01-01
Disturbances are key processes in the carbon cycle of forests and other ecosystems. In recent decades, mountain pine beetle (MPB; Dendroctonus ponderosae) outbreaks have become more frequent and extensive in western North America. Remote sensing has the ability to fill the data gaps of long-term infestation monitoring, but the elimination of observational noise and attributing changes quantitatively are two main challenges in its effective application. Here, we present a forest growth trend analysis method that integrates Landsat temporal trajectories and decision tree techniques to derive annual forest disturbance maps over an 11-year period. The temporal trajectory component successfully captures the disturbance events as represented by spectral segments, whereas decision tree modeling efficiently recognizes and attributes events based upon the characteristics of the segments. When validated against a point set sampled across a gradient of MPB mortality, the method achieved 86.74% to 94.00% overall accuracy, with small variability in accuracy among years. In contrast, the overall accuracies of single-date classifications ranged from 37.20% to 75.20% and only became comparable with our approach when the training sample size was increased at least four-fold. This demonstrates that a key advantage of this time-series workflow is its small training sample requirement. The easily understandable, interpretable and modifiable characteristics of our approach suggest that it could be applicable to other ecoregions.
Zhang, Ruixin; Yang, Huaixin; Guo, Cong; Tian, Huanfang; Shi, Honglong; Chen, Genfu; Li, Jianqi
2016-12-19
Microstructural analyses based on aberration-corrected scanning transmission electron microscopy (STEM) observations demonstrate that low-dimensional CsxBi4Te6 materials, known to be a novel thermoelectric and superconducting system, contain notable structural channels running directly along the b axis, which can be partially filled by atom clusters depending on the thermal treatment process. We successfully prepared two series of CsxBi4Te6 single-crystalline samples using two different sintering processes. The CsxBi4Te6 samples prepared using an air-quenching method show superconductivity at approximately 4 K, while CsxBi4Te6 samples with the same nominal compositions prepared by slow cooling are nonsuperconducting. Moreover, atomic structural investigations of typical samples reveal that the structural channels are often empty in the superconducting materials; thus, the superconducting phase can be represented as Cs1-yBi4Te6 when the point defects in the Cs layers are taken into account. In addition, the channels in the nonsuperconducting crystals are commonly partially occupied by triplet Bi clusters. Moreover, the average structures of these two phases also differ in their monoclinic angles (β), which are estimated to be 102.3° for the superconductors and 100.5° for the nonsuperconductors.
Portable device for the detection of colorimetric assays
Nowak, E.; Kawchuk, J.; Hoorfar, M.; Najjaran, H.
2017-01-01
In this work, a low-cost, portable device is developed to detect colorimetric assays for in-field and point-of-care (POC) analysis. The device can rapidly detect both pH values and nitrite concentrations of five different samples simultaneously. After mixing samples with specific reagents, a high-resolution digital camera collects a picture of the sample, and a single-board computer processes the image in real time to identify the hue-saturation-value coordinates of the image. An internal light source reduces the effect of any ambient light so the device can accurately determine the corresponding pH values or nitrite concentrations. The device was purposefully designed to be low-cost yet versatile, and the accuracy of the results has been compared with that of a conventional method. The results obtained for pH values have a mean standard deviation of 0.03 and a correlation coefficient R² of 0.998. The detection of nitrites is between concentrations of 0.4–1.6 mg l−1, with a low detection limit of 0.2 mg l−1, and has a mean standard deviation of 0.073 and an R² value of 0.999. The results demonstrate the great potential of the proposed portable device as an analytical tool for POC colorimetric analysis and offer broad accessibility in resource-limited settings. PMID:29291093
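A rough sketch of the colour-processing and calibration steps is given below; the averaging of hue-saturation-value coordinates and the linear hue-to-concentration fit are assumptions for illustration and need not match the device's actual image-processing pipeline.

```python
import colorsys
import numpy as np

def mean_hsv(rgb_pixels):
    """Average hue-saturation-value coordinates of the pixels covering one sample
    well (rgb_pixels: N x 3 array of RGB values in 0-255)."""
    rgb_pixels = np.asarray(rgb_pixels, dtype=float)
    hsv = np.array([colorsys.rgb_to_hsv(*(px / 255.0)) for px in rgb_pixels])
    return hsv.mean(axis=0)

def fit_calibration(hues, concentrations):
    """Linear calibration between mean hue and analyte concentration; returns the
    fitted coefficients and R^2 (the real device may use a different model)."""
    hues, concentrations = np.asarray(hues), np.asarray(concentrations)
    coeffs = np.polyfit(hues, concentrations, 1)
    pred = np.polyval(coeffs, hues)
    ss_res = np.sum((concentrations - pred) ** 2)
    ss_tot = np.sum((concentrations - concentrations.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot
```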
Miler, Miloš; Gosar, Mateja
2013-12-01
Solid particles in snow deposits, sampled in mining and Pb-processing area of Žerjav, Slovenia, have been investigated using scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDS). Identified particles were classified as geogenic-anthropogenic, anthropogenic, and secondary weathering products. Geogenic-anthropogenic particles were represented by scarce Zn- and Pb-bearing ore minerals, originating from mine waste deposit. The most important anthropogenic metal-bearing particles in snow were Pb-, Sb- and Sn-bearing oxides and sulphides. The morphology of these particles showed that they formed at temperatures above their melting points. They were most abundant in snow sampled closest to the Pb-processing plant and least abundant in snow taken farthest from the plant, thus indicating that Pb processing was their predominant source between the last snowfall and the time of sampling. SEM/EDS analysis showed that Sb and Sn contents in these anthropogenic phases were higher and more variable than in natural Pb-bearing ore minerals. The most important secondary weathering products were Pb- and Zn-containing Fe-oxy-hydroxides whose elemental composition and morphology indicated that they mostly resulted from oxidation of metal-bearing sulphides emitted from the Pb-processing plant. This study demonstrated the importance of single particle analysis using SEM/EDS for differentiation between various sources of metals in the environment.
Extremely Lightweight Intrusion Detection (ELIDe)
2013-12-01
devices that would be more commonly found in a dynamic tactical environment. As a point of reference, the Raspberry Pi single-chip computer (4) is...the ELIDe application onto a resource-constrained hardware platform more likely to be used in a mobile tactical network, and the Raspberry Pi was...chosen as that representative platform. ELIDe was successfully tested on a Raspberry Pi; its throughput was benchmarked at approximately 8.3 megabits
The influence of point defects on the thermal conductivity of AlN crystals
NASA Astrophysics Data System (ADS)
Rounds, Robert; Sarkar, Biplab; Alden, Dorian; Guo, Qiang; Klump, Andrew; Hartmann, Carsten; Nagashima, Toru; Kirste, Ronny; Franke, Alexander; Bickermann, Matthias; Kumagai, Yoshinao; Sitar, Zlatko; Collazo, Ramón
2018-05-01
The average bulk thermal conductivity of free-standing physical vapor transport and hydride vapor phase epitaxy single crystal AlN samples with different impurity concentrations is analyzed using the 3ω method in the temperature range of 30-325 K. AlN wafers grown by physical vapor transport show significant variation in thermal conductivity at room temperature with values ranging between 268 W/m K and 339 W/m K. AlN crystals grown by hydride vapor phase epitaxy yield values between 298 W/m K and 341 W/m K at room temperature, suggesting that the same fundamental mechanisms limit the thermal conductivity of AlN grown by both techniques. All samples in this work show phonon resonance behavior resulting from incorporated point defects. Samples shown by optical analysis to contain carbon-silicon complexes exhibit higher thermal conductivity above 100 K. Phonon scattering by point defects is determined to be the main limiting factor for thermal conductivity of AlN within the investigated temperature range.
Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P
1983-08-01
Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases.
Hallén, Jonas; Jensen, Jesper K; Fagerland, Morten W; Jaffe, Allan S; Atar, Dan
2010-12-01
To investigate the ability of cardiac troponin I (cTnI) to predict functional recovery and left ventricular remodelling following primary percutaneous coronary intervention (pPCI) in ST-elevation myocardial infarction (STEMI). A post hoc study extending from a randomised controlled trial. 132 patients with STEMI receiving pPCI. Left ventricular ejection fraction (LVEF), end-diastolic and end-systolic volume index (EDVI and ESVI) and changes in these parameters from day 5 to 4 months after the index event. Cardiac magnetic resonance examination was performed at 5 days and 4 months for evaluation of LVEF, EDVI and ESVI. cTnI was sampled at 24 and 48 h. In linear regression models adjusted for early (5 days) assessment of LVEF, ESVI and EDVI, single-point cTnI values at either 24 or 48 h were independent and strong predictors of changes in LVEF (p<0.01), EDVI (p<0.01) and ESVI (p<0.01) during the follow-up period. In a logistic regression analysis for prediction of an LVEF below 40% at 4 months, single-point cTnI significantly improved the prognostic strength of the model (area under the curve = 0.94, p<0.01) in comparison with the combination of clinical variables and LVEF at 5 days. Single-point sampling of cTnI after pPCI for STEMI provides important prognostic information on the time-dependent evolution of left ventricular function and volumes.
A Portable Solid-State Moisture Meter For Agricultural And Food Products
NASA Astrophysics Data System (ADS)
Bull, C. R.; Stafford, J. V.; Weaving, G. S.
1988-10-01
This paper reports on the development of a small, robust, battery operated near infra-red (NIR) reflectance device, designed for rapid on-farm measurement of the moisture content of forage crops without prior sample preparation. It has potential application to other agricultural or food materials. The instrument is based on two light emitting diodes (LEDs), a germanium detector and a CMOS single-chip microcomputer for control. The meter has been calibrated to give a direct readout of moisture content for 4 common grass varieties at 3 stages of development. The accuracy of a single point measurement on a grass sample is approximately +/- 6% over a range of 40-80% (wet basis). However, the potential accuracy on a homogeneous sample may be as good as 0.15%.
Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.
Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P
2016-04-01
Reduce animal usage for discovery-stage PK studies for biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated comparable data to a previous study using single mice per time point liquid samples while reducing animal and compound requirements by 14-fold. Reduction in animals and drug material is enabled by the use of automated serial DBS microsampling for mice studies in discovery-stage studies of protein therapeutics.
CePt2In7: Shubnikov-de Haas measurements on micro-structured samples under high pressures
NASA Astrophysics Data System (ADS)
Kanter, J.; Moll, P.; Friedemann, S.; Alireza, P.; Sutherland, M.; Goh, S.; Ronning, F.; Bauer, E. D.; Batlogg, B.
2014-03-01
CePt2In7 belongs to the CemMnIn3 m + 2 n heavy fermion family, but compared to the Ce MIn5 members of this group, exhibits a more two dimensional electronic structure. At zero pressure the ground state is antiferromagnetically ordered. Under pressure the antiferromagnetic order is suppressed and a superconducting phase is induced, with a maximum Tc above a quantum critical point around 31 kbar. To investigate the changes in the Fermi Surface and effective electron masses around the quantum critical point, Shubnikov-de Haas measurements were conducted under high pressures in an anvil cell. The samples were micro-structured and contacted using a Focused Ion Beam (FIB). The Focused Ion Beam enables sample contacting and structuring down to a sub-micrometer scale, making the measurement of several samples with complex shapes and multiple contacts on a single anvil feasible.
Dan, Haruka; Azuma, Teruaki; Hayakawa, Fumiyo; Kohyama, Kaoru
2005-05-01
This study was designed to examine human subjects' ability to discriminate between spatially different bite pressures. We measured actual bite pressure distribution when subjects simultaneously bit two silicone rubber samples with different hardnesses using their right and left incisors. They were instructed to compare the hardness of these two rubber samples and indicate which was harder (right or left). The correct-answer rates were statistically significant at P < 0.05 for all pairs of different right and left silicone rubber hardnesses. Simultaneous bite measurements using a multiple-point sheet sensor demonstrated that the bite force, active pressure and maximum pressure point were greater for the harder silicone rubber sample. The difference between the left and right was statistically significant (P < 0.05) for all pairs with different silicone rubber hardnesses. We demonstrated for the first time that subjects could perceive and discriminate between spatially different bite pressures during a single bite with incisors. Differences of the bite force, pressure and the maximum pressure point between the right and left silicone samples should be sensory cues for spatial hardness discrimination.
Barnard, P.L.; Hubbard, D.M.; Dugan, J.E.
2012-01-01
A 17-year time series of near-daily sand thickness measurements at a single intertidal location was compared with 5 years of semi-annual 3-dimensional beach surveys at the same beach, and at two other beaches within the same littoral cell. The daily single point measurements correlated extremely well with the mean beach elevation and shoreline position of ten high-spatial-resolution beach surveys. Correlations were statistically significant at all spatial scales, even for beach surveys 10s of kilometers downcoast, and therefore variability at the single point monitoring site was representative of regional coastal behavior, allowing us to examine nearly two decades of continuous coastal evolution. The annual cycle of beach oscillations dominated the signal, typical of this region, with additional, less intense spectral peaks associated with seasonal wave energy fluctuations (~45 to 90 days), as well as full lunar (~29 days) and semi-lunar (~13 days; spring-neap cycle) tidal cycles. Sand thickness variability was statistically linked to wave energy with a 2-month peak lag, as well as the average of the previous 7-8 months of wave energy. Longer term anomalies in sand thickness were also apparent on time scales up to 15 months. Our analyses suggest that spatially-limited morphological data sets can be extremely valuable (with robust validation) for understanding the details of beach response to wave energy over timescales that are not resolved by typical survey intervals, as well as the regional behavior of coastal systems. © 2011.
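As a simple illustration of how such a lagged relation could be probed, the sketch below scans correlation coefficients over candidate lags; it assumes gap-free, equally spaced daily series and is not the spectral or statistical analysis actually performed in the study.

```python
import numpy as np

def lagged_correlation(sand_thickness, wave_energy, max_lag_days=120):
    """Correlate a daily sand-thickness series with wave energy shifted by
    0..max_lag_days days and return the lag giving the strongest correlation."""
    s = np.asarray(sand_thickness, dtype=float)
    w = np.asarray(wave_energy, dtype=float)
    best_lag, best_r = 0, 0.0
    for lag in range(max_lag_days + 1):
        r = np.corrcoef(s[lag:], w[:len(w) - lag])[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    return best_lag, best_r   # (lag in days, correlation coefficient)
```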
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordienko, A V; Mavritskii, O B; Egorov, A N
2014-12-31
The statistics of the ionisation response amplitude measured at selected points and their surroundings within sensitive regions of integrated circuits (ICs) under focused femtosecond laser irradiation is obtained for samples chosen from large batches of two types of ICs. A correlation between these data and the results of full-chip scanning is found for each type. The criteria for express validation of IC single-event effect (SEE) hardness based on ionisation response measurements at selected points are discussed. (laser applications and other topics in quantum electronics)
NASA Technical Reports Server (NTRS)
Morris, R. V.; Gibbons, R. V.; Hoerz, F.
1975-01-01
Using a recently developed furnace, ferromagnetic resonance (FMR) thermomagnetic studies up to 900 C were employed to measure the Curie points of the superparamagnetic (SP) and single domain (SD) particles in lunar soils and potential magnetic analogue materials. Based on measured Curie points of 775 C, the SP and SD particles in lunar soils 10084-853, 12070-29, 14161-46, and 67010-4 are essentially pure metallic Fe. Synthetic and terrestrial samples containing magnetite, titanomaghemites, and magnetite-like particles have measured Curie points below 600 C and are thus not magnetic analogues of lunar soils.
Cettomai, Deanna; Kwasa, Judith; Kendi, Caroline; Birbeck, Gretchen L; Price, Richard W; Bukusi, Elizabeth A; Cohen, Craig R; Meyer, Ana-Claire
2010-12-08
Neuropathy is the most common neurologic complication of HIV but is widely under-diagnosed in resource-constrained settings. We aimed to identify tools that accurately distinguish individuals with moderate/severe peripheral neuropathy and can be administered by non-physician healthcare workers (HCW) in resource-constrained settings. We enrolled a convenience sample of 30 HIV-infected outpatients from a Kenyan HIV-care clinic. A HCW administered the Neuropathy Severity Score (NSS), Single Question Neuropathy Screen (Single-QNS), Subjective Peripheral Neuropathy Screen (Subjective-PNS), and Brief Peripheral Neuropathy Screen (Brief-PNS). Monofilament, graduated tuning fork, and two-point discrimination examinations were performed. Tools were validated against a neurologist's clinical assessment of moderate/severe neuropathy. The sample was 57% male, mean age 38.6 years, and mean CD4 count 324 cells/µL. Neurologist's assessment identified 20% (6/30) with moderate/severe neuropathy. Diagnostic utilities for moderate/severe neuropathy were: Single-QNS--83% sensitivity, 71% specificity; Subjective-PNS-total--83% sensitivity, 83% specificity; Subjective-PNS-max and NSS--67% sensitivity, 92% specificity; Brief-PNS--0% sensitivity, 92% specificity; monofilament--100% sensitivity, 88% specificity; graduated tuning fork--83% sensitivity, 88% specificity; two-point discrimination--75% sensitivity, 58% specificity. Pilot testing suggests Single-QNS, Subjective-PNS, and monofilament examination accurately identify HIV-infected patients with moderate/severe neuropathy and may be useful diagnostic tools in resource-constrained settings.
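The diagnostic utilities above reduce to sensitivity and specificity computed against the neurologist's assessment; a minimal sketch of that calculation, with hypothetical boolean inputs, is shown below.

```python
def sensitivity_specificity(test_positive, reference_positive):
    """Compute sensitivity and specificity of a screening tool against the
    reference assessment (both inputs are parallel lists of booleans)."""
    pairs = list(zip(test_positive, reference_positive))
    tp = sum(t and r for t, r in pairs)          # screen positive, reference positive
    fn = sum((not t) and r for t, r in pairs)    # screen negative, reference positive
    tn = sum((not t) and (not r) for t, r in pairs)
    fp = sum(t and (not r) for t, r in pairs)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```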
Quantum Point Contact Single-Nucleotide Conductance for DNA and RNA Sequence Identification.
Afsari, Sepideh; Korshoj, Lee E; Abel, Gary R; Khan, Sajida; Chatterjee, Anushree; Nagpal, Prashant
2017-11-28
Several nanoscale electronic methods have been proposed for high-throughput single-molecule nucleic acid sequence identification. While many studies display a large ensemble of measurements as "electronic fingerprints" with some promise for distinguishing the DNA and RNA nucleobases (adenine, guanine, cytosine, thymine, and uracil), important metrics such as accuracy and confidence of base calling fall well below the current genomic methods. Issues such as unreliable metal-molecule junction formation, variation of nucleotide conformations, insufficient differences between the molecular orbitals responsible for single-nucleotide conduction, and lack of rigorous base calling algorithms lead to overlapping nanoelectronic measurements and poor nucleotide discrimination, especially at low coverage on single molecules. Here, we demonstrate a technique for reproducible conductance measurements on conformation-constrained single nucleotides and an advanced algorithmic approach for distinguishing the nucleobases. Our quantum point contact single-nucleotide conductance sequencing (QPICS) method uses combed and electrostatically bound single DNA and RNA nucleotides on a self-assembled monolayer of cysteamine molecules. We demonstrate that by varying the applied bias and pH conditions, molecular conductance can be switched ON and OFF, leading to reversible nucleotide perturbation for electronic recognition (NPER). We utilize NPER as a method to achieve >99.7% accuracy for DNA and RNA base calling at low molecular coverage (∼12×) using unbiased single measurements on DNA/RNA nucleotides, which represents a significant advance compared to existing sequencing methods. These results demonstrate the potential for utilizing simple surface modifications and existing biochemical moieties in individual nucleobases for a reliable, direct, single-molecule, nanoelectronic DNA and RNA nucleotide identification method for sequencing.
Determination of piezo-optic coefficients of crystals by means of four-point bending.
Krupych, Oleg; Savaryn, Viktoriya; Krupych, Andriy; Klymiv, Ivan; Vlokh, Rostyslav
2013-06-10
A technique developed recently for determining piezo-optic coefficients (POCs) of isotropic optical media, which represents a combination of digital imaging laser interferometry and a classical four-point bending method, is generalized and applied to a single-crystalline anisotropic material. The peculiarities of measuring procedures and data processing for the case of optically uniaxial crystals are described in detail. The capabilities of the technique are tested on the example of canonical nonlinear optical crystal LiNbO3. The high precision achieved in determination of the POCs for isotropic and anisotropic materials testifies that the technique should be both versatile and reliable.
Vibrations and structureborne noise in space station
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1985-01-01
Theoretical models were developed capable of predicting structural response and noise transmission to random point mechanical loads. Fiber reinforced composite and aluminum materials were considered. Cylindrical shells and circular plates were taken as typical representatives of structural components for space station habitability modules. Analytical formulations include double wall and single wall constructions. Pressurized and unpressurized models were considered. Parametric studies were conducted to determine the effect on structural response and noise transmission due to fiber orientation, point load location, damping in the core and the main load carrying structure, pressurization, interior acoustic absorption, etc. These analytical models could serve as preliminary tools for assessing noise related problems, for space station applications.
Holographic non-Fermi-liquid fixed points.
Faulkner, Tom; Iqbal, Nabil; Liu, Hong; McGreevy, John; Vegh, David
2011-04-28
Techniques arising from string theory can be used to study assemblies of strongly interacting fermions. Via this 'holographic duality', various strongly coupled many-body systems are solved using an auxiliary theory of gravity. Simple holographic realizations of finite density exhibit single-particle spectral functions with sharp Fermi surfaces, of a form distinct from those of the Landau theory. The self-energy is given by a correlation function in an infrared (IR) fixed-point theory that is represented by a two-dimensional anti de Sitter space (AdS(2)) region in the dual gravitational description. Here, we describe in detail the gravity calculation of this IR correlation function.
Rectangular Array Of Digital Processors For Planning Paths
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.; Fossum, Eric R.; Nixon, Robert H.
1993-01-01
Prototype 24 x 25 rectangular array of asynchronous parallel digital processors rapidly finds best path across two-dimensional field, which could be patch of terrain traversed by robotic or military vehicle. Implemented as single-chip very-large-scale integrated circuit. Excepting processors on edges, each processor communicates with four nearest neighbors along paths representing travel to north, south, east, and west. Each processor contains delay generator in form of 8-bit ripple counter, preset to 1 of 256 possible values. Operation begins with choice of processor representing starting point. It transmits signals to nearest-neighbor processors, which retransmit them to other neighboring processors, and the process repeats until signals have propagated across entire field.
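The delay-counter wavefront scheme lends itself to a compact software analogue. The following Python sketch (an illustration of the principle, not the chip's actual logic; the grid values and function name are invented) propagates arrival times across a field of per-cell delays the way a signal spreads through the processor array, so the arrival time at each cell equals the cost of the best path from the starting point.

```python
import heapq

def wavefront_arrival_times(delays, start):
    """Propagate a signal from `start` across a grid of per-cell delays.

    Each cell re-transmits to its four neighbours after waiting its own
    preset delay, so the arrival time at every cell equals the cost of the
    cheapest path from the start, which is what the processor array
    computes in parallel.
    """
    rows, cols = len(delays), len(delays[0])
    arrival = [[float("inf")] * cols for _ in range(rows)]
    arrival[start[0]][start[1]] = 0
    queue = [(0, start)]
    while queue:
        t, (r, c) = heapq.heappop(queue)
        if t > arrival[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # N, S, E, W
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + delays[nr][nc]  # neighbour fires after its own delay
                if nt < arrival[nr][nc]:
                    arrival[nr][nc] = nt
                    heapq.heappush(queue, (nt, (nr, nc)))
    return arrival

# Example: 4 x 4 field of traversal costs (1-256 in the hardware)
field = [[1, 3, 1, 1],
         [1, 9, 9, 1],
         [1, 1, 1, 1],
         [5, 5, 1, 1]]
print(wavefront_arrival_times(field, (0, 0)))
```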
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.
NASA Technical Reports Server (NTRS)
Warsi, Saif A.
1989-01-01
A detailed operating manual is presented for a grid generating program that produces 3-D meshes for advanced turboprops. The code uses both algebraic and elliptic partial differential equation methods to generate single rotation and counterrotation, H or C type meshes for the z - r planes and H type for the z - theta planes. The code allows easy specification of geometrical constraints (such as blade angle, location of bounding surfaces, etc.) and mesh control parameters (point distribution near blades and nacelle, number of grid points desired, etc.), and it has good runtime diagnostics. An overview of the mesh generation procedure, a sample input dataset with a detailed explanation of all inputs, and example meshes are provided.
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Noll, James; Gilles, Stewart; Wu, Hsin Wei; Rubinstein, Elaine
2015-01-01
In the United States, total carbon (TC) is used as a surrogate for determining diesel particulate matter (DPM) compliance exposures in underground metal/nonmetal mines. Since TC can be affected by interferences and elemental carbon (EC) is not, one method used to estimate the TC concentration is to multiply the EC concentration from the personal sample by a conversion factor to avoid the influence of potential interferences. Since there is no accepted single conversion factor for all metal/nonmetal mines, one is determined every time an exposure sample is taken by collecting an area sample that represents the TC/EC ratio in the miner's breathing zone and is away from potential interferences. As an alternative to this procedure, this article investigates the relationship between TC and EC from DPM samples to determine if a single conversion factor can be used for all metal/nonmetal mines. In addition, this article also investigates how well EC represents DPM concentrations in Australian coal mines since the recommended exposure limit for DPM in Australia is an EC value. When TC was predicted from EC values using a single conversion factor of 1.27 in 14 US metal/nonmetal mines, 95% of the predicted values were within 18% of the measured value, even at the permissible exposure limit (PEL) concentration of 160 μg/m3 TC. A strong correlation between TC and EC was also found in nine underground coal mines in Australia. PMID:25380085
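As a worked illustration of the single-conversion-factor idea, the estimate reduces to one multiplication; the factor 1.27 is taken from the abstract above, while the sample EC value below is hypothetical.

```python
EC_PERSONAL_SAMPLE = 126.0  # μg/m3 elemental carbon, hypothetical personal sample
CONVERSION_FACTOR = 1.27    # TC/EC ratio reported for the 14 US metal/nonmetal mines

tc_estimate = CONVERSION_FACTOR * EC_PERSONAL_SAMPLE
print(f"Estimated total carbon: {tc_estimate:.0f} μg/m3 "
      f"(PEL = 160 μg/m3 TC -> {'over' if tc_estimate > 160 else 'under'} the limit)")
```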
VIS/NIR mapping of TOC and extent of organic soils in the Nørre Å valley
NASA Astrophysics Data System (ADS)
Knadel, M.; Greve, M. H.; Thomsen, A.
2009-04-01
Organic soils represent a substantial pool of carbon in Denmark. The need for carbon stock assessment calls for more rapid and effective mapping methods to be developed. The aim of this study was to compare traditional soil mapping with maps produced from the results of a mobile VIS/NIR system and to evaluate the ability to estimate TOC and map the area of organic soils. The Veris mobile VIS/NIR spectroscopy system was compared to traditional manual sampling. The system was developed for in-situ near-surface measurements of soil carbon content. It measures diffuse reflectance in the 350 nm-2200 nm region. The system consists of two spectrophotometers mounted on a toolbar and pulled by a tractor. Optical measurements are made through a sapphire window at the bottom of the shank. The shank was pulled at a depth of 5-7 cm at a speed of 4-5 km/hr. 20-25 spectra per second with 8 nm resolution were acquired by the spectrometers. Measurements were made on 10-12 m spaced transects. The system also acquired soil electrical conductivity (EC) for two soil depths: shallow EC-SH (0-31 cm) and deep conductivity EC-DP (0-91 cm). The conductivity was recorded together with GPS coordinates and spectral data for further construction of the calibration models. Two maps of organic soils in the Nørre Å valley (Central Jutland) were generated: (i) based on a conventional 25 m grid with 162 sampling points and laboratory analysis of TOC, (ii) based on in-situ VIS/NIR measurements supported by chemometrics. Before regression analysis, spectral information was compressed by calculating principal components. The outliers were determined by a Mahalanobis distance equation and removed. Clustering using a fuzzy c-means algorithm was conducted. Within each cluster a location with the minimal spatial variability was selected. A map of 15 representative sample locations was proposed. The interpolation of the spectra into a single spectrum was performed using a Gaussian kernel weighting function. Spectra obtained near a sampled location were averaged. The collected spectra were correlated to TOC of the 15 representative samples using multivariate regression techniques (Unscrambler 9.7; Camo ASA, Oslo, Norway). Two types of calibrations were performed: using only spectra and using spectra together with the auxiliary data (EC-SH and EC-DP). These calibration equations were computed using PLS regression with a segmented cross-validation method on centred data (using the raw spectral data, log 1/R). Six different spectra pre-treatments were conducted: (1) only spectra, (2) Savitzky-Golay smoothing over 11 wavelength points and transformation to a (3) 1st and (4) 2nd Savitzky-Golay derivative algorithm with a derivative interval of 21 wavelength points, (5) with or (6) without smoothing. The best treatment was considered to be the one with the lowest Root Mean Square Error of Prediction (RMSEP), the highest r2 between the VIS/NIR-predicted and measured values in the calibration model and the lowest mean deviation of predicted TOC values. The best calibration model was obtained with the mathematical pre-treatments including smoothing, calculating the 2nd derivative and outlier removal. The two TOC maps were compared after interpolation using kriging. They showed a similar pattern in the TOC distribution. Despite the unfavourable field conditions the VIS/NIR system performed well in both low and high TOC areas.
Water content in places exceeding field capacity in the lower parts of the investigated field did not seriously degrade measurements. The present study represents the first attempt to apply the mobile Veris VIS/NIR system to the mapping of TOC of peat soils in Denmark. The results from this study show that a mobile VIS/NIR system can be applied to cost-effective TOC mapping of mineral and organic soils with highly varying water content. Key words: VIS/NIR spectroscopy, organic soils, TOC
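A minimal sketch of the pre-treatment plus PLS calibration workflow described above, assuming a spectra matrix X (rows are the averaged spectra at the representative locations) and measured TOC values; the data are synthetic placeholders, and the window sizes and component count are illustrative rather than the study's exact settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: (n_samples, n_wavelengths) log(1/R) spectra; toc: measured TOC (placeholder data)
rng = np.random.default_rng(0)
X = rng.random((15, 230))          # ~230 wavelengths, e.g. 350-2200 nm at 8 nm steps
toc = rng.random(15) * 40          # hypothetical % TOC values

# Pre-treatment: Savitzky-Golay smoothing, then 2nd derivative over 21 points
X_sm = savgol_filter(X, window_length=11, polyorder=2, axis=1)
X_d2 = savgol_filter(X_sm, window_length=21, polyorder=2, deriv=2, axis=1)

# PLS calibration with segmented (k-fold) cross-validation
pls = PLSRegression(n_components=5)
toc_cv = cross_val_predict(pls, X_d2, toc, cv=5).ravel()

rmsep = np.sqrt(np.mean((toc_cv - toc) ** 2))
r2 = np.corrcoef(toc_cv, toc)[0, 1] ** 2
print(f"RMSEP = {rmsep:.2f}, r2 = {r2:.2f}")
```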
2010-04-13
AYMAN GIRGIS (EM10 MATERIALS TEST ENGINEER, JACOBS ESTS GROUP/JTI) ADJUSTS DUAL LENSES FOR A UNIQUE MECHANICAL TEST SETUP THAT MEASURES STRAIN ON A SINGLE SAMPLE, USING TWO DIFFERENT TECHNIQUES AT THE SAME TIME. THE TEST FIXTURE HOLDS A SPECIMEN THAT REPRESENTS A LIQUID OXYGEN (LOX) BEARING FROM THE J2-X ENGINE
2010-04-13
ERIC EARHART (AEROSPACE ENGINEER, ER41 PROPULSION STRUCTURAL & DYNAMICS ANALYSIS BRANCH) DISCUSSES DATA PRODUCED BY A UNIQUE MECHANICAL TEST SETUP THAT MEASURES STRAIN ON A SINGLE SAMPLE, USING TWO DIFFERENT TECHNIQUES AT THE SAME TIME. THE TEST FIXTURE HOLDS A SPECIMEN THAT REPRESENTS A LIQUID OXYGEN (LOX) BEARING FROM THE J2-X ENGINE
LONG-TERM ECOLOGICAL STUDY IN THE OAK RIDGE AREA. III. THE ORIBATID MITE FAUNA IN PINE LITTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossley, D.A. Jr.; Bohnsack, K.K.
1960-10-01
From 215 samples of pine forest litter taken in the Oak Ridge area during the summer of 1956, a total of 30,371 arthropods was obtained. Of this number, mites comprised 82.9%, collembolans 12.2%, and other insects 3.6%. Particular attention was directed toward the oribatid mites. The samples were taken from 3 similar areas, each of which was represented by 3 stations. Three samples were taken from each station each week for a period of 8 weeks during July and August. Thus a hierarchic measure of the variation in abundance was obtained for the more abundant oribatid species. Results of analysis of variance tests showed that significant variation occurs both between stations within areas and between the areas, for certain of the abundant oribatid mites. Fager's (1957) trellis diagram method was used to detect significant joint occurrences between species; only a single recurrent group of species was found. Although the 3 study areas had different relative abundances of certain of the more numerous species, it was concluded that the differences between areas represent local variations in a single oribatid fauna, rather than elements of 2 or more faunas. (auth)
A geostatistical approach to predicting sulfur content in the Pittsburgh coal bed
Watson, W.D.; Ruppert, L.F.; Bragg, L.J.; Tewalt, S.J.
2001-01-01
The US Geological Survey (USGS) is completing a national assessment of coal resources in the five top coal-producing regions in the US. Point-located data provide measurements on coal thickness and sulfur content. The sample data and their geologic interpretation represent the most regionally complete and up-to-date assessment of what is known about top-producing US coal beds. The sample data are analyzed using a combination of geologic and Geographic Information System (GIS) models to estimate tonnages and qualities of the coal beds. Traditionally, GIS practitioners use contouring to represent geographical patterns of "similar" data values. The tonnage and grade of coal resources are then assessed by using the contour lines as references for interpolation. An assessment taken to this point is only indicative of resource quantity and quality. Data users may benefit from a statistical approach that would allow them to better understand the uncertainty and limitations of the sample data. To develop a quantitative approach, geostatistics were applied to the data on coal sulfur content from samples taken in the Pittsburgh coal bed (located in the eastern US, in the southwestern part of the state of Pennsylvania, and in adjoining areas in the states of Ohio and West Virginia). Geostatistical methods that account for regional and local trends were applied to blocks 2.7 mi (4.3 km) on a side. The data and geostatistics support conclusions concerning the average sulfur content and its degree of reliability at regional- and economic-block scale over the large, contiguous part of the Pittsburgh outcrop, but not to a mine scale. To validate the method, a comparison was made with the sulfur contents in sample data taken from 53 coal mines located in the study area. The comparison showed a high degree of similarity between the sulfur content in the mine samples and the sulfur content represented by the geostatistically derived contours. Published by Elsevier Science B.V.
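The block-kriging idea can be illustrated on toy data. The sketch below uses the third-party pykrige package (an assumption about tooling, not part of the USGS workflow), and the coordinates and sulfur values are invented; the kriging variance plays the role of the reliability surface discussed above.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical point-located coal-quality data: easting, northing (km) and % sulfur
x = np.array([0.0, 3.1, 6.4, 9.8, 2.2, 7.5])
y = np.array([0.0, 4.0, 1.5, 8.2, 6.9, 3.3])
sulfur = np.array([1.8, 2.4, 1.2, 3.1, 2.0, 1.5])

ok = OrdinaryKriging(x, y, sulfur, variogram_model="spherical")

# Estimate averages on a coarse grid (~4.3 km spacing, as in the block size above)
gridx = np.arange(0.0, 12.0, 4.3)
gridy = np.arange(0.0, 12.0, 4.3)
estimate, variance = ok.execute("grid", gridx, gridy)

print("kriged % sulfur:\n", np.round(estimate, 2))
print("kriging variance (reliability map):\n", np.round(variance, 2))
```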
Tong, Qing; Chen, Baorong; Zhang, Rui; Zuo, Chang
Variation in clinical enzyme analysis, particularly across different measuring systems and laboratories, represents a critical but long-standing problem in diagnosis. Calibrators with traceability and commutability are urgently needed to harmonize analysis in laboratory medicine. Fresh frozen human serum pools were assigned values for alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyltransferase (GGT), creatine kinase (CK) and lactate dehydrogenase (LDH) by six laboratories with established International Federation of Clinical Chemistry and Laboratory Medicine reference measurement procedures. These serum pools were then used across 76 laboratories as a calibrator in the analysis of five enzymes. Bias and imprecision in the measurement of the five enzymes tested were significantly reduced by using the value-assigned serum in analytical systems with open and single-point calibration. The median (interquartile range) of the relative biases of ALT, AST, GGT, CK and LDH were 2.0% (0.6-3.4%), 0.8% (-0.8-2.3%), 1.0% (-0.5-2.0%), 0.2% (-0.3-1.0%) and 0.2% (-0.9-1.1%), respectively. Before calibration, the interlaboratory coefficients of variation (CVs) in the analysis of patient serum samples were 8.0-8.2%, 7.3-8.5%, 8.1-8.7%, 5.1-5.9% and 5.8-6.4% for ALT, AST, GGT, CK and LDH, respectively; after calibration, the CVs decreased to 2.7-3.3%, 3.0-3.6%, 1.6-2.1%, 1.8-1.9% and 3.3-3.5%, respectively. The results suggest that the use of fresh frozen serum pools significantly improved the comparability of test results in analytical systems with open and single-point calibration.
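A compact numeric illustration of open, single-point recalibration against a value-assigned serum pool (all numbers hypothetical): the calibration factor is the assigned value divided by the routine system's reading of the calibrator, and patient results are scaled by that factor.

```python
# Hypothetical ALT example (U/L)
assigned_value = 100.0          # reference-procedure value of the serum-pool calibrator
measured_calibrator = 108.0     # what a routine analyzer reports for the same pool
factor = assigned_value / measured_calibrator

patient_results = [35.0, 62.0, 180.0]
recalibrated = [factor * r for r in patient_results]

bias_before = (measured_calibrator - assigned_value) / assigned_value
print(f"calibration factor = {factor:.3f}, relative bias before = {bias_before:+.1%}")
print("recalibrated patient results:", [round(v, 1) for v in recalibrated])
```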
NASA Astrophysics Data System (ADS)
Aminzadeh, Milad; Breitenstein, Daniel; Or, Dani
2017-12-01
The intermittent nature of turbulent airflow interacting with the surface is readily observable in fluctuations of the surface temperature resulting from the thermal imprints of eddies sweeping the surface. Rapid infrared thermography has recently been used to quantify characteristics of the near-surface turbulent airflow interacting with the evaporating surfaces. We aim to extend this technique by using single-point rapid infrared measurements to quantify properties of a turbulent flow, including surface exchange processes, with a view towards the development of an infrared surface anemometer. The parameters for the surface-eddy renewal (α and β) are inferred from infrared measurements of a single point on the surface of a heat plate placed in a wind tunnel with prescribed wind speeds and constant mean temperatures of the surface. Thermally-deduced parameters are in agreement with values obtained from standard three-dimensional ultrasonic anemometer measurements close to the plate surface (e.g., α = 3 and β = 1/26 (ms)^{-1} for the infrared, and α = 3 and β = 1/19 (ms)^{-1} for the sonic-anemometer measurements). The infrared-based turbulence parameters provide new insights into the role of surface temperature and buoyancy on the inherent characteristics of interacting eddies. The link between the eddy-spectrum shape parameter α and the infrared window size representing the infrared field of view is investigated. The results resemble the effect of the sampling height above the ground in sonic anemometer measurements, which enables the detection of larger eddies with higher values of α. The physical basis and tests of the proposed method support the potential for remote quantification of the near-surface momentum field, as well as scalar-flux measurements in the immediate vicinity of the surface.
Peláez-Fernández, María Angeles; Ruiz-Lázaro, Pedro Manuel; Labrador, Francisco Javier; Raich, Rosa María
2014-02-20
To validate the best cut-off point of the Eating Attitudes Test (EAT-40), Spanish version, for the screening of eating disorders (ED) in the general population. This was a cross-sectional study. The EAT-40 Spanish version was administered to a representative sample of 1,543 students, age range 12 to 21 years, in the Region of Madrid. Six hundred and two participants (probable cases and a random sample of controls) were interviewed. The best diagnostic prediction was obtained with a cut-off point of 21, with sensitivity: 88.2%; specificity: 62.1%; positive predictive value: 17.7%; negative predictive value: 62.1%. Use of a cut-off point of 21 is recommended in epidemiological studies of eating disorders in the Spanish general population. Copyright © 2012 Elsevier España, S.L. All rights reserved.
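For readers who want to reproduce this kind of cut-off evaluation, the sketch below computes the four reported screening metrics from a 2x2 table; the counts are hypothetical, not the study's data.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# Hypothetical counts for scores >= 21 vs. interview diagnosis (invented numbers)
sens, spec, ppv, npv = screening_metrics(tp=30, fp=140, fn=4, tn=230)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```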
Early Handedness in Infancy Predicts Language Ability in Toddlers
ERIC Educational Resources Information Center
Nelson, Eliza L.; Campbell, Julie M.; Michel, George F.
2014-01-01
Researchers have long been interested in the relationship between handedness and language in development. However, traditional handedness studies using single age groups, small samples, or too few measurement time points have not capitalized on individual variability and may have masked 2 recently identified patterns in infants: those with a…
FANTOM5 CAGE profiles of human and mouse samples.
Noguchi, Shuhei; Arakawa, Takahiro; Fukuda, Shiro; Furuno, Masaaki; Hasegawa, Akira; Hori, Fumi; Ishikawa-Kato, Sachi; Kaida, Kaoru; Kaiho, Ai; Kanamori-Katayama, Mutsumi; Kawashima, Tsugumi; Kojima, Miki; Kubosaki, Atsutaka; Manabe, Ri-Ichiroh; Murata, Mitsuyoshi; Nagao-Sato, Sayaka; Nakazato, Kenichi; Ninomiya, Noriko; Nishiyori-Sueki, Hiromi; Noma, Shohei; Saijyo, Eri; Saka, Akiko; Sakai, Mizuho; Simon, Christophe; Suzuki, Naoko; Tagami, Michihira; Watanabe, Shoko; Yoshida, Shigehiro; Arner, Peter; Axton, Richard A; Babina, Magda; Baillie, J Kenneth; Barnett, Timothy C; Beckhouse, Anthony G; Blumenthal, Antje; Bodega, Beatrice; Bonetti, Alessandro; Briggs, James; Brombacher, Frank; Carlisle, Ailsa J; Clevers, Hans C; Davis, Carrie A; Detmar, Michael; Dohi, Taeko; Edge, Albert S B; Edinger, Matthias; Ehrlund, Anna; Ekwall, Karl; Endoh, Mitsuhiro; Enomoto, Hideki; Eslami, Afsaneh; Fagiolini, Michela; Fairbairn, Lynsey; Farach-Carson, Mary C; Faulkner, Geoffrey J; Ferrai, Carmelo; Fisher, Malcolm E; Forrester, Lesley M; Fujita, Rie; Furusawa, Jun-Ichi; Geijtenbeek, Teunis B; Gingeras, Thomas; Goldowitz, Daniel; Guhl, Sven; Guler, Reto; Gustincich, Stefano; Ha, Thomas J; Hamaguchi, Masahide; Hara, Mitsuko; Hasegawa, Yuki; Herlyn, Meenhard; Heutink, Peter; Hitchens, Kelly J; Hume, David A; Ikawa, Tomokatsu; Ishizu, Yuri; Kai, Chieko; Kawamoto, Hiroshi; Kawamura, Yuki I; Kempfle, Judith S; Kenna, Tony J; Kere, Juha; Khachigian, Levon M; Kitamura, Toshio; Klein, Sarah; Klinken, S Peter; Knox, Alan J; Kojima, Soichi; Koseki, Haruhiko; Koyasu, Shigeo; Lee, Weonju; Lennartsson, Andreas; Mackay-Sim, Alan; Mejhert, Niklas; Mizuno, Yosuke; Morikawa, Hiromasa; Morimoto, Mitsuru; Moro, Kazuyo; Morris, Kelly J; Motohashi, Hozumi; Mummery, Christine L; Nakachi, Yutaka; Nakahara, Fumio; Nakamura, Toshiyuki; Nakamura, Yukio; Nozaki, Tadasuke; Ogishima, Soichi; Ohkura, Naganari; Ohno, Hiroshi; Ohshima, Mitsuhiro; Okada-Hatakeyama, Mariko; Okazaki, Yasushi; Orlando, Valerio; Ovchinnikov, Dmitry A; Passier, Robert; Patrikakis, Margaret; Pombo, Ana; Pradhan-Bhatt, Swati; Qin, Xian-Yang; Rehli, Michael; Rizzu, Patrizia; Roy, Sugata; Sajantila, Antti; Sakaguchi, Shimon; Sato, Hiroki; Satoh, Hironori; Savvi, Suzana; Saxena, Alka; Schmidl, Christian; Schneider, Claudio; Schulze-Tanzil, Gundula G; Schwegmann, Anita; Sheng, Guojun; Shin, Jay W; Sugiyama, Daisuke; Sugiyama, Takaaki; Summers, Kim M; Takahashi, Naoko; Takai, Jun; Tanaka, Hiroshi; Tatsukawa, Hideki; Tomoiu, Andru; Toyoda, Hiroo; van de Wetering, Marc; van den Berg, Linda M; Verardo, Roberto; Vijayan, Dipti; Wells, Christine A; Winteringham, Louise N; Wolvetang, Ernst; Yamaguchi, Yoko; Yamamoto, Masayuki; Yanagi-Mizuochi, Chiyo; Yoneda, Misako; Yonekura, Yohei; Zhang, Peter G; Zucchelli, Silvia; Abugessaisa, Imad; Arner, Erik; Harshbarger, Jayson; Kondo, Atsushi; Lassmann, Timo; Lizio, Marina; Sahin, Serkan; Sengstag, Thierry; Severin, Jessica; Shimoji, Hisashi; Suzuki, Masanori; Suzuki, Harukazu; Kawai, Jun; Kondo, Naoto; Itoh, Masayoshi; Daub, Carsten O; Kasukawa, Takeya; Kawaji, Hideya; Carninci, Piero; Forrest, Alistair R R; Hayashizaki, Yoshihide
2017-08-29
In the FANTOM5 project, transcription initiation events across the human and mouse genomes were mapped at single base-pair resolution and their frequencies were monitored by CAGE (Cap Analysis of Gene Expression) coupled with single-molecule sequencing. Approximately three thousand samples, consisting of a variety of primary cells, tissues, cell lines, and time series samples during cell activation and development, were subjected to a uniform pipeline of CAGE data production. The analysis pipeline started by measuring RNA extracts to assess their quality, and continued with CAGE library production using a robotic or a manual workflow, single-molecule sequencing, and computational processing to generate frequencies of transcription initiation. The resulting data represent the consequences of transcriptional regulation in each analyzed state of mammalian cells. Non-overlapping peaks over the CAGE profiles, approximately 200,000 and 150,000 peaks for the human and mouse genomes, were identified and annotated to provide precise locations of known promoters as well as novel ones, and to quantify their activities.
2D modeling of direct laser metal deposition process using a finite particle method
NASA Astrophysics Data System (ADS)
Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.
2018-05-01
Direct laser metal deposition is one of the material additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool together with the mass and energy balances are solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.
A Computer Graphics Human Figure Application Of Biostereometrics
NASA Astrophysics Data System (ADS)
Fetter, William A.
1980-07-01
A study of improved computer graphic representation of the human figure is being conducted under a National Science Foundation grant. Special emphasis is given to biostereometrics as a primary data base from which applications requiring a variety of levels of detail may be prepared. For example, a human figure represented by a single point can be very useful in overview plots of a population. A crude ten point figure can be adequate for queuing theory studies and simulated movement of groups. A one hundred point figure can usefully be animated to achieve different overall body activities including male and female figures. A one thousand point figure, similarly animated, begins to be useful in anthropometrics and kinesiology gross body movements. Extrapolations of this order-of-magnitude approach ultimately should achieve very complex data bases and a program which automatically selects the correct level of detail for the task at hand. See Summary Figure 1.
Spatial variability of turbulent fluxes in the roughness sublayer of an even-aged pine forest
Katul, G.; Hsieh, C.-I.; Bowling, D.; Clark, K.; Shurpali, N.; Turnipseed, A.; Albertson, J.; Tu, K.; Hollinger, D.; Evans, B. M.; Offerle, B.; Anderson, D.; Ellsworth, D.; Vogel, C.; Oren, R.
1999-01-01
The spatial variability of turbulent flow statistics in the roughness sublayer (RSL) of a uniform even-aged 14 m (= h) tall loblolly pine forest was investigated experimentally. Using seven existing walkup towers at this stand, high frequency velocity, temperature, water vapour and carbon dioxide concentrations were measured at 15.5 m above the ground surface from October 6 to 10 in 1997. These seven towers were separated by at least 100 m from each other. The objective of this study was to examine whether single tower turbulence statistics measurements represent the flow properties of RSL turbulence above a uniform even-aged managed loblolly pine forest as a best-case scenario for natural forested ecosystems. From the intensive space-time series measurements, it was demonstrated that standard deviations of longitudinal and vertical velocities (σ(u), σ(w)) and temperature (σ(T)) are more planar homogeneous than their vertical flux of momentum (u(*)²) and sensible heat (H) counterparts. Also, the measured H is more horizontally homogeneous when compared to fluxes of other scalar entities such as CO2 and water vapour. While the spatial variability in fluxes was significant (> 15%), this unique data set confirmed that single tower measurements represent the 'canonical' structure of single-point RSL turbulence statistics, especially flux-variance relationships. Implications for extending the 'moving-equilibrium' hypothesis to RSL flows are discussed. The spatial variability in all RSL flow variables was not constant in time and varied strongly with spatially averaged friction velocity u(*), especially when u(*) was small. It is shown that flow properties derived from two-point temporal statistics such as correlation functions are more sensitive to local variability in leaf area density when compared to single point flow statistics. Specifically, the local relationship between the reciprocal of the vertical velocity integral time scale (I(w)) and the arrival frequency of organized structures (ū/h) predicted from a mixing-layer theory exhibited dependence on the local leaf area index. The broader implications of these findings to the measurement and modelling of RSL flows are also discussed.
Performance of Copan WASP for Routine Urine Microbiology
Quiblier, Chantal; Jetter, Marion; Rominski, Mark; Mouttet, Forouhar; Böttger, Erik C.; Keller, Peter M.
2015-01-01
This study compared a manual workup of urine clinical samples with fully automated WASPLab processing. As a first step, two different inocula (1 and 10 μl) and different streaking patterns were compared using WASP and InoqulA BT instrumentation. Significantly more single colonies were produced with the 10-μl inoculum than with the 1-μl inoculum, and automated streaking yielded significantly more single colonies than manual streaking on whole plates (P < 0.001). In a second step, 379 clinical urine samples were evaluated using WASP and the manual workup. Average numbers of detected morphologies, recovered species, and CFUs per milliliter of all 379 urine samples showed excellent agreement between WASPLab and the manual workup. The percentage of urine samples clinically categorized as positive or negative did not differ between the automated and manual workflow, but within the positive samples, automated processing by WASPLab resulted in the detection of more potential pathogens. In summary, the present study demonstrates that (i) the streaking pattern, i.e., primarily the number of zigzags/length of streaking lines, is critical for optimizing the number of single colonies yielded from primary cultures of urine samples; (ii) automated streaking by the WASP instrument is superior to manual streaking regarding the number of single colonies yielded (for 32.2% of the samples); and (iii) automated streaking leads to higher numbers of detected morphologies (for 47.5% of the samples), species (for 17.4% of the samples), and pathogens (for 3.4% of the samples). The results of this study point to an improved quality of microbiological analyses and laboratory reports when using automated sample processing by WASP and WASPLab. PMID:26677255
Fu, Glenn K; Wilhelmy, Julie; Stern, David; Fan, H Christina; Fodor, Stephen P A
2014-03-18
We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification.
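The barcode-counting principle can be sketched in a few lines: because every starting molecule carries its own barcode, counting distinct (gene, barcode) pairs recovers molecule numbers regardless of how unevenly PCR amplified them. The gene names and barcodes below are toy values.

```python
from collections import defaultdict

def count_molecules(reads):
    """Collapse sequencing reads to molecule counts per gene.

    Each read is a (gene, barcode) pair; because every mRNA molecule was
    tagged with a molecular barcode before amplification, the number of
    distinct barcodes per gene estimates the number of starting molecules,
    independent of amplification depth.
    """
    barcodes_per_gene = defaultdict(set)
    for gene, barcode in reads:
        barcodes_per_gene[gene].add(barcode)
    return {gene: len(bcs) for gene, bcs in barcodes_per_gene.items()}

# Toy reads: GAPDH was amplified more heavily, but only 2 molecules were present
reads = [("GAPDH", "AAGT"), ("GAPDH", "AAGT"), ("GAPDH", "AAGT"),
         ("GAPDH", "CCTA"), ("ACTB", "GTTC")]
print(count_molecules(reads))   # {'GAPDH': 2, 'ACTB': 1}
```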
Patrut, Adrian; Woodborne, Stephan; von Reden, Karl F.; Hall, Grant; Hofmeyr, Michele; Lowy, Daniel A.; Patrut, Roxana T.
2015-01-01
The article reports the radiocarbon investigation results of the Lebombo Eco Trail tree, a representative African baobab from Mozambique. Several wood samples collected from the large inner cavity and from the outer part of the tree were investigated by AMS radiocarbon dating. According to dating results, the age values of all samples increase from the sampling point with the distance into the wood. For samples collected from the cavity walls, the increase of age values with the distance into the wood (up to a point of maximum age) represents a major anomaly. The only realistic explanation for this anomaly is that such inner cavities are, in fact, natural empty spaces between several fused stems disposed in a ring-shaped structure. We named them false cavities. Several important differences between normal cavities and false cavities are presented. Eventually, we dated other African baobabs with false inner cavities. We found that this new architecture enables baobabs to reach large sizes and old ages. The radiocarbon date of the oldest sample was 1425 ± 24 BP, which corresponds to a calibrated age of 1355 ± 15 yr. The dating results also show that the Lebombo baobab consists of five fused stems, with ages between 900 and 1400 years; these five stems build the complete ring. The ring and the false cavity closed 800–900 years ago. The results also indicate that the stems stopped growing toward the false cavity over the past 500 years. PMID:25621989
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured in any instant and the Nyquist sampling theorem has to be satisfied along the time axis on each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capturing rate, while the photodetector-based technology can only do the measurement on a single point. In this paper, several aspects of these two technologies are discussed. For the camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For the detector-based interferometry, the discussion mainly focuses on the single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort done by researchers for the improvement of the measurement capabilities using interferometry-based techniques to cover the requirements needed for the industrial applications. PMID:24963503
Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.
Jung, Sin-Ho
2017-07-01
In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
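A simplified sketch of a stratified one-sample log-rank statistic is given below; whether this matches the exact weighting and variance used in the reviewed test is an assumption, and the null cumulative hazards and follow-up data are invented.

```python
import math

def stratified_one_sample_logrank(strata, cum_hazard):
    """Stratified one-sample log-rank statistic (simplified sketch).

    strata: dict mapping stratum label -> list of (time, event) pairs,
            where event is 1 if the endpoint was observed, 0 if censored.
    cum_hazard: dict mapping stratum label -> function t -> Lambda_0(t),
            the null (historical-control) cumulative hazard for that stratum.
    Returns the standardized statistic Z = sum_k(O_k - E_k) / sqrt(sum_k E_k).
    """
    diff, expected_total = 0.0, 0.0
    for label, subjects in strata.items():
        observed = sum(event for _, event in subjects)
        expected = sum(cum_hazard[label](time) for time, _ in subjects)
        diff += observed - expected
        expected_total += expected
    return diff / math.sqrt(expected_total)

# Toy example: exponential null hazards differing by prognosis stratum
strata = {"good": [(12.0, 0), (8.0, 1), (15.0, 0)],
          "poor": [(4.0, 1), (6.0, 1), (9.0, 0)]}
null_hazards = {"good": lambda t: 0.03 * t, "poor": lambda t: 0.10 * t}
print(round(stratified_one_sample_logrank(strata, null_hazards), 3))
```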
Effects of time and sampling location on concentrations of β-hydroxybutyric acid in dairy cows.
Mahrt, A; Burfeind, O; Heuwieser, W
2014-01-01
Two trials were conducted to examine factors potentially influencing the measurement of blood β-hydroxybutyric acid (BHBA) in dairy cows. The objective of the first trial was to study effects of sampling time on BHBA concentration in continuously fed dairy cows. Furthermore, we determined test characteristics of a single BHBA measurement at a random time of the day to diagnose subclinical ketosis considering commonly used cut-points (1.2 and 1.4 mmol/L). Finally, we set out to evaluate if test characteristics could be enhanced by repeating measurements after different time intervals. During 4 herd visits, a total of 128 cows (8 to 28 d in milk) fed 10 times daily were screened at 0900 h and preselected by BHBA concentration. Blood samples were drawn from the tail vessels and BHBA concentrations were measured using an electronic BHBA meter (Precision Xceed, Abbott Diabetes Care Ltd., Witney, UK). Cows with BHBA concentrations ≥0.8 mmol/L at this time were enrolled in the trial (n=92). Subsequent BHBA measurements took place every 3h for a total of 8 measurements during 24 h. The effect of sampling time on BHBA concentrations was tested in a repeated-measures ANOVA repeating sampling time. Sampling time did not affect BHBA concentrations in continuously fed dairy cows. Defining the average daily BHBA concentration calculated from the 8 measurements as the gold standard, a single measurement at a random time of the day to diagnose subclinical ketosis had a sensitivity of 0.90 or 0.89 at the 2 BHBA cut-points (1.2 and 1.4 mmol/L). Specificity was 0.88 or 0.90 using the same cut-points. Repeating measurements after different time intervals improved test characteristics only slightly. In the second experiment, we compared BHBA concentrations of samples drawn from 3 different blood sampling locations (tail vessels, jugular vein, and mammary vein) of 116 lactating dairy cows. Concentrations of BHBA differed in samples from the 3 sampling locations. Mean BHBA concentration was 0.3 mmol/L lower when measured in the mammary vein compared with the jugular vein and 0.4 mmol/L lower in the mammary vein compared with the tail vessels. We conclude that to measure BHBA, blood samples of continuously fed dairy cows can be drawn at any time of the day. A single measurement provides very good test characteristics for on-farm conditions. Blood samples for BHBA measurement should be drawn from the jugular vein or tail vessels; the mammary vein should not be used for this purpose. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Hao, Pengyu; Wang, Li; Niu, Zheng
2015-01-01
A range of single classifiers have been proposed to classify crop types using time series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of outputs from multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, and two representative counties in north Xinjiang were selected as the study area. The single classifiers employed in this research included Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved (increased the mean overall accuracy by 5%~10%, and reduced standard deviation of overall accuracy by around 1%) substantially with the training sample number, and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers with higher mean overall accuracy (1%~2%). However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performances. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
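The P-fusion strategy amounts to averaging the class-membership probabilities of the single classifiers before taking the arg-max. In the sketch below a decision tree stands in for See5/C5.0 (an assumption, since C5.0 is not available in scikit-learn), and the NDVI time series are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier   # stand-in for See5/C5.0

# Toy NDVI time series: 40 training pixels x 12 dates, 3 crop classes (synthetic)
rng = np.random.default_rng(1)
X_train = rng.random((40, 12))
y_train = rng.integers(0, 3, size=40)
X_new = rng.random((5, 12))

classifiers = [RandomForestClassifier(n_estimators=100, random_state=0),
               SVC(probability=True, random_state=0),
               DecisionTreeClassifier(random_state=0)]

# P-fusion: average the class-membership probabilities of the single classifiers
probas = np.mean([clf.fit(X_train, y_train).predict_proba(X_new)
                  for clf in classifiers], axis=0)
fused_labels = probas.argmax(axis=1)
print(fused_labels)
```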
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
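Stated abstractly (the exact norm and constraint set are assumptions here, not the paper's precise formulation), the reconstruction problem selects, among all polynomials of sufficiently high degree matching the scattered Hermite data, the one of minimum Sobolev norm:

\min_{p \in \Pi_n} \; \|p\|_{H^{s}}
\quad \text{subject to} \quad
(D^{\alpha} p)(x_j) = d_{j,\alpha}, \qquad |\alpha| \le m_j, \;\; j = 1, \dots, N,

where \Pi_n is the space of polynomials of total degree at most n, D^{\alpha} denotes the partial derivative with multi-index \alpha, d_{j,\alpha} are the prescribed value and derivative data at the sampling point x_j, and \|\cdot\|_{H^{s}} is a Sobolev norm of smoothness order s.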
Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao
2018-06-05
Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-d), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the backward travel time probability density (BTTPD) approach is extended to assess the 3-d deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents complete age distributions underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-d imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying a relatively broad and old age distribution. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and quick evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.
Rodrigues, Valdemir; Estrany, Joan; Ranzini, Mauricio; de Cicco, Valdir; Martín-Benito, José Mª Tarjuelo; Hedo, Javier; Lucas-Borja, Manuel E
2018-05-01
Stream water quality is controlled by the interaction of natural and anthropogenic factors over a range of temporal and spatial scales. Among these anthropogenic factors, land cover changes at catchment scale can affect stream water quality. This work aims to evaluate the influence of land use and seasonality on stream water quality in a representative tropical headwater catchment named Córrego Água Limpa (São Paulo, Brazil), which is highly influenced by intensive agricultural activities and urban areas. Two systematic sampling campaigns were implemented, with six sampling points along the stream of the headwater catchment, to evaluate water quality during the rainy and dry seasons. Three replicates were collected at each sampling point in 2011. Electrical conductivity, nitrates, nitrites, sodium superoxide, Chemical Oxygen Demand (DQO), colour, turbidity, suspended solids, soluble solids and total solids were measured. Water quality parameters differed among sampling points, being lower at the headwater sampling point (0 m above sea level) and then progressively higher until the last downstream sampling point (2500 m above sea level). For the dry season, the mean discharge was 39.5 l s-1 (from April to September), whereas 113.0 l s-1 was averaged during the rainy season (from October to March). In addition, significant temporal and spatial differences were observed (P<0.05) for the fourteen parameters during the rainy and dry periods. The study highlights significant relationships between land use and water quality and their temporal variation, showing seasonal differences in the land use and water quality connection and highlighting the importance of multiple spatial and temporal scales for understanding the impacts of human activities on catchment ecosystem services. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal foot shape for a passive dynamic biped.
Kwan, Maxine; Hubbard, Mont
2007-09-21
Passive walking dynamics describe the motion of a biped that is able to "walk" down a shallow slope without any actuation or control. Instead, the walker relies on gravitational and inertial effects to propel itself forward, exhibiting a gait quite similar to that of humans. These purely passive models depend on potential energy to overcome the energy lost when the foot impacts the ground. Previous research has demonstrated that energy loss at heel-strike can vary widely for a given speed, depending on the nature of the collision. The point of foot contact with the ground (relative to the hip) can have a significant effect: semi-circular (round) feet soften the impact, resulting in much smaller losses than point-foot walkers. Collisional losses are also lower if a single impulse is broken up into a series of smaller impulses that gradually redirect the velocity of the center of mass rather than a single abrupt impulse. Using this principle, a model was created where foot-strike occurs over two impulses, "heel-strike" and "toe-strike," representative of the initial impact of the heel and the following impact as the ball of the foot strikes the ground. Having two collisions with the flat-foot model did improve efficiency over the point-foot model. Representation of the flat-foot walker as a rimless wheel helped to explain the optimal flat-foot shape, driven by symmetry of the virtual spoke angles. The optimal long period foot shape of the simple passive walking model was not very representative of the human foot shape, although a reasonably anthropometric foot shape was predicted by the short period solution.
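A worked version of the "smaller impulses lose less" argument, included as an illustrative rimless-wheel calculation rather than the paper's exact derivation: if the centre-of-mass velocity must be redirected through an angle 2\phi, then

\frac{KE^{+}}{KE^{-}}\Big|_{\text{single impulse}} = \cos^{2}(2\phi),
\qquad
\frac{KE^{+}}{KE^{-}}\Big|_{\text{heel then toe}} = \cos^{2}\phi \cdot \cos^{2}\phi = \cos^{4}\phi .

With \phi = 15°, a single impulse retains cos²30° ≈ 0.75 of the pre-collision kinetic energy, while the heel-strike plus toe-strike pair retains cos⁴15° ≈ 0.87; for the shallow inter-leg angles relevant to walking, splitting the redirection into smaller impulses therefore loses less energy per step.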
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consists only of atmospheric background.
Ambient air quality has traditionally been monitored using a network of fixed point sampling sites that are strategically placed to represent regional (e.g., county or town) rather than local (e.g., neighborhood) air quality trends. This type of monitoring data has been used to m...
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
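One plausible reading of the estimator described above (an assumption: the abstract does not specify whether the density or the survival form of the Weibull distribution enters the ratio) steps the outcome forward by a Weibull survival ratio evaluated at successive times; the shape, scale, change point and weight values below are invented.

```python
import math

def weibull_survival(t, shape, scale):
    # Two-parameter Weibull survival function S(t) = exp(-(t/scale)^shape)
    return math.exp(-((t / scale) ** shape))

def spre_forecast(y_changepoint, t_changepoint, future_times, shape, scale):
    """Step an outcome forward from the change point by a Weibull ratio.

    Sketch of the general idea only: each new estimate equals the previous
    outcome times the ratio of Weibull survival values at the new and
    previous time points, with shape/scale fixed at the change point.
    """
    y, t_prev = y_changepoint, t_changepoint
    forecasts = []
    for t in future_times:
        y = y * weibull_survival(t, shape, scale) / weibull_survival(t_prev, shape, scale)
        forecasts.append((t, round(y, 2)))
        t_prev = t
    return forecasts

# Hypothetical weight-loss course: 95 kg at the week-4 change point
print(spre_forecast(95.0, 4, range(5, 13), shape=1.3, scale=60.0))
```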
Continued fractions with limit periodic coefficients
NASA Astrophysics Data System (ADS)
Buslaev, V. I.
2018-02-01
The boundary properties of functions represented by limit periodic continued fractions of a fairly general form are investigated. Such functions are shown to have no single-valued meromorphic extension to any neighbourhood of any non-isolated boundary point of the set of convergence of the continued fraction. The boundary of the set of meromorphy has the property of symmetry in an external field determined by the parameters of the continued fraction. Bibliography: 26 titles.
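For concreteness, the textbook notion (the class studied in the paper may be broader) is the following: a continued fraction

K_{n=1}^{\infty} \frac{a_n}{b_n}
= \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \dotsb}}}

is called limit periodic with period k when its coefficient sequences converge along residue classes, that is, a_{nk+j} \to a_j^{*} and b_{nk+j} \to b_j^{*} as n \to \infty for each j = 1, \dots, k.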
Improved Methodology for Developing Cost Uncertainty Models for Naval Vessels
2009-04-22
Deegan, 2007). Risk cannot be assessed with a point estimate, as it represents a single value that serves as a best guess for the parameter to be...or stakeholders (Deegan & Fields, 2007). This paper analyzes the current NAVSEA 05C Cruiser (CG(X)) probabilistic cost model including data...provided by Mr. Chris Deegan and his CG(X) analysts. The CG(X) model encompasses all factors considered for cost of the entire program, including
USDA-ARS?s Scientific Manuscript database
Importance: Human milk is the subject of many nutrition studies but methods for representative sample collection are not established. Our recently improved, validated methods for analyzing micronutrients in human milk now enable systematic study of factors affecting their concentration. Objective...
Redefining the WISC-R: Implications for Professional Practice and Public Policy.
ERIC Educational Resources Information Center
Macmann, Gregg M.; Barnett, David W.
1992-01-01
The factor structure of the Wechsler Intelligence Scale for Children (Revised) was examined in the standardization sample using new methods of factor analysis. The substantial overlap across factors was most parsimoniously represented by a single general factor. Implications for public policy regarding the purposes and outcomes of special…
Oversampling of digitized images [effects on interpolation in signal processing]
NASA Technical Reports Server (NTRS)
Fischel, D.
1976-01-01
Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
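Since the abstract recommends relying on the Sampling Theorem for interpolation rather than on oversampling, a small Whittaker-Shannon (sinc) interpolation sketch may help; the signal, sampling interval and query times below are arbitrary illustration values.

```python
# Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n*T)/T),
# valid for band-limited signals sampled above the Nyquist rate.
import numpy as np

def sinc_interp(samples, T, t_query):
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc((t - n * T) / T)) for t in t_query])

T = 0.1                                        # sampling interval [s]
t_samp = np.arange(0, 2, T)                    # 20 samples
x = np.sin(2 * np.pi * 2.0 * t_samp)           # 2 Hz tone, Nyquist limit is 5 Hz
t_fine = np.linspace(0.5, 1.5, 101)            # interpolate away from the edges
err = sinc_interp(x, T, t_fine) - np.sin(2 * np.pi * 2.0 * t_fine)
print(np.max(np.abs(err)))                     # small truncation error
```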
2017-03-26
logistic constraints and associated travel time between points in the central and western Great Basin. The geographic and temporal breadth of our...surveys (MacKenzie and Royle 2005). In most cases, less time is spent traveling between sites on a given day when the single-day design is implemented...with the single-day design (110 hr). These estimates did not include return-travel time, which did not limit sampling effort. As a result, we could
NASA Astrophysics Data System (ADS)
Deng, Shuping; Liu, Hongxia; Li, Decong; Wang, Jinsong; Cheng, Feng; Shen, Lanxian; Deng, Shukang
2017-05-01
Single-crystal samples of Sr-filled Ge-based type I clathrate have been prepared by the Sn-flux method, and their thermoelectric properties investigated. The obtained samples exhibited n-type conduction with carrier concentration varying from 2.8 × 10^19/cm^3 to 6.8 × 10^19/cm^3 as the carrier mobility changed from 23.9 cm^2/V-s to 15.1 cm^2/V-s at room temperature. Structural analysis indicated that all samples were type I clathrate in space group Pm-3n. The total content of group IV (Ge + Sn) atoms in the crystalline structure increased with increasing x value (where x defines the atomic ratio of starting elements, Sr:Ga:Ge:Sn = 8:16:x:20), reaching a maximum value of 31.76 at.% for the sample with x = 30; consequently, the lattice parameters increased. The melting points for all samples were approximately 1012 K, being considerably lower than that of single-crystal Sr8Ga16Ge30 prepared by other methods. The electrical conductivity increased while the absolute value of α increased gradually with increasing temperature; the maximum value of α reached 193 μV/K at 750 K for the sample with x = 24. The sample with x = 30 exhibited lower lattice thermal conductivity of 0.80 W/m-K. As a result, among all the Sn-flux samples, single-crystal Sr7.92Ga15.04Sn0.35Ge30.69 had the largest ZT value of 1.0 at about 750 K.
2000-05-01
a vector, ρ represents the set of voxel densities sorted into a vector, and A(ρ) represents a mapping of the voxel densities to...density vector in equation (4) suggests that solving for ρ by direct inversion is not possible, calling for an iterative technique beginning with...the vector of measured spectra, and D is the diagonal matrix of the inverse of the variances. The diagonal matrix provides weighting terms, which
Pecoraro, Carlo; Babbucci, Massimiliano; Villamor, Adriana; Franch, Rafaella; Papetti, Chiara; Leroy, Bruno; Ortega-Garcia, Sofia; Muir, Jeff; Rooker, Jay; Arocha, Freddy; Murua, Hilario; Zudaire, Iker; Chassot, Emmanuel; Bodin, Nathalie; Tinti, Fausto; Bargelloni, Luca; Cariani, Alessia
2016-02-01
Global population genetic structure of yellowfin tuna (Thunnus albacares) is still poorly understood despite its relevance for the tuna fishery industry. Low levels of genetic differentiation among oceans speak in favour of the existence of a single worldwide panmictic population of this highly migratory fish. However, recent studies indicated genetic structuring at much smaller geographic scales than previously considered, pointing out that YFT population genetic structure has not been properly assessed so far. In this study, we demonstrated for the first time the utility of the 2b-RAD genotyping technique for investigating population genetic diversity and differentiation in high gene-flow species. Running the de novo pipeline in Stacks, a total of 6772 high-quality genome-wide SNPs were identified across Atlantic, Indian and Pacific population samples representing all major distribution areas. Preliminary analyses showed shallow but significant population structure among oceans (FST = 0.0273; P-value < 0.01). Discriminant Analysis of Principal Components endorsed the presence of genetically discrete yellowfin tuna populations among three oceanic pools. Although such evidence needs to be corroborated by increasing sample size, these results showed the efficiency of this genotyping technique in assessing genetic divergence in a marine fish with high dispersal potential. Copyright © 2015 Elsevier B.V. All rights reserved.
Gaikowski, M.P.; Larson, W.J.; Steuer, J.J.; Gingerich, W.H.
2004-01-01
Accurate estimates of drug concentrations in hatchery effluent are critical to assess the environmental risk of hatchery drug discharge resulting from disease treatment. This study validated two simple dilution models to estimate chloramine-T environmental introduction concentrations by comparing measured and predicted chloramine-T concentrations, using the US Geological Survey's Upper Midwest Environmental Sciences Center aquaculture facility effluent as an example. The hydraulic characteristics of our treated raceway and effluent and the accuracy of our water flow rate measurements were confirmed with the marker dye rhodamine WT. We also used the rhodamine WT data to develop dilution models that would (1) estimate the chloramine-T concentration at a given time and location in the effluent system and (2) estimate the average chloramine-T concentration at a given location over the entire discharge period. To test our models, we predicted the chloramine-T concentration at two sample points based on effluent flow and the maintenance of chloramine-T at 20 mg/l for 60 min in the same raceway used with rhodamine WT. The effluent sample points selected (sample points A and B) represented 47 and 100% of the total effluent flow, respectively. Sample point B is analogous to the discharge of a hatchery that does not have a detention lagoon, i.e. the sample site was downstream of the last dilution water addition following treatment. We then applied four chloramine-T flow-through treatments at 20 mg/l for 60 min and measured the chloramine-T concentration in water samples collected every 15 min for about 180 min from the treated raceway and sample points A and B during and after application. The predicted chloramine-T concentration at each sampling interval was similar to the measured chloramine-T concentration at sample points A and B and was generally bounded by the measured 90% confidence intervals. The predicted average chloramine-T concentrations at sample points A and B (2.8 and 1.3 mg/l, respectively) were not significantly different (P > 0.05) from the average measured chloramine-T concentrations (2.7 and 1.3 mg/l, respectively). The close agreement between our predicted and measured chloramine-T concentrations indicates that either of the dilution models could be used to adequately predict the chloramine-T environmental introduction concentration in Upper Midwest Environmental Sciences Center effluent. © 2003 Elsevier B.V. All rights reserved.
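A flow-weighted mass-balance sketch in the spirit of the simpler dilution model: the treated raceway's discharge is diluted by the additional effluent flow reaching each sample point. The flow figures below are hypothetical and were chosen only so the outputs line up with the average concentrations reported above.

```python
# Fully mixed concentration downstream of a treated raceway:
# C_sample = C_treated * Q_treated / Q_total (simple dilution, no decay).
def diluted_concentration(c_treated_mg_l, q_treated_l_min, q_total_l_min):
    return c_treated_mg_l * q_treated_l_min / q_total_l_min

# Treatment held at 20 mg/l; assumed flow fractions of 14% at sample
# point A and 6.5% at sample point B (hypothetical values).
print(diluted_concentration(20.0, 140.0, 1000.0))   # 2.8 mg/l at point A
print(diluted_concentration(20.0, 65.0, 1000.0))    # 1.3 mg/l at point B
```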
Stojanovic, Gordana S; Jovanović, Snežana C; Zlatković, Bojan K
2015-06-01
The present study examines the chemical composition of methanol extracts of Sedum taxa from the central part of the Balkan Peninsula, together with representatives of other Crassulaceae genera (Crassula, Echeveria and Kalanchoe) treated as out-groups. The chemical composition of the extracts was determined by HPLC analysis, according to the retention times of standards and the characteristic absorption spectra of the components. Identified components were treated as original variables with possible chemotaxonomic significance. Relationships among the examined plant samples were investigated by agglomerative hierarchical cluster analysis (AHC). The results showed how the distribution of methanol extract components (mostly phenolics) affected the grouping of the examined samples. The clustering showed satisfactory grouping of the examined samples, among which some representatives of the Sedum series Rupestria and Magellensia were the most distant. The out-group samples were not as clearly separated from the Sedum samples as expected; this especially applies to the samples of Crassula ovata and Echeveria lilacina, while Kalanchoe daigremontiana was more clearly separated from most of the Sedum samples.
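The clustering step can be sketched as follows, assuming each sample is summarised by a vector of HPLC peak areas for the identified (mostly phenolic) components; the peak areas, the generic Sedum labels and the use of Ward's criterion are placeholders, not taken from the study.

```python
# Agglomerative hierarchical clustering (AHC) of extract composition profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

samples = ["Sedum sp. 1", "Sedum sp. 2", "Crassula ovata", "Echeveria lilacina"]
peak_areas = np.array([
    [12.1, 3.4, 0.8, 5.6],     # invented HPLC peak areas per component
    [11.7, 3.9, 1.1, 5.2],
    [2.3, 8.8, 6.5, 0.9],
    [2.9, 9.1, 5.8, 1.2],
])

Z = linkage(peak_areas, method="ward")          # one common agglomeration rule
print(dict(zip(samples, fcluster(Z, t=2, criterion="maxclust"))))
```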
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
NASA Astrophysics Data System (ADS)
Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.
2017-05-01
Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is "volume-averaging", which refers to the fact that lidars do not sample a single, distinct point but average along the entire beam length. Its effect can be detrimental, especially in regions with large velocity gradients such as the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Volume-averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
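The effect can be pictured with a minimal volume-averaging sketch: the lidar estimate is a weighted line-of-sight average of the velocity along the beam. The Lorentzian weight is a common continuous-wave approximation, and the focus distance, probe length and wake-like velocity profile are arbitrary illustration values, not taken from the simulations above.

```python
# Lidar volume-averaging as a weighted average along the beam.
import numpy as np

def cw_weight(r, focus, probe_half_length):
    w = 1.0 / (1.0 + ((r - focus) / probe_half_length) ** 2)   # Lorentzian
    return w / np.trapz(w, r)                                   # normalise

r = np.linspace(0.0, 200.0, 2001)                    # positions along the beam [m]
u = 8.0 - 3.0 * np.exp(-((r - 100.0) / 15.0) ** 2)   # wake-like velocity deficit
w = cw_weight(r, focus=100.0, probe_half_length=10.0)

print("point value at focus:", round(u[np.argmin(np.abs(r - 100.0))], 2))
print("volume-averaged estimate:", round(float(np.trapz(w * u, r)), 2))
```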
Armour, Cherie; Carragher, Natacha; Elhai, Jon D
2013-01-01
Since the initial inclusion of PTSD in the DSM nomenclature, PTSD symptomatology has been distributed across three symptom clusters. However, a wealth of empirical research has concluded that PTSD's latent structure is best represented by one of two four-factor models: Numbing or Dysphoria. Recently, a newly proposed five-factor Dysphoric Arousal model, which separates the DSM-IV's Arousal cluster into two factors of Anxious Arousal and Dysphoric Arousal, has gathered support across a variety of trauma samples. To date, the Dysphoric Arousal model has not been assessed using nationally representative epidemiological data. We employed confirmatory factor analysis to examine PTSD's latent structure in two independent population-based surveys from the United States (NESARC) and Australia (NSWHWB). We specified and estimated the Numbing model, the Dysphoria model, and the Dysphoric Arousal model in both samples. Results revealed that the Dysphoric Arousal model provided superior fit to the data compared to the alternative models. In conclusion, these findings suggest that items D1-D3 (sleeping difficulties; irritability; concentration difficulties) represent a separate, fifth factor within PTSD's latent structure in nationally representative epidemiological data, in addition to single-trauma-specific samples. Copyright © 2012 Elsevier Ltd. All rights reserved.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
Page, Michael M; Taranto, Mario; Ramsay, Duncan; van Schie, Greg; Glendenning, Paul; Gillett, Melissa J; Vasikaran, Samuel D
2018-01-01
Objective: Primary aldosteronism is a curable cause of hypertension which can be treated surgically or medically depending on the findings of adrenal vein sampling studies. Adrenal vein sampling studies are technically demanding, with a high failure rate in many centres. The use of intraprocedural cortisol measurement could improve the success rate of adrenal vein sampling but may be impracticable due to cost and effects on procedural duration. Design: Retrospective review of the results of adrenal vein sampling procedures since commencement of point-of-care cortisol measurement using a novel single-use semi-quantitative measuring device for cortisol, the adrenal vein sampling Accuracy Kit. Success rates and complications of adrenal vein sampling procedures before and after use of the adrenal vein sampling Accuracy Kit were compared. Routine use of the adrenal vein sampling Accuracy Kit device for intraprocedural measurement of cortisol commenced in 2016. Results: The technical success rate of adrenal vein sampling increased from 63% of 99 procedures to 90% of 48 procedures (P = 0.0007) after implementation of the adrenal vein sampling Accuracy Kit. Failure of right adrenal vein cannulation was the main reason for an unsuccessful study. Radiation dose decreased from 34.2 Gy·cm2 (interquartile range, 15.8-85.9) to 15.7 Gy·cm2 (6.9-47.3) (P = 0.009). No complications were noted, and implementation costs were minimal. Conclusions: Point-of-care cortisol measurement during adrenal vein sampling improved cannulation success rates and reduced radiation exposure. The use of the adrenal vein sampling Accuracy Kit is now standard practice at our centre.
De Jong, G D; Hoback, W W
2006-06-01
Carrion insect succession studies have historically used repeated sampling of single or a few carcasses to produce data, either weighing the carcasses, removing a qualitative subsample of the fauna present, or both, on every visit over the course of decomposition and succession. This study, conducted in a set of related experimental hypotheses with two trials in a single season, investigated the effect that repeated sampling has on insect succession, determined by the number of taxa collected on each visit and by community composition. Each trial lasted at least 21 days, with daily visits on the first 14 days. Rat carcasses used in this study were all placed in the field on the same day, but then either sampled qualitatively on every visit (similar to most succession studies) or ignored until a given day of succession, when they were sampled qualitatively (a subsample) and then destructively sampled in their entirety. Carcasses sampled on every visit were in two groups: those from which only a sample of the fauna was taken and those from which a sample of fauna was taken and the carcass was weighed for biomass determination. Of the carcasses visited only once, the number of taxa in subsamples was compared to the actual number of taxa present when the carcass was destructively sampled to determine if the subsamples adequately represented the total carcass fauna. Data from the qualitative subsamples of those carcasses visited only once were also compared to data collected from carcasses that were sampled on every visit to investigate the effect of the repeated sampling. A total of 39 taxa were collected from carcasses during the study and the component taxa are discussed individually in relation to their role in succession. Number of taxa differed on only one visit between the qualitative subsamples and the actual number of taxa present, primarily because the organisms missed by the qualitative sampling were cryptic (hidden deep within body cavities) or rare (only represented by very few specimens). There were no differences discovered between number of taxa in qualitative subsamples from carcasses sampled repeatedly (with or without biomass determinations) and those sampled only a single time. Community composition differed considerably in later stages of decomposition, with disparate communities due primarily to small numbers of rare taxa. These results indicate that the methods used historically for community composition determination in experimental forensic entomology are generally adequate.
NASA Astrophysics Data System (ADS)
Chen, Yu-Chih; Cheng, Yu-Heng; Ingram, Patrick; Yoon, Euisik
2016-06-01
Proteolytic degradation of the extracellular matrix (ECM) is critical in cancer invasion, and recent work suggests that heterogeneous cancer populations cooperate in this process. Despite the importance of cell heterogeneity, conventional proteolytic assays measure average activity, requiring thousands of cells and providing limited information about heterogeneity and dynamics. Here, we developed a microfluidic platform that provides high-efficiency cell loading and simple valveless isolation, so the proteolytic activity of a small sample (10-100 cells) can be easily characterized. Combined with a single cell derived (clonal) sphere formation platform, we have successfully demonstrated the importance of microenvironmental cues for proteolytic activity and also investigated the difference between clones. Furthermore, the platform allows monitoring single cells at multiple time points, unveiling different cancer cell line dynamics in proteolytic activity. The presented tool facilitates single cell proteolytic analysis using small samples, and our findings illuminate the heterogeneous and dynamic nature of proteolytic activity.
Sundararaman, Sesh A.; Liu, Weimin; Keele, Brandon F.; Learn, Gerald H.; Bittinger, Kyle; Mouacha, Fatima; Ahuka-Mundeke, Steve; Manske, Magnus; Sherrill-Mix, Scott; Li, Yingying; Malenke, Jordan A.; Delaporte, Eric; Laurent, Christian; Mpoudi Ngole, Eitel; Kwiatkowski, Dominic P.; Shaw, George M.; Rayner, Julian C.; Peeters, Martine; Sharp, Paul M.; Bushman, Frederic D.; Hahn, Beatrice H.
2013-01-01
Wild-living chimpanzees and gorillas harbor a multitude of Plasmodium species, including six of the subgenus Laverania, one of which served as the progenitor of Plasmodium falciparum. Despite the magnitude of this reservoir, it is unknown whether apes represent a source of human infections. Here, we used Plasmodium species-specific PCR, single-genome amplification, and 454 sequencing to screen humans from remote areas of southern Cameroon for ape Laverania infections. Among 1,402 blood samples, we found 1,000 to be Plasmodium mitochondrial DNA (mtDNA) positive, all of which contained human parasites as determined by sequencing and/or restriction enzyme digestion. To exclude low-abundance infections, we subjected 514 of these samples to 454 sequencing, targeting a region of the mtDNA genome that distinguishes ape from human Laverania species. Using algorithms specifically developed to differentiate rare Plasmodium variants from 454-sequencing error, we identified single and mixed-species infections with P. falciparum, Plasmodium malariae, and/or Plasmodium ovale. However, none of the human samples contained ape Laverania parasites, including the gorilla precursor of P. falciparum. To characterize further the diversity of P. falciparum in Cameroon, we used single-genome amplification to amplify 3.4-kb mtDNA fragments from 229 infected humans. Phylogenetic analysis identified 62 new variants, all of which clustered with extant P. falciparum, providing further evidence that P. falciparum emerged following a single gorilla-to-human transmission. Thus, unlike Plasmodium knowlesi-infected macaques in southeast Asia, African apes harboring Laverania parasites do not seem to serve as a recurrent source of human malaria, a finding of import to ongoing control and eradication measures. PMID:23569255
Family Structure and Adolescent Drug Use: An Exploration of Single-Parent Families
Hemovich, Vanessa; Crano, William D.
2011-01-01
Data from the 2004 Monitoring the Future survey examined a nationally representative cross-sectional sample of 8th to 12th grade adolescents in rural and urban schools from across the United States (N = 37,507). Results found that drug use among daughters living with single fathers significantly exceeded that of daughters living with single mothers, while gender of parent was not associated with sons’ usage. This distinction in adolescent drug use between mother-only versus father-only households is largely overlooked in contemporary studies. Factors responsible for variations in sons’ and daughters’ usage in single-parent families have important implications for future drug prevention efforts. PMID:20001697
An Investigation to Improve Classifier Accuracy for Myo Collected Data
2017-02-01
A naïve Bayes classifier trained with 1,360 samples from 17 volunteers performs at...movement data from 17 volunteers. Each volunteer performed 8 gestures (Freeze, Rally Point, Hurry Up, Down, Come, Stop, Line Abreast Formation, and Vehicle...line chart was plotted for each gesture's feature (e.g., Pitch, xAcc) per user. All 10 recorded samples of a particular gesture for a single volunteer
Removal of single point diamond-turning marks by abrasive jet polishing.
Li, Z Z; Wang, J M; Peng, X Q; Ho, L T; Yin, Z Q; Li, S Y; Cheung, C F
2011-06-01
Single point diamond turning (SPDT) is highly controllable and versatile in producing axially symmetric forms, non-axially-symmetric forms, microstructured surfaces, and free forms. However, the fine SPDT marks left in the surface limit its performance, and they are difficult to reduce or eliminate. It is impractical for traditional methods to remove the fine marks without destroying the forms, especially for aspheres and free forms. This paper introduces abrasive jet polishing (AJP) for the posttreatment of diamond-turned surfaces to remove the periodic microstructures. Samples of diamond-turned electroless nickel plated plano mirrors were used in the experiments. One sample with an original surface roughness of more than 400 nm decreased to 4 nm after two iterations of abrasive jet polishing; the surface roughness of another sample went from 3.7 nm to 1.4 nm after polishing. The periodic signatures on both samples were removed entirely after polishing. Comparative experiments were carried out on electroless nickel mirrors with magnetorheological finishing, computer controlled optical surfacing, and AJP. The experimental results indicate that AJP is more appropriate for removing the periodic SPDT marks. In addition, a figure-maintaining experiment was carried out with the AJP process; the uniform polishing results show that the AJP process can remove the periodic turning marks without destroying the original form.
NASA Technical Reports Server (NTRS)
Garland, J. L.; Mills, A. L.; Young, J. S.
2001-01-01
The relative effectiveness of average-well-color-development (AWCD)-normalized single-point absorbance readings versus the kinetic parameters μm, λ, A, and integral (AREA) of the modified Gompertz equation was compared. The Gompertz equation was fit to the color development curves resulting from reduction of a redox-sensitive dye by microbial respiration of 95 separate sole carbon sources in microplate wells, for a dilution series of rhizosphere samples from hydroponically grown wheat and potato with inoculum densities ranging from 1 × 10^4 to 4 × 10^6 cells ml^-1. Patterns generated with each parameter were analyzed using principal component analysis (PCA) and discriminant function analysis (DFA) to test relative resolving power. Samples of equivalent cell density (undiluted samples) were correctly classified by rhizosphere type for all parameters based on DFA analysis of the first five PC scores. Analysis of undiluted and 1:4 diluted samples resulted in misclassification of at least two of the wheat samples for all parameters except the AWCD-normalized (0.50 abs. units) data, and analysis of undiluted, 1:4, and 1:16 diluted samples resulted in misclassification for all parameter types. Ordination of samples along the first principal component (PC) was correlated with inoculum density in analyses performed on all of the kinetic parameters, but no such influence was seen for AWCD-derived results. The carbon sources responsible for classification differed among the variable types with the exception of AREA and A, which were strongly correlated. These results indicate that the use of kinetic parameters for pattern analysis in CLPP may provide some additional information, but only if the influence of inoculum density is carefully considered. © 2001 Elsevier Science Ltd. All rights reserved.
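The kinetic-parameter approach can be sketched by fitting the modified Gompertz equation to a single well's colour-development curve; the Zwietering parameterisation is assumed here and the data are synthetic.

```python
# Fit y(t) = A * exp(-exp(mu_m*e/A * (lambda - t) + 1)) and integrate for AREA.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_m, lam):
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 96, 25)                               # incubation time [h]
y = gompertz(t, 1.2, 0.05, 20.0) + np.random.default_rng(0).normal(0, 0.02, t.size)

(A, mu_m, lam), _ = curve_fit(gompertz, t, y, p0=[1.0, 0.03, 10.0])
area = np.trapz(gompertz(t, A, mu_m, lam), t)            # the AREA parameter
print(round(A, 3), round(mu_m, 4), round(lam, 2), round(area, 1))
```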
Martínez-Sánchez, Jose M; Fu, Marcela; Ariza, Carles; López, María J; Saltó, Esteve; Pascual, José A; Schiaffino, Anna; Borràs, Josep M; Peris, Mercè; Agudo, Antonio; Nebot, Manel; Fernández, Esteve
2009-01-01
To assess the optimal cut-point for salivary cotinine concentration to identify smoking status in the adult population of Barcelona. We performed a cross-sectional study of a representative sample (n=1,117) of the adult population (>16 years) in Barcelona (2004-2005). This study gathered information on active and passive smoking by means of a questionnaire and a saliva sample for cotinine determination. We analyzed sensitivity and specificity according to sex, age, smoking status (daily and occasional), and exposure to second-hand smoke at home. ROC curves and the area under the curve were calculated. The prevalence of smokers (daily and occasional) was 27.8% (95% CI: 25.2-30.4%). The optimal cut-point to discriminate smoking status was 9.2 ng/ml (sensitivity=88.7% and specificity=89.0%). The area under the ROC curve was 0.952. The optimal cut-point was 12.2 ng/ml in men and 7.6 ng/ml in women. The optimal cut-point was higher at ages with a greater prevalence of smoking. Daily smokers had a higher cut-point than occasional smokers. The optimal cut-point to discriminate smoking status in the adult population is 9.2 ng/ml, with sensitivities and specificities around 90%. The cut-point was higher in men and in younger people. The cut-point increases with higher prevalence of daily smokers.
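One common way to derive such a cut-point is to maximise Youden's J (sensitivity + specificity - 1) along the ROC curve; whether the study used exactly this rule is not stated, and the cotinine values below are simulated stand-ins.

```python
# ROC-based choice of a salivary cotinine cut-point (Youden's J).
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
cotinine = np.concatenate([rng.lognormal(0.5, 1.0, 800),    # non-smokers [ng/ml]
                           rng.lognormal(5.0, 1.0, 300)])   # smokers [ng/ml]
is_smoker = np.concatenate([np.zeros(800), np.ones(300)])

fpr, tpr, thr = roc_curve(is_smoker, cotinine)
best = np.argmax(tpr - fpr)                                  # Youden's J
print("AUC:", round(auc(fpr, tpr), 3))
print("cut-point:", round(thr[best], 1), "ng/ml,",
      "sensitivity:", round(tpr[best], 3), "specificity:", round(1 - fpr[best], 3))
```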
Sensitivity to sequencing depth in single-cell cancer genomics.
Alves, João M; Posada, David
2018-04-16
Querying cancer genomes at single-cell resolution is expected to provide a powerful framework to understand in detail the dynamics of cancer evolution. However, given the high costs currently associated with single-cell sequencing, together with the inevitable technical noise arising from single-cell genome amplification, cost-effective strategies that maximize the quality of single-cell data are critically needed. Taking advantage of previously published single-cell whole-genome and whole-exome cancer datasets, we studied the impact of sequencing depth and sampling effort towards single-cell variant detection. Five single-cell whole-genome and whole-exome cancer datasets were independently downscaled to 25, 10, 5, and 1× sequencing depth. For each depth level, ten technical replicates were generated, resulting in a total of 6280 single-cell BAM files. The sensitivity of variant detection, including structural and driver mutations, genotyping, clonal inference, and phylogenetic reconstruction to sequencing depth was evaluated using recent tools specifically designed for single-cell data. Altogether, our results suggest that for relatively large sample sizes (25 or more cells) sequencing single tumor cells at depths > 5× does not drastically improve somatic variant discovery, characterization of clonal genotypes, or estimation of single-cell phylogenies. We suggest that sequencing multiple individual tumor cells at a modest depth represents an effective alternative to explore the mutational landscape and clonal evolutionary patterns of cancer genomes.
Comprehensive genetic testing for female and male infertility using next-generation sequencing.
Patel, Bonny; Parets, Sasha; Akana, Matthew; Kellogg, Gregory; Jansen, Michael; Chang, Chihyu; Cai, Ying; Fox, Rebecca; Niknazar, Mohammad; Shraga, Roman; Hunter, Colby; Pollock, Andrew; Wisotzkey, Robert; Jaremko, Malgorzata; Bisignano, Alex; Puig, Oscar
2018-05-19
To develop a comprehensive genetic test for female and male infertility in support of medical decisions during assisted reproductive technology (ART) protocols. We developed a next-generation sequencing (NGS) gene panel consisting of 87 genes including promoters, 5' and 3' untranslated regions, exons, and selected introns. In addition, sex chromosome aneuploidies and Y chromosome microdeletions were analyzed concomitantly using the same panel. The NGS panel was analytically validated by retrospective analysis of 118 genomic DNA samples with known variants in loci representative of female and male infertility. Our results showed analytical accuracy of > 99%, with > 98% sensitivity for single-nucleotide variants (SNVs) and > 91% sensitivity for insertions/deletions (indels). Clinical sensitivity was assessed with samples containing variants representative of male and female infertility, and it was 100% for SNVs/indels, CFTR IVS8-5T variants, sex chromosome aneuploidies, and copy number variants (CNVs) and > 93% for Y chromosome microdeletions. Cost analysis shows potential savings when comparing this single NGS assay with the standard approach, which includes multiple assays. A single, comprehensive, NGS panel can simplify the ordering process for healthcare providers, reduce turnaround time, and lower the overall cost of testing for genetic assessment of infertility in females and males, while maintaining accuracy.
Yu, Feiqiao Brian; Blainey, Paul C; Schulz, Frederik; Woyke, Tanja; Horowitz, Mark A; Quake, Stephen R
2017-07-05
Metagenomics and single-cell genomics have enabled genome discovery from unknown branches of life. However, extracting novel genomes from complex mixtures of metagenomic data can still be challenging and represents an ill-posed problem which is generally approached with ad hoc methods. Here we present a microfluidic-based mini-metagenomic method which offers a statistically rigorous approach to extract novel microbial genomes while preserving single-cell resolution. We used this approach to analyze two hot spring samples from Yellowstone National Park and extracted 29 new genomes, including three deeply branching lineages. The single-cell resolution enabled accurate quantification of genome function and abundance, down to 1% in relative abundance. Our analyses of genome level SNP distributions also revealed low to moderate environmental selection. The scale, resolution, and statistical power of microfluidic-based mini-metagenomics make it a powerful tool to dissect the genomic structure of microbial communities while effectively preserving the fundamental unit of biology, the single cell.
NASA Astrophysics Data System (ADS)
Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.
2018-02-01
We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.
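The link between the detector's energy resolution and the scattering angle, which underlies the fixed-angle surfaces described above, is the standard Compton relation (background knowledge, not stated in the abstract); for 511 keV annihilation photons it reduces to a one-to-one map between measured energy and scattering angle.

```latex
% Compton relation for the scattered photon energy; in PET, E = m_e c^2 = 511 keV.
E' = \frac{E}{1 + (E/m_e c^2)\,(1-\cos\theta)}
   = \frac{511\ \mathrm{keV}}{2-\cos\theta}
   \quad\text{when } E = m_e c^2 .
```

With excellent energy resolution, each measured energy therefore selects a single scattering angle and hence a single surface of scatter points.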
Morris-Berry, C M; Pollard, M; Gao, S; Thompson, C; Singer, H S
2013-11-15
Single-point-in-time ELISA optical densities for three putative antibodies identified in Sydenham's chorea (against the streptococcal group A carbohydrate antigen N-acetyl-beta-d-glucosamine, tubulin, and the dopamine 2 receptor) showed no differences in children with PANDAS (n=44) or Tourette syndrome (n=40) as compared to controls (n=24). Anti-tubulin and D2 receptor antibodies assessed in serial samples from 12 PANDAS subjects obtained prior to a documented exacerbation, during the exacerbation (with or without a temporally associated streptococcal infection), and following the exacerbation, showed no evidence of antibody levels correlating with a clinical exacerbation. These data do not support an autoimmune hypothesis in either TS or PANDAS. © 2013.
Trefz, Phillip; Rösner, Lisa; Hein, Dietmar; Schubert, Jochen K; Miekisch, Wolfram
2013-04-01
Needle trap devices (NTDs) have shown many advantages such as improved detection limits, reduced sampling time and volume, improved stability, and reproducibility if compared with other techniques used in breath analysis such as solid-phase extraction and solid-phase micro-extraction. Effects of sampling flow (2-30 ml/min) and volume (10-100 ml) were investigated in dry gas standards containing hydrocarbons, aldehydes, and aromatic compounds and in humid breath samples. NTDs contained (single-bed) polymer packing and (triple-bed) combinations of divinylbenzene/Carbopack X/Carboxen 1000. Substances were desorbed from the NTDs by means of thermal expansion and analyzed by gas chromatography-mass spectrometry. An automated CO2-controlled sampling device for direct alveolar sampling at the point-of-care was developed and tested in pilot experiments. Adsorption efficiency for small volatile organic compounds decreased and breakthrough increased when sampling was done with polymer needles from a water-saturated matrix (breath) instead from dry gas. Humidity did not affect analysis with triple-bed NTDs. These NTDs showed only small dependencies on sampling flow and low breakthrough from 1-5 %. The new sampling device was able to control crucial parameters such as sampling flow and volume. With triple-bed NTDs, substance amounts increased linearly with increasing sample volume when alveolar breath was pre-concentrated automatically. When compared with manual sampling, automatic sampling showed comparable or better results. Thorough control of sampling and adequate choice of adsorption material is mandatory for application of needle trap micro-extraction in vivo. The new CO2-controlled sampling device allows direct alveolar sampling at the point-of-care without the need of any additional sampling, storage, or pre-concentration steps.
Flexibility in data interpretation: effects of representational format
Braithwaite, David W.; Goldstone, Robert L.
2013-01-01
Graphs and tables differentially support performance on specific tasks. For tasks requiring reading off single data points, tables are as good as or better than graphs, while for tasks involving relationships among data points, graphs often yield better performance. However, the degree to which graphs and tables support flexibility across a range of tasks is not well-understood. In two experiments, participants detected main and interaction effects in line graphs and tables of bivariate data. Graphs led to more efficient performance, but also lower flexibility, as indicated by a larger discrepancy in performance across tasks. In particular, detection of main effects of variables represented in the graph legend was facilitated relative to detection of main effects of variables represented in the x-axis. Graphs may be a preferable representational format when the desired task or analytical perspective is known in advance, but may also induce greater interpretive bias than tables, necessitating greater care in their use and design. PMID:24427145
Fu, Yijun; Xie, Qixue; Lao, Jihong; Wang, Lu
2016-01-01
Fiber shedding is a critical problem in biomedical textile debridement materials, as it leads to infection and impairs wound healing. In this work, the single fiber pull-out test was proposed as an in vitro evaluation of the fiber shedding property of a textile pile debridement material. Samples with different structural designs (pile densities, numbers of ground yarns and coating times) were prepared and evaluated with this testing method. Results show that the single fiber pull-out test offers an appropriate in vitro evaluation of the fiber shedding property of textile pile debridement materials. The pull-out force for samples without back-coating exhibited a slight increasing trend with increasing pile density and number of ground yarn plies, while the back-coating process significantly raised the single fiber pull-out force. For the fiber shedding mechanism analysis, typical pull-out behavior and failure modes of the single fiber pull-out test were analyzed in detail. Three failure modes were found in this study, i.e., fiber slippage, coating point rupture and fiber breakage. In summary, to obtain samples with a desirable fiber shedding property, fabric structural design, preparation process and raw material selection should be taken into full consideration. PMID:28773428
NASA Astrophysics Data System (ADS)
Ferrari, Fabio; Lavagna, Michèle
2018-06-01
The design of formations of spacecraft in a three-body environment represents one of the most promising challenges for future space missions. Two or more cooperating spacecraft can meet very complex mission goals that are not achievable by a single spacecraft. The dynamical properties of a low-acceleration environment, such as the vicinity of the libration points associated with a three-body system, can be effectively exploited to design spacecraft configurations capable of satisfying tight relative position and velocity requirements. This work studies the evolution of an uncontrolled formation orbiting in the proximity of periodic orbits about collinear libration points under the Circular and Elliptic Restricted Three-Body Problems. A three-spacecraft triangularly-shaped formation is assumed as a representative geometry to be investigated. The study identifies initial configurations that provide good performance in terms of formation keeping, and investigates key parameters that control the relative dynamics between the spacecraft within the three-body system. Formation-keeping performance is quantified by monitoring shape and size changes of the triangular formation. The analysis has been performed with five degrees of freedom defining the geometry, the orientation and the location of the triangle in the synodic rotating frame.
Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules
Panzeri, Francesco
2017-01-01
We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out potential limitations of the current single-laser excitation design, as well as analysis challenges and their solutions. PMID:28419142
Wavefront correction using machine learning methods for single molecule localization microscopy
NASA Astrophysics Data System (ADS)
Tehrani, Kayvan F.; Xu, Jianquan; Kner, Peter
2015-03-01
Optical aberrations are a major challenge in imaging biological samples. In particular, in single molecule localization (SML) microscopy techniques (STORM, PALM, etc.), a high Strehl ratio point spread function (PSF) is necessary to achieve sub-diffraction resolution. Distortions in the PSF shape directly reduce the resolution of SML microscopy. The system aberrations caused by imperfections in the optics and instruments can be compensated using Adaptive Optics (AO) techniques prior to imaging. However, aberrations caused by the biological sample, both static and dynamic, have to be dealt with in real time. A challenge for wavefront correction in SML microscopy is finding a robust optimization approach in the presence of noise, because of the naturally high fluctuations in photon emission from single molecules. Here we demonstrate particle swarm optimization for real-time correction of the wavefront using an intensity-independent metric. We show that the particle swarm algorithm converges faster than the genetic algorithm for bright fluorophores.
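A toy particle-swarm optimisation loop over a handful of aberration (e.g. Zernike) coefficients illustrates the idea; the quadratic "metric" below stands in for the intensity-independent image metric used in the paper, and all parameter values are arbitrary.

```python
# Particle swarm optimisation of a wavefront-quality metric (toy example).
import numpy as np

def metric(coeffs, true_aberration):
    # Higher is better when the applied correction cancels the aberration.
    return -np.sum((coeffs - true_aberration) ** 2)

def pso(true_aberration, n_particles=30, n_iter=100, dims=5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dims))        # candidate corrections
    v = np.zeros_like(x)
    p_best = x.copy()
    p_val = np.array([metric(p, true_aberration) for p in x])
    g_best = p_best[np.argmax(p_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dims))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        vals = np.array([metric(p, true_aberration) for p in x])
        better = vals > p_val
        p_best[better], p_val[better] = x[better], vals[better]
        g_best = p_best[np.argmax(p_val)].copy()
    return g_best

print(np.round(pso(np.array([0.3, -0.2, 0.5, 0.0, 0.1])), 3))
```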
Feature Relevance Assessment of Multispectral Airborne LIDAR Data for Tree Species Classification
NASA Astrophysics Data System (ADS)
Amiri, N.; Heurich, M.; Krzystek, P.; Skidmore, A. K.
2018-04-01
The presented experiment investigates the potential of Multispectral Laser Scanning (MLS) point clouds for single tree species classification. The basic idea is to simulate an MLS sensor by combining two different Lidar sensors providing three different wavelengths. The available data were acquired in the summer of 2016 on the same date in leaf-on condition with an average point density of 37 points/m2. For the purpose of classification, we segmented the combined 3D point clouds consisting of three different spectral channels into 3D clusters using the Normalized Cut segmentation approach. Then, we extracted four groups of features from the 3D point cloud space. Once a variety of features had been extracted, we applied forward stepwise feature selection in order to reduce the number of irrelevant or redundant features. For the classification, we used multinomial logistic regression with L1 regularization. Our study is conducted using 586 ground-measured single trees from 20 sample plots in the Bavarian Forest National Park, Germany. Due to a lack of reference data for some rare species, we focused on four classes of species. The results show an improvement of 4-10 percentage points in tree species classification by using MLS data in comparison to a single-wavelength-based approach. A cross-validated (15-fold) accuracy of 0.75 can be achieved when all feature sets from the three different spectral channels are used. Our results clearly indicate that the use of MLS point clouds has great potential to improve detailed forest species mapping.
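The classification step can be sketched with scikit-learn's multinomial logistic regression under an L1 penalty; the feature matrix and species labels below are random placeholders (so the cross-validated score is at chance level), and the regularisation strength is arbitrary.

```python
# L1-regularised multinomial logistic regression over per-tree features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(586, 24))       # placeholder geometric/intensity features
y = rng.integers(0, 4, size=586)     # four tree-species classes

clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
print(round(cross_val_score(clf, X, y, cv=15).mean(), 3))   # 15-fold CV
```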
NASA Astrophysics Data System (ADS)
Sordillo, Laura A.; Sordillo, Peter P.; Budansky, Yury; Pu, Yang; Alfano, R. R.
2015-03-01
Fluorescence profiles from breast cancer and normal breast tissue samples with excitation wavelengths at 280 nm and 340 nm were obtained using a conventional LS-50 Perkin-Elmer spectrometer. Fluorescence ratios from these tissue samples, based on emission peaks at 340 nm, 440 nm and 460 nm and likely representing tryptophan and NADH, show increased relative content of tryptophan in malignant samples. Double ratio (DR) techniques were used to measure the severity of disease. The single excitation double ratio (Single-DR) method utilizes the emission intensity peaks from the spectrum acquired using a single excitation of 280 nm, while the dual excitation double ratio (dual-DR) method utilizes the emission intensity peaks from the spectra acquired using excitations of 280 nm and 340 nm. Single-DR and dual-DR from 13 patients with breast carcinoma were compared in terms of their efficiency in distinguishing high from low/intermediate tumors. Similar results were found with both methods. Results suggest that dual excitation wavelengths may be as effective as a single excitation wavelength in calculating the relative content of biomolecules in breast cancer tissue, as well as for the assessment of the malignant potential of these tumors.
DEVELOPMENT OF A SAMPLING PROCEDURE FOR LARGE NITROGEN- AND SULFUR-BEARING AEROSOLS
A single-stage impactor was modified to utilize a removable TFE impaction surface mounted on the end of an annular denuder. When used with a polycarbonate filter coated with silicone oil, its cut point was 2.5 um and bounce was <1% for 8-um particles. Significant bounce occurred wi...
USDA-ARS?s Scientific Manuscript database
Methods to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminant. While informative, many of these methods suffer from poor recovery rates and only provide a snapshot of the microbial load at the time of collectio...
ERIC Educational Resources Information Center
Hipp, John R.
2009-01-01
Using a sample of households nested in census tracts in 24 metropolitan areas over four time points, this study provides a robust test of the determinants of neighborhood satisfaction, taking into account the census tract context. Consistent with social disorganization theory, the presence of racial/ethnic heterogeneity and single-parent…
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
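A drastically simplified, single-threshold sketch of the spin idea follows: observed grid nodes are fixed at s = ±1 (value above or below the threshold) and missing nodes are relaxed with a zero-temperature Ising rule (each takes the sign of its nearest-neighbour sum). The full method works with several discretization levels and minimizes a cost on correlation energies, neither of which is reproduced here; the synthetic field and sampling fraction are arbitrary.

```python
# Binary (Ising-like) conditional fill-in of a partially sampled grid.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
field = gaussian_filter(rng.normal(size=(50, 50)), sigma=3)   # correlated field
spins = np.where(field > 0.0, 1, -1)                          # one threshold
observed = rng.random((50, 50)) < 0.3                         # 30% of nodes sampled
sim = np.where(observed, spins, 0)                            # 0 marks an unknown spin

for _ in range(50):                                           # relaxation sweeps
    p = np.pad(sim, 1)
    neigh = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    update = np.where(neigh != 0, np.sign(neigh), sim)
    sim = np.where(observed, spins, update)                   # honour the data

print("agreement with the discretised field:", round((sim == spins).mean(), 3))
```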
Quantitative evaluation of software packages for single-molecule localization microscopy.
Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael
2015-08-01
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
Quantum communications system with integrated photonic devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordholt, Jane E.; Peterson, Charles Glen; Newell, Raymond Thorson
Security is increased in quantum communication (QC) systems lacking a true single-photon laser source by encoding a transmitted optical signal with two or more decoy-states. A variable attenuator or amplitude modulator randomly imposes average photon values onto the optical signal based on data input and the predetermined decoy-states. By measuring and comparing photon distributions for a received QC signal, a single-photon transmittance is estimated. Fiber birefringence is compensated by applying polarization modulation. A transmitter can be configured to transmit in conjugate polarization bases whose states of polarization (SOPs) can be represented as equidistant points on a great circle on the Poincare sphere so that the received SOPs are mapped to equidistant points on a great circle and routed to corresponding detectors. Transmitters are implemented in quantum communication cards and can be assembled from micro-optical components, or transmitter components can be fabricated as part of a monolithic or hybrid chip-scale circuit.
A new approach in the derivation of relativistic variation of mass with speed
NASA Astrophysics Data System (ADS)
Dikshit, Biswaranjan
2015-05-01
The expression for relativistic variation of mass with speed has been derived in the literature in the following ways: by considering the principles of electrodynamics; by considering elastic collision between two identical particles in which momentum and energy are conserved; or by more advanced methods such as the Lagrangian approach. However, in this paper, the same expression is derived simply by applying the law of conservation of momentum to the motion of a single particle that is subjected to a force (which may be non-electromagnetic) at some point in its trajectory. The advantage of this method is that, in addition to being simple, we can observe how the mass is increased from rest mass to relativistic mass when the speed is changed from 0 to a value of v, as only a single particle is involved in the analysis. This is in contrast to the two particles considered in most text books, in which one represents rest mass and the other represents relativistic mass.
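For reference, the end result that the derivation arrives at is the standard special-relativistic expression (the paper's novelty lies in obtaining it from momentum conservation for a single, impulsively forced particle rather than from a two-particle collision):

```latex
m(v) \;=\; \frac{m_0}{\sqrt{1 - v^2/c^2}},
\qquad
p \;=\; m(v)\,v \;=\; \frac{m_0\,v}{\sqrt{1 - v^2/c^2}} .
```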
3-Fluorosalicylaldoxime at 6.5 GPa
Wood, Peter A.; Forgan, Ross S.; Parsons, Simon; Pidcock, Elna; Tasker, Peter A.
2009-01-01
3-Fluorosalicylaldoxime, C7H6FNO2, unlike many salicylaldoxime derivatives, forms a crystal structure containing hydrogen-bonded chains rather than centrosymmetric hydrogen-bonded ring motifs. Each chain interacts with two chains above and two chains below via π–π stacking contacts [shortest centroid–centroid distance = 3.295 (1) Å]. This structure at 6.5 GPa represents the final point in a single-crystal compression study. PMID:21583672
Single point estimation of phenytoin dosing: a reappraisal.
Koup, J R; Gibaldi, M; Godolphin, W
1981-11-01
A previously proposed method for estimating the phenytoin dosing requirement from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24 hour interval following the IV loading dose could be used to more reliably predict the phenytoin dose requirement. Because of the nonlinear relationship between the phenytoin dose administration rate (RO) and the mean steady state serum concentration (CSS), small errors in prediction of the required RO result in much larger errors in CSS.
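The error amplification mentioned above follows from phenytoin's saturable elimination, conventionally described by Michaelis-Menten kinetics (the abstract states only that the relationship is nonlinear); the parameter values below are typical literature figures used purely for illustration.

```python
# Steady state under Michaelis-Menten elimination:
# RO = Vmax*Css/(Km + Css)  =>  Css = Km*RO/(Vmax - RO).
def css(ro_mg_day, vmax_mg_day=500.0, km_mg_l=4.0):
    return km_mg_l * ro_mg_day / (vmax_mg_day - ro_mg_day)

for ro in (300.0, 330.0):            # a 10% increase in the dosing rate...
    print(ro, round(css(ro), 1))     # ...raises Css from 6.0 to ~7.8 mg/l (~30%)
```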
NASA Technical Reports Server (NTRS)
Haggerty, James J.
1986-01-01
The major programs that generate new technology and therefore expand the bank of knowledge available for future transfer are outlined. The focal point of this volume contains a representative sampling of spinoff products and processes that resulted from technology utilization, or secondary application. The various mechanisms NASA employs to stimulate technology utilization are described and in an appendix, are listed contact sources for further information.
Changes in Pell Grant Participation and Median Income of Recipients. Data Point. NCES 2016-407
ERIC Educational Resources Information Center
Ifill, Nicole; Velez, Erin Dunlop
2016-01-01
This report is based on data from four iterations of the National Postsecondary Student Aid Study (NPSAS), a large, nationally representative sample survey of students that focuses on how they finance their education. NPSAS includes data on federal Pell Grant awards, which are need-based grants awarded to low-income students, primarily…
ERIC Educational Resources Information Center
Criado, Raquel; Sanchez, Aquilino
2009-01-01
The goal of this paper is to verify to what extent ELT textbooks used in Spanish educational settings comply with the prescribed official regulations, which fully advocate the Communicative Language Teaching Method (CLT). For that purpose, seven representative coursebooks of different educational levels and modalities in Spain--secondary, upper…
ERIC Educational Resources Information Center
Chow, Jason C.; Wehby, Joseph H.
2018-01-01
A growing body of evidence points to the common co-occurrence of language and behavioral difficulties in children. Primary studies often focus on this relation in children with identified deficits. However, it is unknown whether this relation holds across other children at risk or representative samples of children or over time. The purpose of…
ERIC Educational Resources Information Center
Adams, Olin L., III; Robichaux, Rebecca R.; Guarino, A. J.
2010-01-01
This research compares the status of managerial accounting practices in public four-year colleges and universities and in private four-year colleges and universities. The investigators surveyed a national sample of chief financial officers (CFOs) at two points in time, 1998-99 and 2003-04. In 1998-99 CFOs representing private institutions reported…
[Bromatological characteristics of pecan nuts (Carya illinoensis Koch) cultivated in Brazil].
de Carvalho, V D
1975-01-01
The author studied pecan nuts cultivated in Brazil: two samples represented North American varieties and three others Brazilian hybrids. The comparison of physical classification and chemical composition, especially amino acid contents, pointed to nonsignificant differences, all being useful for commercial purposes. The author stresses the importance of pecan nut cultivation in Brazil.
Undergraduates Who Do Not Apply for Financial Aid. Data Point. NCES 2016-406
ERIC Educational Resources Information Center
Ifill, Nicole
2016-01-01
This report is based on data from the 2011-12 National Postsecondary Student Aid Study (NPSAS:12), a large, nationally representative sample survey of students that focuses on how they finance their education. NPSAS includes data on the application for and receipt of financial aid, including grants, loans, assistantships, scholarships,…
Predicting the Academic Success of Community College Students in Specific Programs of Study.
ERIC Educational Resources Information Center
Yess, James P.
The intent of this study was to determine the influence of selected independent variables on the graduating grade point average (GPA) of community college students in various programs of study. A sample of 483 students from one community college represented seven programs of study: Business Administration-General, Business Administration-Transfer,…
Reports of Bullying and Other Unfavorable Conditions at School. Data Point. NCES 2016-169
ERIC Educational Resources Information Center
Cidade, Melissa; Lessne, Deborah
2016-01-01
Data from the School Crime Supplement (SCS) to the National Crime Victimization Survey (2013), a nationally representative sample survey of students ages 12 through 18, were used to evaluate co-occurring reports of bullying and other unfavorable conditions at school. Analysis is restricted to those respondents who were enrolled in grades 6 through…
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
NASA Astrophysics Data System (ADS)
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations was done using ESRI software (ArcGIS) extended by Hawth's Tools and, later, its replacement, the Geospatial Modelling Environment (GME). 88% of all desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets, as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached explained variance levels of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of considering readily available continuous information on soil forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
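For readers who want to experiment with this kind of design, the following Python sketch illustrates only the first-stage stratified polygon selection under stated assumptions: all attribute values are synthetic, the class counts are illustrative (the study arrived at 30 classes for its site), and polygon geometry is reduced to an area value. It is a schematic reconstruction, not the authors' workflow.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-polygon attributes; in the study these derive from a DEM
# (topographic wetness index, potential solar radiation) and a geological map.
n_polygons = 500
twi = rng.gamma(shape=2.0, scale=3.0, size=n_polygons)
radiation = rng.normal(loc=5000.0, scale=800.0, size=n_polygons)
geology = rng.integers(0, 4, size=n_polygons)          # four main geological units
area_ha = rng.uniform(0.2, 20.0, size=n_polygons)      # polygon area in hectares

def quantile_class(values, n_classes):
    # Assign each value to a quantile-based class 0 .. n_classes-1.
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_classes + 1)[1:-1])
    return np.digitize(values, edges)

# Strata: quantile classes of the two land-surface parameters crossed with geology.
stratum = (quantile_class(twi, 3) * 3 + quantile_class(radiation, 3)) * 4 + geology

# First stage: randomly pick up to six polygons per stratum, excluding
# polygons smaller than 1 ha (mirroring the exclusion rule in the text).
selected = []
for s in np.unique(stratum):
    candidates = np.where((stratum == s) & (area_ha >= 1.0))[0]
    if candidates.size:
        k = min(6, candidates.size)
        selected.extend(rng.choice(candidates, size=k, replace=False).tolist())

print(f"{len(selected)} polygons selected from {np.unique(stratum).size} strata")
# Second stage (not shown): draw one random point inside each selected polygon,
# e.g. by rejection sampling within the polygon's bounding box.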
Development of a single-axis ultrasonic levitator and the study of the radial particle oscillations
NASA Astrophysics Data System (ADS)
Baer, Sebastian; Andrade, Marco A. B.; Esen, Cemal; Adamowski, Julio Cezar; Ostendorf, Andreas
2012-05-01
This work describes the development and analysis of a new single-axis acoustic levitator, which consists of a 38 kHz Langevin-type piezoelectric transducer with a concave radiating surface and a concave reflector. The new levitator design allows the electric power necessary to levitate particles to be reduced significantly and stabilizes the levitated sample in both radial and axial directions. In this investigation the lateral oscillations of a levitated particle were measured with a single-point Laser Doppler Vibrometer (LDV) and an image evaluation technique. The lateral oscillations were measured for different values of particle diameter, particle density and applied electrical power.
Visual Search Across the Life Span
ERIC Educational Resources Information Center
Hommel, Bernhard; Li, Karen Z. H.; Li, Shu-Chen
2004-01-01
Gains and losses in visual search were studied across the life span in a representative sample of 298 individuals from 6 to 89 years of age. Participants searched for single-feature and conjunction targets of high or low eccentricity. Search was substantially slowed early and late in life, age gradients were more pronounced in conjunction than in…
SU-E-QI-15: Single Point Dosimetry by Means of Cerenkov Radiation Energy Transfer (CRET)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volotskova, O; Jenkins, C; Xing, L
2014-06-15
Purpose: Cerenkov light is generated when a charged particle with energy greater than 250 keV moves faster than the speed of light in a given medium. Both x-ray photons and electrons produce optical Cerenkov photons during static megavoltage linear accelerator (LINAC) operation. Recently, Cerenkov radiation has gained considerable interest as a possible new imaging modality. Optical signals generated by Cerenkov radiation may act as a surrogate for the absorbed superficial radiation dose. We demonstrated a novel single-point dosimetry method for megavoltage photon and electron therapy utilizing down-conversion of Cerenkov photons. Methods: A custom-built signal characterization system was used: a sample holder (probe) with adjacent light-tight compartments was connected via fiber-optic cables to a photon-counting photomultiplier tube (PMT). One compartment contains a medium only while the other contains medium and red-shifting nanoparticles (Q-dots, nanoclusters). By taking the difference between the two signals (Cerenkov photons and CRET photons) we obtain a measure of the down-converted light, which we expect to be proportional to dose as measured with an adjacent ion chamber. Experimental results are compared to Monte Carlo simulations performed using the GEANT4 code. Results: The correlation between the CR signal, CRET readings and the dose produced by the LINAC at a single point was investigated. The experimental results were compared with simulations. The dose linearity, signal-to-noise ratio and dose-rate dependence were tested with the custom-built CRET-based probe. Conclusion: Performance characteristics of the proposed single-point CRET-based probe were evaluated. The direct use of the induced Cerenkov emission and CRET in an irradiated single-point volume as an indirect surrogate for the imparted dose was investigated. We conclude that CRET is a promising optical dosimetry method that offers advantages over those already proposed.
NASA Astrophysics Data System (ADS)
Ribeiro, P.; Silva, P. F.; Moita, P.; Kratinová, Z.; Marques, F. O.; Henry, B.
2013-10-01
This study revisits the palaeomagnetism of the Sines massif (˜76 Ma) in the southwestern Iberian Margin (Portugal). The palaeomagnetic analysis was complemented by a comprehensive study of the magnetic mineralogy by means of rock magnetic measurements and petrographic observations. The overall dispersion of palaeomagnetic directions (declination ranging between ˜N0° and ˜N50°) and their migration observed during stepwise demagnetizations have revealed the superposition of remanence components. We interpret this complex palaeomagnetic behaviour as related to the regional hydrothermalism associated with the last stages of Late Cretaceous magmatic activity. This environment favoured mineralogical alteration and a partial chemical remagnetization, giving in most samples a composite magnetization, which has been erroneously interpreted as the primary one in a previous study, then leading to a questionable model for Cretaceous Iberia rotation. Nonetheless, for some samples a single component has been isolated. Interesting rock magnetic properties and microscopic observations point to a well-preserved magnetic mineralogy for these samples, with magnetite clearly of primary origin. The associated ChRM mean direction (D/I = 3.9°/46.5°, α95 = 1.7°, N = 31 samples) then represents the true primary magnetization of the Sines massif. This new palaeomagnetic direction and the corresponding palaeomagnetic pole (long = 332.0°, lat = -79.5°, A95 = 1.7°) agrees with those from the other palaeomagnetic works for the same period and region (e.g. the Sintra and Monchique massifs), yielding a lack of significant rotation of Iberia relative to stable Europe since the uppermost Late Cretaceous (Campanian-Maastrichtian).
Exposures of tungsten nanostructures to divertor plasmas in DIII-D
Rudakov, D. L.; Wong, C. P. C.; Doerner, R. P.; ...
2016-01-22
Tungsten nanostructures (W-fuzz) prepared in the PISCES-A linear device have been found to survive direct exposure to divertor plasmas in DIII-D. W-fuzz was exposed in the lower divertor of DIII-D using the divertor material evaluation system. Two samples were exposed in lower single null (LSN) deuterium H-mode plasmas. The first sample was exposed in three discharges terminated by vertical displacement event disruptions, and the second in two discharges near the lowered X-point. More recently, three samples were exposed near the lower outer strike point in predominantly helium H-mode LSN plasmas. In all cases, the W-fuzz survived plasma exposure with little obvious damage except in the areas where unipolar arcing occurred. In conclusion, arcing is effective in W-fuzz removal, and it appears that surfaces covered with W-fuzz can be more prone to arcing than smooth W surfaces.
CMOS imager for pointing and tracking applications
NASA Technical Reports Server (NTRS)
Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)
2006-01-01
Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.
Quantification of HIV-1 DNA using real-time recombinase polymerase amplification.
Crannell, Zachary Austin; Rohrman, Brittany; Richards-Kortum, Rebecca
2014-06-17
Although recombinase polymerase amplification (RPA) has many advantages for the detection of pathogenic nucleic acids in point-of-care applications, RPA has not yet been implemented to quantify sample concentration using a standard curve. Here, we describe a real-time RPA assay with an internal positive control and an algorithm that analyzes real-time fluorescence data to quantify HIV-1 DNA. We show that DNA concentration and the onset of detectable amplification are correlated by an exponential standard curve. In a set of experiments in which the standard curve and algorithm were used to analyze and quantify additional DNA samples, the algorithm predicted an average concentration within 1 order of magnitude of the correct concentration for all HIV-1 DNA concentrations tested. These results suggest that quantitative RPA (qRPA) may serve as a powerful tool for quantifying nucleic acids and may be adapted for use in single-sample point-of-care diagnostic systems.
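The quantification step lends itself to a compact illustration. The sketch below fits an exponential standard curve relating the onset of detectable amplification to input concentration and inverts it for an unknown sample. The calibration numbers are invented for illustration, and the log-linear parameterisation is an assumption rather than a detail taken from the paper.

import numpy as np

# Illustrative calibration data: onset of detectable amplification (minutes)
# measured for known HIV-1 DNA standards (copies per reaction).
onset_min = np.array([4.1, 5.3, 6.8, 8.2, 9.9])
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])

# Exponential standard curve copies = A * exp(-k * onset), fitted in log space.
slope, intercept = np.polyfit(onset_min, np.log(copies), 1)
A, k = np.exp(intercept), -slope

def quantify(onset):
    # Predicted copies per reaction for a measured onset time.
    return A * np.exp(-k * onset)

# An unknown sample whose amplification became detectable at 7.5 minutes:
print(f"estimated concentration: {quantify(7.5):.2e} copies/reaction")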
NASA Astrophysics Data System (ADS)
Miskowiec, A.; Bai, M.; Lever, M.; Taub, H.; Hansen, F. Y.; Jenkins, T.; Tyagi, M.; Neumann, D. A.; Diallo, S. O.; Mamontov, E.; Herwig, K. W.
2011-03-01
We have extended our investigation of the quasielastic neutron scattering from single-supported bilayer lipid membranes to a sample of lower hydration using the backscattering spectrometer BASIS at the SNS of ORNL. To focus on the diffusive motion of the water, tail-deuterated DMPC membranes were deposited onto SiO2-coated Si(100) substrates and characterized by AFM. Compared to a sample of higher hydration, the drier sample does not have a step-like freezing transition at ~267 K and shows less intensity at higher temperatures in the broad Lorentzian component representing bulk-like water. However, the broad component of the "wet" and "dry" samples behaves similarly at lower temperatures. The drier sample also shows evidence of a narrow Lorentzian component that has a different temperature dependence than that attributed to conformational changes of the alkyl tails of the lipid molecules in the wet sample. We tentatively identify this slower diffusive motion (time scale ~1 ns) with water more tightly bound to the membrane. Supported by NSF Grant No. DMR-0705974.
Naqvi, Shahid A; D'Souza, Warren D
2005-04-01
Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality and speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements agreed with the calculated infinite-fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulations with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
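To make the "dose convolution" baseline concrete, the sketch below blurs a synthetic one-dimensional static dose profile with the probability density of sinusoidal motion. It illustrates only the shift-invariant convolution idea the paper improves upon (and the resulting penumbral spreading), not the authors' Monte Carlo algorithm; all numbers are invented.

import numpy as np

# Synthetic 1-D static dose profile: a flat 60 mm field with a smooth penumbra.
x = np.linspace(-60.0, 60.0, 1201)                      # position (mm), 0.1 mm grid
static_dose = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 2.0))

# Probability density of sinusoidal motion with amplitude a (arcsine law),
# sampled away from its endpoints to avoid the integrable singularity.
a = 20.0                                                # motion amplitude (mm)
s = np.arange(-a + 0.05, a, 0.1)
pdf = 1.0 / (np.pi * np.sqrt(a**2 - s**2))
pdf /= pdf.sum()                                        # normalise on the discrete grid

# Shift-invariant ("dose convolution") estimate of the motion-averaged dose.
blurred_dose = np.convolve(static_dose, pdf, mode="same")

i_edge = np.argmin(np.abs(x - 30.0))                    # nominal field edge
print(f"dose at x = 30 mm: static {static_dose[i_edge]:.2f}, "
      f"motion-averaged {blurred_dose[i_edge]:.2f}")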
An automated single ion hit at JAERI heavy ion microbeam to observe individual radiation damage
NASA Astrophysics Data System (ADS)
Kamiya, Tomihiro; Sakai, Takuro; Naitoh, Yutaka; Hamano, Tsuyoshi; Hirao, Toshio
1999-10-01
Microbeam scanning and a single ion hit technique have been combined to establish an automated beam positioning and single ion hit system at the JAERI Takasaki heavy ion microbeam system. Single ion irradiation on preset points of a sample in various patterns can be performed automatically in a short period. The reliability of the system was demonstrated using CR-39 nuclear track detectors. Single ion hit patterns were achieved with a positioning accuracy of 2 μm or less. In measurements of single-event transient current using this system, the reduction of the pulse height caused by the accumulation of radiation damage was observed by injecting single ions into the same local areas. This technique shows the possibility of obtaining quantitative information about the lateral displacement of an individual radiation effect in silicon PIN photodiodes. This paper gives details of the irradiation system and presents results from several experiments.
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric that allows us to suitably compare functionals defined on different point clouds.
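A minimal numerical illustration of the objects involved (not of the paper's proofs) is sketched below: build the geometric graph on a random sample, weight edges with an ε-neighbourhood kernel, and evaluate the graph total variation of an indicator function, whose continuum counterpart is the perimeter of the corresponding set. The 1/(n²ε) scaling is one common normalisation; the flat kernel and all parameters are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# Random sample of n points from the unit square (the point cloud).
n = 400
pts = rng.random((n, 2))

# Geometric graph: connect points closer than eps, with a flat kernel weight.
eps = 0.15                                              # connectivity radius (illustrative)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
w = ((dist < eps) & (dist > 0.0)).astype(float)

# Graph total variation of a labelling u, normalised by n^2 * eps.
u = (pts[:, 0] > 0.5).astype(float)                     # indicator of the half {x > 0.5}
gtv = np.sum(w * np.abs(u[:, None] - u[None, :])) / (n**2 * eps)

# The continuum perimeter of {x > 0.5} inside the unit square is 1, so gtv
# should stabilise near a kernel-dependent constant as n grows and eps shrinks.
print(f"normalised graph total variation: {gtv:.3f}")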
Neighborhood sampling: how many streets must an auditor walk?
McMillan, Tracy E; Cubbin, Catherine; Parmenter, Barbara; Medina, Ashley V; Lee, Rebecca E
2010-03-12
This study tested the representativeness of four street segment sampling protocols using the Pedestrian Environment Data Scan (PEDS) in eleven neighborhoods surrounding public housing developments in Houston, TX. The following four street segment sampling protocols were used: (1) all segments, both residential and arterial, contained within the 400 meter radius buffer from the center point of the housing development (the core) were compared with all segments contained between the 400 meter radius buffer and the 800 meter radius buffer (the ring); all residential segments in the core were compared with (2) 75%, (3) 50%, and (4) 25% samples of randomly selected residential street segments in the core. Analyses were conducted on five key variables: sidewalk presence; ratings of attractiveness and safety for walking; connectivity; and number of traffic lanes. Some differences were found when comparing all street segments, both residential and arterial, in the core to the ring. Findings suggested that sampling 25% of residential street segments within the 400 m radius of a residence sufficiently represents the pedestrian built environment. Conclusions support more cost-effective environmental data collection for physical activity research. PMID:20226052
ATLASGAL -- A molecular view of an unbiased sample of massive star forming clumps
NASA Astrophysics Data System (ADS)
Figura, Charles; Urquhart, James; Wyrowski, Friedrich; Giannetti, Andrea; Kim, Wonju
2018-01-01
Massive stars play an important role in many areas of astrophysics, from regulating star formation to driving the evolution of their host galaxy. Study of these stars is made difficult by their short evolutionary timescales, small populations and greater distances, and is further complicated because they reach the main sequence while still shrouded in their natal clumps. As a result, many aspects of their formation are still poorly understood. We have assembled a large and statistically representative collection of massive star-forming environments that span all evolutionary stages of development by correlating mid-infrared and dust continuum surveys. We have conducted follow-up single-pointing observations toward a sample of approximately 600 of these clumps with the Mopra telescope using an 8 GHz bandwidth that spans some 27 molecular and mm-radio recombination line transitions. These lines trace a wide range of interstellar conditions with varying thermal, chemical, and kinematic properties. Many of these lines exhibit hyperfine structure, allowing more detailed measurements of the clump environment (e.g. rotation temperatures and column densities). From these twenty-seven lines, we have identified thirteen line intensity ratios that strongly trace the evolutionary state of these clumps. We have investigated individual molecular and mm-radio recombination lines, contrasting these with radio and sub-mm continuum observations. We present a summary of the results of the statistical analysis of the sample, and compare them with previous similar studies to test their utility as chemical clocks of the evolutionary processes.
Severi, Mirko; Becagli, Silvia; Traversi, Rita; Udisti, Roberto
2015-11-17
Recently, increasing interest in understanding global climatic change and the natural processes related to climate has driven the development and improvement of new analytical methods for the analysis of environmental samples. The determination of trace chemical species is a useful tool in paleoclimatology, and the techniques for the analysis of ice cores have evolved during the past few years from laborious measurements on discrete samples to continuous techniques allowing higher temporal resolution, higher sensitivity and, above all, higher throughput. Two fast ion chromatographic (FIC) methods are presented. The first method was able to measure Cl(-), NO3(-) and SO4(2-) in a melter-based continuous flow system, separating the three analytes in just 1 min. The second method (called Ultra-FIC) was able to perform a single chromatographic analysis in just 30 s, and the resulting sampling resolution was 1.0 cm with a typical melting rate of 4.0 cm min(-1). Both methods combine the accuracy, precision, and low detection limits of ion chromatography with the enhanced speed and high depth resolution of continuous melting systems. Both methods have been tested and validated with the analysis of several hundred meters of different ice cores. In particular, the Ultra-FIC method was used to reconstruct the high-resolution SO4(2-) profile of the last 10,000 years for the EDML ice core, allowing the counting of the annual layers, which represents a key point in dating these kinds of natural archives.
High-pressure high-temperature phase diagram of gadolinium studied using a boron-doped heater anvil
Montgomery, J. M.; Samudrala, G. K.; Velisavljevic, N.; ...
2016-04-07
A boron-doped designer heater anvil is used in conjunction with powder x-ray diffraction to collect structural information on a sample of quasi-hydrostatically loaded gadolinium metal up to pressures above 8 GPa and 600 K. The heater anvil consists of a natural diamond anvil that has been surface modified with a homoepitaxially-grown chemical-vapor-deposited layer of conducting boron-doped diamond, and is used as a DC heating element. Internally insulating both diamond anvils with sapphire support seats allows for heating and cooling of the high pressure area on the order of a few tens of seconds. This device is then used to scan the phase diagram of the sample by oscillating the temperature while continuously increasing the externally applied pressure and collecting in situ time-resolved powder diffraction images. In the pressure-temperature range covered in the experiment the gadolinium sample is observed in its hcp, αSm, and dhcp phases. Under this temperature cycling, the hcp→αSm transition proceeds in discontinuous steps at points along the expected phase boundary. Additionally, the unit cell volumes of each phase deviate from the expected thermal expansion behavior just before each transition is observed from the diffraction data. From these measurements (representing only one hour of synchrotron x-ray collection time), a single-experiment equation of state and phase diagram of each phase of gadolinium is presented for the range of 0 - 10 GPa and 300 - 650 K.
Horowitz, A.J.; Smith, J.J.; Elrick, K.A.
2001-01-01
A prototype 14-L Teflon® churn splitter was evaluated for whole-water sample-splitting capabilities over a range of sediment concentrations and grain sizes as well as for potential chemical contamination from both organic and inorganic constituents. These evaluations represent a 'best-case' scenario because they were performed in the controlled environment of a laboratory, and used monomineralic silica sand slurries of known concentration made up in deionized water. Further, all splitting was performed by a single operator, and all the requisite concentration analyses were performed by a single laboratory. The prototype Teflon® churn splitter did not appear to supply significant concentrations of either organic or inorganic contaminants at current U.S. Geological Survey (USGS) National Water Quality Laboratory detection and reporting limits when test samples were prepared using current USGS protocols. As with the polyethylene equivalent of the prototype Teflon® churn, the maximum usable whole-water suspended sediment concentration for the prototype churn appears to lie between 1,000 and 10,000 milligrams per liter (mg/L). Further, the maximum grain-size limit appears to lie between 125 and 250 micrometers (µm). Tests to determine the efficacy of the valve baffle indicate that it must be retained to facilitate representative whole-water subsampling.
Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331
Völker, Sebastian; Kistemann, Thomas
2015-01-01
Legionella spp. represent a significant health risk for humans. To ensure hygienically safe drinking water, technical guidelines recommend a central potable water hot (PWH) supply temperature of at least 60°C at the calorifier. In a clinic building we monitored whether slightly lowered temperatures in the PWH system led to a systemic change in the growth of these pathogens. In four separate phases we tested different scenarios concerning PWH supply temperatures and disinfection with chlorine dioxide (ClO2). In each phase, we took 5 sets of samples at 17 representative sampling points in the building's drinking water plumbing system. In total we collected 476 samples from the PWH system. All samples were tested (culture-based) for Legionella spp. and serogroups. Additionally, quantitative parameters that could possibly be associated with the presence of Legionella spp. were collected at each sampling point (Pseudomonas aeruginosa, heterotrophic plate count at 20°C and 36°C, temperatures, time until constant temperatures were reached, and chlorine dioxide concentration). The presence of Legionella spp. showed no significant reactions after reducing the PWH supply temperature from 63°C to 60°C and 57°C, as long as disinfection with ClO2 was maintained. After omitting the disinfectant, the PWH system showed statistically significant growth rates at 57°C. PWH temperatures which are permanently lowered below the recommended values should be carefully accompanied by frequent testing, a thorough evaluation of the building's drinking water plumbing system, and hygiene expertise.
Automatic photometric titrations of calcium and magnesium in carbonate rocks
Shapiro, L.; Brannock, W.W.
1955-01-01
Rapid nonsubjective methods have been developed for the determination of calcium and magnesium in carbonate rocks. From a single solution of the sample, calcium is titrated directly, and magnesium is titrated after a rapid removal of R2O3 and precipitation of calcium as the tungstate. A concentrated and a dilute solution of disodium ethylenediamine tetraacetate are used as titrants. The concentrated solution is added almost to the end point, and the dilute solution is then added in an automatic titrator to determine the end point precisely.
Khanal, Suraj P.; Mahfuz, Hassan; Rondinone, Adam Justin; ...
2015-11-12
The potential of improving the fracture toughness of synthetic hydroxyapatite (HAp) by incorporating carboxyl-functionalized single-walled carbon nanotubes (CfSWCNTs) and polymerized ε-caprolactam (nylon) was investigated. A series of HAp samples with CfSWCNT concentrations varying from 0 to 1.5 wt.%, with and without nylon addition, was prepared. X-ray diffraction (XRD), Scanning Electron Microscopy (SEM), and Transmission Electron Microscopy (TEM) were used to characterize the samples. The three-point bending test was applied to measure the fracture toughness of the composites. A reproducible value of 3.6 ± 0.3 MPa·√m was found for samples containing 1 wt.% CfSWCNTs and nylon. This value is in the range of cortical bone fracture toughness. Lastly, an increase of the CfSWCNT content results in a decrease of the fracture toughness and the formation of secondary phases.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
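The square-root rule is easy to check numerically in a discrete toy version of the problem, sketched below: the target occupies one of m sites drawn from a discretised Gaussian p, the searcher draws sites i.i.d. from q, and the expected number of draws until a hit is the sum over sites of p/q. All parameters are illustrative and the setup is a simplification of the continuous case described in the abstract.

import numpy as np

# Discretised Gaussian target distribution on m sites.
m = 201
x = np.arange(m)
p = np.exp(-0.5 * ((x - 100) / 15.0) ** 2)
p /= p.sum()

def expected_trials(q):
    # Target at site t needs on average 1/q[t] i.i.d. draws from q; average over t ~ p.
    return float(np.sum(p / q))

q_same = p                                   # search with the target distribution itself
q_sqrt = np.sqrt(p) / np.sqrt(p).sum()       # the square-root rule from the abstract

print("expected trials, q = p       :", round(expected_trials(q_same), 1))
print("expected trials, q ~ sqrt(p) :", round(expected_trials(q_sqrt), 1))

By the Cauchy-Schwarz inequality, sampling from q = p always gives exactly m expected trials, while the square-root choice gives (sum of sqrt(p))², which is smaller whenever p is non-uniform.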
Carbon nanotubes buckypaper radiation studies for medical physics applications.
Alanazi, Abdulaziz; Alkhorayef, Mohammed; Alzimami, Khalid; Jurewicz, Izabela; Abuhadi, Nouf; Dalton, Alan; Bradley, D A
2016-11-01
Graphite ion chambers and semiconductor diode detectors have been used to make measurements in phantoms, but these active devices represent a clear disadvantage when considered for in vivo dosimetry. In such circumstances, dosimeters with an atomic number similar to human tissue are needed. Carbon nanotubes have properties that potentially meet the demand, requiring low voltage in active devices and having an atomic number similar to adipose tissue. In this study, single-wall carbon nanotube (SWCNT) buckypaper has been used to measure the beta particle dose deposited from a strontium-90 source, the medium displaying thermoluminescence at potentially useful sensitivity. As an example, the samples show a clear response for a dose of 2 Gy. This finding suggests that carbon nanotubes can be used as a passive dosimeter, specifically for the high levels of radiation exposure used in radiation therapy. Furthermore, the finding points towards further potential applications such as space radiation measurements, not least because the medium satisfies a demand for light but strong materials of minimal capacitance. Copyright © 2016 Elsevier Ltd. All rights reserved.
Holte, A
1992-01-01
As a continuation of a cross-sectional study in 1981 involving a representative sample of 1886 women between 45 and 55 years of age, 200 pre-menopausal subjects were selected randomly to take part in a follow-up study. Eighty-seven single measures covering 26 areas of health complaints which have been associated with the menopause in medical textbooks were investigated. A tentative method for relating health complaints at several time points to menopausal status is proposed. A significant number of women reported an increase in vasomotor complaints, vaginal dryness, heart palpitations and social dysfunction following the menopause, although many reported no change or even a reduction in these complaints. On the other hand, a decrease in headache and breast tenderness was noted. No significant differences were observed between the numbers of women reporting an increase or a decrease respectively on any of the other 69 measures (20 complaints), which included anxiety, depression and irritability. Further analyses indicated that the increase in social dysfunction was caused by hot flushes and sweating. This paper raises a number of issues regarding the methodology of longitudinal studies.
Single-Nucleotide-Polymorphism-Based Association Mapping of Dog Stereotypes
Jones, Paul; Chase, Kevin; Martin, Alan; Davern, Pluis; Ostrander, Elaine A.; Lark, Karl G.
2008-01-01
Phenotypic stereotypes are traits, often polygenic, that have been stringently selected to conform to specific criteria. In dogs, Canis familiaris, stereotypes result from breed standards set for conformation, performance (behaviors), etc. As a consequence, phenotypic values measured on a few individuals are representative of the breed stereotype. We used DNA samples isolated from 148 dog breeds to associate SNP markers with breed stereotypes. Using size as a trait to test the method, we identified six significant quantitative trait loci (QTL) on five chromosomes that include candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Less well-documented data for behavioral stereotypes tentatively identified loci for herding, pointing, boldness, and trainability. Four significant loci were identified for longevity, a breed characteristic not under direct selection, but inversely correlated with breed size. The strengths and limitations of the approach are discussed as well as its potential to identify loci regulating the within-breed incidence of specific polygenic diseases. PMID:18505865
Epitaxial growth of single-orientation high-quality MoS2 monolayers
NASA Astrophysics Data System (ADS)
Bana, Harsh; Travaglia, Elisabetta; Bignardi, Luca; Lacovig, Paolo; Sanders, Charlotte E.; Dendzik, Maciej; Michiardi, Matteo; Bianchi, Marco; Lizzit, Daniel; Presel, Francesco; De Angelis, Dario; Apostol, Nicoleta; Das, Pranab Kumar; Fujii, Jun; Vobornik, Ivana; Larciprete, Rosanna; Baraldi, Alessandro; Hofmann, Philip; Lizzit, Silvano
2018-07-01
We present a study on the growth and characterization of high-quality single-layer MoS2 with a single orientation, i.e. without the presence of mirror domains. This single orientation of the MoS2 layer is established by means of x-ray photoelectron diffraction. The high quality is evidenced by combining scanning tunneling microscopy with x-ray photoelectron spectroscopy measurements. Spin- and angle-resolved photoemission experiments performed on the sample revealed complete spin-polarization of the valence band states near the K and -K points of the Brillouin zone. These findings open up the possibility to exploit the spin and valley degrees of freedom for encoding and processing information in devices that are based on epitaxially grown materials.
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances in acquisition technology in recent years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method that textures a set of depth maps in a preprocessing step and stitches them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
Splatterplots: overcoming overdraw in scatter plots.
Mayorga, Adrian; Gleicher, Michael
2013-09-01
We introduce Splatterplots, a novel presentation of scattered data that enables visualizations that scale beyond standard scatter plots. Traditional scatter plots suffer from overdraw (overlapping glyphs) as the number of points per unit area increases. Overdraw obscures outliers, hides data distributions, and makes the relationship among subgroups of the data difficult to discern. To address these issues, Splatterplots abstract away information such that the density of data shown in any unit of screen space is bounded, while allowing continuous zoom to reveal abstracted details. Abstraction automatically groups dense data points into contours and samples remaining points. We combine techniques for abstraction with perceptually based color blending to reveal the relationship between data subgroups. The resulting visualizations represent the dense regions of each subgroup of the data set as smooth closed shapes and show representative outliers explicitly. We present techniques that leverage the GPU for Splatterplot computation and rendering, enabling interaction with massive data sets. We show how Splatterplots can be an effective alternative to traditional methods of displaying scatter data communicating data trends, outliers, and data set relationships much like traditional scatter plots, but scaling to data sets of higher density and up to millions of points on the screen.
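The density-bounding idea can be prototyped in a few lines, as in the sketch below: estimate each subgroup's density with a kernel density estimator, draw its dense region as one filled contour, and scatter only a small sample of the points falling outside it. This is a CPU toy using SciPy and Matplotlib with arbitrary thresholds, not the authors' GPU implementation or their perceptual color blending.

import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Two overlapping synthetic subgroups (the overdraw scenario).
a = rng.normal([0.0, 0.0], [1.0, 1.0], size=(5000, 2))
b = rng.normal([1.5, 0.5], [1.2, 0.8], size=(5000, 2))

grid_x, grid_y = np.mgrid[-5:7:120j, -4:5:120j]
grid = np.vstack([grid_x.ravel(), grid_y.ravel()])

fig, ax = plt.subplots()
for pts, color in [(a, "tab:blue"), (b, "tab:orange")]:
    kde = gaussian_kde(pts.T)
    density = kde(grid).reshape(grid_x.shape)
    level = 0.2 * density.max()                     # screen-space density bound (arbitrary)
    # Dense region drawn as one smooth filled contour instead of thousands of glyphs.
    ax.contourf(grid_x, grid_y, density,
                levels=[level, density.max()], colors=[color], alpha=0.4)
    # Outside the dense region, show only a sample of representative outliers.
    outside = pts[kde(pts.T) < level]
    idx = rng.choice(len(outside), size=min(200, len(outside)), replace=False)
    ax.scatter(outside[idx, 0], outside[idx, 1], s=4, color=color)

plt.show()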
Britz, Alexander; Assefa, Tadesse A; Galler, Andreas; Gawelda, Wojciech; Diez, Michael; Zalden, Peter; Khakhulin, Dmitry; Fernandes, Bruno; Gessler, Patrick; Sotoudi Namin, Hamed; Beckmann, Andreas; Harder, Manuel; Yavaş, Hasan; Bressler, Christian
2016-11-01
The technical implementation of a multi-MHz data acquisition scheme for laser-X-ray pump-probe experiments with pulse-limited temporal resolution (100 ps) is presented. Such techniques are very attractive for exploiting the high repetition rates of X-ray pulses delivered by advanced synchrotron radiation sources. Exploiting a synchronized 3.9 MHz laser excitation source, experiments in 60-bunch mode (7.8 MHz) at beamline P01 of the PETRA III storage ring are performed. In this scheme, molecular systems in liquid solution are excited by the pulsed laser source and the total X-ray fluorescence yield (TFY) from the sample is recorded using silicon avalanche photodiode detectors (APDs). The subsequent digitizer card samples the APD signal traces in 0.5 ns steps with 12-bit resolution. These traces are then processed to deliver an integrated value for each recorded single X-ray pulse intensity and sorted into bins according to whether the laser excited the sample or not. For each subgroup the recorded single-shot values are averaged over ~10⁷ pulses to deliver a mean TFY value with its standard error for each data point, e.g. at a given X-ray probe energy. The sensitivity reaches down to the shot-noise limit, and signal-to-noise ratios approaching 1000 are achievable in only a few seconds collection time per data point. The dynamic range covers 100 photons pulse⁻¹ and is only technically limited by the utilized APD.
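The sorting and averaging stage described above amounts to a few array operations. The sketch below integrates synthetic single-shot APD traces, splits them into laser-on and laser-off groups according to the 2:1 X-ray-to-laser pulse pattern, and reports each group's mean with its standard error. All waveform numbers are invented and the trace model is deliberately simplistic; this is not the beamline software.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the digitizer output: one baseline-subtracted APD trace
# per X-ray pulse, sampled in 0.5 ns steps; every second pulse is laser-excited
# (7.8 MHz X-ray pulses probed by the 3.9 MHz laser).
n_pulses, n_samples = 50000, 80
traces = rng.normal(0.0, 0.02, size=(n_pulses, n_samples))
traces[:, 20:40] += 1.0                                  # X-ray induced fluorescence pulse
laser_on = (np.arange(n_pulses) % 2) == 0
traces[laser_on, 20:40] += 0.03                          # small laser-induced change in TFY

# Integrate every single-shot trace, then average the two subgroups separately.
tfy = traces.sum(axis=1)

def mean_and_sem(values):
    return values.mean(), values.std(ddof=1) / np.sqrt(values.size)

on_mean, on_sem = mean_and_sem(tfy[laser_on])
off_mean, off_sem = mean_and_sem(tfy[~laser_on])
print(f"laser on : {on_mean:.3f} +/- {on_sem:.3f}")
print(f"laser off: {off_mean:.3f} +/- {off_sem:.3f}")
print(f"laser-induced change: {on_mean - off_mean:.3f}")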
Carbone, Teresa; Gilio, Michele; Padula, Maria Carmela; Tramontano, Giuseppina; D'Angelo, Salvatore; Pafundi, Vito
2018-05-01
Indirect Immunofluorescence (IIF) is widely considered the Gold Standard for Antinuclear Antibody (ANA) screening. However, the high inter-reader variability remains the major disadvantage associated with ANA testing and the main reason for the increasing demand for computer-aided immunofluorescence microscopes. Previous studies proposed the quantification of the fluorescence intensity as an alternative to the classical end-point titer evaluation. However, the different distribution of bright/dark light linked to the nature of the self-antigen and its location in the cells results in different mean fluorescence intensities. The aim of the present study was to correlate the Fluorescence Index (F.I.) with end-point titers for each well-defined ANA pattern. Routine serum samples were screened for ANA testing on HEp-2000 cells using the Immuno Concepts Image Navigator System, and positive samples were serially diluted to assign the end-point titer. A comparison between F.I. and end-point titers related to 10 different staining patterns was made. According to our analysis, good technical performance of F.I. (97% sensitivity and 94% specificity) was found. A significant correlation between quantitative reading of F.I. and end-point titer groups was observed using Spearman's test and regression analysis. A conversion scale from F.I. to end-point titers for each recognized ANA pattern was obtained. The Image Navigator offers the opportunity to improve worldwide harmonization of ANA test results. In particular, digital F.I. allows quantifying ANA titers using just one sample dilution. It could represent a valuable support for the routine laboratory and an effective tool to reduce inter- and intra-laboratory variability. Copyright © 2018. Published by Elsevier B.V.
Auerbach, Scott S; Phadke, Dhiral P; Mav, Deepak; Holmgren, Stephanie; Gao, Yuan; Xie, Bin; Shin, Joo Heon; Shah, Ruchir R; Merrick, B Alex; Tice, Raymond R
2015-07-01
Formalin-fixed, paraffin-embedded (FFPE) pathology specimens represent a potentially vast resource for transcriptomic-based biomarker discovery. We present here a comparison of results from a whole transcriptome RNA-Seq analysis of RNA extracted from fresh frozen and FFPE livers. The samples were derived from rats exposed to aflatoxin B1 (AFB1) and a corresponding set of control animals. Principal components analysis indicated that samples were separated in the two groups representing presence or absence of chemical exposure, both in fresh frozen and FFPE sample types. Sixty-five percent of the differentially expressed transcripts (AFB1 vs. controls) in fresh frozen samples were also differentially expressed in FFPE samples (overlap significance: P < 0.0001). Genomic signature and gene set analysis of AFB1 differentially expressed transcript lists indicated highly similar results between fresh frozen and FFPE at the level of chemogenomic signatures (i.e., single chemical/dose/duration elicited transcriptomic signatures), mechanistic and pathology signatures, biological processes, canonical pathways and transcription factor networks. Overall, our results suggest that similar hypotheses about the biological mechanism of toxicity would be formulated from fresh frozen and FFPE samples. These results indicate that phenotypically anchored archival specimens represent a potentially informative resource for signature-based biomarker discovery and mechanistic characterization of toxicity. Copyright © 2014 John Wiley & Sons, Ltd.
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-01-01
Context: Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. Objective: To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Data Sources: Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. Study Selection: The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Data Extraction: Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Results: Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, –0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. Conclusions: Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up. PMID:23016017
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-05-01
Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, -0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up.
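The power calculation referred to in this meta-analysis (detecting a 10-point difference in Constant score between repair methods) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the 12-point standard deviation assumed for the Constant score is an invented value.

from statsmodels.stats.power import TTestIndPower

# Sketch of a sample-size calculation: patients per arm needed to detect a 10-point
# difference in Constant score (two-sided t-test, 80% power, alpha = 0.05).
assumed_sd = 12.0                      # assumed SD of the Constant score (illustrative only)
effect_size = 10.0 / assumed_sd        # standardized difference (Cohen's d)

n_per_arm = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05,
                                        power=0.80, ratio=1.0, alternative='two-sided')
print(f"Patients required per arm: {n_per_arm:.0f}")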
Naumann, R; Alexander-Weber, Ch; Eberhardt, R; Giera, J; Spitzer, P
2002-11-01
Routine pH measurements are carried out with pH meter-glass electrode assemblies. In most cases the glass and reference electrodes are thereby fashioned into a single probe, the so-called 'combination electrode' or simply 'the pH electrode'. The use of these electrodes is subject to various effects, described below, producing uncertainties of unknown magnitude. Therefore, the measurement of pH of a sample requires a suitable calibration by certified standard buffer solutions (CRMs) traceable to primary pH standards. The procedures in use are based on calibrations at one point, at two points bracketing the sample pH and at a series of points, the so-called multi-point calibration. The multi-point calibration (MPC) is recommended if minimum uncertainty and maximum consistency are required over a wide range of unknown pH values. Details of uncertainty computations for the two-point and MPC procedure are given. Furthermore, the multi-point calibration is a useful tool to characterise the performance of pH electrodes. This is demonstrated with different commercial pH electrodes. ELECTRONIC SUPPLEMENTARY MATERIAL is available if you access this article at http://dx.doi.org/10.1007/s00216-002-1506-5. On that page (frame on the left side), a link takes you directly to the supplementary material.
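As a rough illustration of the multi-point calibration idea, the sketch below fits electrode readings against certified buffer values by least squares and propagates the residual scatter into the sample pH. The buffer values and millivolt readings are invented, and a real uncertainty budget includes further contributions beyond the fit residuals.

import numpy as np

buffer_pH = np.array([4.005, 6.865, 7.413, 9.180, 10.012])   # certified buffer values (illustrative)
emf_mV    = np.array([176.5,  12.3, -19.1, -121.0, -168.9])  # hypothetical electrode readings, mV

slope, intercept = np.polyfit(buffer_pH, emf_mV, 1)           # calibration line E = slope*pH + intercept
residual_sd = np.std(emf_mV - (slope * buffer_pH + intercept), ddof=2)

sample_emf = -45.0                                            # hypothetical sample reading, mV
sample_pH = (sample_emf - intercept) / slope
u_pH = abs(residual_sd / slope)                               # crude standard uncertainty from the fit

print(f"slope = {slope:.1f} mV/pH, sample pH = {sample_pH:.3f} +/- {u_pH:.3f}")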
A hybrid approach to device integration on a genetic analysis platform
NASA Astrophysics Data System (ADS)
Brennan, Des; Jary, Dorothee; Kurg, Ants; Berik, Evgeny; Justice, John; Aherne, Margaret; Macek, Milan; Galvin, Paul
2012-10-01
Point-of-care (POC) systems require significant component integration to implement biochemical protocols associated with molecular diagnostic assays. Hybrid platforms where discrete components are combined in a single platform are a suitable approach to integration, where combining multiple device fabrication steps on a single substrate is not possible due to incompatible or costly fabrication steps. We integrate three devices each with a specific system functionality: (i) a silicon electro-wetting-on-dielectric (EWOD) device to move and mix sample and reagent droplets in an oil phase, (ii) a polymer microfluidic chip containing channels and reservoirs and (iii) an aqueous phase glass microarray for fluorescence microarray hybridization detection. The EWOD device offers the possibility of fully integrating on-chip sample preparation using nanolitre sample and reagent volumes. A key challenge is sample transfer from the oil phase EWOD device to the aqueous phase microarray for hybridization detection. The EWOD device, waveguide performance and functionality are maintained during the integration process. An on-chip biochemical protocol for arrayed primer extension (APEX) was implemented for single nucleotide polymorphism (SNiP) analysis. The prepared sample is aspirated from the EWOD oil phase to the aqueous phase microarray for hybridization. A bench-top instrumentation system was also developed around the integrated platform to drive the EWOD electrodes, implement APEX sample heating and image the microarray after hybridization.
NASA Astrophysics Data System (ADS)
Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko
2018-04-01
Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using a Geometrical Optics Method (GOM). We analyzed seven snow samples including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had similar scattering phase functions as those of previously modeled irregular shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius reff were compared with the size-averaged single-scattering properties. We found that the particles of reff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of particle shape models using our current snow retrieval algorithm. For the single-scattering phase function, the results of the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.
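A worked example of the effective-radius idea, assuming the common moment-ratio definition r_eff = sum(r^3)/sum(r^2) and the spherical-equivalent relation SSA = 3/(rho_ice * r_eff); the particle radii below are invented, not micro-CT values.

import numpy as np

radii_um = np.array([45.0, 60.0, 80.0, 120.0, 150.0, 200.0])   # hypothetical particle radii, micrometers
r_eff_um = np.sum(radii_um**3) / np.sum(radii_um**2)            # area-weighted (effective) radius

rho_ice = 917.0                                                 # kg m^-3
ssa = 3.0 / (rho_ice * r_eff_um * 1e-6)                         # spherical-equivalent SSA, m^2 kg^-1
print(f"r_eff = {r_eff_um:.1f} um, SSA = {ssa:.1f} m^2/kg")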
Public Perceptions and the Situation of Males in Early Childhood Settings
ERIC Educational Resources Information Center
Tufan, Mumin
2018-01-01
The main focus areas of this research are public perceptions and beliefs about male preschool teachers, fear of child sexual molestation, moral panic, and power relations in society. The sample of the study was composed of one white, female preschool teacher with a single interview transcript, working in the city of Tempe,…
It is estimated that protozoan parasites still account for greater than one third of waterborne disease outbreaks reported. Methods used to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminan...
Laser desorption mass spectrometry for molecular diagnosis
NASA Astrophysics Data System (ADS)
Chen, C. H. Winston; Taranenko, N. I.; Zhu, Y. F.; Allman, S. L.; Tang, K.; Matteson, K. J.; Chang, L. Y.; Chung, C. N.; Martin, Steve; Haff, Lawrence
1996-04-01
Laser desorption mass spectrometry has been used for molecular diagnosis of cystic fibrosis. Both a 3-base deletion and a single-base point mutation have been successfully detected in clinical samples. This new detection method can potentially speed up the diagnosis by an order of magnitude. It may become a new biotechnology technique for population screening of genetic disease.
SEM technique for displaying the three-dimensional structure of wood
C.W. McMillin
1977-01-01
Samples of green Liriodendron tulipifera L. were bandsawed into 1/4-inch cubes and boiled in water for 1 hour. Smooth intersecting radial, tangential, and transverse surfaces were prepared with a handheld, single-edge razor blade. After drying, the cubes were affixed to stubs so that the intersection point of the three sectioned surfaces was...
SEM technique for displaying the three-dimensional structure of wood
Charles W. McMillin
1977-01-01
Samples of green Liriodendron tulipifera L. were bandsawed into 1/4-inch cubes and boiled in water for 1 hour. Smooth intersecting radial, tangential, and transverse surfaces were prepared with a handheld, single-edge razor blade. After drying, the cubes were affixed to stubs so that the intersection point of the three sectioned surfaces was...
Targeting excited states in all-trans polyenes with electron-pair states.
Boguslawski, Katharina
2016-12-21
Wavefunctions restricted to electron pair states are promising models for strongly correlated systems. Specifically, the pair Coupled Cluster Doubles (pCCD) ansatz allows us to accurately describe bond dissociation processes and heavy-element containing compounds with multiple quasi-degenerate single-particle states. Here, we extend the pCCD method to model excited states using the equation of motion (EOM) formalism. As the cluster operator of pCCD is restricted to electron-pair excitations, EOM-pCCD allows us to target excited electron-pair states only. To model singly excited states within EOM-pCCD, we modify the configuration interaction ansatz of EOM-pCCD to contain also single excitations. Our proposed model represents a simple and cost-effective alternative to conventional EOM-CC methods to study singly excited electronic states. The performance of the excited state models is assessed against the lowest-lying excited states of the uranyl cation and the two lowest-lying excited states of all-trans polyenes. Our numerical results suggest that EOM-pCCD including single excitations is a good starting point to target singly excited states.
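In standard notation (not taken verbatim from the paper), the pair ansatz and its equation-of-motion extension can be sketched as:

% Schematic form of the pCCD ansatz and the EOM eigenvalue problem (generic notation;
% the singles-augmented model described above adds single excitations to R_k).
\begin{align}
|\mathrm{pCCD}\rangle &= e^{\hat{T}_p}\,|\Phi_0\rangle,
\qquad
\hat{T}_p = \sum_{i}^{\mathrm{occ}}\sum_{a}^{\mathrm{virt}} t_i^a\,
  a_{a\uparrow}^{\dagger} a_{a\downarrow}^{\dagger} a_{i\downarrow} a_{i\uparrow}, \\
|\Psi_k\rangle &= \hat{R}_k\,|\mathrm{pCCD}\rangle,
\qquad
\bar{H}\,\hat{R}_k\,|\Phi_0\rangle = E_k\,\hat{R}_k\,|\Phi_0\rangle,
\qquad
\bar{H} = e^{-\hat{T}_p}\,\hat{H}\,e^{\hat{T}_p},
\end{align}
where $\hat{R}_k$ contains electron-pair excitations in EOM-pCCD and, in the singles-augmented variant, additional single excitations.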
NASA Astrophysics Data System (ADS)
Sortor, R. N.; Goehring, B. M.; Bemis, S. P.; Ruleman, C.; Nichols, K. A.; Ward, D. J.; Frothingham, M.
2017-12-01
The Alaska Range is a transpressional orogen with modern exhumation initiating 6 Ma. The stratigraphic record of unroofing and uplift of the foreland basin is largely preserved along the northern flank of the Alaska Range in the Pliocene-Pleistocene aged Nenana Gravel, an extensive alluvial fan and braidplain deposit. Chronometric control on the Nenana Gravel is largely lacking, with the limited available age control based on a single Ar-Ar tephra date in an underlying unit and via stratigraphic inferences for the upper portions. Higher-resolution dating of the Nenana Gravel unit is imperative in order to quantify deposition rates and the timing of uplift and deformation of the foreland basin. Furthermore, a glacial unit has been found to lie unconformably on top of the unit at Suntrana Creek and may represent the initiation of glacial advances in the Alaska Range. We present a suite of 26Al/10Be cosmogenic nuclide burial ages collected from the lower, middle, and upper sections of the Nenana Gravel at Suntrana Creek, as well as the overlying glacial unit. Three samples from the lower Nenana Gravel yield an isochron burial age of 4.42+0.67/-0.13 Ma, which represents initiation of Nenana Gravel deposition and may equate to early unroofing of the Alaska Range. Two samples collected from the middle of the Nenana Gravel unit produced an average simple burial age of 2.25+/-0.45 Ma, with a single sample stratigraphically above dating to 0.99+/-1.60 Ma. Two samples from the upper-most portion of the Nenana Gravel yielded an average simple burial age of 1.27+/-0.22 Ma, and one sample from the glacial unit overlying the Nenana Gravel was dated to 0.97+/-0.06 Ma, representing one of the earliest glacial advances in the region. In addition, the age of the glacial unit provides a minimum age for inception of foreland basin uplift and abandonment of the Nenana Gravel in this region.
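The "simple burial age" quoted above can be illustrated with the standard two-nuclide calculation below; the half-lives and the assumed surface 26Al/10Be production ratio of about 6.75 are commonly used values, and the measured ratio is a placeholder rather than a Suntrana Creek result.

import numpy as np

lam_al26 = np.log(2) / 0.705     # 26Al decay constant, 1/Myr (half-life ~0.705 Myr)
lam_be10 = np.log(2) / 1.387     # 10Be decay constant, 1/Myr (half-life ~1.387 Myr)

surface_ratio = 6.75                     # assumed 26Al/10Be surface production ratio
measured_ratio = 0.80 * surface_ratio    # hypothetical measured ratio (20% lowering)

# During burial both nuclides only decay, so the ratio falls with the difference of
# the decay constants; inverting that relation gives the simple burial age.
t_burial = np.log(surface_ratio / measured_ratio) / (lam_al26 - lam_be10)
print(f"simple burial age: {t_burial:.2f} Myr")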
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because input data stream may be under-sampled or skewed from time to time, building connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating $k$-edge-connected and $k$-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
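For orientation, a compact, non-incremental sketch of the underlying pipeline (neighborhood graph, graph shortest paths, classical MDS) is given below; the paper's incremental k-connected graph updates and iterative subspace approximation are not reproduced here, and the toy data are random.

import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap_embed(X, n_neighbors=8, n_components=2):
    # kNN graph with edge weights equal to Euclidean distances (graph assumed connected)
    graph = kneighbors_graph(X, n_neighbors, mode='distance')
    D = shortest_path(graph, method='D', directed=False)   # geodesic distance estimates
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                            # classical MDS Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

X = np.random.rand(200, 10)    # toy high-dimensional data
Y = isomap_embed(X)            # 200 x 2 low-dimensional configuration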
Design and construction of a point-contact spectroscopy rig with lateral scanning capability.
Tortello, M; Park, W K; Ascencio, C O; Saraf, P; Greene, L H
2016-06-01
The design and realization of a cryogenic rig for point-contact spectroscopy measurements in the needle-anvil configuration is presented. Thanks to the use of two piezoelectric nano-positioners, the tip can move along the vertical (z) and horizontal (x) direction and thus the rig is suitable to probe different regions of a sample in situ. Moreover, it can also form double point-contacts on different facets of a single crystal for achieving, e.g., an interferometer configuration for phase-sensitive measurements. For the latter purpose, the sample holder can also host a Helmholtz coil for applying a small transverse magnetic field to the junction. A semi-rigid coaxial cable can be easily added for studying the behavior of Josephson junctions under microwave irradiation. The rig can be detached from the probe and thus used with different cryostats. The performance of this new probe has been tested in a Quantum Design PPMS system by conducting point-contact Andreev reflection measurements on Nb thin films over large areas as a function of temperature and magnetic field.
Design and construction of a point-contact spectroscopy rig with lateral scanning capability
NASA Astrophysics Data System (ADS)
Tortello, M.; Park, W. K.; Ascencio, C. O.; Saraf, P.; Greene, L. H.
2016-06-01
The design and realization of a cryogenic rig for point-contact spectroscopy measurements in the needle-anvil configuration is presented. Thanks to the use of two piezoelectric nano-positioners, the tip can move along the vertical (z) and horizontal (x) direction and thus the rig is suitable to probe different regions of a sample in situ. Moreover, it can also form double point-contacts on different facets of a single crystal for achieving, e.g., an interferometer configuration for phase-sensitive measurements. For the latter purpose, the sample holder can also host a Helmholtz coil for applying a small transverse magnetic field to the junction. A semi-rigid coaxial cable can be easily added for studying the behavior of Josephson junctions under microwave irradiation. The rig can be detached from the probe and thus used with different cryostats. The performance of this new probe has been tested in a Quantum Design PPMS system by conducting point-contact Andreev reflection measurements on Nb thin films over large areas as a function of temperature and magnetic field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karmi, S.
1996-04-01
This document contains the baseline human health risk assessment and the ecological risk assessment (ERA) for the Point Lonely Distant Early Warning (DEW) Line radar installation. Twelve sites at the Point Lonely radar installation underwent remedial investigations (RIs) during the summer of 1993. The Vehicle Storage Area (SS14) was combined with the Inactive Landfill because the two sites were essentially co-located and were sampled during the RI as a single unit. Therefore, 11 sites are discussed in this risk assessment. The presence of chemical contamination in the soil, sediments, and surface water at the installation was evaluated and reported in the Point Lonely Remedial Investigation/Feasibility Study (RI/FS). The analytical data reported in the RI/FS form the basis for the human health and ecological risk assessments. The primary chemicals of concern (COCs) at the 11 sites are diesel and gasoline from past spills and/or leaks, chlorinated solvents, and manganese. The 11 sites investigated and the types of samples collected at each site are presented.
Design and construction of a point-contact spectroscopy rig with lateral scanning capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tortello, M.; Park, W. K., E-mail: wkpark@illinois.edu; Ascencio, C. O.
2016-06-15
The design and realization of a cryogenic rig for point-contact spectroscopy measurements in the needle-anvil configuration is presented. Thanks to the use of two piezoelectric nano-positioners, the tip can move along the vertical (z) and horizontal (x) direction and thus the rig is suitable to probe different regions of a sample in situ. Moreover, it can also form double point-contacts on different facets of a single crystal for achieving, e.g., an interferometer configuration for phase-sensitive measurements. For the latter purpose, the sample holder can also host a Helmholtz coil for applying a small transverse magnetic field to the junction. A semi-rigid coaxial cable can be easily added for studying the behavior of Josephson junctions under microwave irradiation. The rig can be detached from the probe and thus used with different cryostats. The performance of this new probe has been tested in a Quantum Design PPMS system by conducting point-contact Andreev reflection measurements on Nb thin films over large areas as a function of temperature and magnetic field.
Selvin, Elizabeth; Wang, Dan; Matsushita, Kunihiro; Grams, Morgan E; Coresh, Josef
2018-06-19
Current clinical definitions of diabetes require repeated blood work to confirm elevated levels of glucose or hemoglobin A1c (HbA1c) to reduce the possibility of a false-positive diagnosis. Whether 2 different tests from a single blood sample provide adequate confirmation is uncertain. To examine the prognostic performance of a single-sample confirmatory definition of undiagnosed diabetes. Prospective cohort study. The ARIC (Atherosclerosis Risk in Communities) study. 13 346 ARIC participants (12 268 without diagnosed diabetes) with 25 years of follow-up for incident diabetes, cardiovascular outcomes, kidney disease, and mortality. Confirmed undiagnosed diabetes was defined as elevated levels of fasting glucose (≥7.0 mmol/L [≥126 mg/dL]) and HbA1c (≥6.5%) from a single blood sample. Among 12 268 participants without diagnosed diabetes, 978 had elevated levels of fasting glucose or HbA1c at baseline (1990 to 1992). Among these, 39% had both (confirmed undiagnosed diabetes), whereas 61% had only 1 elevated measure (unconfirmed undiagnosed diabetes). The confirmatory definition had moderate sensitivity (54.9%) but high specificity (98.1%) for identification of diabetes cases diagnosed during the first 5 years of follow-up, with specificity increasing to 99.6% by 15 years. The 15-year positive predictive value was 88.7% compared with 71.1% for unconfirmed cases. Confirmed undiagnosed diabetes was significantly associated with cardiovascular and kidney disease and mortality, with stronger associations than unconfirmed diabetes. Lack of repeated measurements of fasting glucose and HbA1c. A single-sample confirmatory definition of diabetes had a high positive predictive value for subsequent diagnosis and was strongly associated with clinical end points. Our results support the clinical utility of using a combination of elevated fasting glucose and HbA1c levels from a single blood sample to identify undiagnosed diabetes in the population. National Institute of Diabetes and Digestive and Kidney Diseases and National Heart, Lung, and Blood Institute.
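The performance measures quoted above come from a standard 2x2 comparison of the confirmatory definition against subsequent diagnosis; a minimal sketch with invented counts (not ARIC data) is:

def diagnostic_performance(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # confirmed-positive among later-diagnosed cases
    specificity = tn / (tn + fp)          # confirmed-negative among never-diagnosed participants
    ppv = tp / (tp + fp)                  # later diagnosis among confirmed positives
    return sensitivity, specificity, ppv

sens, spec, ppv = diagnostic_performance(tp=220, fp=28, fn=180, tn=11840)  # placeholder counts
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}")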
SCMOS (Scalable Complementary Metal Oxide Silicon) Silicon Compiler Organelle Design and Insertion.
1987-12-01
polysilicon running horizontally), with the p-type toward Vdd and the n-type toward GND. * Substrate contacts are connected by metal to supply rails... (IN') + (CIN') Note: The single quote (') represents the 'not' of the variable. Figure 2.3 Logic Expressions. * First metal and polysilicon are... polysilicon. * All external connections to I/O, CLOCK, Vdd and GND end at least 2 units past first metal that is not an I/O point. * All external...
NASA Technical Reports Server (NTRS)
Greenhalgh, Phillip O.
2004-01-01
In the production of each Space Shuttle Reusable Solid Rocket Motor (RSRM), over 100,000 inspections are performed. ATK Thiokol Inc. reviewed these inspections to ensure a robust inspection system is maintained. The principal effort within this endeavor was the systematic identification and evaluation of inspections considered to be single-point. Single-point inspections are those accomplished on components, materials, and tooling by only one person, involving no other check. The purpose was to more accurately characterize risk and ultimately address and/or mitigate risk associated with single-point inspections. After the initial review of all inspections and identification/assessment of single-point inspections, review teams applied risk prioritization methodology similar to that used in a Process Failure Modes Effects Analysis to derive a Risk Prioritization Number for each single-point inspection. After the prioritization of risk, all single-point inspection points determined to have significant risk were provided either with risk-mitigating actions or rationale for acceptance. This effort gave confidence to the RSRM program that the correct inspections are being accomplished, that there is appropriate justification for those that remain as single-point inspections, and that risk mitigation was applied to further reduce risk of higher risk single-point inspections. This paper examines the process, results, and lessons learned in identifying, assessing, and mitigating risk associated with single-point inspections accomplished in the production of the Space Shuttle RSRM.
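The Risk Prioritization Number referred to above is, in conventional process-FMEA practice, the product of severity, occurrence, and detection ratings; the sketch below uses that convention with invented inspection entries and rating scales, since the paper's exact scoring scheme is not reproduced here.

# FMEA-style risk prioritization: RPN = severity x occurrence x detection (each rated 1-10).
# The inspection names and ratings below are invented placeholders.
inspections = [
    {"name": "seal surface visual check",   "severity": 8, "occurrence": 3, "detection": 6},
    {"name": "tooling torque verification", "severity": 5, "occurrence": 2, "detection": 4},
]
for item in inspections:
    item["rpn"] = item["severity"] * item["occurrence"] * item["detection"]

for item in sorted(inspections, key=lambda d: d["rpn"], reverse=True):   # highest risk first
    print(f'{item["name"]}: RPN = {item["rpn"]}')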
Iron under conditions close to the α - γ - ɛ triple point
NASA Astrophysics Data System (ADS)
Dewaele, Agnès; Svitlyk, Volodymyr; Bottin, François; Bouchet, Johann; Jacobs, Jeroen
2018-05-01
The stability domains and equations of state of α-Fe, ɛ-Fe, and γ-Fe have been measured using X-ray diffraction under conditions close to their triple point: 7 ≤ P ≤ 20 GPa and 480 ≤ T ≤ 820 K. Special attention was paid to ensure hydrostatic compression of the sample, which was initially a single crystal. Narrow α - γ and α - ɛ coexistence domains were observed, while the γ - ɛ transformation appeared sluggish. The triple point was measured at 8.7 ± 1.0 GPa and 750 ± 30 K. Anharmonic effects are evidenced in the equation of state of ɛ-Fe and partly reproduced using ab initio molecular dynamics simulations.
Dating and sexual behavior among single parents of young children in the United States.
Gray, Peter B; Garcia, Justin R; Crosier, Benjamin S; Fisher, Helen E
2015-01-01
Theory and research on partnered parents suggests trade-offs between parenting and sexuality, with those trade-offs most pronounced among mothers of young children. However, little research has focused on how a growing demographic of single parents negotiates dating and sexual activity. The current study drew upon a 2012 nationally representative sample of 5,481 single Americans 21 years of age and older, of whom 4.3% were parents of a child age five or younger. Dependent variables were sexual thoughts, frequency of sexual activity, number of sexual partners in the past year, dates during the previous three months, and whether one was actively seeking a relationship partner. Covariates included parental age, sex/gender, sexual orientation, education, and income. Using the entire sample of singles, we found no main effects of number (0, 1, 2+) of children aged five years and younger or number of children aged two years and younger on dating and sexual behavior variables. Next, using analyses restricted to single parents (n = 2,121), we found that single parents with a child aged five years or younger, adjusting for covariates, reported greater frequency of sexual activity and first dates but no differences in other outcomes compared with single parents of older children.
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristic short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple point acquisition method is presented and the signal sensitivity improvement is discussed.
ERIC Educational Resources Information Center
Abraham, Nath. M.; Ememe, Ogbonna Nwuju; Egu, Rosemary Hannah N.
2012-01-01
This paper examines teacher job satisfaction for secondary school effectiveness. It was a descriptive survey. A sample of 512 teachers emerged from a population of 1280 representing 40% of the entire population. A 2-part, 15-item, 4-point scale instrument was used to generate data for answering 3 research questions. The instrument was validated by…
ERIC Educational Resources Information Center
Mihelic, Mojca Žveglic
2017-01-01
The starting points of primary school pupils in a foreign country differ significantly from those of native pupils. In Slovenia, the knowledge of pupils who are foreign citizens (foreign pupils) may be assessed with different accommodations for no more than two years. The presented research conducted on a representative sample of 697 Slovenian…
Sexual Orientation, Partnership Formation, and Substance Use in the Transition to Adulthood
ERIC Educational Resources Information Center
Austin, Erika Laine; Bozick, Robert
2012-01-01
Evidence suggests that lesbian and gay young adults use substances more frequently than their heterosexual peers. Based on the life course perspective, we argue that this difference may be due to the unavailability of marriage as a turning point in the lives of lesbian/gay young adults. We use data from a nationally representative sample of youth…
Wood and Bark Properties of Spruce Pine
F. G. Manwiller
1972-01-01
Weighted stem averages were determined for wood and bark of 72 trees representing the commercial range of Pinus glabra Walt. The trees were stratified into three age classes (15, 30, and 45 years) and two growth rates (averaging 4.9 and 9.0 rings per inch). Within-stem variation was determined from 1,269 earlywood and latewood sampling points in...
ERIC Educational Resources Information Center
von Soest, Tilmann; Wichstrom, Lars
2009-01-01
This study examines gender differences in the development of dieting among a representative sample of 1,368 Norwegian boys and girls. The respondents were followed over 3 time points from ages 13/14 to 20/21. Latent growth curve analyses were conducted showing that girls' dieting scores increased while boys' scores remained constant. Gender…
Employment Experience of Youths: Results from a Longitudinal Survey.
ERIC Educational Resources Information Center
Bureau of Labor Statistics, Washington, DC.
Nearly 3 out of 5 students (58 percent) who were 16 years old when the 1997-98 school year began worked for an employer at some point during the academic year. Findings were from the second round of the National Longitudinal Survey of Youth 1997, a nationally representative sample of about 9,000 young men and women born during 1980-84. Respondents…
Ren, Hang; Yang, Mingjuan; Zhang, Guoxia; Liu, Shiwei; Wang, Xinhui; Ke, Yuehua; Du, Xinying; Wang, Zhoujia; Huang, Liuyu; Liu, Chao; Chen, Zeliang
2016-04-01
A rapid and sensitive recombinase polymerase amplification (RPA) assay, Bruce-RPA, was developed for detection of Brucella. The assay could detect as few as 3 copies of Brucella per reaction within 20 min. Bruce-RPA represents a candidate point-of-care diagnosis assay for human brucellosis. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Liu, Ruth X.
2005-01-01
Data from a nationally representative sample of adolescents studied at two points in time are used to examine gender-specific influence of parent-youth closeness on youth's suicidal ideation and its variations by stages of adolescence and race or ethnicity. Logistic regression analyses yielded interesting findings: (a) Closeness with fathers…
Potential candidate genomic biomarkers of drug induced vascular injury in the rat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmas, Deidre A., E-mail: Deidre.A.Dalmas@gsk.com; Scicchitano, Marshall S., E-mail: Marshall.S.Scicchitano@gsk.com; Mullins, David, E-mail: David.R.Mullins@gsk.com
2011-12-15
Drug-induced vascular injury is frequently observed in rats but the relevance and translation to humans present a hurdle for drug development. Numerous structurally diverse pharmacologic agents have been shown to induce mesenteric arterial medial necrosis in rats, but no consistent biomarkers have been identified. To address this need, a novel strategy was developed in rats to identify genes associated with the development of drug-induced mesenteric arterial medial necrosis. Separate groups (n = 6/group) of male rats were given 28 different toxicants (30 different treatments) for 1 or 4 days with each toxicant given at 3 different doses (low, mid and high) plus corresponding vehicle (912 total rats). Mesentery was collected, frozen and endothelial and vascular smooth muscle cells were microdissected from each artery. RNA was isolated, amplified and Affymetrix GeneChip® analysis was performed on selectively enriched samples and a novel panel of genes representing those which showed a dose responsive pattern for all treatments in which mesenteric arterial medial necrosis was histologically observed, was developed and verified in individual endothelial cell- and vascular smooth muscle cell-enriched samples. Data were confirmed in samples containing mesentery using quantitative real-time RT-PCR (TaqMan™) gene expression profiling. In addition, the performance of the panel was also confirmed using similarly collected samples obtained from a timecourse study in rats given a well established vascular toxicant (Fenoldopam). Although further validation is still required, a novel gene panel has been developed that represents a strategic opportunity that can potentially be used to help predict the occurrence of drug-induced mesenteric arterial medial necrosis in rats at an early stage in drug development. Highlights: • A gene panel was developed to help predict rat drug-induced mesenteric MAN. • A gene panel was identified following treatment of rats with 28 different toxicants. • There was a strong correlation of genes and histologic evidence of mesenteric MAN. • Many genes were also regulated prior to histologic evidence of arterial effects.
Trippi, Michael H.; Belkin, Harvey E.; Dai, Shifeng; Tewalt, Susan J.; Chou, Chiu-Jung
2015-01-01
Geographic information system (GIS) information may facilitate energy studies, which in turn provide input for energy policy decisions. The U.S. Geological Survey (USGS) has compiled geographic information system (GIS) data representing the known coal mine locations and coal-mining areas of China as of 2001. These data are now available for download, and may be used in a GIS for a variety of energy resource and environmental studies of China. Province-scale maps were also created to display the point locations of coal mines and the coal-mining areas. In addition, coal-field outlines from a previously published map by Dai and others (2012) were also digitized and are available for download as a separate GIS data file, and shown in a nation-scale map of China. Chemical data for 332 coal samples from a previous USGS study of China and Taiwan (Tewalt and others, 2010) are included in a downloadable GIS point shapefile, and shown on a nation-scale map of China. A brief report summarizes the methodology used for creation of the shapefiles and the chemical analyses run on the samples.
NASA Technical Reports Server (NTRS)
2008-01-01
This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point. The particle in the upper left corner shown at the highest magnification ever seen from another world is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil. The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer. The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Study on the initial value for the exterior orientation of the mobile vision
NASA Astrophysics Data System (ADS)
Yu, Zhi-jing; Li, Shi-liang
2011-10-01
A single-camera mobile vision coordinate measurement system determines three-dimensional coordinates on site using one camera body and a notebook computer. Obtaining accurate approximate values of the exterior orientation is important, because they serve as the starting point for the subsequent calculation. The problem is a typical case of space resection, which has been widely studied. Single-image space resection generally follows one of two approaches: methods based on the coangularity constraint, represented by coangularity-constrained pose estimation algorithms and the cone-angle method, and the direct linear transformation (DLT). A common drawback of both approaches is that CCD lens distortion is not considered. When the initial value is calculated with the DLT, demands on the number and distribution of control points are relatively high: the points must not all lie in a single plane, and at least six non-coplanar control points are required, which limits its usefulness. The initial value directly influences whether the iterative calculation converges and how quickly it does so. In this paper, the nonlinear collinearity equations, including distortion terms, are linearized by a Taylor series expansion to calculate the initial value of the camera exterior orientation. Experiments show that the resulting initial value is improved.
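In standard photogrammetric notation (not the paper's own symbols), the collinearity equations with distortion corrections and their linearization for the exterior orientation can be sketched as:

% Generic collinearity equations with lens-distortion corrections and their first-order
% linearization; this is textbook notation, not copied from the paper.
\begin{align}
x - x_0 + \Delta x &= -f\,\frac{a_1(X-X_S)+b_1(Y-Y_S)+c_1(Z-Z_S)}{a_3(X-X_S)+b_3(Y-Y_S)+c_3(Z-Z_S)}, \\
y - y_0 + \Delta y &= -f\,\frac{a_2(X-X_S)+b_2(Y-Y_S)+c_2(Z-Z_S)}{a_3(X-X_S)+b_3(Y-Y_S)+c_3(Z-Z_S)},
\end{align}
where $(X_S, Y_S, Z_S, \varphi, \omega, \kappa)$ are the exterior orientation elements, $\Delta x, \Delta y$ are the distortion corrections, and the $a_i, b_i, c_i$ are rotation-matrix entries. A first-order Taylor expansion about approximate values gives the linearized observation equations
\begin{equation}
v = A\,\delta\boldsymbol{\xi} - l, \qquad
\delta\hat{\boldsymbol{\xi}} = (A^{\mathsf T}A)^{-1}A^{\mathsf T} l,
\end{equation}
which are iterated until the corrections $\delta\boldsymbol{\xi}$ become negligible.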
2010-04-13
Ayman Girgis (EM10 materials test engineer, Jacobs ESTS Group/JTI) and Eric Earhart (aerospace engineer, ER41 Propulsion Structural & Dynamics Analysis Branch) discuss data produced by a unique mechanical test setup that measures strain on a single sample using two different techniques at the same time. The test fixture holds a specimen that represents a liquid oxygen (LOX) bearing from the J-2X engine.
ERIC Educational Resources Information Center
National Field Research Center Inc., Iowa City, IA.
This report, together with volume II, (multiple degree programs), detail 105 post-secondary wastewater treatment programs from 33 states. These programs represent a sample, only, of the various programs available nationwide. Enrollment and graduate statistics are presented. The total number of faculty involved in all the programs surveyed was…
Comorbid Fluency Difficulties in Reading and Math: Longitudinal Stability across Early Grades
ERIC Educational Resources Information Center
Koponen, Tuire; Aro, Mikko; Poikkeus, Anna-Maija; Niemi, Pekka; Lerkkanen, Marja-Kristiina; Ahonen, Timo; Nurmi, Jari-Erik
2018-01-01
We examined the prevalence of comorbidity of dysfluent reading and math skills longitudinally in a representative sample (N = 1,928) and the stability of comorbid and single difficulties from first to fourth grades. The findings indicated that half the children who showed very low performance in one skill also evidenced low or very low performance…
Powell, C.L.
1998-01-01
Sedimentary rocks more than 1.6 kilometers thick are attributed to the upper Miocene to upper Pliocene Purisima Formation in the greater San Francisco Bay area. These rocks occur as scattered, discontinuous outcrops from Point Reyes National Seashore in the north to south of Santa Cruz. Lithologic divisions of the Formation appear to be of local extent and are of limited use in correlating over this broad area. The Purisima Formation occurs in several fault-bounded terranes which demonstrate different stratigraphic histories and may be found to represent more than a single depositional basin. The precise age and stratigraphic relationship of these scattered outcrops are unresolved, and until they are put into a stratigraphic and paleogeographic context the tectonic significance of the Purisima Formation can only be surmised. This paper will attempt to resolve some of these problems. Mollusks and echinoderms are recorded from the literature and more than 70 USGS collections that have not previously been reported. With the exception of one locality, the faunas suggest deposition in normal marine conditions at water depths of less than 50 m and with water temperatures the same as or slightly cooler than those along the present coast of central California. The single exception is a fauna from outcrops between Seal Cove and Pillar Point, where both mollusks and foraminifers suggest water depths greater than 100 m. Three molluscan faunas, the La Honda, the Pillar Point, and the Santa Cruz, are recognized based on USGS collections and published literature for the Purisima Formation. These biostratigraphically distinct faunas aid in the correlation of the scattered Purisima Formation outcrops. The lowermost La Honda fauna suggests shallow-water depths and an age of late Miocene to early Pliocene. This age is at odds with a younger age determination from an ash bed in the lower Purisima Formation along the central San Mateo County coast. The Pillar Point fauna contains only a single age-diagnostic taxon, Lituyapecten purisimaensis (Arnold), which is reported as Pliocene in age, but it only occurs in the Purisima Formation, so its age here is an example of circular reasoning. However, based on tentative lithologic correlations this fauna may represent the same period of time as the upper part of the La Honda fauna. This fauna differs from both the La Honda and Santa Cruz faunas in that it represents significantly deeper water. The uppermost Santa Cruz fauna also suggests shallow-water depths and a possible age range of early to late Pliocene. The bivalve molluscan taxon Lyonsia and the gastropod taxon Rictaxis sp., cf. R. punctocaelatus (Carpenter), are reported here for the first time from the Purisima Formation.
Microfluidic point-of-care blood panel based on a novel technique: Reversible electroosmotic flow
Mohammadi, Mahdi; Madadi, Hojjat; Casals-Terré, Jasmina
2015-01-01
A wide range of diseases and conditions are monitored or diagnosed from blood plasma, but the ability to analyze a whole blood sample with the requirements for a point-of-care device, such as robustness, user-friendliness, and simple handling, remains unmet. Microfluidics technology offers the possibility not only to work with fresh thumb-pricked whole blood but also to maximize the amount of plasma obtained from the initial sample and therefore the possibility to implement multiple tests in a single cartridge. The microfluidic design presented in this paper is a combination of cross-flow filtration with a reversible electroosmotic flow that prevents clogging at the filter entrance and maximizes the amount of separated plasma. The main advantage of this design is its efficiency, since from a small amount of sample (a single droplet ∼10 μl) almost 10% of this (approx 1 μl) is extracted and collected with high purity (more than 99%) in a reasonable time (5–8 min). To validate the quality and quantity of the separated plasma and to show its potential as a clinical tool, the microfluidic chip has been combined with lateral flow immunochromatography technology to perform a qualitative detection of the thyroid-stimulating hormone and a blood panel for measuring cardiac Troponin and Creatine Kinase MB. The results from the microfluidic system are comparable to previous commercial lateral flow assays that required more sample for implementing fewer tests. PMID:26396660
Comeron, Josep M; Reed, Jordan; Christie, Matthew; Jacobs, Julia S; Dierdorff, Jason; Eberl, Daniel F; Manak, J Robert
2016-04-05
Accurate and rapid identification or confirmation of single nucleotide polymorphisms (SNPs), point mutations and other human genomic variation facilitates understanding the genetic basis of disease. We have developed a new methodology (called MENA (Mismatch EndoNuclease Array)) pairing DNA mismatch endonuclease enzymology with tiling microarray hybridization in order to genotype both known point mutations (such as SNPs) as well as identify previously undiscovered point mutations and small indels. We show that our assay can rapidly genotype known SNPs in a human genomic DNA sample with 99% accuracy, in addition to identifying novel point mutations and small indels with a false discovery rate as low as 10%. Our technology provides a platform for a variety of applications, including: (1) genotyping known SNPs as well as confirming newly discovered SNPs from whole genome sequencing analyses; (2) identifying novel point mutations and indels in any genomic region from any organism for which genome sequence information is available; and (3) screening panels of genes associated with particular diseases and disorders in patient samples to identify causative mutations. As a proof of principle for using MENA to discover novel mutations, we report identification of a novel allele of the beethoven (btv) gene in Drosophila, which encodes a ciliary cytoplasmic dynein motor protein important for auditory mechanosensation.
Gentilini, Fabio; Turba, Maria E
2014-01-01
A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This novel PCR technique utilizes a set of four primers and a particular denaturation temperature for simultaneously amplifying two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis downstream of a closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further implement the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations. Copyright © 2014 Elsevier B.V. All rights reserved.
Calibrating binary lumped parameter models
NASA Astrophysics Data System (ADS)
Morgenstern, Uwe; Stewart, Mike
2017-04-01
Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different age, usually representing the age distribution with two parameters, the mean residence time and the mixing parameter. Simple lumped parameter models can often match well the measured time-varying age tracer concentrations, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages for such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, combining water from two different age distributions. The problem with these models is that they usually have 5 parameters, which makes them data-hungry and makes it difficult to constrain all of the parameters. Two or more age tracers with different input functions, with multiple measurements over time, can provide the required information to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
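A minimal numerical sketch of the binary mixing idea is given below: the simulated tritium output is the input history convolved with a weighted sum of two age distributions (here two exponential models), including radioactive decay. The input curve, model form, and parameter values are illustrative assumptions, not the New Zealand data or the models actually fitted.

import numpy as np

years = np.arange(1955, 2018)
tritium_input = 2.0 + 80.0 * np.exp(-0.5 * ((years - 1965) / 4.0) ** 2)  # toy bomb peak, TU

def exponential_pdf(tau, mrt):
    return np.exp(-tau / mrt) / mrt              # exponential-model age distribution

def binary_lpm_output(sample_year, frac_young, mrt_young, mrt_old,
                      half_life=12.32, max_age=200):
    lam = np.log(2) / half_life                  # tritium decay constant, 1/yr
    taus = np.arange(0, max_age)                 # residence times, years
    g = frac_young * exponential_pdf(taus, mrt_young) \
        + (1 - frac_young) * exponential_pdf(taus, mrt_old)
    recharge_years = sample_year - taus
    c_in = np.interp(recharge_years, years, tritium_input, left=2.0, right=2.0)
    return np.sum(c_in * g * np.exp(-lam * taus))  # discrete convolution with decay

print(binary_lpm_output(sample_year=2015, frac_young=0.3, mrt_young=5.0, mrt_old=80.0))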
Integrated crystal mounting and alignment system for high-throughput biological crystallography
Nordmeyer, Robert A.; Snell, Gyorgy P.; Cornell, Earl W.; Kolbe, William F.; Yegian, Derek T.; Earnest, Thomas N.; Jaklevich, Joseph M.; Cork, Carl W.; Santarsiero, Bernard D.; Stevens, Raymond C.
2007-09-25
A method and apparatus for the transportation, remote and unattended mounting, and visual alignment and monitoring of protein crystals for synchrotron generated x-ray diffraction analysis. The protein samples are maintained at liquid nitrogen temperatures at all times: during shipment, before mounting, mounting, alignment, data acquisition and following removal. The samples must additionally be stably aligned to within a few microns at a point in space. The ability to accurately perform these tasks remotely and automatically leads to a significant increase in sample throughput and reliability for high-volume protein characterization efforts. Since the protein samples are placed in a shipping-compatible layered stack of sample cassettes each holding many samples, a large number of samples can be shipped in a single cryogenic shipping container.
Integrated crystal mounting and alignment system for high-throughput biological crystallography
Nordmeyer, Robert A.; Snell, Gyorgy P.; Cornell, Earl W.; Kolbe, William; Yegian, Derek; Earnest, Thomas N.; Jaklevic, Joseph M.; Cork, Carl W.; Santarsiero, Bernard D.; Stevens, Raymond C.
2005-07-19
A method and apparatus for the transportation, remote and unattended mounting, and visual alignment and monitoring of protein crystals for synchrotron generated x-ray diffraction analysis. The protein samples are maintained at liquid nitrogen temperatures at all times: during shipment, before mounting, mounting, alignment, data acquisition and following removal. The samples must additionally be stably aligned to within a few microns at a point in space. The ability to accurately perform these tasks remotely and automatically leads to a significant increase in sample throughput and reliability for high-volume protein characterization efforts. Since the protein samples are placed in a shipping-compatible layered stack of sample cassettes each holding many samples, a large number of samples can be shipped in a single cryogenic shipping container.
SymPix: A Spherical Grid for Efficient Sampling of Rotationally Invariant Operators
NASA Astrophysics Data System (ADS)
Seljebotn, D. S.; Eriksen, H. K.
2016-02-01
We present SymPix, a special-purpose spherical grid optimized for efficiently sampling rotationally invariant linear operators. This grid is conceptually similar to the Gauss-Legendre (GL) grid, aligning sample points with iso-latitude rings located on Legendre polynomial zeros. Unlike the GL grid, however, the number of grid points per ring varies as a function of latitude, avoiding expensive oversampling near the poles and ensuring nearly equal sky area per grid point. The ratio between the number of grid points in two neighboring rings is required to be a low-order rational number (3, 2, 1, 4/3, 5/4, or 6/5) to maintain a high degree of symmetries. Our main motivation for this grid is to solve linear systems using multi-grid methods, and to construct efficient preconditioners through pixel-space sampling of the linear operator in question. As a benchmark and representative example, we compute a preconditioner for a linear system that involves the operator $\hat{D} + \hat{B}^{T} N^{-1} \hat{B}$, where $\hat{B}$ and $\hat{D}$ may be described as both local and rotationally invariant operators, and $N$ is diagonal in the pixel domain. For a bandwidth limit of $\ell_{\max} = 3000$, we find that our new SymPix implementation yields average speed-ups of 360 and 23 for $\hat{B}^{T} N^{-1} \hat{B}$ and $\hat{D}$, respectively, compared with the previous state-of-the-art implementation.
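A rough sketch of the ring-layout idea follows, assuming rings on Legendre-polynomial zeros and per-ring point counts that track sin(theta) while snapping the neighbor-ring ratio to the nearest allowed low-order rational; the real grid enforces exact integer ratios and further symmetries that are not reproduced here.

import numpy as np

ALLOWED = [1.0, 6/5, 5/4, 4/3, 3/2, 2.0, 3.0]
RATIOS = sorted(set(ALLOWED + [1.0 / r for r in ALLOWED]))     # allow growing or shrinking

def sympix_like_rings(lmax, n_equator=None):
    nrings = lmax + 1
    nodes, _ = np.polynomial.legendre.leggauss(nrings)         # cos(colatitude) of ring centers
    thetas = np.sort(np.arccos(nodes))                         # iso-latitude rings, pole to pole
    n_eq = n_equator or 2 * lmax
    counts = []
    for theta in thetas:
        target = max(4.0, n_eq * np.sin(theta))                # aim for ~equal sky area per point
        if not counts:
            counts.append(int(round(target)))
            continue
        ratio = min(RATIOS, key=lambda r: abs(r - target / counts[-1]))
        counts.append(max(4, int(round(counts[-1] * ratio))))  # snap the neighbor-ring ratio
    return thetas, np.array(counts)

thetas, counts = sympix_like_rings(lmax=128)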
An interacting spin-flip model for one-dimensional proton conduction
NASA Astrophysics Data System (ADS)
Chou, Tom
2002-05-01
A discrete asymmetric exclusion process (ASEP) is developed to model proton conduction along one-dimensional water wires. Each lattice site represents a water molecule that can be in only one of three states: protonated, left-pointing, or right-pointing. Only a right- (left-) pointing water can accept a proton from its left (right). Results of asymptotic mean field analysis and Monte Carlo simulations for the three-species, open boundary exclusion model are presented and compared. The mean field results for the steady-state proton current suggest a number of regimes analogous to the low and maximal current phases found in the single-species ASEP (Derrida B 1998 Phys. Rep. 301 65-83). We find that the mean field results are accurate (compared with lattice Monte Carlo simulations) only in certain regimes. Refinements and extensions including more elaborate forces and pore defects are also discussed.
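A toy Monte Carlo sketch in the spirit of the three-state model is shown below; the update rules, boundary rates, and reorientation rate are simplified illustrations rather than the paper's exact dynamics.

import random

N, STEPS = 50, 200000
ALPHA, BETA, FLIP = 0.3, 0.4, 0.1         # injection, extraction, reorientation probabilities
sites = ['R'] * N                         # 'P' protonated, 'R' right-pointing, 'L' left-pointing
exits = 0

for _ in range(STEPS):
    i = random.randrange(-1, N)           # -1 addresses the left (injection) boundary
    if i == -1:
        if sites[0] == 'R' and random.random() < ALPHA:
            sites[0] = 'P'                # proton enters from the left reservoir
    elif sites[i] == 'P':
        if i == N - 1:
            if random.random() < BETA:
                sites[i] = 'L'            # proton exits to the right reservoir
                exits += 1
        elif sites[i + 1] == 'R':
            sites[i], sites[i + 1] = 'L', 'P'   # proton hops right; the donor water reorients
    elif random.random() < FLIP:
        sites[i] = 'L' if sites[i] == 'R' else 'R'   # water reorientation

print("proton current (exit events per attempted move):", exits / STEPS)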
Vapor-liquid equilibria for an R134a/lubricant mixture: Measurements and equation-of-state modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, M.L.; Holcomb, C.D.; Outcalt, S.L.
2000-07-01
The authors measured bubble point pressures and coexisting liquid densities for two mixtures of R-134a and a polyolester (POE) lubricant. The mass fractions of the lubricant were approximately 9% and 12%, and the temperature ranged from 280 K to 355 K. The authors used the Elliott, Suresh, and Donohue (ESD) equation of state to model the bubble point pressure data. The bubble point pressures were represented with an average absolute deviation of 2.5%. A binary interaction parameter reduced the deviation to 1.4%. The authors also applied the ESD model to other R-134a/POE lubricant data in the literature. As the concentration of the lubricant increased, the performance of the model deteriorated markedly. However, the use of a single binary interaction parameter reduced the deviations significantly.
NASA Astrophysics Data System (ADS)
Kapustin, P.; Svetukhin, V.; Tikhonchev, M.
2017-06-01
Atomic displacement cascade simulations near symmetric tilt grain boundaries (GBs) in hexagonal close-packed zirconium were considered in this paper, followed by analysis of the resulting defect structures. Four symmetric tilt GBs - ∑14?, ∑14? with the axis of rotation [0 0 0 1] and ∑32?, ∑32? with the axis of rotation ? - were considered. The molecular dynamics method was used to simulate the atomic displacement cascades. The point defects produced in the cascades showed a tendency to accumulate near the GB plane, which acted as an obstacle to the spread of the cascade. Clustering of the point defects produced in the cascades was also analyzed. Clusters of both defect types consisted mainly of single point defects; at the same time, vacancies formed clusters of a large size (more than 20 vacancies per cluster), while self-interstitial atom clusters were small-sized.
Soil Sampling Techniques For Alabama Grain Fields
NASA Technical Reports Server (NTRS)
Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.
2003-01-01
Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and 2) six composited cores collected randomly from a ~3 x 3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
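The zone comparison above amounts to computing the coefficient of variation of soil-test values within each delineated zone and contrasting it with the whole-field value; a minimal sketch with invented soil-test phosphorus values (not the study's data) is:

import numpy as np

soil_p = np.array([12., 14., 15., 13., 30., 34., 29., 31., 22., 20., 24., 23.])  # hypothetical soil-test P
zone   = np.array([ 1,   1,   1,   1,   2,   2,   2,   2,   3,   3,   3,   3 ])  # zone labels

def cv_percent(x):
    return 100.0 * np.std(x, ddof=1) / np.mean(x)   # coefficient of variation, %

print(f"whole-field CV: {cv_percent(soil_p):.1f}%")
for z in np.unique(zone):
    print(f"zone {z} CV: {cv_percent(soil_p[zone == z]):.1f}%")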
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazilevsky, A.; Makdisi, Y.; et al.
2002-09-09
The Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL) was commissioned for polarized proton-proton collisions at the center of mass energy √s = 200 GeV during the run in 2001-2002. The authors have measured the single transverse-spin asymmetry A_N for production of photons, neutral pions, and neutrons at the very forward angle. The asymmetries for the photon and neutral pion sample were consistent with zero within the experimental uncertainties. In contrast, the neutron sample exhibited an unexpectedly large asymmetry. This large asymmetry will be used for the non-destructive polarimeter for polarized proton beams at the collision points in the RHIC interaction region.
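The single transverse-spin asymmetry quoted here is conventionally defined from the left-right yield difference normalized by the beam polarization; in its simplest single-detector form (the experiment's actual estimator, e.g. a square-root formula combining both spin states, may differ):

% Conventional definition of the single transverse-spin asymmetry.
\begin{equation}
A_N = \frac{1}{P}\,\frac{N_L - N_R}{N_L + N_R},
\end{equation}
where $P$ is the beam polarization and $N_L$, $N_R$ are the particle yields to the left and right of the polarized-beam spin direction.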