Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data over which the approximated function is defined reside on a low-dimensional manifold, and in principle approximating the function over this manifold should improve approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation than approximating the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
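The projection idea can be sketched in numpy under strong simplifying assumptions: hypothetical data on a linear one-dimensional manifold in R^10, PCA learned from a sparse sample as the projection, and a polynomial least-squares fit standing in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data living on a (linear) 1-D manifold embedded in R^10,
# with a target function defined on the manifold coordinate t.
t = rng.uniform(-1, 1, 400)
direction = rng.normal(size=10)          # hypothetical embedding direction
X = t[:, None] * direction[None, :]      # points in R^10
y = np.sin(3 * t)                        # function to approximate

# Learn the projection from a sparse sample only (40 of 400 points),
# mimicking the paper's setting of projecting from limited data.
sample = X[:40]
u = np.linalg.svd(sample - sample.mean(0), full_matrices=False)[2][0]
z = X @ u                                # 1-D projected coordinate

# Approximate the function over the projection space
# (degree-7 polynomial as a stand-in for a neural network).
V = np.vander(z, 8)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)
max_err = np.abs(V @ coef - y).max()
```

Because the sparse sample suffices to recover the manifold direction, the one-dimensional fit approximates the function accurately, whereas a comparable-size model in the original ten-dimensional space would face the curse of dimensionality.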
Is Obsidian Hydration Dating Affected by Relative Humidity?
Friedman, I.; Trembour, F.W.; Smith, G.I.; Smith, F.L.
1994-01-01
Experiments carried out under temperatures and relative humidities that approximate ambient conditions show that the rate of hydration of obsidian is a function of relative humidity, as well as of the previously established variables of temperature and obsidian chemical composition. Measurements of the relative humidity of soil at 25 sites and at depths of between 0.01 and 2 m below ground show that in most soil environments, at depths below about 0.25 m, the relative humidity is constant at 100%. We have found that the thickness of the hydrated layer developed on obsidian outcrops exposed to the sun and to relative humidities of 30-90% is similar to that formed on other portions of the outcrop that were shielded from the sun and exposed to a relative humidity of approximately 100%. Surface samples of obsidian exposed to solar heating should hydrate more rapidly than samples buried in the ground. However, the effect of the lower mean relative humidity experienced by surface samples tends to compensate for the elevated temperature, which may explain why obsidian hydration ages of surface samples usually approximate those derived from buried samples.
EBSD and TEM Characterization of High Burn-up Mixed Oxide Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teague, Melissa C.; Gorman, Brian P.; Miller, Brandon D.
2014-01-01
Understanding and studying the irradiation behavior of high burn-up oxide fuel is critical to the licensing of future fast breeder reactors. Advancements in experimental techniques and equipment are allowing for new insights into previously irradiated samples. In this work, a dual-column focused ion beam (FIB)/scanning electron microscope (SEM) was utilized to prepare transmission electron microscope samples from mixed oxide fuel with a burn-up of 6.7% FIMA. Utilizing the FIB/SEM for preparation resulted in samples with a dose rate of <0.5 mRem/h, compared to approximately 1.1 R/h for a traditionally prepared TEM sample. The TEM analysis showed that the sample taken from the cooler rim region of the fuel pellet had an approximately 2.5x higher dislocation density than the sample taken from the mid-radius, due to the lower irradiation temperature of the rim. The dual-column FIB/SEM was additionally used to prepare and serially slice approximately 25 μm cubes. High-quality electron backscatter diffraction (EBSD) patterns were collected from the face at each step, showing, for the first time, the ability to obtain EBSD data from high-activity irradiated fuel.
Spectrophotometry of 2 complete samples of flat radio spectrum quasars
NASA Technical Reports Server (NTRS)
Wampler, E. J.; Gaskell, C. M.; Burke, W. L.; Baldwin, J. A.
1983-01-01
Spectrophotometry of two complete samples of flat-spectrum radio quasars shows that for these objects there is a strong correlation between the equivalent width of the C IV λ1550 emission line and the luminosity of the underlying continuum. Assuming Friedmann cosmologies, the scatter in this correlation is a minimum for q_0 ≈ 1. Alternatively, luminosity evolution can be invoked to give compact distributions for q_0 ≈ 0 models. A sample of Seyfert galaxies observed with IUE shows that, despite some dispersion, the average equivalent width of C IV λ1550 in Seyfert galaxies is independent of the underlying continuum luminosity. New redshifts for 4 quasars are given.
Performance evaluation of digital phase-locked loops for advanced deep space transponders
NASA Technical Reports Server (NTRS)
Nguyen, T. M.; Hinedi, S. M.; Yeh, H.-G.; Kyriacou, C.
1994-01-01
The performances of digital phase-locked loops (DPLLs) for advanced deep-space transponders (ADTs) are investigated. The DPLLs considered in this article are derived from the analog phase-locked loop currently employed by the NASA standard deep space transponder, using S-domain to Z-domain mapping techniques. Three mappings are used to develop digital approximations of the standard deep space analog phase-locked loop: the bilinear transformation (BT), impulse invariant transformation (IIT), and step invariant transformation (SIT) techniques. The performance in terms of the closed-loop phase and magnitude responses, carrier tracking jitter, and response of the loop to a phase offset (the difference between the incoming phase and the reference phase) is evaluated for each digital approximation. Theoretical results for the carrier tracking jitter in the command-on and command-off cases are then validated by computer simulation. Both theoretical and computer simulation results show that at high sampling frequency, the DPLLs approximated by all three transformations have the same tracking jitter. However, at low sampling frequency, the digital approximation using the BT outperforms the others. The minimum sampling frequency for adequate tracking performance is determined for each digital approximation of the analog loop. In addition, computer simulation shows that the DPLL developed by the BT provides a faster response to the phase offset than the IIT and SIT.
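The bilinear-transform step can be illustrated with scipy. The loop filter below is a hypothetical first-order prototype with an assumed corner frequency and sampling rate (the actual transponder loop filter is not given in the abstract); the sketch shows that the BT mapping preserves stability and DC gain.

```python
import numpy as np
from scipy.signal import bilinear

# Hypothetical analog loop filter: first-order low-pass F(s) = 1 / (s/wc + 1)
wc = 2 * np.pi * 10.0            # assumed 10 Hz corner frequency
b_s, a_s = [1.0], [1.0 / wc, 1.0]

fs = 1000.0                      # assumed sampling rate, Hz
bz, az = bilinear(b_s, a_s, fs)  # S-domain -> Z-domain via s = 2*fs*(z-1)/(z+1)

dc_gain = np.sum(bz) / np.sum(az)   # digital filter gain at z = 1
poles = np.roots(az)                # stable iff all |poles| < 1
```

The bilinear transformation maps s = 0 exactly to z = 1 and the left half-plane to the interior of the unit circle, which is why it tends to behave well at low sampling frequencies compared with the impulse- and step-invariant mappings.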
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
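For reference, Yuen's test itself (not the paper's sample-size formulas, which are not reproduced here) can be sketched with numpy and scipy; the 20% trimming proportion is a conventional choice.

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    # Yuen's two-sample test on trimmed means with winsorized variances.
    def parts(a):
        a = np.sort(np.asarray(a, float))
        n = len(a)
        g = int(np.floor(trim * n))
        h = n - 2 * g                        # effective (trimmed) sample size
        tm = a[g:n - g].mean()               # trimmed mean
        w = np.clip(a, a[g], a[n - g - 1])   # winsorized sample
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
        return tm, d, h
    tm1, d1, h1 = parts(x)
    tm2, d2, h2 = parts(y)
    t = (tm1 - tm2) / np.sqrt(d1 + d2)
    # Welch-Satterthwaite style degrees of freedom
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

x = np.arange(20.0)
t_stat, p = yuen_test(x, x + 10)   # large shift -> small p
```

The winsorized-variance denominator is what makes the statistic robust to heavy tails, and the Welch-style degrees of freedom handle the unequal-variance case the paper's sample-size formulas target.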
Alvi, Naveed Ul Hassan; Hussain, Sajjad; Jensen, Jen; Nur, Omer; Willander, Magnus
2011-12-12
Light-emitting diodes (LEDs) based on zinc oxide (ZnO) nanorods grown by the vapor-liquid-solid catalytic growth method were irradiated with 2-MeV helium (He+) ions. The fabricated LEDs were irradiated with fluences of approximately 2 × 10^13 ions/cm^2 and approximately 4 × 10^13 ions/cm^2. Scanning electron microscopy images showed that the morphology of the irradiated samples was not changed. The as-grown and He+-irradiated LEDs showed rectifying behavior with the same I-V characteristics. Photoluminescence (PL) measurements showed blue shifts of approximately 0.0347 and 0.082 eV in the near-band emission (free exciton) and green emission of the irradiated ZnO nanorods, respectively. It was also observed that the PL intensity of the near-band emission decreased after irradiation of the samples. The electroluminescence (EL) measurements of the fabricated LEDs showed a blue shift of 0.125 eV in the broad green emission after irradiation, and the EL intensity of the violet emission centered at approximately 398 nm nearly disappeared after irradiation. The color-rendering properties show a small decrease in the color-rendering indices of 3% after 2-MeV He+ ion irradiation.
A study on modification of nanoporous rice husk silica for hydrophobic nano filter.
Kim, Hee Jin; So, Soo Jeong; Han, Chong Soo
2010-05-01
Nanoporous rice husk silica (RHS) was modified with the alkylsilylation reagents hexamethyldisilazane, diethoxydiphenylsilane, dichlorodimethylsilane and n-octadecyltrimethoxysilane. The silica samples were characterized with a Raman spectrometer, thermal gravimetric analyzer, scanning electron microscope, nitrogen adsorption measurement and solid-state nuclear magnetic resonance spectrometer. Raman spectra of the modified silica showed growth of the peaks of C-H stretching and CH3 bending at approximately 3000 cm(-1) and approximately 1500 cm(-1), respectively. Weight losses of approximately 3-5% were observed in the thermogravimetric profiles of the modified silica. The microscopic shape of RHS, approximately 20 nm primary particles and their aggregates, was almost unchanged by the modification, but there was agglomeration of the silica particles in the samples treated with dichlorodimethylsilane or diethoxydiphenylsilane. BET adsorption experiments showed that the modification significantly decreased the mean pore size of the silica from approximately 5 nm to approximately 4 nm, as well as the pore volume from 0.5 cm^3/g to 0.4 cm^3/g, except in the case of treatment with n-octadecyltrimethoxysilane. 29Si solid-state NMR spectra of the silica samples showed a decrease in the relative intensities of the Q2 and Q3 peaks and a large increase in Q4 after the modification, except for the case of bulky n-octadecyltrimethoxysilane. From these results, it was concluded that the alkylsilylation reagents reacted with hydroxyl groups on the silica particles as well as in the nanopores, while the size of the reagent molecule affected its diffusion and reaction with the hydroxyl groups in the pores.
Permeability of gypsum samples dehydrated in air
NASA Astrophysics Data System (ADS)
Milsch, Harald; Priegnitz, Mike; Blöcher, Guido
2011-09-01
We report on changes in rock permeability induced by devolatilization reactions using gypsum as a reference analog material. Cylindrical samples of natural alabaster were dehydrated in air (dry) for up to 800 h at ambient pressure and temperatures between 378 and 423 K. Subsequently, the reaction kinetics, the induced changes in porosity, and the concurrent evolution of sample permeability were constrained. Weighing the heated samples at predefined time intervals yielded the reaction progress, and the stoichiometric mass balance indicated an ultimate and complete dehydration to anhydrite regardless of temperature. Porosity increased continuously with reaction progress from approximately 2% to 30%, whilst the initial bulk volume remained unchanged. Within these limits, permeability increased with porosity by almost three orders of magnitude, from approximately 7 × 10^-19 m^2 to 3 × 10^-16 m^2. We show that, when mechanical and hydraulic feedbacks can be excluded, permeability, reaction progress, and porosity are related unequivocally.
Characterizations of Pr-doped Yb3Al5O12 single crystals for scintillator applications
NASA Astrophysics Data System (ADS)
Yoshida, Yasuki; Shinozaki, Kenji; Igashira, Takuya; Kawano, Naoki; Okada, Go; Kawaguchi, Noriaki; Yanagida, Takayuki
2018-04-01
Yb3Al5O12 (YbAG) single crystals doped with different concentrations of Pr were synthesized by the Floating Zone (FZ) method. Then, we evaluated their basic optical and scintillation properties. All the samples showed photoluminescence (PL) with two emission bands appearing at approximately 300-500 nm and 550-600 nm, due to the charge transfer luminescence of Yb3+ and the intrinsic luminescence of the garnet structure, respectively. The PL decay profile of each sample was approximated by a sum of two exponential decay functions, and the obtained decay times were 1 ns and 3-4 ns. In the scintillation spectra, we observed emission peaks in the ranges from 300 to 400 nm and from 450 to 550 nm for all the samples. The origins of these emissions were attributed to the charge transfer luminescence of Yb3+ and the intrinsic luminescence of the garnet structure, respectively. The scintillation decay times became longer with increasing Pr concentration. Among the present samples, the 0.1% Pr-doped sample showed the lowest scintillation afterglow level. In addition, a pulse height spectrum under 5.5 MeV α-rays was demonstrated using the Pr-doped YbAG, and we confirmed that all the samples showed a full-energy deposition peak. Of all the samples, the 0.1% Pr-doped sample showed the highest light yield, with a value of 14 ph/MeV under α-ray excitation.
Detection of cracks in shafts with the Approximated Entropy algorithm
NASA Astrophysics Data System (ADS)
Sampaio, Diego Luchesi; Nicoletti, Rodrigo
2016-05-01
Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunications for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by fracture mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth onwards, irrespective of the unbalance of the rotating system and the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of a crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
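A minimal numpy sketch of the Approximate Entropy statistic itself (in the classic Pincus form, with embedding length m and tolerance r given as a fraction of the signal's standard deviation, corresponding to the parameters the abstract calls p and f); this illustrates the measure, not the authors' crack-detection code.

```python
import numpy as np

def approx_entropy(x, m=2, r_frac=0.2):
    # Approximate Entropy ApEn(m, r) of a 1-D signal.
    # m  : length of the sample vectors (the paper's parameter p)
    # r  : tolerance, as a fraction r_frac of the signal's std (the paper's f)
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of m-length sample vectors
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)
        c = (d <= r).mean(1)        # fraction of vectors within tolerance r
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

t = np.arange(300)
regular = np.sin(2 * np.pi * t / 25)               # periodic: low ApEn
noisy = np.random.default_rng(0).normal(size=300)  # irregular: high ApEn
```

A regular (periodic) signal yields a much lower ApEn than an irregular one, which is the property the crack-detection scheme exploits when the breathing crack alters the regularity of the run-up vibration signal.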
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network is not guaranteed to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited to this application; it can also be used to approximate any other multimodal continuous distribution function. In the second step, the network is extended in order to capture the spatial variation of the sediment samples, which were obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
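One way to enforce CDF validity through weight constraints — assumed here for illustration, since the paper's exact architecture is not given in the abstract — is a convex mixture of sigmoid units with positive slopes, which is non-decreasing with limits 0 and 1 by construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cdf_net(x, a, w, b):
    # Mixture of K sigmoid units: convex mixture weights softmax(a) and
    # positive slopes exp(w) guarantee a valid CDF (non-decreasing,
    # limits 0 and 1), regardless of the unconstrained parameters a, w, b.
    pk = np.exp(a) / np.exp(a).sum()        # convex combination weights
    s = sigmoid(np.exp(w)[None, :] * (x[:, None] - b[None, :]))
    return s @ pk

rng = np.random.default_rng(0)
a, w = rng.normal(size=3), rng.normal(size=3)
b = np.array([-1.0, 0.0, 2.0])              # unit locations (e.g. grain sizes)
x = np.linspace(-10, 10, 200)
F = cdf_net(x, a, w, b)
```

Because validity is built into the parameterization, the parameters can be trained by unconstrained gradient descent, which is the practical appeal of weight constraints over post-hoc correction of the network output.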
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both the quantum theory and the computational complexity theory points of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
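For context, the permanent can be computed exactly only in exponential time; a compact numpy sketch of Ryser's formula makes the hardness claim concrete for small matrices (this is standard textbook material, not code from the paper).

```python
import numpy as np
from itertools import combinations

def permanent(A):
    # Ryser's formula: per(A) = (-1)^n * sum over nonempty column subsets S of
    # (-1)^|S| * prod_i (sum of row i over S).  Exponential in n, which is
    # why exact evaluation is infeasible beyond small matrices.
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, S].sum(axis=1))
    return (-1) ** n * total
```

For example, per([[1, 2], [3, 4]]) = 1·4 + 2·3 = 10, and the permanent of the all-ones n × n matrix is n!. The contrast between this exponential exact computation and the efficient classical sampling algorithm for thermal states is exactly the gap the paper's BPP^NP approximation result exploits.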
Cawello, Willi; Schäfer, Carina
2014-08-01
Frequent plasma sampling to monitor the pharmacokinetic (PK) profile of antiepileptic drugs (AEDs) is invasive, costly and time-consuming. For drugs with a well-defined PK profile, such as the AED lacosamide, equations can accurately approximate PK parameters from one steady-state plasma sample. Equations were derived to approximate steady-state peak and trough lacosamide plasma concentrations (Cpeak,ss and Ctrough,ss, respectively) and the area under the concentration-time curve during the dosing interval (AUCτ,ss) from one plasma sample. Lacosamide (ka: ∼2 h(-1); ke: ∼0.05 h(-1), corresponding to a half-life of 13 h) was calculated to reach Cpeak,ss after ∼1 h (tmax,ss). Equations were validated by comparing approximations to reference PK parameters obtained from single plasma samples drawn 3-12 h following lacosamide administration, using data from a double-blind, placebo-controlled, parallel-group PK study. Values of relative bias (accuracy) between -15% and +15%, and root mean square error (RMSE) values ≤15% (precision), were considered acceptable for validation. Thirty-five healthy subjects (12 young males; 11 elderly males, 12 elderly females) received lacosamide 100 mg/day for 4.5 days. Equation-derived PK values were compared to reference mean Cpeak,ss, Ctrough,ss and AUCτ,ss values. Equation-derived PK data had a precision of 6.2% and accuracies of -8.0%, 2.9%, and -0.11% for Cpeak,ss, Ctrough,ss and AUCτ,ss, respectively. Equation-derived versus reference PK values for individual samples obtained 3-12 h after lacosamide administration showed a correlation (R2) range of 0.88-0.97 for AUCτ,ss. The correlation range for Cpeak,ss and Ctrough,ss was 0.65-0.87. Error analyses for individual sample comparisons were independent of time. The derived equations approximated lacosamide Cpeak,ss, Ctrough,ss and AUCτ,ss using one steady-state plasma sample within the validation range. Approximated PK parameters were within the accepted validation criteria when compared to reference PK values.
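The kind of equation involved can be sketched with the standard one-compartment oral-absorption model at steady state, using the ka and ke values quoted in the abstract. The dose, volume of distribution, bioavailability, and dosing interval below are illustrative assumptions, not values from the study, and the paper's actual derived equations are not reproduced here.

```python
import numpy as np

ka, ke = 2.0, 0.05                      # 1/h, from the abstract (t1/2 = ln2/ke ~ 13.9 h)
F, D, V, tau = 1.0, 100.0, 50.0, 24.0   # assumed: bioavailability, mg, L, h

def c_ss(t):
    # Steady-state concentration over one dosing interval (superposition):
    # C_ss(t) = F*D*ka / (V*(ka-ke)) * [ e^(-ke*t)/(1-e^(-ke*tau))
    #                                  - e^(-ka*t)/(1-e^(-ka*tau)) ]
    pref = F * D * ka / (V * (ka - ke))
    return pref * (np.exp(-ke * t) / (1 - np.exp(-ke * tau))
                   - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

t = np.linspace(0.0, tau, 2401)
c = c_ss(t)
tmax_ss = t[c.argmax()]                              # time of C_peak,ss
auc_tau = ((c[1:] + c[:-1]) * np.diff(t) / 2).sum()  # trapezoidal AUC over tau
```

Given such a model, a single measured steady-state concentration fixes the individual scaling factor, from which Cpeak,ss, Ctrough,ss and AUCτ,ss follow analytically; the closed form AUCτ,ss = F·D/(V·ke) is a useful consistency check on the numerical integral.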
Porosity of the Marcellus Shale: A contrast matching small-angle neutron scattering study
Bahadur, Jitendra; Ruppert, Leslie F.; Pipich, Vitaliy; Sakurovs, Richard; Melnichenko, Yuri B.
2018-01-01
Neutron scattering techniques were used to determine the effect of mineral matter on the accessibility of water and toluene to pores in the Devonian Marcellus Shale. Three Marcellus Shale samples, representing quartz-rich, clay-rich, and carbonate-rich facies, were examined using contrast matching small-angle neutron scattering (CM-SANS) at ambient pressure and temperature. Contrast matching compositions of H2O, D2O and toluene, deuterated toluene were used to probe open and closed pores of these three shale samples. Results show that although the mean pore radius was approximately the same for all three samples, the fractal dimension of the quartz-rich sample was higher than for the clay-rich and carbonate-rich samples, indicating different pore size distributions among the samples. The number density of pores was highest in the clay-rich sample and lowest in the quartz-rich sample. Contrast matching with water and toluene mixtures shows that the accessibility of pores to water and toluene also varied among the samples. In general, water accessed approximately 70–80% of the larger pores (>80 nm radius) in all three samples. At smaller pore sizes (~5–80 nm radius), the fraction of accessible pores decreases. The lowest accessibility to both fluids is at pore throat size of ~25 nm radii with the quartz-rich sample exhibiting lower accessibility than the clay- and carbonate-rich samples. The mechanism for this behaviour is unclear, but because the mineralogy of the three samples varies, it is likely that the inaccessible pores in this size range are associated with organics and not a specific mineral within the samples. At even smaller pore sizes (~<2.5 nm radius), in all samples, the fraction of accessible pores to water increases again to approximately 70–80%. 
Accessibility to toluene generally follows that of water; however, in the smallest pores (~<2.5 nm radius), accessibility to toluene decreases, especially in the clay-rich sample, which contains about 30% more closed pores than the quartz- and carbonate-rich samples. Results from this study show that the mineralogy of producing intervals within a shale reservoir can affect the accessibility of pores to water and toluene, and these mineralogic differences may affect hydrocarbon storage and production and hydraulic fracturing characteristics.
Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisping, Lynn E.
2007-01-31
This document contains the calendar year 2007 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis and may not be collected in 2007, in which case the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2007.
NASA Technical Reports Server (NTRS)
Morris, Richard V.; Golden, D. C.; Bell, James F., III; Lauer, H. V., Jr.
1995-01-01
Visible and near-IR reflectivity, Moessbauer, and X-ray diffraction data were obtained on powders of impact melt rock from the Manicouagan Impact Crater located in Quebec, Canada. The iron mineralogy is dominated by pyroxene for the least oxidized samples and by hematite for the most oxidized samples. Phyllosilicate (smectite) contents up to approximately 15 wt% were found in some heavily oxidized samples. Nanophase hematite and/or paramagnetic ferric iron is observed in all samples. No hydrous ferric oxides (e.g., goethite, lepidocrocite, and ferrihydrite) were detected, which implies the alteration occurred above 250 C. Oxidative alteration is thought to have occurred predominantly during late-stage crystallization and subsolidus cooling of the impact melt by invasion of oxidizing vapors and/or solutions while the impact melt rocks were still hot. The near-IR band minimum correlated with the extent of alteration, Fe(3+)/Fe(tot), and ranged from approximately 1000 nm (high-Ca pyroxene) to approximately 850 nm (bulk, well-crystalline hematite) for the least and most oxidized samples, respectively. Intermediate band positions (900-920 nm) are attributed to low-Ca pyroxene and/or a composite band from hematite-pyroxene assemblages. Manicouagan data are consistent with previous assignments of hematite and pyroxene to the approximately 850 and approximately 1000 nm bands observed in Martian reflectivity spectra. Manicouagan data also show that possible assignments for intermediate band positions (900-920 nm) in Martian spectra are pyroxene and/or hematite-pyroxene assemblages. By analogy with impact melt sheets and in agreement with observables for Mars, oxidative alteration of Martian impact melt sheets above 250 C and subsequent erosion could produce rocks and soils with variable proportions of hematite (both bulk and nanophase), pyroxene, and phyllosilicates as iron-bearing mineralogies.
If this process is dominant, these phases on Mars were formed rapidly at relatively high temperatures on a sporadic basis throughout the history of the planet. The Manicouagan samples also show that this mineralogical diversity can be accomplished at constant chemical composition, which is also indicated for Mars from the analyses of soil at the two Viking landing sites.
The Evolution of Ly-alpha Emitting Galaxies Between z = 2.1 and z = 3.1
NASA Technical Reports Server (NTRS)
Ciardullo, Robin; Gronwall, Caryl; Wolf, Christopher; McCathran, Emily; Bond, Nicholas A.; Gawiser, Eric; Guaita, Lucia; Feldmeier, John J.; Treister, Ezequiel; Padilla, Nelson;
2011-01-01
We describe the results of a new, wide-field survey for z = 3.1 Ly-alpha emission-line galaxies (LAEs) in the Extended Chandra Deep Field South (ECDF-S). By using a nearly top-hat 5010 Angstrom filter and complementary broadband photometry from the MUSYC survey, we identify a complete sample of 141 objects with monochromatic fluxes brighter than 2.4E-17 ergs/cm^2/s and observer-frame equivalent widths greater than 80 Angstroms (i.e., 20 Angstroms in the rest frame of Ly-alpha). The bright end of this dataset is dominated by x-ray sources and foreground objects with GALEX detections, but when these interlopers are removed, we are still left with a sample of 130 LAE candidates, 39 of which have spectroscopic confirmations. This sample overlaps the set of objects found in an earlier ECDF-S survey, but due to our filter's redder bandpass, it also includes 68 previously uncataloged sources. We confirm earlier measurements of the z = 3.1 LAE emission-line luminosity function, and show that an apparent anti-correlation between equivalent width and continuum brightness is likely due to the effect of correlated errors in our heteroskedastic dataset. Finally, we compare the properties of z = 3.1 LAEs to LAEs found at z = 2.1. We show that in the approximately 1 Gyr after z approximately 3, the LAE luminosity function evolved significantly, with L* fading by approximately 0.4 mag, the number density of sources with L greater than 1.5E42 ergs/s declining by approximately 50%, and the equivalent width scale length contracting from 70^{+7}_{-5} Angstroms to 50^{+9}_{-6} Angstroms. When combined with literature results, our observations demonstrate that over the redshift range z approximately 0 to z approximately 4, LAEs contain less than approximately 10% of the star-formation rate density of the universe.
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error were presented. The elapsed time taken to compute each interpolation algorithm was also investigated. The results indicated that parabola approximation is a simple, fast, and accurate method for compensating the low timing resolution of pulse beat intervals. In addition, the method showed performance comparable to the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated using a signal sampled at 20 Hz did not exactly match those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
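The parabola approximation at the heart of the method fits a parabola through the peak sample and its two neighbours and takes the vertex as the sub-sample peak location. A minimal sketch with a synthetic, locally parabolic pulse (the 20 Hz rate matches the abstract; the pulse shape and peak time are hypothetical):

```python
import numpy as np

def parabola_peak(y0, y1, y2, dt):
    # Vertex of the parabola through three equally spaced samples,
    # where y1 is the local-maximum sample; returns the sub-sample
    # time offset of the true peak relative to the y1 sample.
    return 0.5 * dt * (y0 - y2) / (y0 - 2 * y1 + y2)

fs = 20.0                          # low sampling rate -> 50 ms resolution
true_peak = 0.5123                 # seconds (synthetic)
t = np.arange(0, 1, 1 / fs)
y = -(t - true_peak) ** 2          # locally parabolic pulse shape
i = y.argmax()
refined = t[i] + parabola_peak(y[i - 1], y[i], y[i + 1], 1 / fs)
```

For an exactly parabolic peak the vertex formula recovers the true peak time despite the coarse 50 ms grid, which is why beat-to-beat intervals derived this way can approach the quality of a much higher sampling rate.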
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
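A numpy sketch of the scheme as described in the abstract: draw samples from the equilibrium measure of [-1, 1] (the arcsine density), weight each collocation equation by the Christoffel function of the Legendre basis, and solve the weighted least-squares problem. The test function, degree, and sample count are arbitrary choices, not the paper's experiments.

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_wls(f, deg, m, seed=0):
    rng = np.random.default_rng(seed)
    # Sample from the equilibrium measure of [-1, 1] (arcsine density).
    x = np.cos(np.pi * rng.uniform(size=m))
    # Vandermonde in the orthonormal Legendre basis: p_k = sqrt((2k+1)/2) P_k.
    V = legendre.legvander(x, deg) * np.sqrt((2 * np.arange(deg + 1) + 1) / 2)
    # Christoffel-function weights: w(x) = (deg+1) / sum_k p_k(x)^2.
    w = (deg + 1) / (V ** 2).sum(axis=1)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)
    return coef

coef = christoffel_wls(np.exp, 10, 200)
grid = np.linspace(-1, 1, 1000)
Vg = legendre.legvander(grid, 10) * np.sqrt((2 * np.arange(11) + 1) / 2)
err = np.abs(Vg @ coef - np.exp(grid)).max()
```

The Christoffel weighting compensates for the mismatch between the sampling density and the uniform density underlying the Legendre family, which is what stabilizes the least-squares system relative to unweighted Monte Carlo collocation.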
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
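The baseline the paper starts from — a least-squares cubic spline with equidistant interior knots separating background from superimposed fluctuation — can be sketched with scipy. The trend, wave, and knot spacing below are synthetic choices; the paper's repeating-spline refinement is not reproduced.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic series: slow background trend plus a superimposed wave
t = np.linspace(0, 10, 501)
background = 0.5 * t
signal = background + np.sin(2 * np.pi * t)     # wave of period 1

# Cubic least-squares spline with equidistant interior spline sampling
# points (knots). A spacing well above the wave period keeps the spline
# on the background; too many knots would start to fit the wave itself.
knots = np.arange(2.0, 10.0, 2.0)               # every 2 time units
spline = LSQUnivariateSpline(t, signal, knots, k=3)

residual = signal - spline(t)                    # the filtered fluctuation
background_error = np.abs(spline(t) - background).max()
```

With this knot spacing the spline tracks the trend and the residual carries the wave; the paper's point is that simply shrinking the knot spacing does not keep improving the background estimate and can inject artificial oscillations, which motivates the repeating-spline approach.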
Solutions to Faculty Work Overload: A Study of Job Sharing
ERIC Educational Resources Information Center
Freeman, Brenda J.; Coll, Kenneth M.
2009-01-01
This study investigated the opinions of a national sample of counselor education chairs and college of education deans regarding the advantages and disadvantages of faculty job sharing. Results showed favorable responses toward faculty job sharing from approximately half the sample, despite limited experience with job sharing. The study found few…
NASA Technical Reports Server (NTRS)
Achilles, C. N.; Downs, R. T.; Rampe, E. B.; Morris, R. V.; Bristow, T. F.; Ming, D. W.; Blake, D. F.; Vaniman, D. T.; Morrison, S. M.; Sutter, B.;
2017-01-01
The Mars Science Laboratory rover, Curiosity, is exploring the lowermost formation of Gale crater's central mound. Within this formation, three samples named Marimba, Quela, and Sebina have been analyzed by the CheMin X-ray diffractometer and the Alpha Particle X-ray Spectrometer (APXS) to determine mineralogy and bulk elemental chemistry, respectively. Marimba and Quela were also analyzed by the SAM (Sample Analysis at Mars) instrument to characterize the type and abundance of volatile phases detected in evolved gas analyses (EGA). CheMin data show similar proportions of plagioclase, hematite, and Ca-sulfates along with a mixture of di- and trioctahedral smectites at abundances of approximately 28, approximately 16, and approximately 18 wt% for Marimba, Quela, and Sebina. Approximately 50 wt% of each mudstone consists of X-ray amorphous and trace crystalline phases present below the CheMin detection limit (approximately 1 wt%). APXS measurements reveal a distinct bulk elemental chemistry that cannot be attributed to the clay mineral variation alone, indicating that a variable amorphous phase assemblage exists among the three mudstones. To explore the amorphous component, the calculated amorphous composition and SAM EGA results are used to identify amorphous phases unique to each mudstone. For example, the amorphous fraction of Marimba has twice the FeO wt% compared to Quela and Sebina, yet SAM EGA data show no evidence for Fe-sulfates. These data imply that Fe must reside in alternate Fe-bearing amorphous phases (e.g., nanophase iron oxides, ferrihydrite, etc.). Constraining the composition, abundances, and proposed identity of the amorphous fraction provides an opportunity to speculate on the past physical, chemical, and/or diagenetic processes which produced such phases, in addition to sediment sources, lake chemistry, and the broader geologic history of Gale crater.
Assessment of microbiological quality of drinking water from household tanks in Bermuda.
Lévesque, B; Pereg, D; Watkinson, E; Maguire, J S; Bissonnette, L; Gingras, S; Rouja, P; Bergeron, M G; Dewailly, E
2008-06-01
Bermuda residents collect rainwater from rooftops to fulfil their freshwater needs. The objective of this study was to assess the microbiological quality of drinking water in household tanks throughout Bermuda. The tanks surveyed were selected randomly from the electoral register. Governmental officers visited the selected households (n = 102) to collect water samples and administer a short questionnaire about the tank characteristics, the residents' habits in terms of water use, and general information on the water collecting system and its maintenance. At the same time, water samples were collected for analysis, and total coliforms and Escherichia coli were determined by 2 methods (membrane filtration and culture on chromogenic media, Colilert kit). Results from the 2 methods were highly correlated and showed that approximately 90% of the samples analysed were contaminated with total coliforms in concentrations exceeding 10 CFU/100 mL, and approximately 66% of samples showed contamination with E. coli. Tank cleaning in the year prior to sampling seems to protect against water contamination. Although rainwater collection from roofs is the most efficient means of providing freshwater to Bermudians, it must not be considered a source of high-quality drinking water because of the high levels of microbial contamination.
Approximate number word knowledge before the cardinal principle.
Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C
2015-02-01
Approximate number word knowledge, understanding the relation between the count words and the approximate magnitudes of sets, is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge: before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong
2013-12-01
Chebyshev and Legendre polynomials are frequently used in rectangular pupils for wavefront approximation. Ideally, the dataset completely fits with the polynomial basis, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, if there are horizontal translation and scaling, the terms in the original polynomials will become the linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. With a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms were selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the outcomes of the analytical solutions and the approximated values under discrete sampling are consistent. Using a group of randomly generated coefficients, we compared the changes under different translation and scaling conditions. Larger translation and scaling ratios correspond to larger deviations of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root mean square (RMS) deviations from the uses of the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation less than 7% and an RMS deviation less than 2%. The analytical expressions and the computed results under discrete sampling given in this paper for the multiple typical function bases during translation and scaling in rectangular areas can be applied in wavefront approximation and analysis.
Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories
Fonteneau, Raphael; Murphy, Susan A.; Wehenkel, Louis; Ernst, Damien
2013-01-01
In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of “artificial trajectories” from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning. PMID:24049244
Characterization of the enhancement effect of Na2CO3 on the sulfur capture capacity of limestones.
Laursen, Karin; Kern, Arnt A; Grace, John R; Lim, C Jim
2003-08-15
It has been known for a long time that certain additives (e.g., NaCl, CaCl2, Na2CO3, Fe2O3) can increase the sulfur dioxide capture-capacity of limestones. In a recent study we demonstrated that very small amounts of Na2CO3 can be very beneficial for producing sorbents of very high sorption capacities. This paper explores what contributes to these significant increases. Mercury porosimetry measurements of calcined limestone samples reveal a change in the pore-size from 0.04-0.2 microm in untreated samples to 2-10 microm in samples treated with Na2CO3--a pore-size more favorable for penetration of sulfur into the particles. The change in pore-size facilitates reaction with lime grains throughout the whole particle without rapid plugging of pores, avoiding premature change from a fast chemical reaction to a slow solid-state diffusion controlled process, as seen for untreated samples. Calcination in a thermogravimetric reactor showed that Na2CO3 increased the rate of calcination of CaCO3 to CaO, an effect which was slightly larger at 825 degrees C than at 900 degrees C. Peak broadening analysis of powder X-ray diffraction data of the raw, calcined, and sulfated samples revealed an unaffected calcite size (approximately 125-170 nm) but a significant increase in the crystallite size for lime (approximately 60-90 nm to approximately 250-300 nm) and less for anhydrite (approximately 125-150 nm to approximately 225-250 nm). The increase in the crystallite and pore-size of the treated limestones is attributed to an increase in ionic mobility in the crystal lattice due to formation of vacancies in the crystals when Ca is partly replaced by Na.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution given any time constraint. There are several simulation methods currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution for a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
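The LGIS algorithm itself is not specified in the abstract above. As a hedged, minimal sketch of the underlying idea of importance sampling for inference, the following example draws from a tractable proposal q, weights each draw by p(x)/q(x), and forms a self-normalized estimate of an expectation under the target p; the particular densities are illustrative assumptions, not the paper's hybrid-network setup.

```python
# Self-normalized importance sampling sketch: estimate the mean of a target
# density p using draws from a different proposal density q.
import math
import random

def gauss_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

random.seed(0)

# Target p: N(1, 1). Proposal q: N(0, 2). The proposal is heavier-tailed than
# the target, so the importance weights p(x)/q(x) stay bounded.
draws = [random.gauss(0.0, 2.0) for _ in range(20000)]
weights = [gauss_pdf(x, 1.0, 1.0) / gauss_pdf(x, 0.0, 2.0) for x in draws]

# Self-normalized importance-sampling estimate of the target mean (true value: 1).
est_mean = sum(w * x for w, x in zip(weights, draws)) / sum(weights)
```

The quality of such an estimator hinges on how well q matches p, which is why LGIS-style methods adapt the importance function from previous samples.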
Moein, Mohammad Mahdi; Jabbar, Dunia; Colmsjö, Anders; Abdel-Rehim, Mohamed
2014-10-31
In the present work, a needle trap utilizing a molecularly imprinted sol-gel xerogel was prepared for the on-line microextraction of bilirubin from plasma and urine samples. Each prepared needle could be used for approximately one hundred extractions before it was discarded. Imprinted and non-imprinted sol-gel xerogel were applied for the extraction of bilirubin from plasma and urine samples. The produced molecularly imprinted sol-gel xerogel polymer showed high binding capacity and fast adsorption/desorption kinetics for bilirubin in plasma and urine samples. The adsorption capacity of molecularly imprinted sol-gel xerogel polymer was approximately 60% higher than that of non-imprinted polymer. The effects of the conditioning, washing and elution solvents, pH, extraction time, adsorption capacity and imprinting factor were investigated. The limit of detection and the lower limit of quantification were set to 1.6 and 5 nmol L(-1), respectively, using plasma or urine samples. The standard calibration curves were obtained within the concentration range of 5-1000 nmol L(-1) in both plasma and urine samples. The coefficients of determination (R(2)) were ≥0.998 for all runs. The extraction recovery was approximately 80% for BR in the human plasma and urine samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Louis, Laurent; David, Christian; Špaček, Petr; Wong, Teng-Fong; Fortin, Jérôme; Song, Sheng Rong
2012-01-01
The study of seismic anisotropy has become a powerful tool to decipher rock physics attributes in reservoirs or in complex tectonic settings. We compare direct 3-D measurements of P-wave velocity in 132 different directions on spherical rock samples to the prediction of the approximate model proposed by Louis et al. based on a tensorial approach. The data set includes measurements on dry spheres under confining pressure ranging from 5 to 200 MPa for three sandstones retrieved at a depth of 850, 1365 and 1394 metres in TCDP hole A (Taiwan Chelungpu Fault Drilling Project). As long as the P-wave velocity anisotropy is weak, we show that the predictions of the approximate model are in good agreement with the measurements. As the tensorial method is designed to work with cylindrical samples cored in three orthogonal directions, a significant gain both in the number of measurements involved and in sample preparation is achieved compared to measurements on spheres. We analysed the pressure dependence of the velocity field and show that as the confining pressure is raised the velocity increases, the anisotropy decreases but remains significant even at high pressure, and the shape of the ellipsoid representing the velocity (or elastic) fabric evolves from elongated to planar. These observations can be accounted for by considering the existence of both isotropic and anisotropic crack distributions and their evolution with applied pressure.
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt ≈ E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt ≈ E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
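The paper's extended approximations of E[nt] are not reproduced in the abstract above. As a hedged illustration of the kind of "simple deterministic function" it refers to, one textbook approximation treats the expected number of ancestral lineages as the ODE dn/dt = -n(n-1)/2 (time in coalescent units), which has the closed-form solution sketched below; this is a generic approximation, not necessarily the authors' formula.

```python
# Deterministic approximation to the expected number of ancestral lineages
# n(t) under the coalescent, from the ODE dn/dt = -n(n-1)/2:
#   n(t) = 1 / (1 - ((n0 - 1)/n0) * exp(-t/2)),
# where n0 is the initial sample size. n(0) = n0 and n(t) -> 1 as t -> infinity.
import math

def n_lineages(n0, t):
    """Deterministic approximation to E[n_t] for a sample of n0 lineages."""
    r = ((n0 - 1) / n0) * math.exp(-t / 2.0)
    return 1.0 / (1.0 - r)
```

Functions of this form are cheap to evaluate and numerically stable even for large n0, which is the practical advantage the abstract highlights over exact coalescent formulas.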
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Lead isotope systematics of some Apollo 17 soils and some separated components from 76501
NASA Technical Reports Server (NTRS)
Church, S. E.; Tilton, G. R.
1974-01-01
Isotopic lead data from bulk samples of Apollo 17 soils were analyzed, and they define a chord in a concordia diagram, showing the presence of a component or components containing excess radiogenic lead with Pb-207/Pb-206 equal to about 1.32. The chord is distinctly different from the cataclysm chord, for which Pb-207/Pb-206 is approximately 1.45. Nitric acid analysis of plagioclase indicates lead ages of around 4.35 AE, in agreement with previous findings. Agglutinates from soil 76501,34 show loss of approximately 15% of lead.
Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan
2005-07-10
We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented yielding errors of approximately 5%. Thus the presented methods show a substantial potential for simultaneous absorption and scattering characterization of turbid media.
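The calibration polynomials of the paper are not given in the abstract above. As a hedged, minimal sketch of the generic inversion step it describes (a polynomial forward model mapping an optical property to a measured quantity, inverted by Newton-Raphson), the following example uses a hypothetical monotone polynomial; the model and starting guess are illustrative assumptions only.

```python
# Newton-Raphson inversion of a (hypothetical) polynomial calibration model.
# forward(mu) plays the role of a regression model predicting a measured
# quantity from an optical property mu; we recover mu from a measurement.
def forward(mu):
    return 0.5 * mu ** 3 + 2.0 * mu  # monotone, so the inverse is unique

def forward_deriv(mu):
    return 1.5 * mu ** 2 + 2.0

def newton_invert(measured, mu0=1.0, tol=1e-12, max_iter=50):
    """Solve forward(mu) = measured for mu by Newton-Raphson iteration."""
    mu = mu0
    for _ in range(max_iter):
        step = (forward(mu) - measured) / forward_deriv(mu)
        mu -= step
        if abs(step) < tol:
            break
    return mu

mu_true = 1.3
recovered = newton_invert(forward(mu_true))
```

In the paper's setting several coupled polynomial models (one per measured signal) would be inverted jointly; the one-dimensional case above shows only the numerical scheme.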
HST images of very compact blue galaxies at z approximately 0.2
NASA Technical Reports Server (NTRS)
Koo, David C.; Bershady, Matthew A.; Wirth, Gregory D.; Stanford, S. Adam; Majewski, Steven R.
1994-01-01
We present the results of Hubble Space Telescope (HST) Wide-Field Camera (WFC) imaging of seven very compact, very blue galaxies with B less than or equal to 21 and redshifts z approximately 0.1 to 0.35. Based on deconvolved images, we estimate typical half-light diameters of approximately 0.65 arcsec, corresponding to approximately 1.4 h(exp -1) kpc at redshifts z approximately 0.2. The average rest frame surface brightness within this diameter is mu(sub v) approximately 20.5 mag arcsec(exp -2), approximately 1 mag brighter than that of typical late-type blue galaxies. Ground-based spectra show strong, narrow emission lines indicating high ionization; their very blue colors suggest recent bursts of star formation; their typical luminosities are approximately 4 times fainter than that of field galaxies. These characteristics suggest H II galaxies as likely local counterparts of our sample, though our most luminous targets appear to be unusually compact for their luminosities.
Kernel Wiener filter and its application to pattern recognition.
Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko
2010-11-01
The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal with respect to the squared error averaged over the original and the observed signals among linear operators. The kernel WF (KWF), extended directly from WF, has the problem that an additive noise has to be handled by samples. Since the computational complexity of kernel methods depends on the number of samples, a huge computational cost is incurred in that case. By using the first-order approximation of kernel functions, we realize a KWF that can handle such noise not by samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments to denoise images and estimate errors. We also apply KWF to classification, since KWF can provide an approximated result of the maximum a posteriori classifier that provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise or as a new regularization to suppress changes in the input space, whereas the ordinary regularization for the kernel method suppresses changes in the feature space. We further conducted experiments on binary and multiclass classification and on classification in the presence of noise.
Reverse-transformation austenite structure control with micro/nanometer size
NASA Astrophysics Data System (ADS)
Wu, Hui-bin; Niu, Gang; Wu, Feng-juan; Tang, Di
2017-05-01
To control the reverse-transformation austenite structure through manipulation of the micro/nanometer grain structure, the influences of cold deformation and annealing parameters on the microstructure evolution and mechanical properties of 316L austenitic stainless steel were investigated. The samples were first cold-rolled, and then samples deformed to different extents were annealed at different temperatures. The microstructure evolutions were analyzed by optical microscopy, scanning electron microscopy (SEM), magnetic measurements, and X-ray diffraction (XRD); the mechanical properties were also determined by tensile tests. The results showed that the fraction of strain-induced martensite was approximately 72% in the 90% cold-rolled steel. The micro/nanometric microstructure was obtained after reversion annealing at 820-870°C for 60 s. Nearly 100% reversed austenite was obtained in samples annealed at 850°C, where grains with a diameter ≤ 500 nm accounted for 30% and those with a diameter > 0.5 μm accounted for 70%. The micro/nanometer-grain steel exhibited not only a high strength level (approximately 959 MPa) but also a desirable elongation of approximately 45%.
Effect of separate sampling on classification accuracy.
Shahrokh Esfahani, Mohammad; Dougherty, Edward R
2014-01-15
Measurements are commonly taken from two phenotypes to build a classifier, where the number of data points from each class is predetermined, not random. In this 'separate sampling' scenario, the data cannot be used to estimate the class prior probabilities. Moreover, predetermined class sizes can severely degrade classifier performance, even for large samples. We employ simulations using both synthetic and real data to show the detrimental effect of separate sampling on a variety of classification rules. We establish propositions related to the effect on the expected classifier error owing to a sampling ratio different from the population class ratio. From these we derive a sample-based minimax sampling ratio and provide an algorithm for approximating it from the data. We also extend to arbitrary distributions the classical population-based Anderson linear discriminant analysis minimax sampling ratio derived from the discriminant form of the Bayes classifier. All the codes for synthetic data and real data examples are written in MATLAB. A function called mmratio, whose output is an approximation of the minimax sampling ratio of a given dataset, is also written in MATLAB. All the codes are available at: http://gsp.tamu.edu/Publications/supplementary/shahrokh13b.
Extraction and labeling methods for microarrays using small amounts of plant tissue.
Stimpson, Alexander J; Pereira, Rhea S; Kiss, John Z; Correll, Melanie J
2009-03-01
Procedures were developed to maximize the yield of high-quality RNA from small amounts of plant biomass for microarrays. Two disruption techniques (bead milling and pestle and mortar) were compared for the yield and the quality of RNA extracted from 1-week-old Arabidopsis thaliana seedlings (approximately 0.5-30 mg total biomass). The pestle and mortar method of extraction showed enhanced RNA quality at the smaller biomass samples compared with the bead milling technique, although the quality in the bead milling could be improved with additional cooling steps. The RNA extracted from the pestle and mortar technique was further tested to determine if the small quantity of RNA (500 ng-7 microg) was appropriate for microarray analyses. A new method of low-quantity RNA labeling for microarrays (NuGEN Technologies, Inc.) was used on five 7-day-old seedlings (approximately 2.5 mg fresh weight total) of Arabidopsis that were grown in the dark and exposed to 1 h of red light or continued dark. Microarray analyses were performed on a small plant sample (five seedlings; approximately 2.5 mg) using these methods and compared with extractions performed with larger biomass samples (approximately 500 roots). Many well-known light-regulated genes between the small plant samples and the larger biomass samples overlapped in expression changes, and the relative expression levels of selected genes were confirmed with quantitative real-time polymerase chain reaction, suggesting that these methods can be used for plant experiments where the biomass is extremely limited (i.e. spaceflight studies).
Bolea, Juan; Pueyo, Esther; Orini, Michele; Bailón, Raquel
2016-01-01
The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR) on nonlinear heart rate variability (HRV) indices (correlation dimension, sample entropy, and approximate entropy), a consequence of the fact that HR is the intrinsic sampling rate of the HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS) modulation. First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR assuming similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another based on interpolating RR time series. Finally, standard and HR-corrected HRV indices were studied in a body position change database. The simulation study showed the HR-dependence of nonlinear indices as a sampling rate effect, as well as the ability of the proposed HR-corrections to attenuate the influence of mean HR. Analysis in the body position change database shows that correlation dimension was reduced around 21% in median values in standing with respect to supine position (p < 0.05), concomitant with a 28% increase in mean HR (p < 0.05). After HR-correction, correlation dimension decreased around 18% in standing with respect to supine position, with the decrease remaining significant. Sample and approximate entropy showed similar trends. HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.
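Sample entropy, one of the nonlinear HRV indices whose HR-dependence the study examines, has a short standard definition: SampEn(m, r) = -ln(A/B), where B counts template pairs of length m within tolerance r and A counts pairs of length m+1. The sketch below implements one common variant in pure Python; definitions differ slightly across papers (e.g., in how templates are counted), so this is illustrative rather than the authors' exact computation.

```python
# One common variant of sample entropy (SampEn) for a 1-D series x.
# Lower values indicate more regularity; higher values, more irregularity.
import math

def sample_entropy(x, m=2, r=0.2):
    def pair_count(length):
        # All templates of the given length, compared under the Chebyshev
        # (max-coordinate) distance with tolerance r; self-matches excluded.
        templates = [tuple(x[i:i + length]) for i in range(len(x) - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = pair_count(m)       # matches of length m
    a = pair_count(m + 1)   # matches of length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Note the HR-dependence discussed in the abstract: since the RR series is sampled at the heart rate itself, changing mean HR changes the effective time scale of the templates even if ANS modulation is unchanged.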
Evidence for Reduced Specific Star Formation Rates in the Centers of Massive Galaxies at z = 4
NASA Technical Reports Server (NTRS)
Jung, Intae; Finkelstein, Steven L.; Song, Mimi; Dickinson, Mark; Dekel, Avishai; Ferguson, Henry C.; Fontana, Adriano; Koekemoer, Anton M.; Lu, Yu; Mobasher, Bahram;
2017-01-01
We perform the first spatially-resolved stellar population study of galaxies in the early universe (z = 3.5-6.5), utilizing the Hubble Space Telescope Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) imaging dataset over the GOODS-S field. We select a sample of 418 bright and extended galaxies at z approximately equal to 3.5-6.5 from a parent sample of approximately 8000 photometric-redshift selected galaxies from Finkelstein et al. We first examine galaxies at 3.5 less than or equal to z less than or approximately equal to 4.0 using additional deep K-band survey data from the HAWK-I UDS and GOODS Survey (HUGS), which covers the 4000 Angstrom break at these redshifts. We measure the stellar mass, star formation rate, and dust extinction for galaxy inner and outer regions via spatially-resolved spectral energy distribution fitting based on a Markov Chain Monte Carlo algorithm. By comparing specific star formation rates (sSFRs) between inner and outer parts of the galaxies, we find that the majority of galaxies with high central mass densities show evidence for a preferentially lower sSFR in their centers than in their outer regions, indicative of reduced sSFRs in their central regions. We also study galaxies at z approximately equal to 5 and 6 (here limited to high spatial resolution in the rest-frame ultraviolet only), finding that they show sSFRs which are generally independent of radial distance from the center of the galaxies. This indicates that stars are formed uniformly at all radii in massive galaxies at z approximately equal to 5-6, contrary to massive galaxies at z less than or approximately equal to 4.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes
NASA Technical Reports Server (NTRS)
Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark
2000-01-01
Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log₁₀(L_[OII]/W) ≳ 35 [or radio luminosity log₁₀(L_151/(W/Hz·sr)) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ_trans ≈ 53°. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ_trans and/or a gradual increase in the fraction of lightly reddened (0 ≲ A_V ≲ 5) lines of sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g., in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded spiking neural network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when more biophysically realistic parameters are used in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
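The nonspiking LCA that the spiking network emulates is compact enough to sketch directly. Below is a minimal NumPy illustration of the analog LCA dynamics (soft-threshold activation plus lateral inhibition) for min_a ½‖y − Φa‖² + λ‖a‖₁; the random dictionary, λ, time constant, and step count are illustrative choices, not parameters from the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    """LCA activation: a = T_lam(x), zeroing sub-threshold internal states."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lca(phi, y, lam=0.05, tau=10.0, dt=1.0, steps=500):
    """Euler-integrate the analog LCA ODE:
       tau * dx/dt = phi^T y - x - (phi^T phi - I) a,   a = T_lam(x)."""
    n = phi.shape[1]
    drive = phi.T @ y                  # feedforward input to each neuron
    inhibit = phi.T @ phi - np.eye(n)  # lateral competition between neurons
    x = np.zeros(n)
    for _ in range(steps):
        a = soft_threshold(x, lam)
        x += (dt / tau) * (drive - x - inhibit @ a)
    return soft_threshold(x, lam)

rng = np.random.default_rng(0)
phi = rng.standard_normal((20, 50))
phi /= np.linalg.norm(phi, axis=0)   # unit-norm dictionary columns
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.7]        # 2-sparse ground truth
y = phi @ a_true
a_hat = lca(phi, y)                  # sparse code for the input y
```

At convergence the coefficients satisfy the ℓ1-regularized fixed-point condition, so `a_hat` reconstructs `y` using only a few active units.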
Normal and compound poisson approximations for pattern occurrences in NGS reads.
Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu
2012-06-01
Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. 
Software is available online (www-rcf.usc.edu/~fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
[Method for determining the concentration of mineral-oil fog in workplace air].
Xu, Min; Zhang, Yu-Zeng; Liu, Shi-Feng
2008-05-01
To study a method for determining the concentration of mineral-oil fog in workplace air, four filter media were compared: synthetic fabric filter film, beta glass fiber filter film, chronic filter paper, and microporous film. Two kinds of dust samplers were used to collect samples: one sampling at a fast flow rate for a short time, the other at a slow flow rate for a long duration. The filter membranes were then weighed on an electronic analytical balance, and the adsorption ability of the four filter media was compared according to sampling efficiency and weight gain. At flow rates of 10-20 L/min and sampling times of 10-15 min, the average sampling efficiency of the synthetic fabric filter film was 95.61%, with weight gains of 0.87-2.60 mg; under the same conditions, the average sampling efficiency of the beta glass fiber filter film was 97.57%, with weight gains of 0.75-2.47 mg. At flow rates of 5-10 L/min and sampling times of 10-20 min, the average sampling efficiencies of the chronic filter paper and the microporous film were 48.94% and 63.15%, respectively, with weight gains of 0.75-2.15 mg and 0.23-0.85 mg, respectively. At a flow rate of 3.5 L/min and sampling times of 100-166 min, the average sampling efficiencies were 94.44% for the beta glass fiber filter film and 93.45% for the synthetic fabric filter film, with average weight gains of 1.28 mg and 0.78 mg, respectively, while the average sampling efficiencies of the chronic filter paper and the microporous film were 37.65% and 88.21%, with average weight gains of 4.30 mg and 1.23 mg, respectively.
Sampling with synthetic fabric filter film and beta glass fiber filter film is credible, accurate, simple and feasible for determination of the concentration of mineral-oil fog in workplaces.
Monte Carlo sampling of Wigner functions and surface hopping quantum dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kube, Susanna; Lasser, Caroline; Weber, Marcus
2009-04-01
The article addresses the achievable accuracy of Monte Carlo sampling of Wigner functions in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The approximation of Wigner functions is realized by an adaptation of the Metropolis algorithm for real-valued functions with disconnected support. The integration, which is necessary for computing values of the Wigner function, uses importance sampling with a Gaussian weight function. The numerical experiments agree with theoretical considerations and show an error of 2-3%.
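The core trick here, Metropolis sampling of a function that changes sign, can be illustrated with a toy one-dimensional example: draw points with density proportional to |f| and carry the sign of f as a weight. This is a generic sketch under illustrative parameters, not the paper's actual algorithm or its Gaussian importance function.

```python
import numpy as np

def metropolis_signed(f, x0, steps=20000, step=0.5, seed=1):
    """Metropolis sampling of a real-valued function that changes sign:
    points are drawn with density proportional to |f(x)|, and sign(f) is
    kept as a weight, so signed averages approximate integrals against f."""
    rng = np.random.default_rng(seed)
    x, fx = x0, f(x0)
    xs = np.empty(steps)
    signs = np.empty(steps)
    for i in range(steps):
        xp = x + step * rng.standard_normal()   # random-walk proposal
        fxp = f(xp)
        if rng.random() < min(1.0, abs(fxp) / abs(fx)):
            x, fx = xp, fxp                     # accept the move
        xs[i] = x
        signs[i] = np.sign(fx)
    return xs, signs

# toy Wigner-like function with a negative region around the origin
f = lambda x: (x**2 - 0.3) * np.exp(-x**2)
xs, s = metropolis_signed(f, x0=1.0)
# signed, normalized estimate of  ∫ x² f(x) dx / ∫ f(x) dx  (exact value: 3)
est = np.sum(s * xs**2) / np.sum(s)
```

Because negative regions partially cancel positive ones, the effective sample size shrinks as the signed mass cancels, which is one reason the achievable accuracy is limited in such schemes.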
Diagenetic Mineralogy at Gale Crater, Mars
NASA Technical Reports Server (NTRS)
Vaniman, David; Blake, David; Bristow, Thomas F.; Chipera, Steve; Gellert, Ralf; Ming, Douglas; Morris, Richard; Rampe, E. B.; Rapin, William
2015-01-01
Three years into exploration of sediments in Gale crater on Mars, the Mars Science Laboratory rover Curiosity has provided data on several modes and episodes of diagenetic mineral formation. Curiosity determines mineralogy principally by X-ray diffraction (XRD), but with supporting data from thermal-release profiles of volatiles, bulk chemistry, passive spectroscopy, and laser-induced breakdown spectra of targeted spots. Mudstones at Yellowknife Bay, within the landing ellipse, contain approximately 20% phyllosilicate that we interpret as authigenic smectite formed by basalt weathering in relatively dilute water, with associated formation of authigenic magnetite as in experiments by Tosca and Hurowitz [Goldschmidt 2014]. Varied interlayer spacing of the smectite, collapsed at approximately 10 Å or expanded at approximately 13.2 Å, is evidence of localized diagenesis that may include partial intercalation of metal-hydroxyl groups in the approximately 13.2 Å material. Subsequent sampling of stratigraphically higher Windjana sandstone revealed sediment with multiple sources, possible concentration of detrital magnetite, and minimal abundance of diagenetic minerals. Most recent sampling has been of lower strata at Mount Sharp, where diagenesis is widespread and varied. Here XRD shows that hematite first becomes abundant and products of diagenesis include jarosite and cristobalite. In addition, bulk chemistry identifies Mg-sulfate concretions that may be amorphous or crystalline. Throughout Curiosity's traverse, later diagenetic fractures (and rarer nodules) of mm to dm scale are common and surprisingly constant and simple in Ca-sulfate composition. Other sulfates (Mg, Fe) appear to be absent in this later diagenetic cycle, and circumneutral solutions are indicated. Equally surprising is the rarity of gypsum and the common occurrence of bassanite and anhydrite. Bassanite, rare on Earth, plays a major role at this location on Mars.
Dehydration of gypsum to bassanite in the dry atmosphere of Mars has been proposed but considered unlikely based on lab studies of dehydration kinetics in powdered samples. Dehydration is even less likely for bulk vein samples, as lab data show dehydration rates one to two orders of magnitude slower in bulk samples than in powders. On Mars, exposure ages of 100 Ma or more may be a significant factor in dehydration of hydrous phases.
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculation of all of them, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes on an average notebook computer.
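For readers unfamiliar with the quantities being related here, a plain O(N²) reference implementation of sample entropy alone (not the authors' fast full-template algorithm) helps fix the definitions; m, r, and the test signals below are illustrative.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B): B counts pairs of length-m templates within
    Chebyshev distance r * std(x); A does the same for length m + 1.
    Self-matches are excluded. Plain O(N^2) reference implementation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_pairs(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(t) - 1):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += np.count_nonzero(d <= tol)
        return c

    B, A = count_pairs(m), count_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)         # unpredictable: high SampEn
regular = np.sin(0.1 * np.arange(1000))   # predictable: low SampEn
se_noise = sample_entropy(noise)
se_regular = sample_entropy(regular)
```

The conditional-probability structure (matches of length m + 1 given matches of length m) is exactly the point of contact with approximate entropy and correlation dimension that the entry above makes precise.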
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has attracted considerable research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme naturally leads to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
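As an illustration of the low-rank step (not the letter's full metric-learning method), plain Nyström approximation of a Gaussian kernel matrix from a uniformly sampled set of landmark points can be sketched as follows; the kernel width, landmark count, and data are illustrative.

```python
import numpy as np

def rbf(X, Y, gamma=0.05):
    """Gaussian kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_factor(X, landmarks, gamma=0.05):
    """Low-rank factor Z with K ≈ Z Z^T, using only the kernel columns at
    the landmark points, so the full n x n matrix is never formed."""
    W = rbf(landmarks, landmarks, gamma)   # m x m landmark block
    C = rbf(X, landmarks, gamma)           # n x m cross block
    vals, vecs = np.linalg.eigh(W)
    vals = np.maximum(vals, 1e-12)         # guard tiny/negative eigenvalues
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                  # Z Z^T = C W^{-1} C^T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
landmarks = X[rng.choice(200, size=40, replace=False)]
Z = nystrom_factor(X, landmarks)
K = rbf(X, X)
err = np.linalg.norm(K - Z @ Z.T) / np.linalg.norm(K)   # relative error
```

The same factorization gives out-of-sample generalization for free: a new point maps to `rbf(x_new, landmarks) @ W_inv_sqrt`, which is presumably why a low-rank scheme of this kind extends a learned metric beyond the training set.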
Presolar Materials in a Giant Cluster IDP of Probable Cometary Origin
NASA Technical Reports Server (NTRS)
Messenger, S.; Brownlee, D. E.; Joswiak, D. J.; Nguyen, A. N.
2015-01-01
Chondritic porous interplanetary dust particles (CP-IDPs) have been linked to comets by their fragile structure, primitive mineralogy, dynamics, and abundant interstellar materials. But differences have emerged between 'cometary' CP-IDPs and comet 81P/Wild 2 Stardust Mission samples. Particles resembling Ca-Al-rich inclusions (CAIs), chondrules, and amoeboid olivine aggregates (AOAs) in Wild 2 samples are rare in CP-IDPs. Unlike IDPs, presolar materials are scarce in Wild 2 samples. These differences may be due to selection effects, such as destruction of fine-grained (presolar) components during the 6 km/s aerogel impact collection of Wild 2 samples. Large refractory grains observed in Wild 2 samples are also unlikely to be found in most (less than 30 micrometer) IDPs. Presolar materials provide a measure of the primitiveness of meteorites and IDPs. Organic matter in IDPs and chondrites shows H and N isotopic anomalies attributed to low-temperature interstellar or protosolar disk chemistry, with the largest anomalies occurring in the most primitive samples. Presolar silicates are abundant in meteorites with low levels of aqueous alteration (Acfer 094, approximately 200 ppm) and scarce in altered chondrites (e.g., Semarkona, approximately 20 ppm). Presolar silicates in minimally altered CP-IDPs range from approximately 400 ppm to 15,000 ppm, possibly reflecting variable levels of destruction in the solar nebula or statistical variations due to small sample sizes. Here we present preliminary isotopic and mineralogical studies of a very large CP-IDP. The goals of this study are to more accurately determine the abundances of presolar components of CP-IDP material for comparison with comet Wild 2 samples and meteorites. The large mass of this IDP presents a unique opportunity to accurately determine the abundance of presolar grains in a likely cometary sample.
Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A
2015-10-01
Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal with relatively low sampling rates, we have applied CS to experimental MT lament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors comparing in the absence of wavelet transform, especially in the low and the medium sampling rates. In a sampling rate ranging from 0.2 to 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately 3 times and between 0.5 and 1, RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage of MTs for computing the essential dynamic instability parameters, i.e., transition frequencies and specially growth and shrinkage rates. The results show that using compressed sensing along with the peak detection technique and wavelet transform in sampling rates reduces the recovery errors for the parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
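The single-particle filter idea, sequentially assigning each stimulus to a cluster sampled from its local posterior while keeping only one hypothesis alive, can be sketched for a toy one-dimensional version. The Gaussian cluster likelihood, concentration parameter, and stimuli below are illustrative stand-ins, not the RMC's actual feature representation.

```python
import numpy as np

def one_particle_filter(stimuli, alpha=0.2, sigma=0.5, seed=0):
    """Single-particle (N = 1) approximation to sequential posterior inference
    in a Chinese-restaurant-process mixture of 1-D Gaussians: each stimulus
    joins an existing cluster or starts a new one, sampled from the local
    posterior (CRP prior times a plug-in Gaussian likelihood)."""
    rng = np.random.default_rng(seed)
    clusters = []       # each cluster is a list of member stimuli
    assignments = []
    for x in stimuli:
        weights = []
        for members in clusters:
            mu = np.mean(members)   # plug-in cluster mean (sketch)
            weights.append(len(members) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)))
        # new-cluster option: CRP weight alpha times a broad prior predictive
        weights.append(alpha * np.exp(-x ** 2 / (2 * (sigma ** 2 + 1.0))))
        w = np.asarray(weights) / np.sum(weights)
        k = rng.choice(len(w), p=w)
        if k == len(clusters):
            clusters.append([x])    # start a new cluster
        else:
            clusters[k].append(x)   # join an existing cluster
        assignments.append(k)
    return assignments, clusters

stimuli = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]   # two well-separated groups
assignments, clusters = one_particle_filter(stimuli)
```

Because only one hypothesis is retained, early assignments are never revised, and this order dependence is part of what makes the single-particle approximation a plausible account of human category learning.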
Mairinger, Fabian D; Walter, Robert Fh; Vollbrecht, Claudia; Hager, Thomas; Worm, Karl; Ting, Saskia; Wohlschläger, Jeremias; Zarogoulidis, Paul; Zarogoulidis, Konstantinos; Schmid, Kurt W
2014-01-01
Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, e.g., microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. A total of 250 μg DNA (concentration 5 μg/μL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of selected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when preamplified with IMDA.
Non-linear HRV indices under autonomic nervous system blockade.
Bolea, Juan; Pueyo, Esther; Laguna, Pablo; Bailón, Raquel
2014-01-01
Heart rate variability (HRV) has been studied as a non-invasive technique to characterize the autonomic nervous system (ANS) regulation of the heart. Non-linear methods based on chaos theory have been used during the last decades as markers for risk stratification. However, interpretation of these nonlinear methods in terms of sympathetic and parasympathetic activity is not fully established. In this work we study linear and non-linear HRV indices during ANS blockades in order to assess their relation with sympathetic and parasympathetic activities. Power spectral content in low frequency (0.04-0.15 Hz) and high frequency (0.15-0.4 Hz) bands of HRV, as well as correlation dimension, sample and approximate entropies were computed in a database of subjects during single and dual ANS blockade with atropine and/or propranolol. Parasympathetic blockade caused a significant decrease in the low and high frequency power of HRV, as well as in correlation dimension and sample and approximate entropies. Sympathetic blockade caused a significant increase in approximate entropy. Sympathetic activation due to postural change from supine to standing caused a significant decrease in all the investigated non-linear indices and a significant increase in the normalized power in the low frequency band. The other investigated linear indices did not show significant changes. Results suggest that parasympathetic activity has a direct relation with sample and approximate entropies.
Berg, J H; Farrell, J E; Brown, L R
1990-02-01
The release of fluoride from glass ionomer materials is one of the most important features of this newly implemented material, and the remineralization effects of this phenomenon have been documented (Hicks and Silverstone 1986). This paper examines the effects of glass ionomer/silver cermet restorations on the plaque levels of interproximal mutans streptococci. Fifteen patients with Class II lesions in primary molars were selected for study. Interproximal plaque samples were obtained from each of the lesion sites and from one caries-free site approximal to a primary molar. One lesion was restored with composite resin to serve as a treated control to the glass ionomer/silver cermet (Ketac Silver, ESPE/Premier Sales Corp., Norristown, Pennsylvania) test site. A sound (unaltered) interproximal site served as the untreated control site. Plaque samples were collected before and at one week, one month, and three months post-treatment. Samples were serially diluted to enable colony counts of mutans streptococci. One-week post-treatment counts showed that the glass ionomer/silver cermet restorations significantly reduced (P < 0.05) the approximal plaque levels of mutans streptococci. Conversely, the untreated and treated control sites did not exhibit reductions in approximal plaque levels of mutans streptococci. These results indicate that glass ionomer restorations may be inhibitory to the growth of mutans streptococci in dental plaque approximal to this restorative material in the primary dentition.
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math than time does, and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
Copyright © 2015 Elsevier B.V. All rights reserved.
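The "continues to correlate with math even when precision in time and working memory are controlled for" analysis above is, in essence, a partial correlation. A minimal numpy sketch of the residual method (all variable names in the usage below are hypothetical, not the study's data):

```python
import numpy as np

def partial_corr(x, y, *controls):
    """Correlation between x and y after the control variables have been
    regressed out of both (residual method for partial correlation)."""
    Z = np.column_stack([np.ones(len(x)), *controls])
    # Least-squares residuals of x and y on the controls (plus intercept)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

When a shared factor drives both measures, the raw correlation is high but the partial correlation collapses toward zero; a dimension that still correlates after controls, as approximate number does here, carries unique variance.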
Drevinskas, Tomas; Mickienė, Rūta; Maruška, Audrius; Stankevičius, Mantas; Tiso, Nicola; Mikašauskaitė, Jurgita; Ragažinskienė, Ona; Levišauskas, Donatas; Bartkuvienė, Violeta; Snieškienė, Vilija; Stankevičienė, Antanina; Polcaro, Chiara; Galli, Emanuela; Donati, Enrica; Tekorius, Tomas; Kornyšova, Olga; Kaškonienė, Vilma
2016-02-01
The miniaturization and optimization of a white rot fungal bioremediation experiment is described in this paper. The optimized procedure allows determination of the degradation kinetics of anthracene. The miniaturized procedure requires only 2.5 ml of culture medium. The experiment is more precise, robust, and better controlled compared to classical tests in flasks. Using this technique, different parts, i.e., the culture medium, the fungi, and the cotton seal, can be analyzed. A simple sample preparation speeds up the analytical process. Experiments performed show degradation of anthracene up to approximately 60% by Irpex lacteus and up to approximately 40% by Pleurotus ostreatus in 25 days. Bioremediation of anthracene by the consortium of I. lacteus and P. ostreatus shows biodegradation of up to approximately 56% in 23 days. At the end of the experiment, the surface tension of the culture medium had decreased compared to the blank, indicating generation of surfactant compounds.
Chance-constrained economic dispatch with renewable energy and storage
Cheng, Jianqiang; Chen, Richard Li-Yang; Najm, Habib N.; ...
2018-04-19
Increased penetration of renewables, along with the uncertainties associated with them, has transformed how power systems are operated. High levels of uncertainty mean that it is no longer possible to guarantee operational feasibility with certainty; instead, constraints are required to be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, it is required that wind energy contribute at least a prespecified ratio of the total demand and that the scheduled wind energy be dispatchable with high probability. We develop an approximated partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed tolerance, and about 100 times faster than sample average approximation. The improved efficiency of our PSAA approach enables solution of the WECC-240 system in minutes.
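The PSAA machinery itself is beyond a snippet, but the underlying sample-average idea behind chance constraints, replacing P(g + wind >= demand) >= 1 - eps with a frequency over sampled wind scenarios, can be sketched for a single bus with no network or storage (all names illustrative):

```python
import numpy as np

def min_dispatch_saa(wind_samples, demand, eps=0.05):
    """Smallest conventional generation g satisfying the sampled chance
    constraint: g + wind >= demand in at least a (1 - eps) fraction of
    the wind scenarios (sample average approximation)."""
    shortfall = np.sort(demand - np.asarray(wind_samples, dtype=float))
    # g must cover all but the worst eps-fraction of scenarios
    k = int(np.ceil((1 - eps) * len(shortfall))) - 1
    return max(shortfall[k], 0.0)
```

The sorted-shortfall quantile makes the feasible set tractable in one dimension; the paper's contribution is doing this efficiently when the constraint couples many dispatch variables.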
Imaging the host galaxies of high-redshift radio-quiet QSOs
NASA Technical Reports Server (NTRS)
Lowenthal, James D.; Heckman, Timothy M.; Lehnert, Matthew, D.; Elias, J. H.
1995-01-01
We present new deep K-band and optical images of four radio-quiet QSOs at z approximately = 1 and six radio-quiet QSOs at z approximately = 2.5, as well as optical images only of six more at z approximately = 2.5. We have examined the images carefully for evidence of extended 'fuzz' from any putative QSO host galaxy. None of the z approximately = 2.5 QSOs shows any extended emission, and only two of the z approximately = 1 QSOs show marginal evidence for extended emission. Our 3 sigma detection limits in the K images, m(sub K) approximately = 21 for an isolated source, would correspond approximately to an unevolved L(sup star) elliptical galaxy at z = 2.5 or 2-3 mag fainter than an L(sup star) elliptical at z = 1, although our limits on host galaxy light are weaker than this due to the difficulty of separating galaxy light from QSO light. We simulate simple models of disk and elliptical host galaxies, and find that the marginal emission around the two z approximately = 1 QSOs can be explained by disks or bulges that are approximately 1-2 mag brighter than an unevolved L(sup star) galaxy in one case and approximately 1.5-2.5 mag brighter than L(sup star) in the other. For two other z approximately = 1 QSOs, we have only upper limits (L approximately = L(sup star)). The hosts of the high-redshift sample must be no brighter than about 3 mag above an unevolved L(sup star) galaxy, and are at least 1 magnitude fainter than the hosts of radio-loud QSOs at the same redshift.
If the easily detected K-band light surrounding a previous sample of otherwise similar but radio-loud QSOs is starlight, then it must evolve on timescales of greater than or approximately equal to 10(exp 8) yr (e.g., Chambers & Charlot 1990); therefore our non-detection of host galaxy fuzz around radio-quiet QSOs supports the view that high-redshift radio-quiet and radio-loud QSOs inhabit different host objects, rather than being single types of objects that turn their radio emission on and off over short timescales. This is consistent with the general trend at low redshifts that radio-loud QSOs are found in giant elliptical galaxies while radio-quiet QSOs are found in less luminous disk galaxies. It also suggests that the processes responsible for the spectacular properties of radio-loud AGNs at high redshifts might not be generally relevant to the (far more numerous) radio-quiet population.
Exhibitionism: findings from a Midwestern police contact sample.
Bader, Shannon M; Schoeneman-Morris, Katherine A; Scalora, Mario J; Casady, Thomas K
2008-06-01
This study used a police sample to examine offense characteristics, recidivism rates, and other types of sexual offending among individuals suspected of exhibitionism. The sample consisted of 202 incidents of indecent exposure perpetrated by 106 identified individuals. Demographic information showed that one quarter of the sample had symptoms of a mental illness and one quarter had a history of substance abuse. More than 84% of the sample had other nonsexual criminal charges. Approximately 30% of the perpetrators were charged for more than one exposure incident. Masturbating during the offense, exposing to child victims, and speaking to the victim did not show any relationship to the occurrence of more sexually aggressive behaviors. However, individuals who had subsequent rape or molestation charges (16.9%) were more likely than those who did not to have had multiple exposure incidents and a history of physical assault charges.
The Surface Chemical Composition of Lunar Samples and Its Significance for Optical Properties
NASA Technical Reports Server (NTRS)
Gold, T.; Bilson, E.; Baron, R. L.
1976-01-01
The surface iron, titanium, calcium, and silicon concentration in numerous lunar soil and rock samples was determined by Auger electron spectroscopy. All soil samples show a large increase in the iron to oxygen ratio compared with samples of pulverized rock or with results of the bulk chemical analysis. A solar wind simulation experiment using 2 keV energy alpha -particles showed that an ion dose corresponding to approximately 30,000 years of solar wind increased the iron concentration on the surface of the pulverized Apollo 14 rock sample 14310 to the concentration measured in the Apollo 14 soil sample 14163, and the albedo of the pulverized rock decreased from 0.36 to 0.07. The low albedo of the lunar soil is related to the iron + titanium concentration on its surface. A solar wind sputter reduction mechanism is discussed as a possible cause for both the surface chemical and optical properties of the soil.
ElSohly, Mahmoud A.; Mehmedic, Zlatko; Foster, Susan; Gon, Chandrani; Chandra, Suman; Church, James C.
2016-01-01
BACKGROUND Marijuana is the most widely used illicit drug in the United States and all over the world. Reports indicate that the potency of cannabis preparations has been increasing. This report examines the concentration of cannabinoids in illicit cannabis products seized by the DEA (Drug Enforcement Administration) over the last two decades, with particular emphasis on Δ9-THC and cannabidiol (CBD). METHODS Samples in this report were received over time from DEA-confiscated materials and processed for analysis using a validated gas chromatography with flame ionization detection (GC/FID) method. RESULTS A total of 38,681 samples of cannabis preparations were received and analyzed between January 1, 1995 and December 31, 2014. The data showed that, while the number of marijuana samples seized over the last four years has declined, the number of sinsemilla samples has increased. Overall, the potency of illicit cannabis plant material has risen consistently since 1995, from approximately 4% in 1995 to approximately 12% in 2014. On the other hand, the CBD content has fallen on average from approximately 0.28% in 2001 to <0.15% in 2014, changing the ratio of THC to CBD from 14 times in 1995 to approximately 80 times in 2014. CONCLUSION It is concluded that there is a shift in the production of illicit cannabis plant material from regular marijuana to sinsemilla. This increase in potency poses a higher risk to cannabis users, particularly adolescents. PMID:26903403
Laboratory Study of Airborne Fallout Particles and Their Time Distribution.
ERIC Educational Resources Information Center
Smith, H. A., Jr.; And Others
1979-01-01
Samples of filtered airborne particulate, collected daily for the first month after the September 18, 1977 Chinese nuclear detonation, showed fourteen fission products. Fluctuations in the daily fallout activity levels suggested a global fallout orbit time of approximately twenty days. (Author/BB)
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
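For concreteness, here is what the procedure looks like in a deliberately reduced setting: a two-component normal mixture in one dimension with known common variance (the paper treats general normal mixtures). step=1.0 recovers the familiar EM-style successive-approximations update; 0 < step < 2 is the relaxed, deflected-gradient variant whose convergence the paper analyzes:

```python
import numpy as np

def relaxed_em_step(x, pi, mu1, mu2, sigma, step=1.0):
    """One iteration of the successive-approximations procedure for a
    two-component normal mixture (known common variance, for brevity).
    step=1.0 is the classic update; 0 < step < 2 is the relaxed variant."""
    # E-step: posterior responsibility of component 1 for each point
    d1 = pi * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
    d2 = (1 - pi) * np.exp(-0.5 * ((x - mu2) / sigma) ** 2)
    r = d1 / (d1 + d2)
    # M-step targets, then move only a fraction `step` of the way there
    pi_t = r.mean()
    mu1_t = (r * x).sum() / r.sum()
    mu2_t = ((1 - r) * x).sum() / (1 - r).sum()
    return (pi + step * (pi_t - pi),
            mu1 + step * (mu1_t - mu1),
            mu2 + step * (mu2_t - mu2))
```

The paper's result is that, for large samples, the locally optimal step size is governed by how well separated the component densities are, and lies between 1 and 2.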
Yao, Qiufang; Wang, Chao; Fan, Bitao; Wang, Hanwei; Sun, Qingfeng; Jin, Chunde; Zhang, Hong
2016-01-01
In the present paper, uniformly large-scale wurtzite-structured ZnO nanorod arrays (ZNAs) were deposited onto a wood surface through a one-step solvothermal method. The as-prepared samples were characterized by X-ray diffraction (XRD), field-emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR), thermogravimetry (TG), and differential thermal analysis (DTA). ZNAs with a diameter of approximately 85 nm and a length of approximately 1.5 μm were chemically bonded onto the wood surface through hydrogen bonds. The superamphiphobic performance and ultraviolet resistance were measured and evaluated by water or oil contact angles (WCA or OCA) and roll-off angles, sand abrasion tests and an artificially accelerated ageing test. The results show that the ZNA-treated wood demonstrates a robust superamphiphobic performance under mechanical impact, corrosive liquids, intermittent and transpositional temperatures, and water spray. Additionally, the as-prepared wood sample shows superior ultraviolet resistance. PMID:27775091
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
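A stripped-down analogue of the trend part of hierarchical kriging: the low-fidelity model is modulated by a first-order polynomial response surface fitted to a handful of expensive high-fidelity runs. The kriging correction of the residual and the adaptive sampling loop are omitted, and both models here are toy stand-ins, not anything from the article:

```python
import numpy as np

# Toy low- and high-fidelity models of the same quantity
lf = lambda x: np.sin(8.0 * x)
hf = lambda x: 1.2 * np.sin(8.0 * x) + 0.3 * x

# Fit hf(x) ~ (a + b*x) * lf(x) + c + d*x from six high-fidelity runs
x_hf = np.linspace(0.0, 1.0, 6)
A = np.column_stack([lf(x_hf), x_hf * lf(x_hf), np.ones_like(x_hf), x_hf])
a, b, c, d = np.linalg.lstsq(A, hf(x_hf), rcond=None)[0]

def vf_model(x):
    """Variable-fidelity prediction: low-fidelity output scaled and
    shifted by the fitted polynomial trend."""
    return (a + b * x) * lf(x) + c + d * x
```

Because the cheap model already carries the oscillatory structure, only four trend coefficients need to be learned from the six expensive runs.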
Infrared-optical transmission and reflection measurements on loose powders
NASA Astrophysics Data System (ADS)
Kuhn, J.; Korder, S.; Arduini-Schuster, M. C.; Caps, R.; Fricke, J.
1993-09-01
A method is described to determine quantitatively the infrared-optical properties of loose powder beds via directional-hemispherical transmission and reflection measurements. Instead of integrating the powders into a potassium bromide (KBr) or paraffin oil matrix, which would drastically alter the scattering behavior, the powders are placed onto supporting layers of polyethylene (PE) and KBr. A commercial spectrometer is supplemented by external optics that enable measurements on horizontally arranged samples. For data evaluation we use a solution of the equation of radiative transfer in the 3-flux approximation under boundary conditions adapted to the PE or KBr/powder system. A comparison with Kubelka-Munk's theory and Schuster's 2-flux approximation is performed, which shows that the 3-flux approximation yields results closest to the exact solution. Equations are developed that correct the transmission and reflection of the samples for the influence of the supporting layer and calculate the specific extinction and the albedo of the powder, thus enabling us to separate the scattering and absorption parts of the extinction spectrum. Measurements on TiO2 powder are presented that show the influence of preparation technique and of different data-evaluation methods on the derived albedo. The specific extinction of various TiO2 powders is presented.
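The 3-flux solution used above is involved, but the Kubelka-Munk (2-flux) baseline it is compared against reduces, for an optically thick layer, to a closed-form relation between diffuse reflectance and the absorption-to-scattering ratio:

```python
def kubelka_munk_ks(r_inf):
    """Kubelka-Munk (2-flux) relation: the ratio K/S of absorption to
    scattering coefficient, inferred from the diffuse reflectance r_inf
    of an optically thick powder layer."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)
```

A perfectly reflecting layer (r_inf = 1) gives K/S = 0, i.e. pure scattering; the 3-flux model refines this picture by additionally tracking a collimated flux component.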
Effects of temperature on microbial succession and metabolite change during saeu-jeot fermentation.
Lee, Se Hee; Jung, Ji Young; Jeon, Che Ok
2014-04-01
To investigate the effects of temperature on saeu-jeot (shrimp) fermentation, four sets of saeu-jeot samples with approximately 25% (w/v) NaCl were fermented at 10 °C, 15 °C, 20 °C, and 25 °C. The pH values of the 10 °C and 15 °C samples were relatively constant during the entire fermentation period, whereas those of the 20 °C and 25 °C samples gradually decreased after 25 days of fermentation. Quantitative PCR showed that the maximum bacterial abundance was greater in higher temperature samples, and the bacterial abundance in the 10 °C samples steadily decreased during the entire fermentation period. Community analysis using pyrosequencing revealed that the initially dominant Proteobacteria including Pseudoalteromonas, Photobacterium, Vibrio, Aliivibrio, and Enterovibrio were replaced rapidly with Firmicutes such as Psychrobacter, Staphylococcus, Salimicrobium, Alkalibacillus, and Halanaerobium as the fermentation progressed. However, Vibrio, Photobacterium, Aliivibrio, and Enterovibrio, which may include potentially pathogenic strains, remained even after 215 days in the 10 °C samples. Metabolite analysis using (1)H NMR showed that amino acid profiles and initial quick increases of glucose and glycerol were similar and independent of bacterial growth in all temperature samples. After 25 days of fermentation, the levels of glucose, glycerol, and trimethylamine N-oxide decreased with the growth of Halanaerobium and the increase of acetate, butyrate, and methylamines in the 20 °C and 25 °C samples although the amino acid concentrations steadily increased until approximately 105 days of fermentation. Statistical triplot analysis showed that the bacterial successions occurred similarly regardless of the fermentation temperature, and Halanaerobium was likely responsible for the production of acetate, butyrate, and methylamines. This study suggests that around 15 °C might be the optimum temperature for the production of safe and tasty saeu-jeot. 
Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Colarco, P. R.; Kahn, R. A.; Remer, L. A.; Levy, R. C.
2014-01-01
We use the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite aerosol optical thickness (AOT) product to assess the impact of reduced swath width on global and regional AOT statistics and trends. Along-track and across-track sampling strategies are employed, in which the full MODIS data set is sub-sampled with various narrow-swath (approximately 400-800 km) and single-pixel-width (approximately 10 km) configurations. Although view-angle artifacts in the MODIS AOT retrieval confound direct comparisons between averages derived from different sub-samples, careful analysis shows that with many portions of the Earth essentially unobserved, spatial sampling introduces uncertainty in the derived seasonal-regional mean AOT. These AOT spatial sampling artifacts comprise up to 60% of the full-swath AOT value under moderate aerosol loading, and can be as large as 0.1 in some regions under high aerosol loading. Compared to full-swath observations, narrower-swath and single-pixel-width sampling exhibits a reduced ability to detect AOT trends with statistical significance. On the other hand, estimates of the global, annual mean AOT do not vary significantly from the full-swath values as spatial sampling is reduced. Aggregation of the MODIS data at coarse grid scales (10 deg) shows consistency in the aerosol trends across sampling strategies, with increased statistical confidence, but quantitative errors in the derived trends are found even for the full-swath data when compared to high spatial resolution (0.5 deg) aggregations. Using results of a model-derived aerosol reanalysis, we find consistency in our conclusions about a seasonal-regional spatial sampling artifact in AOT. Furthermore, the model shows that reduced spatial sampling can amount to uncertainty in computed shortwave top-of-atmosphere aerosol radiative forcing of 2-3 W m(sup -2). These artifacts are lower bounds, as other, unconsidered sampling strategies would possibly perform less well.
These results suggest that future aerosol satellite missions having significantly less than full-swath viewing are unlikely to sample the true AOT distribution well enough to obtain the statistics needed to reduce uncertainty in aerosol direct forcing of climate.
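The sub-sampling experiment can be mimicked on synthetic data: average a gridded AOT field over the full domain versus over a narrow, day-by-day drifting strip, and compare the two means. The field, strip width, and orbit drift below are all invented for illustration, not MODIS parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
lon = np.linspace(0.0, 360.0, 720, endpoint=False)
days = 90
# Synthetic daily AOT: a fixed zonal gradient plus day-to-day noise
aot = 0.15 + 0.1 * np.sin(np.radians(lon))[None, :] \
           + 0.05 * rng.random((days, lon.size))

full_mean = aot.mean()                      # "full-swath" average
strip_means = []
for d in range(days):                       # narrow swath drifting east
    cols = (np.arange(40) + 53 * d) % lon.size
    strip_means.append(aot[d, cols].mean())
narrow_mean = np.mean(strip_means)
sampling_artifact = narrow_mean - full_mean
```

Over a season the drift averages much of the gradient out, but on any single day the narrow strip sees only one slice of the zonal structure, which is the essence of the seasonal-regional sampling artifact.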
Using Extreme Groups Strategy When Measures Are Not Normally Distributed.
ERIC Educational Resources Information Center
Fowler, Robert L.
1992-01-01
A Monte Carlo simulation explored how to optimize power in the extreme groups strategy when sampling from nonnormal distributions. Results show that the optimum percent for the extreme group selection was approximately the same for all population shapes, except the extremely platykurtic (uniform) distribution. (SLD)
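The strategy under study can be sketched in a few lines: order subjects on the predictor, keep only the upper and lower tails, and compare the criterion across the two extreme groups. The 27% tail used as a default below is the textbook optimum for normal populations; the simulation varies both this percentage and the population shape:

```python
import numpy as np

def extreme_groups(x, y, pct=0.27):
    """Keep only the upper and lower `pct` tails on the predictor x and
    return the standardized mean difference of the criterion y between
    the two extreme groups."""
    lo, hi = np.quantile(x, [pct, 1 - pct])
    y_lo, y_hi = y[x <= lo], y[x >= hi]
    pooled = np.sqrt((y_lo.var(ddof=1) + y_hi.var(ddof=1)) / 2)
    return (y_hi.mean() - y_lo.mean()) / pooled
```

Discarding the middle of the distribution inflates the effect size and hence the power per subject tested, which is why the optimal cut point matters.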
Non-Destructive Evaluation of Aerospace Composites
2009-03-01
Pulsed single-blow regenerator testing
NASA Technical Reports Server (NTRS)
Oldson, J. C.; Knowles, T. R.; Rauch, J.
1992-01-01
A pulsed single-blow method has been developed for testing of Stirling regenerator materials performance. The method uses a tubular flow arrangement with a steady gas flow passing through a regenerator matrix sample that packs the flow channel for a short distance. A wire grid heater spanning the gas flow channel is used to heat a plug of gas by approximately 2 K for approximately 350 ms. Foil thermocouples monitor the gas temperature entering and leaving the sample. Data analysis based on a 1D incompressible-flow thermal model allows the extraction of Stanton number. A figure of merit involving heat transfer and pressure drop is used to present results for steel screens and steel felt. The observations show a lower figure of merit for the materials tested than is expected based on correlations obtained by other methods.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution being a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model evaluation and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling.
The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
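The two-step recipe (cheap surrogate of the forward model, then non-MCMC sampling of the parameter space) can be illustrated in one dimension. A least-squares polynomial stands in for the sparse-grid interpolant, a van der Corput sequence stands in for the quasi-Monte Carlo point set, and the "expensive" forward model is a toy function:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence, the 1-D
    building block of quasi-Monte Carlo sampling."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq

# "Expensive" forward model, stood in for here by a cheap function
forward = lambda theta: np.sin(np.pi * theta)

# Step 1: build a polynomial surrogate from a handful of model runs
nodes = np.linspace(0.0, 1.0, 9)
coeffs = np.polyfit(nodes, forward(nodes), 6)
surrogate = lambda theta: np.polyval(coeffs, theta)

# Step 2: evaluate the surrogate posterior on quasi-Monte Carlo points
obs, noise = 0.7, 0.1
theta_qmc = van_der_corput(1024)
post = np.exp(-0.5 * ((surrogate(theta_qmc) - obs) / noise) ** 2)
```

Every posterior evaluation now costs one polynomial evaluation instead of one forward-model run, and the low-discrepancy points cover the parameter space far more evenly than a random-walk chain of the same length.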
Potency trends of delta9-THC and other cannabinoids in confiscated marijuana from 1980-1997.
ElSohly, M A; Ross, S A; Mehmedic, Z; Arafat, R; Yi, B; Banahan, B F
2000-01-01
The analysis of 35,312 cannabis preparations confiscated in the USA over a period of 18 years for delta-9-tetrahydrocannabinol (delta9-THC) and other major cannabinoids is reported. Samples were identified as cannabis, hashish, or hash oil. Cannabis samples were further subdivided into marijuana (loose material, kilobricks and buds), sinsemilla, Thai sticks and ditchweed. The data showed that more than 82% of all confiscated samples were in the marijuana category for every year except 1980 (61%) and 1981 (75%). The potency (concentration of delta9-THC) of marijuana samples rose from less than 1.5% in 1980 to approximately 3.3% in 1983 and 1984, then fluctuated around 3% till 1992. Since 1992, the potency of confiscated marijuana samples has continuously risen, going from 3.1% in 1992 to 4.2% in 1997. The average concentration of delta9-THC in all cannabis samples showed a gradual rise from 3% in 1991 to 4.47% in 1997. Hashish and hash oil, on the other hand, showed no specific potency trends. Other major cannabinoids [cannabidiol (CBD), cannabinol (CBN), and cannabichromene (CBC)] showed no significant change in their concentration over the years.
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
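Component selection here rests on the group lasso's block soft-thresholding: a whole group of B-spline coefficients is shrunk together and dropped entirely when its norm is small, which is exactly how an additive component is removed. The proximal operator for a single group (unit group weight assumed for simplicity):

```python
import numpy as np

def group_soft_threshold(beta, lam):
    """Proximal operator of the group-lasso penalty for one coefficient
    group: shrink the whole block toward zero, and zero it out entirely
    when its Euclidean norm falls below lam."""
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)   # the whole additive component is dropped
    return (1.0 - lam / norm) * beta
```

The adaptive variant in the paper reweights lam per group using the initial group-lasso fit, which is what yields correct selection with probability approaching one.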
de Lemos Zingano, Bianca; Guarnieri, Ricardo; Diaz, Alexandre Paim; Schwarzbold, Marcelo Liborio; Bicalho, Maria Alice Horta; Claudino, Lucia Sukys; Markowitsch, Hans J; Wolf, Peter; Lin, Katia; Walz, Roger
2015-09-01
This study aimed to evaluate the diagnostic accuracy of the Hamilton Rating Scale for Depression (HRSD), the Beck Depression Inventory (BDI), the Hospital Anxiety and Depression Scale (HADS), and the Hospital Anxiety and Depression Scale-Depression subscale (HADS-D) as diagnostic tests for depressive disorder in drug-resistant mesial temporal lobe epilepsy with hippocampal sclerosis (MTLE-HS). One hundred three patients with drug-resistant MTLE-HS were enrolled. All patients underwent a neurological examination, interictal and ictal video-electroencephalogram (V-EEG) analyses, and magnetic resonance imaging (MRI). Psychiatric interviews were based on DSM-IV-TR criteria and ILAE Commission of Psychobiology classification as a gold standard; HRSD, BDI, HADS, and HADS-D were used as psychometric diagnostic tests, and receiver operating characteristic (ROC) curves were used to determine the optimal threshold scores. For all the scales, the areas under the curve (AUCs) were approximately 0.8, and they were able to identify depression in this sample. A threshold of ≥9 on the HRSD and a threshold of ≥8 on the HADS-D showed a sensitivity of 70% and specificity of 80%. A threshold of ≥19 on the BDI and HADS-D total showed a sensitivity of 55% and a specificity of approximately 90%. The instruments showed a negative predictive value of approximately 87% and a positive predictive value of approximately 65% for the BDI and HADS total and approximately 60% for the HRSD and HADS-D. HRSD≥9 and HADS-D≥8 had the best balance between sensitivity (approximately 70%) and specificity (approximately 80%). However, with these thresholds, these diagnostic tests do not appear useful in identifying depressive disorder in this population with epilepsy, and their specificity (approximately 80%) and PPV (approximately 55%) were lower than those of the other scales. 
We believe that the BDI and HADS total are valid diagnostic tests for depressive disorder in patients with MTLE-HS, as both scales showed acceptable (though not high) specificity and PPV for this type of study. Copyright © 2015 Elsevier Inc. All rights reserved.
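The threshold metrics reported above all fall out of the 2x2 screening table once a score cut-off is fixed. A small sketch (the arrays are toy data, not the study sample):

```python
import numpy as np

def screen_stats(scores, depressed, threshold):
    """Sensitivity, specificity, PPV and NPV of a rating scale used as a
    diagnostic test: score >= threshold counts as a positive screen."""
    pos = scores >= threshold
    tp = np.sum(pos & depressed)
    tn = np.sum(~pos & ~depressed)
    fp = np.sum(pos & ~depressed)
    fn = np.sum(~pos & depressed)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}
```

Sweeping the threshold and plotting sensitivity against 1 - specificity yields the ROC curve whose area (approximately 0.8 for all four scales here) summarizes discriminative ability.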
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
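For orientation, the standard design-based estimator that treats the K lines as a simple random sample (one of the family evaluated in the paper; the weighting below follows the 'R2' form of Fewster et al., up to notation) is short to write down:

```python
import numpy as np

def encounter_rate_var(n, l):
    """Design-based estimate of the variance of the encounter rate N/L in
    line transect sampling, treating the K lines (counts n_k on lines of
    length l_k) as a simple random sample of lines."""
    n, l = np.asarray(n, dtype=float), np.asarray(l, dtype=float)
    K, L, N = len(n), l.sum(), n.sum()
    return K / (L**2 * (K - 1)) * np.sum(l**2 * (n / l - N / L) ** 2)
```

With equal-length lines this reduces to the ordinary sample variance of the per-line counts divided by K; the length weighting handles unequal transects.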
MaNGA: Target selection and Optimization
NASA Astrophysics Data System (ADS)
Wake, David
2015-01-01
The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5 m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10^9 to 1×10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition, we constructed a 'Color-Enhanced' sample in which 25% of the targets were required to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute-magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), and how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.
Tao, Guohua; Miller, William H
2012-09-28
An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both the time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, both VAUS and regular adaptive umbrella sampling (AUS) are still computationally expensive. To further decrease the computational burden, improvements of VAUS for all-atom explicit-solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to the dimerization of an alanine pentapeptide in explicit solvent, the improved method showed an advantage over regular AUS, making larger biological systems amenable to this approach.
Effect of the centrifugal force on domain chaos in Rayleigh-Bénard convection.
Becker, Nathan; Scheel, J D; Cross, M C; Ahlers, Guenter
2006-06-01
Experiments and simulations from a variety of sample sizes indicated that the centrifugal force significantly affects the domain-chaos state observed in rotating Rayleigh-Bénard convection patterns. In a large-aspect-ratio sample, we observed a hybrid state consisting of domain chaos close to the sample center, surrounded by an annulus of nearly stationary, nearly radial rolls populated by occasional defects reminiscent of undulation chaos. Although the Coriolis force is responsible for domain chaos, by comparing experiment and simulation we show that the centrifugal force is responsible for the radial rolls. Furthermore, simulations of the Boussinesq equations for smaller aspect ratios neglecting the centrifugal force yielded a domain precession frequency f ∝ ε^μ with μ ≈ 1, as predicted by the amplitude-equation model for domain chaos but contradicted by previous experiment. Additionally, the simulations gave a domain size that was larger than in the experiment. When the centrifugal force was included in the simulation, μ and the domain size were consistent with experiment.
Structured Matrix Completion with Applications to Genomic Data Integration.
Cai, Tianxi; Cai, T Tony; Zhang, Anru
2016-01-01
Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurement, which enables us to construct more accurate prediction rules for ovarian cancer survival.
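The core recovery idea behind structured missingness can be illustrated in the exactly low-rank case: when the leading rows and columns of a low-rank matrix are observed, the missing lower-right block is determined by the observed blocks. The sketch below is a simplified illustration of that principle, not the paper's full SMC procedure (which handles approximate low rank and noise).

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an exactly rank-3 matrix A; pretend only the first m1 rows
# and first n1 columns are observed, so the lower-right block is missing.
U = rng.standard_normal((40, 3))
V = rng.standard_normal((3, 30))
A = U @ V
m1, n1 = 20, 15
A11, A12, A21 = A[:m1, :n1], A[:m1, n1:], A[m1:, :n1]

# Structured-completion-style recovery of the missing block:
# for an exactly low-rank A, A22 = A21 @ pinv(A11) @ A12.
A22_hat = A21 @ np.linalg.pinv(A11) @ A12

err = np.linalg.norm(A22_hat - A[m1:, n1:]) / np.linalg.norm(A[m1:, n1:])
print(f"relative recovery error: {err:.2e}")
```

Because A11 here has the same rank as A, the identity A22 = A21 A11^+ A12 holds exactly; the statistical work in SMC lies in making this step stable when A is only approximately low-rank.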
Analysis of the 2H-evaporator scale samples (HTF-17-56, -57)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.; Coleman, C.; Diprete, D.
Savannah River National Laboratory analyzed scale samples from both the wall and cone sections of the 242-16H Evaporator prior to chemical cleaning. The samples were analyzed for the uranium and plutonium isotopes required for a Nuclear Criticality Safety Assessment of the scale removal process. The analysis found the material to contain crystalline nitrated cancrinite and clarkeite. Samples from both the wall and cone contain depleted uranium. Uranium concentrations of 16.8 wt% and 4.76 wt% were measured in the wall and cone samples, respectively. The ratio of plutonium isotopes in both samples is ~85% Pu-239 and ~15% Pu-238 by mass and shows approximately the same 3.5-times-higher concentration in the wall sample versus the cone sample as observed in the uranium concentrations. The mercury concentrations measured in the scale samples were higher than previously reported values: the wall sample contains 19.4 wt% mercury and the cone scale sample 11.4 wt%. The results from the current scale samples show reasonable agreement with previous 242-16H Evaporator scale sample analyses; however, the uranium concentration in the current wall sample is substantially higher than previous measurements.
Measuring herbicide volatilization from bare soil.
Yates, S R
2006-05-15
A field experiment was conducted to measure surface dissipation and volatilization of the herbicide triallate after application to bare soil using micrometeorological, chamber, and soil-loss methods. The volatilization rate was measured continuously for 6.5 days; the daily peak values ranged from 32.4 (day 5) to 235.2 g ha^-1 d^-1 (day 1) for the integrated horizontal flux method, from 31.5 to 213.0 g ha^-1 d^-1 for the theoretical profile shape method, and from 15.7 to 47.8 g ha^-1 d^-1 for the flux chamber. Soil samples were taken within 30 min after application, and the measured mass of triallate was 8.75 kg ha^-1. The measured triallate mass in the soil at the end of the experiment was approximately 6 kg ha^-1. The triallate dissipation rate, obtained by soil sampling, was approximately 334 g ha^-1 d^-1 (98 g d^-1), and the average rate of volatilization was 361 g ha^-1 d^-1. Soil sampling at the end of the experiment showed that approximately 31% (0.803 kg/2.56 kg) of the triallate mass was lost from the soil. Significant volatilization of triallate is possible when it is applied directly to the soil surface without incorporation.
Wedge sampling for computing clustering coefficients and triangle counts on large graphs
Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.
2014-05-08
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
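A minimal version of wedge sampling for the global clustering coefficient can be sketched as follows: pick a wedge (a two-path u-v-w) uniformly at random by first choosing its center in proportion to its wedge count, then check whether the closing edge exists. The toy graph and sample size below are illustrative.

```python
import random

def clustering_coeff_wedge(adj, n_samples=20000, seed=1):
    """Estimate the global clustering coefficient by uniform wedge
    sampling: the fraction of sampled wedges that are closed equals
    3 * (#triangles) / (#wedges) in expectation."""
    rng = random.Random(seed)
    nodes = [v for v in adj if len(adj[v]) >= 2]
    # weight each potential center v by its wedge count C(deg(v), 2)
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    closed = 0
    for _ in range(n_samples):
        v = rng.choices(nodes, weights=weights)[0]
        u, w = rng.sample(sorted(adj[v]), 2)  # two distinct neighbors
        closed += w in adj[u]                 # is the wedge closed?
    return closed / n_samples

# Toy graph: a triangle {0, 1, 2} plus a pendant edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering_coeff_wedge(adj))
```

The toy graph has 5 wedges, of which 3 are closed, so the estimate converges to 0.6; the cost is controlled by the number of sampled wedges rather than by full triangle enumeration.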
Enantiomerically enriched, polycrystalline molecular sieves
Brand, Stephen K.; Schmidt, Joel E.; Deem, Michael W.; ...
2017-05-01
Zeolite and zeolite-like molecular sieves are being used in a large number of applications such as adsorption and catalysis. Achievement of the long-standing goal of creating a chiral, polycrystalline molecular sieve with bulk enantioenrichment would enable these materials to perform enantioselective functions. Here, we report the synthesis of enantiomerically enriched samples of a molecular sieve. For this study, enantiopure organic structure directing agents are designed with the assistance of computational methods and used to synthesize enantioenriched, polycrystalline molecular sieve samples of either enantiomer. Computational results correctly predicted which enantiomer is obtained, and enantiomeric enrichment is proven by high-resolution transmission electron microscopy. The enantioenriched and racemic samples of the molecular sieves are tested as adsorbents and heterogeneous catalysts. The enantioenriched molecular sieves show enantioselectivity for the ring opening reaction of epoxides and enantioselective adsorption of 2-butanol (the R enantiomer of the molecular sieve shows opposite and approximately equal enantioselectivity compared with the S enantiomer of the molecular sieve, whereas the racemic sample of the molecular sieve shows no enantioselectivity).
Prostatic origin of a zinc binding high molecular weight protein complex in human seminal plasma.
Siciliano, L; De Stefano, C; Petroni, M F; Vivacqua, A; Rago, V; Carpino, A
2000-03-01
The profile of the zinc-ligand high molecular weight proteins was investigated in the seminal plasma of 55 normozoospermic subjects by size exclusion high performance liquid chromatography (HPLC). The proteins were recovered from Sephadex G-75 gel filtration of seminal plasma in three zinc-containing fractions, which were then submitted to HPLC analysis. In all samples, the protein profiles showed two peaks with apparent molecular weights of approximately 660 and approximately 250 kDa. Dialysis experiments revealed that both the approximately 660 and approximately 250 kDa proteins were able to take up zinc against a gradient, indicating their zinc binding capacity. The HPLC analysis of whole seminal plasma showed only the approximately 660 kDa protein complex as a single, well-quantifiable peak; furthermore, a positive correlation between its peak area and the seminal zinc values (P < 0.001) was observed. This suggested a prostatic origin of the approximately 660 kDa protein complex, which was then confirmed by seminal plasma HPLC analysis of a subject with agenesis of the Wolffian ducts. Finally, the study demonstrated the presence of two zinc binding proteins, of approximately 660 and approximately 250 kDa respectively, in human seminal plasma and the prostatic origin of the approximately 660 kDa complex.
Copper-polydopamine composite derived from bioinspired polymer coating
Zhao, Yao; Wang, Hsin; Qian, Bosen; ...
2018-04-01
Metal matrix composites with nanocarbon phases, such as carbon nanotubes (CNT) and graphene, have shown potential to achieve improved mechanical, thermal, and electrical properties. However, incorporation of these nanocarbons into the metal matrix usually involves complicated processes. Here, this study explored a new processing method to fabricate a copper (Cu) matrix composite by coating Cu powder particles with nanometer-thick polydopamine (PDA) thin films and sintering the powder compacts. For sintering temperatures between 300°C and 750°C, the Cu-PDA composite samples showed higher electrical conductivity and thermal conductivity than the uncoated Cu samples, which is likely related to the higher mass densities of the composite samples. After being sintered at 950°C, the thermal conductivity of the Cu-PDA sample was approximately 12% higher than that of the Cu sample, while the electrical conductivity did not show a significant difference. On the other hand, Knoop micro-hardness values were comparable between the Cu-PDA and Cu samples sintered at the same temperatures.
Chemical analyses of provided samples
NASA Technical Reports Server (NTRS)
Becker, Christopher H.
1993-01-01
A batch of four samples was received, and chemical analysis of the surface and near-surface regions of the samples was performed by the surface analysis by laser ionization (SALI) method. The samples included four one-inch diameter optics labeled windows no. PR14 and PR17 and MgF2 mirrors 9-93 PPPC exp. and control DMES 26-92. The analyses emphasized surface contamination or modification. In these studies, pulsed desorption by 355 nm laser light and single-photon ionization (SPI) above the sample by coherent 118 nm radiation (at approximately 5 × 10^5 W/cm^2) were used, emphasizing organic analysis. For the two windows with an apparent yellowish contaminant film, higher desorption laser power was needed to provide substantial signals, indicating a less volatile contamination than for the two mirrors. Window PR14 and the 9-93 mirror showed more hydrocarbon components than the other two samples. The mass spectra, which show considerable complexity, are discussed in terms of various potential chemical assignments.
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
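The idea of adding a polynomial correction around each discontinuity can be illustrated with the simpler two-sample polyBLEP residual; this is not the paper's integrated B-spline method, but it follows the same pattern of subtracting a polynomial approximation of the step error near each waveform reset.

```python
def poly_blep(t, dt):
    """Two-sample polynomial band-limiting residual (polyBLEP), a
    low-order relative of the correction functions discussed above.
    t is the normalized phase in [0, 1); dt is the per-sample
    phase increment (frequency / sample rate)."""
    if t < dt:                      # just after the discontinuity
        x = t / dt
        return x + x - x * x - 1.0
    if t > 1.0 - dt:                # just before the discontinuity
        x = (t - 1.0) / dt
        return x * x + x + x + 1.0
    return 0.0

def sawtooth(freq, sr, n):
    """Sawtooth in [-1, 1] with aliasing suppressed near each reset."""
    dt, phase, out = freq / sr, 0.0, []
    for _ in range(n):
        out.append(2.0 * phase - 1.0 - poly_blep(phase, dt))
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

samples = sawtooth(440.0, 44100.0, 512)
```

The correction smooths only the two samples adjacent to each reset, which is why higher-order kernels such as the integrated third-order B-spline in the study give cleaner spectra at a modest extra cost.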
NASA Astrophysics Data System (ADS)
Yeti Nuryantini, Ade; Cahya Septia Mahen, Ea; Sawitri, Asti; Wahid Nuryadin, Bebeh
2017-09-01
In this paper, we report on a homemade optical spectrometer using diffraction grating and image processing techniques. This device was designed to produce spectral images that could then be processed by measuring signal strength (pixel intensity) to obtain the light source, transmittance, and absorbance spectra of the liquid sample. The homemade optical spectrometer consisted of: (i) a white LED as a light source, (ii) a cuvette or sample holder, (iii) a slit, (iv) a diffraction grating, and (v) a CMOS camera (webcam). In this study, various concentrations of a carbon nanoparticle (CNP) colloid were used in the particle size sample test. Additionally, a commercial optical spectrometer and transmission electron microscope (TEM) were used to characterize the optical properties and morphology of the CNPs, respectively. The data obtained using the homemade optical spectrometer, commercial optical spectrometer, and TEM showed similar results and trends. Lastly, the calculation and measurement of CNP size were performed using the effective mass approximation (EMA) and TEM. These data showed that the average nanoparticle sizes were approximately 2.4 nm and 2.5 ± 0.3 nm, respectively. This research provides new insights into the development of a portable, simple, and low-cost optical spectrometer that can be used in nanomaterial characterization for physics undergraduate instruction.
Reinforced Concrete Beams under Combined Axial and Lateral Loading.
1982-01-01
Golden E. Lane, Jr.; contract F29601-76-C-015. … acquisition system. The voltage output from the system's digital multimeter was recorded on a floppy disk. The sampling rate was approximately two samples per second for every channel. The same system was used to reduce and plot the data. TEST APPARATUS: Figure 9 shows a schematic drawing of the load
Injury and death of various Salmonella serotypes due to acidic conditions
USDA-ARS?s Scientific Manuscript database
Acid injury of Salmonella could prevent detection of Salmonella in feed and feed-type samples. A previous study showed that after incubation in commonly used pre-enrichment media, mixed feeds and feed ingredients reached a pH (4.0 to 5.0) capable of injuring or killing Salmonella. Approximately 10...
Career Change and Motivation: A Matter of Balance
ERIC Educational Resources Information Center
Green, Liz; Hemmings, Brian; Green, Annette
2007-01-01
The study was designed to consider the motivations of career changers and the perceived outcomes of their career change. Data were collected from a sample of career changers (N = 81), approximately half of whom had used the services of a career coach. The analysis showed: firstly, that the reported outcomes associated with career change appeared…
Figure 1 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics
A screenshot of the IGV user interface at the chromosome view. IGV user interface showing five data types (copy number, methylation, gene expression, and loss of heterozygosity; mutations are overlaid with black boxes) from approximately 80 glioblastoma multiforme samples. Adapted from Figure S1; Robinson et al. 2011
Electrodynamic Aerosol Concentrating and Sampling
2006-06-16
Figure 4 shows a fine hypodermic needle used as a corona wire within the first quadrupole. The quadrupole electrodes would serve as the collecting … carefully expanded to better approximate hyperbolas. It is shown here with the inner needle corona electrode in place. Once it was determined that the
Discharge process of cesium during rainstorms in headwater catchments, Fukushima, Japan
NASA Astrophysics Data System (ADS)
Tsujimura, Maki; Onda, Yuichi; Iwagami, Sho; Nishino, Masataka; Konuma, Ryohei
2014-05-01
We monitored Cs-137 concentrations in stream water, groundwater, soil water and rainwater in the Yamakiya district, located approximately 35 km northwest of the Fukushima Dai-ichi Nuclear Power Plant (FDNPP), from June 2011 through July 2013, focusing on rainfall-runoff processes during rainstorm events. Two catchments with different land cover (Iboishiyama and Koutaishiyama) were instrumented, and stream water, groundwater, soil water and rainwater were sampled for approximately one month at each site; intensive sampling was conducted during rainstorm events. The Cs-137 concentration in stream water showed a relatively quick decreasing trend during 2011, and during rainfall events it showed a temporary increase. End Member Mixing Analysis was applied to evaluate the contributions of groundwater, soil water and rainwater to discharge during rainstorm events. The groundwater component was dominant in the runoff, whereas rainwater was the main source of the increase in the Cs-137 concentration of the stream during the storm events. In addition, leaching of Cs-137 from suspended sediments and organic materials also seemed to be an important source to the stream.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
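A bare-bones HMC sampler with leapfrog integration can be sketched as follows; the expensive per-step gradient call (logp_grad) is exactly what the paper proposes to replace with a cheap surrogate built from random bases. The standard-Gaussian target and tuning parameters below are illustrative.

```python
import numpy as np

def hmc_sample(logp_grad, x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    """Basic Hamiltonian Monte Carlo with leapfrog integration.
    logp_grad(x) must return (log-density, gradient) at x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)     # resample momentum
        x_new, p_new = x.copy(), p.copy()
        lp, g = logp_grad(x_new)
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * eps * g
        for i in range(n_leap):
            x_new += eps * p_new
            lp_new, g = logp_grad(x_new)
            if i != n_leap - 1:
                p_new += eps * g
        p_new += 0.5 * eps * g
        # Metropolis accept/reject on the total energy
        h_old = -lp + 0.5 * p @ p
        h_new = -lp_new + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Illustrative target: standard 2-D Gaussian, log p = -||x||^2 / 2
draws = hmc_sample(lambda x: (-0.5 * x @ x, -x), np.zeros(2))
print(draws.mean(axis=0), draws.var(axis=0))
```

Because every leapfrog step calls logp_grad, a surrogate that is cheap to differentiate cuts the dominant cost while the final accept/reject step keeps the chain targeting the correct distribution.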
NASA Technical Reports Server (NTRS)
Cho, Yuichiro; Cohen, Barbara A.
2018-01-01
We report new K-Ar isochron data for two approximately 380 Ma basaltic rocks, using an updated version of the Potassium-Argon Laser Experiment (KArLE). These basalts have K contents comparable to lunar KREEP basalts or igneous lithologies found by Mars rovers, whereas previous proof-of-concept studies focused primarily on more K-rich rocks. We continue to measure these analogue samples to show the advancing capability of in situ K-Ar geochronology. KArLE is applicable to other bodies including the Moon or asteroids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobo Lapidus, R.; Gates, B
2009-01-01
Supported metals prepared from H3Re3(CO)12 on γ-Al2O3 were treated under conditions that led to various rhenium structures on the support and were tested as catalysts for n-butane conversion in the presence of H2 in a flow reactor at 533 K and 1 atm. After use, two samples were characterized by X-ray absorption edge positions of approximately 5.6 eV (relative to rhenium metal), indicating that the rhenium was cationic and essentially in the same average oxidation state in each. But the Re-Re coordination numbers found by extended X-ray absorption fine structure spectroscopy (2.2 and 5.1) show that the clusters in the two samples were significantly different in average nuclearity despite their indistinguishable rhenium oxidation states. Spectra of a third sample after catalysis indicate approximately Re3 clusters, on average, and an edge position of 4.5 eV. Thus, two samples contained clusters approximated as Re3 (on the basis of the Re-Re coordination number), on average, with different average rhenium oxidation states. The data allow resolution of the effects of rhenium oxidation state and cluster size, both of which affect the catalytic activity; larger clusters and a greater degree of reduction lead to increased activity.
[Respondent-Driven Sampling: a new sampling method to study visible and hidden populations].
Mantecón, Alejandro; Juan, Montse; Calafat, Amador; Becoña, Elisardo; Román, Encarna
2008-01-01
The paper introduces a variant of chain-referral sampling: respondent-driven sampling (RDS). This sampling method shows that methods based on network analysis can be combined with the statistical validity of standard probability sampling methods. In this sense, RDS appears to be a mathematical improvement of snowball sampling oriented to the study of hidden populations. However, we try to prove its validity with populations that are not within a sampling frame but can nonetheless be contacted without difficulty. The basics of RDS are explained through our research on young people (aged 14 to 25) who go clubbing, consume alcohol and other drugs, and have sex. Fieldwork was carried out between May and July 2007 in three Spanish regions: Baleares, Galicia and Comunidad Valenciana. The presentation of the study shows the utility of this type of sampling when the population is accessible but there is a difficulty deriving from the lack of a sampling frame. However, the sample obtained is not a statistically representative random sample of the target population. It must be acknowledged that the final sample is representative of a 'pseudo-population' that approximates the target population but is not identical to it.
Processing and performance of self-healing materials
NASA Astrophysics Data System (ADS)
Tan, P. S.; Zhang, M. Q.; Bhattacharyya, D.
2009-08-01
Two self-healing methods were implemented into composite materials with self-healing capabilities, using hollow glass fibres (HGF) and microencapsulated epoxy resin with mercaptan as the hardener. For the HGF approach, two perpendicular layers of HGF were put into an E-glass/epoxy composite and were filled with coloured epoxy resin and hardener. The HGF samples were subjected to a novel ball indentation test. The samples were analysed using micro-CT scanning, confocal microscopy and penetrant dye. Micro-CT and confocal microscopy produced limited success, but their viability was established. Penetrant dye images showed resin obstructing the flow of dye through damage regions, suggesting infiltration of resin into cracks. Three-point bend tests showed that overall performance could be affected by the flaws arising from embedding HGF in the material. For the microcapsule approach, samples were prepared for novel double-torsion tests used to generate large cracks. The samples were compared with pure resin samples by analysing them using photoelastic imaging and scanning electron microscopy (SEM) on crack surfaces. Photoelastic imaging established the consolidation of cracks, while SEM showed a wide spread of microcapsules, with their distribution being affected by gravity. Further double-torsion testing showed that healing recovered approximately 24% of material strength.
Calcium EXAFS Establishes the Mn-Ca Cluster in the Oxygen-Evolving Complex of Photosystem II†
Cinco, Roehl M.; Holman, Karen L. McFarlane; Robblee, John H.; Yano, Junko; Pizarro, Shelly A.; Bellacchio, Emanuele; Sauer, Kenneth; Yachandra, Vittal K.
2014-01-01
The proximity of Ca to the Mn cluster of the photosynthetic water-oxidation complex is demonstrated by X-ray absorption spectroscopy. We have collected EXAFS data at the Ca K-edge using active PS II membrane samples that contain approximately 2 Ca per 4 Mn. These samples are much less perturbed than previously investigated Sr-substituted samples, which were prepared subsequent to Ca depletion. The new Ca EXAFS clearly shows backscattering from Mn at 3.4 Å, a distance that agrees with that surmised from previously recorded Mn EXAFS. This result is also consistent with earlier related experiments at the Sr K-edge, using samples containing functional Sr, which showed that Mn is ~3.5 Å distant from Sr. The totality of the evidence clearly advances the notion that the catalytic center of oxygen evolution is a Mn-Ca heteronuclear cluster. PMID:12390018
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three sampling methods have been used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH, with a root mean square error (RMSE) on the order of 1E-08. This approach can be used with any argument value, and can be applied to other integrals in hydrogeology such as the leaky aquifer integral.
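The sampling idea can be illustrated in a few lines. Below is a minimal, hedged sketch of a one-dimensional Latin Hypercube estimate of the well function; the change of variables and function name are our own, and the paper's OA and OA-LH variants are not reproduced here.

```python
import math
import random

def well_function_lhs(u, n=50000, seed=0):
    """Approximate the well function W(u) = integral_u^inf exp(-t)/t dt
    by Latin Hypercube Sampling. Substituting t = u + y with y ~ Exp(1)
    gives W(u) = exp(-u) * E[1/(u + y)], which is estimated from a
    single stratified (1-D LHS) sample of y.
    """
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        # one uniform draw per stratum [i/n, (i+1)/n): the 1-D LHS scheme
        s = (i + rng.random()) / n
        y = -math.log(1.0 - s)          # inverse CDF of Exp(1)
        total += 1.0 / (u + y)
    return math.exp(-u) * total / n

# W(1) is approximately 0.2194 (tabulated value of E1(1))
print(round(well_function_lhs(1.0), 4))
```

The stratification makes the estimator behave like a quadrature rule, so the convergence is much faster than plain Monte Carlo for this smooth integrand.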
NASA Astrophysics Data System (ADS)
Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir
2008-03-01
The differential interference contrast (DIC) microscope is commonly used for the visualization of live biological specimens. Being a non-invasive modality, it enables viewing of transparent specimens while preserving their viability. Fertility clinics often use the DIC microscope to evaluate the quality of human embryos. Towards quantification and reconstruction of the visualized specimens, an image formation model for DIC imaging is sought and the interaction of light waves with biological matter is examined. In many image formation models the light-matter interaction is expressed via the first Born approximation. The validity region of this approximation is defined by a theoretical bound which limits its use to very small specimens with low dielectric contrast. In this work the Born approximation is investigated via the Helmholtz equation, which describes the interaction between the specimen and light. A solution on the lens field is derived using a Gauss-Legendre quadrature formulation. This numerical scheme is both accurate and efficient, and it significantly shortened the computation time compared with integration methods that required a great amount of sampling to satisfy the Whittaker-Shannon sampling theorem. By comparing the numerical results with the theoretical values, it is shown that the theoretical bound is not directly relevant to microscopic imaging and is far too limiting. The exhaustive numerical experiments show that the Born approximation is inappropriate for modeling the visualization of thick human embryos.
Unveiling the Secrets of Metallicity and Massive Star Formation Using DLAs Along Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Cucchiara, A.; Fumagalli, M.; Rafelski, M.; Kocevski, D.; Prochaska, J. X.; Cooke, R. J.; Becker, G. D.
2015-01-01
We present the largest, publicly available sample of Damped Lyman-alpha systems (DLAs) along Swift-discovered Gamma-ray Burst (GRB) lines of sight, in order to investigate the environmental properties of long GRB hosts in the z = 1.8 - 6 redshift range. Compared with the most recent quasar DLA sample (QSO-DLA), our analysis shows that GRB-DLAs probe a more metal-enriched environment at z approximately greater than 3, up to [X/H] approximately -0.5. In the z = 2 - 3 redshift range, despite the large number of lower limits, there are hints that the two populations may be more similar (only at the 90% significance level) than at higher redshifts. Also, at high z, the GRB-DLA average metallicity seems to decline at a shallower rate than the QSO-DLAs: GRB-DLA hosts may be polluted with metals at least as far as approximately 2 kpc from the GRB explosion site, probably due to previous star-formation episodes and/or supernova explosions. This shallow metallicity trend, extended now up to z approximately 5, confirms previous results that GRB hosts are star-forming and have, on average, higher metallicity than the general QSO-DLA population. Finally, our host metallicity measurements are broadly consistent with the predictions derived from the hypothesis of two channels of GRB progenitors, one of which is mildly affected by a metallicity bias, although more data are needed to constrain the models at z approximately greater than 4.
NASA Astrophysics Data System (ADS)
DeBlois, Elisabeth M.; Paine, Michael D.; Kilgour, Bruce W.; Tracy, Ellen; Crowley, Roger D.; Williams, Urban P.; Janes, G. Gregory
2014-12-01
This paper describes sediment composition at the Terra Nova offshore oil development. The Terra Nova Field is located on the Grand Banks approximately 350 km southeast of Newfoundland, Canada, at an approximate water depth of 100 m. Surface sediment samples (upper 3 cm) were collected for chemical and particle size analyses at the site pre-development (1997) and in 2000-2002, 2004, 2006, 2008 and 2010. Approximately 50 stations have been sampled in each program year, with stations extending from less than 1 km to a maximum of 20 km from source (drill centres) along five gradients, extending to the southeast, southwest, northeast, northwest and east of Terra Nova. Results show that Terra Nova sediments were contaminated with >C10-C21 hydrocarbons and barium, the two main constituents of synthetic-based drilling muds used at the site. The highest levels of contamination occurred within 1 to 2 km from source, consistent with predictions from drill cuttings dispersion modelling. The strength of distance gradients for >C10-C21 hydrocarbons and barium, and overall levels, generally increased as drilling progressed but decreased from 2006 to 2010, coincident with a reduction in drilling. As seen at other offshore oil development sites, metals other than barium, sulphur and sulphide levels were elevated and sediment fines content was higher in the immediate vicinity (less than 0.5 km) of drill centres in some sampling years; but there was no strong evidence of project-related alterations of these variables. Overall, sediment contamination at Terra Nova was spatially limited, and only the two major constituents of the synthetic-based drilling muds used at the site, >C10-C21 hydrocarbons and barium, showed clear evidence of project-related alterations.
NASA Astrophysics Data System (ADS)
Xu, Jiajie; Jiang, Bo; Chai, Sanming; He, Yuan; Zhu, Jianyi; Shen, Zonggen; Shen, Songdong
2016-09-01
Filamentous Bangia, which are distributed extensively throughout the world, have simple and similar morphological characteristics. Scientists can classify these organisms using molecular markers in combination with morphology. We successfully sequenced the complete nuclear ribosomal DNA, approximately 13 kb in length, from a marine Bangia population. We further analyzed the small subunit ribosomal DNA gene (nrSSU) and the internal transcribed spacer (ITS) sequence regions along with nine other marine and two freshwater Bangia samples from China. Pairwise distances of the nrSSU and 5.8S ribosomal DNA gene sequences show the marine samples grouping together with low divergences (0-0.003 and 0-0.006, respectively) from each other, but high divergences (0.123-0.126 and 0.198, respectively) from the freshwater samples. An exception is the marine sample collected from Weihai, which shows high divergence from both the other marine samples (0.063-0.065 and 0.129, respectively) and the freshwater samples (0.097 and 0.120, respectively). A maximum likelihood phylogenetic tree based on a combined SSU-ITS dataset shows the samples divided into three clades: two marine clades containing Bangia spp. from North America, Europe, Asia, and Australia, and one freshwater clade containing Bangia atropurpurea from North America and China.
Roeder, Peter; Gofton, Emma; Thornber, Carl
2006-01-01
The volume percentage, distribution, texture and composition of coexisting olivine, Cr-spinel and glass have been determined in quenched lava samples from Hawaii, Iceland and mid-oceanic ridges. The volume ratio of olivine to spinel varies from 60 to 2800, and samples with >0.02% spinel have a volume ratio of olivine to spinel of approximately 100. A plot of wt % MgO vs ppm Cr for natural and experimental basaltic glasses suggests that the general trend of the glasses can be explained by the crystallization of a cotectic ratio of olivine to spinel of about 100. One group of samples has an olivine to spinel ratio of approximately 100, with skeletal olivine phenocrysts and small (<50 μm) spinel crystals that tend to be spatially associated with the olivine phenocrysts. The large number of spinel crystals included within olivine phenocrysts is thought to be due to skeletal olivine phenocrysts coming into physical contact with spinel by synneusis during the chaotic conditions of ascent and extrusion. A second group of samples tends to have large olivine phenocrysts relatively free of included spinel, a few large (>100 μm) spinel crystals that show evidence of two stages of growth, and a volume ratio of olivine to spinel of 100 to well over 1000. The olivine and spinel in this group have crystallized more slowly with little physical interaction, and show evidence that they have accumulated in a magma chamber.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
Parcperdue Geopressure -- Geothermal Project: Appendix E
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, L.R.
1981-10-05
The mechanical and transport properties and characteristics of rock samples obtained from the DOW-DOE L.R. SWEEZY NO. 1 TEST WELL at the Parcperdue Geopressure/Geothermal Site have been investigated in the laboratory. Elastic moduli, compressibility, uniaxial compaction coefficient, strength, creep parameters, permeability, acoustic velocities (all at reservoir conditions) and changes in these quantities induced by simulated reservoir production have been obtained from tests on several sandstone and shale samples from different depths. The most important results are that the compaction coefficients are approximately an order of magnitude lower than those generally accepted for the reservoir sand in the Gulf Coast area and that the creep behavior is significant. Geologic characterization includes lithological description, SEM micrographs and mercury intrusion tests to obtain pore distributions. Petrographic analysis shows that approximately half of the total sand interval has excellent reservoir potential and that most of the effective porosity in the Cib Jeff Sand is formed by secondary porosity development.
A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.
Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco
2005-02-01
Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.
Defect annealing and thermal desorption of deuterium in low dose HFIR neutron-irradiated tungsten
NASA Astrophysics Data System (ADS)
Shimada, Masashi; Hara, Masanori; Otsuka, Teppei; Oya, Yasuhisa; Hatano, Yuji
2015-08-01
Three tungsten samples irradiated at the High Flux Isotope Reactor at Oak Ridge National Laboratory were exposed to deuterium plasma (ion fluence of 1 × 10^26 m^-2) at three different temperatures (100, 200, and 500 °C) in the Tritium Plasma Experiment at Idaho National Laboratory. Subsequently, thermal desorption spectroscopy was performed with a ramp rate of 10 °C min-1 up to 900 °C, and the samples were annealed at 900 °C for 0.5 h. These procedures were repeated three times to uncover defect-annealing effects on deuterium retention. The results show that deuterium retention decreased by approximately 70% for the 500 °C exposure after each annealing, and that radiation damage was not completely annealed out even after the third annealing. TMAP modeling revealed that the trap concentration decreased by approximately 80% after each annealing at 900 °C for 0.5 h.
2011-06-17
In both the berm and control areas, surface sediment samples were taken at approximately the toe of the dune (where present), backbeach, high tide line, mean sea level, low tide line, and 2 ft water depth.
NASA Astrophysics Data System (ADS)
Kiahosseini, Seyed Rahim; Mohammadi Baygi, Seyyed Javad; Khalaj, Gholamreza; Khoshakhlagh, Ali; Samadipour, Razieh
2018-01-01
Cubic specimens of AISI 316 stainless steel were multiaxially forged (MF) up to 15 passes, annealed at 1200 °C for 1, 2, and 3 h, and finally sensitized at 700 °C for 24 h. Examination of the samples indicated that hardness decreased from 153 HV for the as-received sample to 110, 81, and 74 HV after 1, 2, and 3 h of annealing, and increased from 245 to 288 HV for samples forged for 3 and 7 passes; at higher numbers of passes no significant further change was observed, with hardness remaining at about 300 HV. The degree of sensitization increased to approximately 27.3% after 3 h of annealing but was reduced to 1.23% by 15 passes of MF. The potentiodynamic polarization test shows that the breakdown potential decreased with annealing time, from 0.6 to -102 mV/SCE for the as-received and 3-h annealed specimens, and increased to approximately -16.5 mV as the number of MF passes rose to 15. These observations indicate that chromium carbide precipitation produces a Cr-depleted zone, which subsequently affects the degree of sensitization and pitting corrosion resistance of AISI 316 austenitic stainless steel.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of differing dimensionality and complexity, in comparison with two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
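The adaptive-design loop can be sketched in one dimension. The toy below is in the spirit of TEAD but is not the paper's exact formulation: the piecewise-linear surrogate, equal score weights, and finite-difference gradient estimate are all illustrative assumptions.

```python
import numpy as np

def tead_like_design(f, lo, hi, n_init=4, n_total=20):
    """Adaptive experimental design in the spirit of TEAD: each new
    training point maximizes a hybrid score combining (i) distance to
    existing samples (exploration) and (ii) the discrepancy between the
    surrogate and its first-order Taylor expansion about the nearest
    sample (exploitation). All modelling choices here are simplified.
    """
    x = np.linspace(lo, hi, n_init)
    y = f(x)
    cand = np.linspace(lo, hi, 401)              # fixed candidate pool
    while len(x) < n_total:
        surr = np.interp(cand, x, y)             # piecewise-linear surrogate
        grad = np.gradient(y, x)                 # gradient estimate at samples
        idx = np.abs(cand[:, None] - x[None, :]).argmin(axis=1)
        taylor = y[idx] + grad[idx] * (cand - x[idx])
        dist = np.abs(cand - x[idx])
        resid = np.abs(surr - taylor)
        # normalize both terms before combining with equal weights
        score = dist / dist.max() + resid / (resid.max() + 1e-12)
        x = np.sort(np.append(x, cand[score.argmax()]))
        y = f(x)   # re-evaluate; in practice only the new point needs a run
    return x, y

f = lambda t: t * np.sin(t)
x, y = tead_like_design(f, 0.0, 10.0)
dense = np.linspace(0.0, 10.0, 2001)
err = np.max(np.abs(np.interp(dense, x, y) - f(dense)))
```

The score drives points toward regions where the surrogate disagrees with its own local Taylor model (high curvature) while the distance term prevents clustering.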
Function approximation using combined unsupervised and supervised learning.
Andras, Peter
2014-03-01
Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
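The two-step procedure can be illustrated on synthetic data. In this sketch PCA stands in for the paper's over-complete SOM and k-nearest-neighbour regression stands in for the single-hidden-layer network; the data generator and all parameters are our own assumptions, chosen only to show the mapping-then-approximation split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 10-D data lying near a 2-D linear manifold,
# with a target function defined on the manifold coordinates.
A = rng.standard_normal((2, 10))

def make(n):
    z = rng.uniform(-2, 2, size=(n, 2))              # manifold coordinates
    x = z @ A + 0.3 * rng.standard_normal((n, 10))   # ambient noise
    return x, np.sin(z[:, 0]) + z[:, 1] ** 2         # target values

Xtr, ytr = make(400)
Xte, yte = make(200)

# Step 1 (unsupervised): map data onto the top-2 principal components.
mu = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
project = lambda X: (X - mu) @ Vt[:2].T

# Step 2 (supervised): approximate the function on the mapped data.
def knn_predict(Xt, yt, Xq, k=5):
    d = np.linalg.norm(Xq[:, None, :] - Xt[None, :, :], axis=2)
    return yt[np.argsort(d, axis=1)[:, :k]].mean(axis=1)

err_low = np.mean((knn_predict(project(Xtr), ytr, project(Xte)) - yte) ** 2)
err_high = np.mean((knn_predict(Xtr, ytr, Xte) - yte) ** 2)
print(err_low, err_high)   # the lower-dimensional step typically wins
```

The projection discards the ambient noise directions, so neighbour selection (and hence the supervised step) works on cleaner coordinates.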
Saha, Mahua; Togo, Ayako; Mizukawa, Kaoruko; Murakami, Michio; Takada, Hideshige; Zakaria, Mohamad P; Chiem, Nguyen H; Tuyen, Bui Cach; Prudente, Maricar; Boonyatumanond, Ruchaya; Sarkar, Santosh Kumar; Bhattacharya, Badal; Mishra, Pravakar; Tana, Touch Seang
2009-02-01
We collected surface sediment samples from 174 locations in India, Indonesia, Malaysia, Thailand, Vietnam, Cambodia, Laos, and the Philippines and analyzed them for polycyclic aromatic hydrocarbons (PAHs) and hopanes. PAHs were widely distributed in the sediments, with comparatively higher concentrations in urban areas (Sigma PAHs: approximately 1000 to approximately 100,000 ng/g-dry) than in rural areas (approximately 10 to approximately 100 ng/g-dry), indicating large sources of PAHs in urban areas. To distinguish petrogenic and pyrogenic sources of PAHs, we calculated the ratios of alkyl PAHs to parent PAHs: methylphenanthrenes to phenanthrene (MP/P), methylpyrenes+methylfluoranthenes to pyrene+fluoranthene (MPy/Py), and methylchrysenes+methylbenz[a]anthracenes to chrysene+benz[a]anthracene (MC/C). Analysis of source materials (crude oil, automobile exhaust, and coal and wood combustion products) gave thresholds of MP/P=0.4, MPy/Py=0.5, and MC/C=1.0 for exclusive combustion origin. All the combustion product samples had ratios of alkyl PAHs to parent PAHs below these threshold values. Contributions of petrogenic and pyrogenic sources to the sedimentary PAHs were uneven among the homologs: the phenanthrene series had a greater petrogenic contribution, whereas the chrysene series had a greater pyrogenic contribution. All the Indian sediments showed a strong pyrogenic signature, with MP/P approximately 0.5, MPy/Py approximately 0.1, and MC/C approximately 0.2, together with depletion of hopanes, indicating intensive inputs of combustion products of coal and/or wood, probably due to the heavy dependence on these fuels as sources of energy. In contrast, sedimentary PAHs from all other tropical Asian cities were abundant in alkylated PAHs, with MP/P approximately 1-4, MPy/Py approximately 0.3-1, and MC/C approximately 0.2-1.0, suggesting a ubiquitous input of petrogenic PAHs.
Petrogenic contributions to the PAH homologs varied among the countries: largest in Malaysia and smallest in Laos. The higher abundance of alkylated PAHs, together with constant hopane profiles, suggests widespread inputs of automobile-derived petrogenic PAHs to Asian waters.
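The diagnostic thresholds reported above lend themselves to a small screening helper. A hedged sketch follows; the function name, return format, and example ratio values are ours, while the thresholds (MP/P = 0.4, MPy/Py = 0.5, MC/C = 1.0) are taken from the abstract.

```python
# Alkyl-to-parent PAH ratio thresholds from the source-material analysis:
# ratios at or below these values are consistent with an exclusively
# combustion (pyrogenic) origin; higher values indicate petrogenic input.
THRESHOLDS = {"MP/P": 0.4, "MPy/Py": 0.5, "MC/C": 1.0}

def classify_pah_source(ratios):
    """Return a coarse source label and the list of exceeded ratios."""
    exceeded = [k for k, t in THRESHOLDS.items() if ratios.get(k, 0.0) > t]
    if exceeded:
        return "petrogenic input likely", exceeded
    return "pyrogenic (combustion)", exceeded

# Hypothetical alkyl-rich sediment (resembling the Malaysian pattern)
label_petro, hits = classify_pah_source({"MP/P": 2.0, "MPy/Py": 0.6, "MC/C": 0.8})
# Hypothetical combustion-dominated sediment (all ratios below threshold)
label_pyro, _ = classify_pah_source({"MP/P": 0.3, "MPy/Py": 0.1, "MC/C": 0.2})
```

In practice the homolog-by-homolog nuance noted in the abstract (phenanthrene vs chrysene series) means a single label is only a first-pass screen.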
Compositional zoning and its significance in pyroxenes from three coarse-grained lunar samples.
Hargraves, R B; Hollister, L S; Otalora, G
1970-01-30
The calcium-rich pyroxenes in lunar samples 10047, 10058, and 10062 show pronounced sectoral and radial compositional variations which correlate with sharp to gradual variations in color and optical properties. The pyroxenes apparently grew as nearly euhedral crystals from melts of approximately the same composition as that of the samples. The coupled substitutions determined across sector boundaries suggest that Al is predominantly in the tetrahedral site and that Ti is predominantly quadrivalent. The pyroxene differentiation trend (unknown in terrestrial pyroxenes) is toward extreme enrichment in the ferrosilite molecule. The iron-enriched portions of the pyroxene grains may have grown with a triclinic pyroxenoid structure.
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
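The basic object of the theory, the ISI sample stream of an EIF neuron, is easy to generate numerically. Below is a minimal Euler-scheme sketch; all parameter values (time constant, threshold, drive, noise level) are illustrative assumptions, not the paper's settings.

```python
import math
import random

def eif_isis(mu=20.0, sigma=3.0, t_max=2.0, dt=5e-5, seed=1):
    """Simulate an exponential integrate-and-fire neuron with noisy
    constant drive (Euler scheme) and return its interspike intervals
    in seconds. Voltages in mV; parameters are illustrative only."""
    tau, e_l, v_t, d_t = 0.02, -65.0, -50.0, 2.0   # s, mV, mV, mV
    v_reset, v_spike = -70.0, 0.0
    rng = random.Random(seed)
    v, last, t, isis = e_l, 0.0, 0.0, []
    while t < t_max:
        # exponential sodium term kicks in as v approaches threshold v_t
        drift = (-(v - e_l) + d_t * math.exp((v - v_t) / d_t) + mu) / tau
        v += drift * dt + sigma * math.sqrt(dt / tau) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_spike:
            isis.append(t - last)   # each spike contributes one ISI sample
            last, v = t, v_reset
    return isis

isis = eif_isis()
```

Each returned interval is one "sample" in the sense of the theory; a downstream sampling algorithm would treat the list `isis` as draws from the distribution shaped by the input current.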
The enhancement in optical and magnetic properties of Na-doped LaFeO3
NASA Astrophysics Data System (ADS)
Devi, E.; Kalaiselvi, B. J.
2018-04-01
La1-xNaxFeO3 (x = 0.00 and 0.05) samples were synthesized by the sol-gel auto-combustion method. X-ray diffraction (XRD) analysis confirms the absence of impurity phases and a slight shift of the (121) peak towards lower angle. The UV-visible spectra show a strong absorption peak centered at approximately 231 nm, and the calculated optical band gaps are 2.73 eV and 2.36 eV for x = 0.00 and 0.05, respectively. The M-H loop of the pure sample is antiferromagnetic, whereas that of the Na-doped sample shows enhanced ferromagnetic behavior. The remnant magnetization (Mr), saturation magnetization (Ms) and coercive field (Hc) of the Na-doped sample are enhanced to 1.06 emu/g, 5.39 emu/g and 182.84 kOe, respectively.
Permian tetrapods from the Sahara show climate-controlled endemism in Pangaea.
Sidor, Christian A; O'Keefe, F Robin; Damiani, Ross; Steyer, J Sébastien; Smith, Roger M H; Larsson, Hans C E; Sereno, Paul C; Ide, Oumarou; Maga, Abdoulaye
2005-04-14
New fossils from the Upper Permian Moradi Formation of northern Niger provide an insight into the faunas that inhabited low-latitude, xeric environments near the end of the Palaeozoic era (approximately 251 million years ago). We describe here two new temnospondyl amphibians, the cochleosaurid Nigerpeton ricqlesi gen. et sp. nov. and the stem edopoid Saharastega moradiensis gen. et sp. nov., as relicts of Carboniferous lineages that diverged 40-90 million years earlier. Coupled with a scarcity of therapsids, the new finds suggest that faunas from the poorly sampled xeric belt that straddled the Equator during the Permian period differed markedly from well-sampled faunas that dominated tropical-to-temperate zones to the north and south. Our results show that long-standing theories of Late Permian faunal homogeneity are probably oversimplified as the result of uneven latitudinal sampling.
BOKS 45906: a CV with an Orbital Period of 56.6 Min in the Kepler Field?
NASA Technical Reports Server (NTRS)
Ramsay, Gavin; Howell, Steve B.; Wood, Matt A.; Smale, Alan; Barclay, Thomas; Seebode, Sally A.; Gelino, Dawn; Still, Martin; Cannizzo, John K.
2013-01-01
BOKS 45906 was found to be a blue source in the Burrell-Optical-Kepler Survey which showed a 3 magnitude outburst lasting approximately 5 days. We present the Kepler light curve of this source which covers nearly 3 years. We find that it is in a faint optical state for approximately half the time and shows a series of outbursts separated by distinct dips in flux. Using data with 1 minute sampling, we find clear evidence that in its low state BOKS 45906 shows a flux variability on a period of 56.5574 plus or minus 0.0014 minutes and a semi-amplitude of approximately 3 percent. Since we can phase all the 1 minute cadence data on a common ephemeris using this period, it is probable that 56.56 minutes is the binary orbital period. Optical spectra of BOKS 45906 show the presence of Balmer lines in emission indicating it is not an AM CVn (pure Helium) binary. Swift data show that it is a weak X-ray source and is weakly detected in the bluest of the UVOT filters. We conclude that BOKS 45906 is a cataclysmic variable with a period shorter than the 'period-bounce' systems and therefore BOKS 45906 could be the first helium-rich cataclysmic variable detected in the Kepler field.
Level 1 environmental assessment performance evaluation. Final report jun 77-oct 78
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, E.D.; Smith, F.; Wagoner, D.E.
1979-02-01
The report gives results of a two-phased evaluation of Level 1 environmental assessment procedures. Results from Phase I, a field evaluation of the Source Assessment Sampling System (SASS), showed that the SASS train performed well within the desired factor-of-3 Level 1 accuracy limit. Three sample runs were made with two SASS trains sampling simultaneously and from approximately the same sampling point in a horizontal duct. A Method 5 train was used to estimate the 'true' particulate loading. The sampling systems were upstream of the control devices to ensure collection of sufficient material for comparison of total particulate, particle size distribution, organic classes, and trace elements. Phase II consisted of providing each of three organizations with three types of control samples to challenge the spectrum of Level 1 analytical procedures: an artificial sample in methylene chloride, an artificial sample on a flyash matrix, and a real sample composed of the combined XAD-2 resin extracts from all Phase I runs. Phase II results showed that when the Level 1 analytical procedures are carefully applied, data of acceptable accuracy are obtained. Estimates of intralaboratory and interlaboratory precision are made.
Friedman, I; Long, W
1976-01-30
The hydration rates of 12 obsidian samples of different chemical compositions were measured at temperatures from 95 degrees to 245 degrees C. An expression relating hydration rate to temperature was derived for each sample. The SiO(2) content and refractive index are related to the hydration rate, as are the CaO, MgO, and original water contents. With this information it is possible to calculate the hydration rate of a sample from its silica content, refractive index, or chemical index and a knowledge of the effective temperature at which the hydration occurred. The effective hydration temperature (EHT) can be either measured or approximated from weather records. Rates have been calculated by both methods, and the results show that weather records can give a good approximation to the true EHT, particularly in tropical and subtropical climates. If one determines the EHT by any of the methods suggested, and also measures or knows the hydration rate of the particular obsidian used, it should be possible to carry out absolute dating to within +/- 10 percent of the true age over periods as short as several years and as long as millions of years.
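The dating arithmetic implied here can be sketched generically. The sketch assumes an Arrhenius-type rate law k(T) = A exp(-E/RT) and the usual x^2 = k t hydration relation; the constants below are hypothetical placeholders chosen only for plausible magnitudes, not the paper's fitted values for any sample.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def hydration_rate(a, e, temp_c):
    """Arrhenius-type hydration rate; a in um^2 per 1000 yr, e in J/mol."""
    return a * math.exp(-e / (R * (temp_c + 273.15)))

def age_years(rim_um, a, e, eht_c):
    """Invert x^2 = k t for the age, given rim thickness (um) and EHT (C)."""
    k = hydration_rate(a, e, eht_c)
    return 1000.0 * rim_um ** 2 / k

# hypothetical constants for illustration only: 5 um rim, EHT of 20 C
age = age_years(rim_um=5.0, a=7.3e14, e=8.0e4, eht_c=20.0)
```

Because the rate enters the age linearly while temperature enters exponentially, the quality of the EHT estimate (measured or from weather records, as the abstract discusses) dominates the dating error.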
Preservation of far-UV aluminum reflectance by means of overcoating with C60 films.
Méndez, J A; Larruquert, J I; Aznárez, J A
2000-01-01
Thin films of C(60) were investigated as protective coatings of Al films to preserve their far-UV (FUV) reflectance by inhibition or retardation of their oxidation. Two methods were used for the overcoating of Al films with approximately one monolayer of C(60): (1) deposition of a multilayer film followed by temperature desorption of all but one monolayer and (2) direct deposition of approximately one-monolayer film. We exposed both types of sample to controlled doses of molecular oxygen and water vapor and measured their FUV reflectance before and after exposure to evaluate the achieved protection on the Al films. The whole process of sample preparation, reflectance measurement, sample heating, and oxidation was made without breaking vacuum. Results show that a C(60) monolayer protected Al from oxidation to some extent, although FUV reflectance of unprotected Al films was never exceeded. FUV optical constants of C(60) films and the FUV reflectance of the C(60) film as deposited and as a function of exposure to O(2) were also measured.
Sm-Nd, Rb-Sr, and Mn-Cr Ages of Yamato 74013
NASA Technical Reports Server (NTRS)
Nyquist, L. E.; Shih, C.-Y.; Reese, Y. D.
2009-01-01
Yamato 74013 is one of 29 paired diogenites having granoblastic textures. The Ar-39 - Ar-40 age of Y-74097 is approximately 1100 Ma. Rb-Sr and Sm-Nd analyses of Y-74013, -74037, -74097, and -74136 suggested that multiple young metamorphic events disturbed their isotopic systems. Masuda et al. reported that REE abundances were heterogeneous even within the same sample (Y-74010) for sample sizes less than approximately 2 g. Both they and Nyquist et al. reported data for some samples showing significant LREE enrichment. In addition to its granoblastic texture, Y-74013 is characterized by large, isolated clots of chromite up to 5 mm in diameter. Takeda et al. suggested that these diogenites originally represented a single or very small number of coarse orthopyroxene crystals that were recrystallized by shock processes. They further suggested that initial crystallization may have occurred very early within the deep crust of the HED parent body. Here we report the chronology of Y-74013 as recorded in chronometers based on long-lived Rb-87 and Sm-147, intermediate-lived Sm-146, and short-lived Mn-53.
Development of Low Cost Soil Stabilization Using Recycled Material
NASA Astrophysics Data System (ADS)
Ahmad, F.; Yahaya, A. S.; Safari, A.
2016-07-01
Recycled tyres have been used in many geotechnical engineering projects such as soil improvement, soil erosion control and slope stability. Recycled tyres, mainly in chip and shredded form, are highly compressible under low and normal pressures. This characteristic causes challenging problems in some soil stabilization applications, such as retaining wall and river bank projects. For high tensile stress and low tensile strain, fibreglass would be a good alternative to recycled tyre in some cases. To evaluate fibreglass as an alternative to recycled tyre, this paper focuses on tensile tests carried out on fibreglass and recycled tyre strips. Fibreglass samples were produced from chopped strand fibre mat, a very low-cost type of fibreglass, which is cured with resin and hardener. Fibreglass samples 1 mm, 2 mm, 3 mm and 4 mm thick were produced as 100 mm x 300 mm pieces. It was found that the 3 mm fibreglass exhibited greater maximum tensile load (MTL) and maximum tensile stress (MTS) than the other samples. Statistical analysis of the 3 mm fibreglass indicated that, at approximately equal MTL, fibreglass samples experienced an ultimate tensile strain (UTST) of 2% while tyre samples experienced 33.9%. The results also showed an approximately linear relationship between stress and strain for fibreglass samples, with Young's modulus (E) ranging from 3581 MPa to 4728 MPa.
Behavior of endocrine disrupting chemicals in Johkasou improved septic tank in Japan.
Nakagawa, S; Matsuo, H; Motoyama, M; Nomiyama, K; Shinohara, R
2009-09-01
The behavior of estrogens (estrone: E1, 17beta-estradiol: E2, estriol: E3 and ethinylestradiol: EE2) and an androgen (testosterone) in the water and sludge from Johkasou in Japan was investigated. The concentrations of E1, E2, E3 and testosterone in water samples from the Johkasou were 33-500, N.D. to 150, N.D. to 6,700 and 500 ng/L, respectively. In sludge samples, the concentrations of E1, E2, E3, and testosterone were N.D. to 39, N.D. to 6.7, N.D. to 60 and 0.2-9.0 ng/L, respectively. EE2 was not detected in any sample. The removal rates of E1, E2, E3 and testosterone in Johkasou were 45%-91%, 66%-100%, 90%-100%, and about 90%, respectively.
Onset of superconductivity in sodium and potassium intercalated molybdenum disulphide
NASA Technical Reports Server (NTRS)
Somoano, R. B.; Rembaum, A.
1971-01-01
Molybdenum disulfide in the form of natural crystals or powder has been intercalated at -65 to -70 C with sodium and potassium using the liquid ammonia technique. All intercalated samples were found to show a superconducting transition. A plot of the percent of diamagnetic throw versus temperature indicates the possible existence of two phases in the potassium intercalated molybdenum disulfide. The onset of superconductivity in potassium and sodium intercalated molybdenite powder was found to be approximately 6.2 and approximately 4.5 K, respectively. The observed superconductivity is believed to be due to an increase in electron density as a result of intercalation.
Dickenson, Nicholas E; Erickson, Elizabeth S; Mooren, Olivia L; Dunn, Robert C
2007-05-01
Tip-induced sample heating in near-field scanning optical microscopy (NSOM) is studied for fiber optic probes fabricated using the chemical etching technique. To characterize sample heating from etched NSOM probes, the spectra of a thermochromic polymer sample are measured as a function of probe output power, as was previously reported for pulled NSOM probes. The results reveal that sample heating increases rapidly to approximately 55-60 degrees C as output powers reach approximately 50 nW. At higher output powers, the sample heating remains approximately constant up to the maximum power studied of approximately 450 nW. The sample heating profiles measured for etched NSOM probes are consistent with those previously measured for NSOM probes fabricated using the pulling method. At high powers, both pulled and etched NSOM probes fail as the aluminum coating is damaged. For probes fabricated in our laboratory we find failure occurring at input powers of 3.4+/-1.7 and 20.7+/-6.9 mW for pulled and etched probes, respectively. The larger half-cone angle for etched probes (approximately 15 degrees for etched and approximately 6 degrees for pulled probes) enables more light delivery and also apparently leads to a different failure mechanism. For pulled NSOM probes, high resolution images of NSOM probes as power is increased reveal the development of stress fractures in the coating at a taper diameter of approximately 6 microm. These stress fractures, arising from the differential heating expansion of the dielectric and the metal coating, eventually lead to coating removal and probe failure. For etched tips, the absence of clear stress fractures and the pooled morphology of the damaged aluminum coating following failure suggest that thermal damage may cause coating failure, although other mechanisms cannot be ruled out.
Urine chromium as an estimator of air exposure to stainless steel welding fumes.
Sjögren, B; Hedström, L; Ulfvarson, U
1983-01-01
Welding stainless steel with covered electrodes, also called manual metal arc welding, generates hexavalent airborne chromium. Chromium concentrations in air and post-shift urine samples, collected the same arbitrarily chosen working day, showed a linear relationship. Since post-shift urine samples reflect chromium concentrations of both current and previous stainless steel welding fume exposure, individual urine measurements are suggested as approximate although not exact estimators of current exposure. This study evaluates the practical importance of such measurements by means of confidence limits and tests of validity.
A new microwave acid digestion bomb method for the determination of total fluorine.
Grobler, S R; Louw, A J
1998-01-01
A new microwave acid digestion method for total fluorine analysis was compared to the reliable reverse-extraction technique. The commercially available Parr bombs which are compatible with microwave heating were modified for this purpose. The Mann-Whitney statistical test did not show any significant differences (p > 0.05) in the determinations of total fluorine in various samples between the two above-mentioned methods. The microwave method also gave high fluorine recoveries (> 97%) when fluoride was added to different samples. The great advantage of the microwave acid digestion bomb method is that the digestion under pressure is so aggressive that only a few minutes are needed for complete digestion (also of covalently bonded fluorine), which reduces the time for fluorine analysis dramatically, while no loss of fluorine or contamination from extraneous sources could take place during the ashing procedure. The digestion solution was made up of 300 microliter of concentrated nitric acid plus 537 microliter of water. After digestion, 675 microliter of approximately 8.5 M sodium hydroxide plus 643 microliter of citrate/TISAB buffer was added, resulting in an alkaline solution (pH approximately 12) which was finally adjusted to a pH of approximately 5.3 for fluoride determination.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
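The anti-conservatism of the t-as-z approach described above can be demonstrated with a simplified simulation. This is a hedged sketch in Python rather than the paper's lme4/R mixed-model setting: it uses a one-sample t statistic as a stand-in to show why referring t values to the normal distribution inflates Type 1 error at small sample sizes.

```python
import numpy as np
from scipy import stats

# Simulate t statistics under the null and compare rejection rates when the
# statistic is referred to the normal distribution ("t-as-z") versus the
# correct t distribution. With small n, t-as-z is anti-conservative.
rng = np.random.default_rng(0)
n, reps, alpha = 8, 20000, 0.05
samples = rng.normal(size=(reps, n))
t = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

reject_z = np.mean(np.abs(t) > stats.norm.ppf(1 - alpha / 2))
reject_t = np.mean(np.abs(t) > stats.t.ppf(1 - alpha / 2, df=n - 1))
print(f"t-as-z Type 1 rate: {reject_z:.3f}")   # inflated above alpha
print(f"exact-t Type 1 rate: {reject_t:.3f}")  # near alpha
```

In a mixed model the appropriate degrees of freedom are not known exactly, which is precisely what the Kenward-Roger and Satterthwaite approximations estimate.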
The effect of char structure on burnout during pulverized coal combustion at pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, G.; Wu, H.; Benfell, K.E.
An Australian bituminous coal sample was burnt in a drop tube furnace (DTF) at 1 atm and a pressurized drop tube furnace (PDTF) at 15 atm. The char samples were collected at different burnout levels, and a scanning electron microscope was used to examine the structures of chars. A model was developed to predict the burnout of char particles with different structures. The model accounts for combustion of the thin-walled structure of cenospheric char and its fragmentation during burnout. The effect of pressure on reaction rate was also considered in the model. As a result, approximately 40% and 70% cenospheric char particles were observed in the char samples collected after coal pyrolysis in the DTF and PDTF respectively. A large number of fine particles (< 30 mm) were observed in the 1 atm char samples at burnout levels between 30% and 50%, which suggests that significant fragmentation occurred during early combustion. Ash particle size distributions show that a large number of small ash particles formed during burnout at high pressure. The time needed for 70% char burnout at 15 atm is approximately 1.6 times that at 1 atm under the same temperature and gas environment conditions, which is attributed to the different pressures as well as char structures. The overall reaction rate for cenospheric char was predicted to be approximately 2 times that of the dense chars, which is consistent with previous experimental results. The predicted char burnout including char structures agrees reasonably well with the experimental measurements that were obtained at 1 atm and 15 atm pressures.
A ROM-Less Direct Digital Frequency Synthesizer Based on Hybrid Polynomial Approximation
Omran, Qahtan Khalaf; Islam, Mohammad Tariqul; Misran, Norbahiah; Faruque, Mohammad Rashed Iqbal
2014-01-01
In this paper, a novel design approach for a phase to sinusoid amplitude converter (PSAC) has been investigated. Two segments have been used to approximate the first sine quadrant. A first linear segment is used to fit the region near the zero point, while a second fourth-order parabolic segment is used to approximate the rest of the sine curve. The phase sample at which the polynomial changes was chosen in such a way as to achieve the maximum spurious free dynamic range (SFDR). The proposed direct digital frequency synthesizer (DDFS) was coded in VHDL, and post-synthesis simulation was carried out. The synthesized architecture exhibits a promising result of 90 dBc SFDR. The targeted structure is expected to offer a perceptible reduction in hardware resources and power consumption as well as high clock speeds. PMID:24892092
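The two-segment idea can be sketched numerically. This is an illustrative approximation, not the paper's optimized coefficients: the breakpoint below is an arbitrary choice (the paper selects the phase sample to maximize SFDR), and least-squares fits stand in for the hardware-friendly polynomials.

```python
import numpy as np

# Approximate sin over the first quadrant with a linear segment near zero
# and a fourth-order polynomial segment for the remainder.
x = np.linspace(0.0, np.pi / 2, 4096)
split = np.pi / 8                  # illustrative breakpoint, not optimized
near, far = x <= split, x > split

lin = np.polyfit(x[near], np.sin(x[near]), 1)   # linear segment
par = np.polyfit(x[far], np.sin(x[far]), 4)     # fourth-order segment
approx = np.where(near, np.polyval(lin, x), np.polyval(par, x))

max_err = np.max(np.abs(approx - np.sin(x)))
print(f"max abs error: {max_err:.2e}")
```

Even this unoptimized split keeps the worst-case amplitude error well below 1%, which is why so few segments suffice for a ROM-less PSAC.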
López-Gastey, J; Choucri, A; Robidoux, P Y; Sunahara, G I
2000-06-01
An innovative screening procedure has been developed to detect illicit toxic discharges in domestic septic tank sludge hauled to the Montreal Urban Community waste-water treatment plant. This new means of control is based on an integrative approach, using bioassays and chemical analyses. Conservative criteria are applied to detect abnormal toxicity with great reliability while avoiding false positive results. The complementary data obtained from toxicity tests and chemical analyses support the use of this efficient and easy-to-apply procedure. This study assesses the control procedure in which 231 samples were analyzed over a 30-month period. Data clearly demonstrate the deterrent power of an efficient control procedure combined with a public awareness campaign among the carriers. In the first 15 months of application, between January 1996 and March 1997, approximately 30% of the 123 samples analyzed showed abnormal toxicity. Between April 1997 and June 1998, that is, after a public hearing presentation of this procedure, this proportion dropped significantly to approximately 9% based on 108 analyzed samples. The results of a 30-month application of this new control procedure show the superior efficiency of the ecotoxicological approach compared with the previously used chemical control procedure. For the procedure to be applied effectively, and for appropriate coercive measures to be taken when necessary, ecotoxicological criteria should be included in regulatory guidelines.
Inferring Recent Demography from Isolation by Distance of Long Shared Sequence Blocks
Ringbauer, Harald; Coop, Graham
2017-01-01
Recently it has become feasible to detect long blocks of nearly identical sequence shared between pairs of genomes. These identity-by-descent (IBD) blocks are direct traces of recent coalescence events and, as such, contain ample signal to infer recent demography. Here, we examine sharing of such blocks in two-dimensional populations with local migration. Using a diffusion approximation to trace genetic ancestry, we derive analytical formulas for patterns of isolation by distance of IBD blocks, which can also incorporate recent population density changes. We introduce an inference scheme that uses a composite-likelihood approach to fit these formulas. We then extensively evaluate our theory and inference method on a range of scenarios using simulated data. We first validate the diffusion approximation by showing that the theoretical results closely match the simulated block-sharing patterns. We then demonstrate that our inference scheme can accurately and robustly infer dispersal rate and effective density, as well as bounds on recent dynamics of population density. To demonstrate an application, we use our estimation scheme to explore the fit of a diffusion model to Eastern European samples in the Population Reference Sample data set. We show that ancestry diffusing with a rate of σ ≈ 50-100 km/gen during the last centuries, combined with accelerating population growth, can explain the observed exponential decay of block sharing with increasing pairwise sample distance. PMID:28108588
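The exponential decay of block sharing with distance mentioned above lends itself to a very simple illustration. The following is a toy sketch with invented numbers, not the paper's diffusion-based composite likelihood: if mean sharing decays roughly exponentially with pairwise distance, the decay scale can be recovered by a least-squares line fit on the log scale.

```python
import numpy as np

# Hypothetical data: block sharing decaying exponentially with distance,
# corrupted by multiplicative noise.
rng = np.random.default_rng(3)
dist = np.linspace(50, 1000, 40)     # pairwise distances (km), illustrative
true_scale = 250.0                   # hypothetical decay scale (km)
sharing = 5.0 * np.exp(-dist / true_scale) * rng.lognormal(0.0, 0.1, dist.size)

# Fit log(sharing) = intercept + slope * dist; decay scale = -1/slope.
slope, intercept = np.polyfit(dist, np.log(sharing), 1)
est_scale = -1.0 / slope
print(f"estimated decay scale: {est_scale:.0f} km")
```

The real inference problem is much harder (blocks are discrete, sparse, and spatially correlated), which is why the paper fits analytical diffusion formulas by composite likelihood instead of a naive regression.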
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
…an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to… regime for the optimization algorithm. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs… appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample…
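The SAA idea in the fragment above can be sketched on a toy problem. This is a hedged illustration, not the report's budget-allocation analysis: the stochastic program min_x E[(x - D)^2] has the known solution x* = E[D], and SAA replaces the expectation with an average over n Monte Carlo draws, whose minimizer converges to x* as the sampling budget grows.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 3.0   # E[D] for the hypothetical demand distribution below

def saa_solution(n):
    """Solve the sample average problem for min_x (1/n) * sum_i (x - d_i)^2.
    The minimizer of this quadratic sample average is the sample mean."""
    demand = rng.exponential(scale=true_mean, size=n)
    return demand.mean()

# Larger budgets give SAA solutions closer to the true optimum x* = 3.0.
for n in (10, 100, 10000):
    print(n, saa_solution(n))
```

In realistic problems the inner optimization is itself iterative, which is why the budget must be split between drawing more samples and running the optimizer longer, the trade-off the report studies.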
Metastates in Mean-Field Models with Random External Fields Generated by Markov Chains
NASA Astrophysics Data System (ADS)
Formentin, M.; Külske, C.; Reichenbachs, A.
2012-01-01
We extend the construction by Külske and Iacobelli of metastates in finite-state mean-field models in independent disorder to situations where the local disorder terms are a sample of an external ergodic Markov chain in equilibrium. We show that for non-degenerate Markov chains, the structure of the theorems is analogous to the case of i.i.d. variables when the limiting weights in the metastate are expressed with the aid of a CLT for the occupation time measure of the chain. As a new phenomenon we also show in a Potts example that for a degenerate non-reversible chain this CLT approximation is not enough, and that the metastate can have less symmetry than the symmetry of the interaction and a Gaussian approximation of disorder fluctuations would suggest.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
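The pipeline described in the abstract can be sketched end to end. This is a simplified illustration of the idea, not the authors' RPI implementation: subsample a continuous state space with k-means, connect the resulting centers with a Gaussian-weighted graph, and take the smoothest graph-Laplacian eigenvectors as basis functions for value function approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
states = rng.uniform(0.0, 1.0, size=(500, 2))   # sampled 2-D states

def kmeans(x, k, iters=50):
    """Plain Lloyd's algorithm: cluster x into k centers."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers

centers = kmeans(states, k=25)                  # subsampled representatives
d2 = ((centers[:, None] - centers[None]) ** 2).sum(axis=2)
W = np.exp(-d2 / 0.05)                          # Gaussian affinity graph
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                  # unnormalized graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
basis = eigvecs[:, :5]                          # smoothest eigenvectors -> VFA features
print(eigvals[:5])
```

Clustering first means the eigendecomposition runs on a 25x25 matrix instead of 500x500, which is the sample-efficiency gain the abstract reports over building the Laplacian on raw samples.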
NASA Technical Reports Server (NTRS)
Kaul, Anupama B.; Coles, James B.; Megerian, Krikor G.; Eastwood, Michael; Green, Robert O.; Bandaru, Prabhakar R.
2013-01-01
Optical absorbers based on vertically aligned multi-walled carbon nanotubes (MWCNTs), synthesized using electric-field assisted growth, are described here that show an ultra-low reflectance, 100X lower than Au-black, from wavelengths of approximately 350 nm to 2.5 microns. A bi-metallic Co/Ti layer was shown to catalyze a high site density of MWCNTs on metallic substrates, and the optical properties of the absorbers were engineered by controlling the bottom-up synthesis conditions using dc plasma-enhanced chemical vapor deposition (PECVD). Reflectance measurements on the MWCNT absorbers after heating them in air to 400 degrees C showed negligible changes in reflectance, which was still low, approximately 0.022% at wavelengths of approximately 2 microns. In contrast, the percolated structure of the reference Au-black samples collapsed completely after heating, causing the optical response to degrade at temperatures as low as 200 degrees C. The high optical absorption efficiency of the MWCNT absorbers, synthesized on metallic substrates, over a broad spectral range, coupled with their thermal ruggedness, suggests they have promise in solar energy harnessing applications, as well as thermal detectors for radiometry.
Seen, Andrew; Bizeau, Oceane; Sadler, Lachlan; Jordan, Timothy; Nichols, David
2014-05-01
The graphitised carbon solid phase extraction (SPE) sorbent Envi-Carb has been used to fabricate glass fibre filter- Envi-Carb "sandwich" disks for use as a passive sampler for acid herbicides. Passive sampler uptake of a suite of herbicides, including the phenoxyacetic acid herbicides 4-chloro-o-tolyloxyacetic acid (MCPA), 2,4-dichlorophenoxyacetic acid (2,4-D) and 3,6-dichloro-2-methoxybenzoic acid (Dicamba), was achieved without pH adjustment, demonstrating for the first time a suitable binding phase for passive sampling of acid herbicides at neutral pH. Passive sampling experiments with Duck River (Tasmania, Australia) water spiked at 0.5 μg L(-1) herbicide concentration over a 7 d deployment period showed that sampling rates in Duck River water decreased for seven out of eight herbicides, and in the cases of 3,6-dichloro-2-pyridinecarboxylic acid (Clopyralid) and Dicamba no accumulation of the herbicides occurred in the Envi-Carb over the deployment period. Sampling rates for 4-amino-3,5,6-trichloro-2-pyridinecarboxylic acid (Picloram), 2,4-D and MCPA decreased to approximately 30% of the sampling rates in ultrapure water, whilst sampling rates for 2-(4,6-dimethylpyrimidin-2-ylcarbamoylsulfamoyl) benzoic acid, methyl ester (Sulfometuron-methyl) and 3,5,6-Trichloro-2-pyridinyloxyacetic acid (Triclopyr) were approximately 60% of the ultrapure water sampling rate. For methyl N-(2,6-dimethylphenyl)-N-(methoxyacetyl)-D-alaninate (Metalaxyl-M) there was little variation in sampling rate between passive sampling experiments in ultrapure water and Duck River water. 
SPE experiments undertaken with Envi-Carb disks using ultrapure water and filtered and unfiltered Duck River water showed that not only is adsorption onto particulate matter in Duck River water responsible for a reduction in herbicide sampling rate, but interactions of herbicides with dissolved or colloidal matter (matter able to pass through a 0.2 μm membrane filter) also reduce the herbicide sampling rate.
PVP capped silver nanocubes assisted removal of glyphosate from water-A photoluminescence study.
Sarkar, Sumit; Das, Ratan
2017-10-05
Glyphosate [N-phosphono-methylglycine (PMG)] is the most used herbicide worldwide, and it has been reported very recently that glyphosate is very harmful and can produce a range of diseases, such as Alzheimer's and Parkinson's disease, depression, cancer and infertility, including genotoxic effects. As it is mostly present in stable water bodies and ground water systems, its detection and removal are very important. Here, we have shown a fluorescence technique for the removal of glyphosate from water using chemically synthesized polyvinylpyrrolidone (PVP) capped silver nanocrystals. Transmission Electron Microscopy (TEM) study shows an average silver nanocrystal size of approximately 100 nm with a cubic morphology. Glyphosate does not show absorption in the visible region, but both glyphosate and silver nanocrystals show strong fluorescence in the visible region. Photoluminescence study has therefore been successfully utilized to detect glyphosate in water samples; on treating the glyphosate-contaminated water sample with silver nanocrystals, the sample shows no emission peak of glyphosate at 458 nm. Thus, this approach is a promising and very rapid method for the detection and removal of glyphosate from water samples on treatment with silver nanocubes. NMR spectra further confirm that the silver nanocrystal-treated contaminated water samples are glyphosate free.
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to perform function approximation on sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. the sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. the sample size, are fitted and documented using a BP (Backpropagation) network and an RBF (Radial Basis Function) network. As comparisons, the Arrhenius model, the rarefaction model, and the power function are tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. The BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, so that the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are always less stable compared to the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate the functions. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, which is similar to the observed richness.
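An RBF network of the kind described above can be sketched compactly, since with fixed centers its output weights reduce to a linear least-squares problem. The data below are hypothetical (a saturating species-richness-style curve with noise), not the paper's rice-field samples.

```python
import numpy as np

rng = np.random.default_rng(2)
n = np.linspace(1, 100, 80)                         # sample sizes
richness = 140 * n / (n + 20) + rng.normal(0, 1.0, n.size)  # noisy curve

centers = np.linspace(1, 100, 12)                   # fixed RBF centers
width = 15.0

def features(x):
    """Gaussian radial basis features, one column per center."""
    return np.exp(-((x[:, None] - centers[None]) ** 2) / (2 * width ** 2))

# Output-layer weights of the RBF network via linear least squares.
w, *_ = np.linalg.lstsq(features(n), richness, rcond=None)
fitted = features(n) @ w
rmse = np.sqrt(np.mean((fitted - richness) ** 2))
print(f"RMSE: {rmse:.2f}")
```

This also illustrates the abstract's caveat about extrapolation: Gaussian features decay to zero outside the range of the centers, so the network's predictions beyond the sampled range fall toward zero rather than toward the curve's asymptote.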
Customer exposure to MTBE, TAME, C6 alkyl methyl ethers, and benzene during gasoline refueling.
Vainiotalo, S; Peltonen, Y; Ruonakangas, A; Pfäffli, P
1999-02-01
We studied customer exposure during refueling by collecting air samples from customers' breathing zone. The measurements were carried out during 4 days in summer 1996 at two Finnish self-service gasoline stations with "stage I" vapor recovery systems. The 95-RON (research octane number) gasoline contained approximately 2.7% methyl tert-butyl ether (MTBE), approximately 8.5% tert-amyl methyl ether (TAME), approximately 3.2% C6 alkyl methyl ethers (C6 AMEs), and 0.75% benzene. The individual exposure concentrations showed a wide log-normal distribution, with low exposures being the most frequent. In over 90% of the samples, the concentration of MTBE was higher (range <0.02-51 mg/m3) than that of TAME. The MTBE values were well below the short-term (15 min) threshold limits set for occupational exposure (250-360 mg/m3). At station A, the geometric mean concentrations in individual samples were 3.9 mg/m3 MTBE and 2.2 mg/m3 TAME. The corresponding values at station B were 2.4 and 1.7 mg/m3, respectively. The average refueling (sampling) time was 63 sec at station A and 74 sec at station B. No statistically significant difference was observed in customer exposures between the two service stations. The overall geometric means (n = 167) for an adjusted 1-min refueling time were 3.3 mg/m3 MTBE and 1.9 mg/m3 TAME. Each day an integrated breathing zone sample was also collected, corresponding to an arithmetic mean of 20-21 refuelings. The overall arithmetic mean concentrations in the integrated samples (n = 8) were 0.90 mg/m3 for benzene and 0.56 mg/m3 for C6 AMEs calculated as a group. Mean MTBE concentrations in ambient air (a stationary point in the middle of the pump island) were 0.16 mg/m3 for station A and 0.07 mg/m3 for station B. The mean ambient concentrations of TAME, C6 AMEs, and benzene were 0.031 mg/m3, approximately 0.005 mg/m3, and approximately 0.01 mg/m3, respectively, at both stations.
The mean wind speed was 1.4 m/sec and mean air temperature was 21 degrees C. Of the gasoline refueled during the study, 75% was 95 grade and 25% was 98/99 grade, with an oxygenate (MTBE) content of 12.2%.
Customer exposure to MTBE, TAME, C6 alkyl methyl ethers, and benzene during gasoline refueling.
Vainiotalo, S; Peltonen, Y; Ruonakangas, A; Pfäffli, P
1999-01-01
We studied customer exposure during refueling by collecting air samples from customers' breathing zone. The measurements were carried out during 4 days in summer 1996 at two Finnish self-service gasoline stations with "stage I" vapor recovery systems. The 95-RON (research octane number) gasoline contained approximately 2.7% methyl tert-butyl ether (MTBE), approximately 8.5% tert-amyl methyl ether (TAME), approximately 3.2% C6 alkyl methyl ethers (C6 AMEs), and 0.75% benzene. The individual exposure concentrations showed a wide log-normal distribution, with low exposures being the most frequent. In over 90% of the samples, the concentration of MTBE was higher (range <0.02-51 mg/m3) than that of TAME. The MTBE values were well below the short-term (15 min) threshold limits set for occupational exposure (250-360 mg/m3). At station A, the geometric mean concentrations in individual samples were 3.9 mg/m3 MTBE and 2. 2 mg/m3 TAME. The corresponding values at station B were 2.4 and 1.7 mg/m3, respectively. The average refueling (sampling) time was 63 sec at station A and 74 sec at station B. No statistically significant difference was observed in customer exposures between the two service stations. The overall geometric means (n = 167) for an adjusted 1-min refueling time were 3.3 mg/m3 MTBE and 1.9 mg/m3 TAME. Each day an integrated breathing zone sample was also collected, corresponding to an arithmetic mean of 20-21 refuelings. The overall arithmetic mean concentrations in the integrated samples (n = 8) were 0.90 mg/m3 for benzene and 0.56 mg/m3 for C6 AMEs calculated as a group. Mean MTBE concentrations in ambient air (a stationary point in the middle of the pump island) were 0.16 mg/m3 for station A and 0.07 mg/m3 for station B. The mean ambient concentrations of TAME, C6 AMEs, and benzene were 0.031 mg/m3, approximately 0.005 mg/m3, and approximately 0.01 mg/m3, respectively, at both stations. 
The mean wind speed was 1.4 m/sec and the mean air temperature was 21 °C. Of the gasoline refueled during the study, 75% was 95 grade and 25% was 98/99 grade, with an oxygenate (MTBE) content of 12.2%. PMID:9924009
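The geometric means reported above follow directly from the log-normal character of the exposure data: the geometric mean is the exponential of the mean of the log-concentrations. A minimal sketch with entirely hypothetical concentration values (not the study's raw data):

```python
import math

def geometric_mean(values):
    """Geometric mean: exp of the arithmetic mean of the logs."""
    logs = [math.log(v) for v in values]
    return math.exp(sum(logs) / len(logs))

# Hypothetical MTBE breathing-zone concentrations (mg/m3) for five
# refuelings; a log-normal distribution makes low values most frequent,
# as observed in the study.
samples = [0.8, 1.5, 3.0, 6.0, 24.0]
gm = geometric_mean(samples)
```

Because the distribution is strongly right-skewed, the geometric mean sits well below the arithmetic mean, which is why both summary statistics are reported separately in the abstract.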
NASA Technical Reports Server (NTRS)
Nichols, P. D.; Henson, J. M.; Guckert, J. B.; Nivens, D. E.; White, D. C.
1985-01-01
Fourier transform-infrared (FT-IR) spectroscopy has been used to rapidly and nondestructively analyze bacteria, bacteria-polymer mixtures, digester samples and microbial biofilms. Diffuse reflectance FT-IR (DRIFT) analysis of freeze-dried, powdered samples offered a means of obtaining structural information. The bacteria examined were divided into two groups. The first group was characterized by a dominant amide I band and the second group of organisms displayed an additional strong carbonyl stretch at approximately 1740 cm-1. The differences illustrated by the subtraction spectra obtained for microbes of the two groups suggest that FT-IR spectroscopy can be utilized to recognize differences in microbial community structure. Calculation of specific band ratios has enabled the composition of bacteria and extracellular or intracellular storage product polymer mixtures to be determined for bacteria-gum arabic (amide I/carbohydrate C-O approximately 1150 cm-1) and bacteria-poly-beta-hydroxybutyrate (amide I/carbonyl approximately 1740 cm-1). The key band ratios correlate with the compositions of the material and provide useful information for the application of FT-IR spectroscopy to environmental biofilm samples and for distinguishing bacteria grown under differing nutrient conditions. DRIFT spectra have been obtained for biofilms produced by Vibrio natriegens on stainless steel disks. Between 48 and 144 h, an increase in bands at approximately 1440 and 1090 cm-1 was seen in FT-IR spectra of the V. natriegens biofilm. DRIFT spectra of mixed culture effluents of anaerobic digesters show differences induced by shifts in input feedstocks. The use of flow-through attenuated total reflectance has permitted in situ real-time changes in biofilm formation to be monitored and provides a powerful tool for understanding the interactions within adherent microbial consortia.
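The band-ratio analysis described above reduces to comparing peak absorbances at two wavenumbers (e.g. amide I versus the carbonyl stretch near 1740 cm-1). A small illustrative sketch with an entirely hypothetical spectrum; the window width and peak-picking rule are our assumptions, not the paper's procedure:

```python
def band_ratio(wavenumbers, absorbance, band_a, band_b, window=10.0):
    """Ratio of peak absorbances in two bands (cm-1), taking the
    maximum absorbance within +/- window of each band centre."""
    def peak(center):
        vals = [a for w, a in zip(wavenumbers, absorbance)
                if abs(w - center) <= window]
        return max(vals)
    return peak(band_a) / peak(band_b)

# Hypothetical spectrum: amide I near 1650 cm-1, ester/PHB carbonyl
# near 1740 cm-1.
wn = [1630, 1640, 1650, 1660, 1730, 1740, 1750]
ab = [0.30, 0.55, 0.80, 0.50, 0.20, 0.40, 0.25]
ratio = band_ratio(wn, ab, 1650, 1740)  # 0.80 / 0.40
```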
Maia, A M A; Karlsson, L; Margulis, W; Gomes, A S L
2011-10-01
The aim of this paper was to evaluate a transillumination (TI) system using near-infrared (NIR) light and bitewing radiographs for the detection of early approximal enamel caries lesions. Mesiodistal sections of teeth (n = 14) were cut with various thicknesses from 1.5 mm to 4.75 mm. Both sides of each section were included, 17 approximal surfaces with natural enamel caries and 11 surfaces considered intact. The approximal surfaces were illuminated by NIR light and X-ray. Captured images were analysed by two calibrated specialists in radiology, and re-analysed after 6 months using stereomicroscope images as a gold standard. The interexaminer reliability (Kappa test statistic) for the NIR TI technique showed moderate agreement on first (0.55) and second (0.48) evaluation, and low agreement for bitewing radiographs on first (0.26) and second (0.32) evaluation. In terms of accuracy, the sensitivity for the NIR TI system was 0.88 and the specificity was 0.72. For the bitewing radiographs the sensitivity ranged from 0.35 to 0.53 and the specificity ranged from 0.50 to 0.72. In the same samples and conditions tested, NIR TI images showed reliability and the enamel caries surfaces were better identified than on dental radiographs.
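The accuracy figures quoted are standard confusion-matrix quantities, and the interexaminer reliability is Cohen's kappa. A sketch with hypothetical counts chosen to land close to the reported NIR TI numbers (17 carious surfaces, 11 intact; sensitivity 15/17 ≈ 0.88, specificity 8/11 ≈ 0.73):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' binary calls (lists of 0/1):
    observed agreement corrected for chance agreement."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)

# Hypothetical reading of the 28 surfaces by one examiner.
se, sp = sens_spec(tp=15, fn=2, tn=8, fp=3)
```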
Maia, A M A; Karlsson, L; Margulis, W; Gomes, A S L
2011-01-01
Objectives The aim of this paper was to evaluate a transillumination (TI) system using near-infrared (NIR) light and bitewing radiographs for the detection of early approximal enamel caries lesions. Methods Mesiodistal sections of teeth (n = 14) were cut with various thicknesses from 1.5 mm to 4.75 mm. Both sides of each section were included, 17 approximal surfaces with natural enamel caries and 11 surfaces considered intact. The approximal surfaces were illuminated by NIR light and X-ray. Captured images were analysed by two calibrated specialists in radiology, and re-analysed after 6 months using stereomicroscope images as a gold standard. Results The interexaminer reliability (Kappa test statistic) for the NIR TI technique showed moderate agreement on first (0.55) and second (0.48) evaluation, and low agreement for bitewing radiographs on first (0.26) and second (0.32) evaluation. In terms of accuracy, the sensitivity for the NIR TI system was 0.88 and the specificity was 0.72. For the bitewing radiographs the sensitivity ranged from 0.35 to 0.53 and the specificity ranged from 0.50 to 0.72. Conclusion In the same samples and conditions tested, NIR TI images showed reliability and the enamel caries surfaces were better identified than on dental radiographs. PMID:21960400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C.; Jin, C.; Yamauchi, H.
We report measurements of thermoelectric power (TEP) for high-pressure synthesized CuBa2Ca3Cu4O11−δ superconductors. The magnitude of TEP for the sample with Tc,zero = 115.9 K is very small and shows a sign crossover at approximately 160 K. The TEP shows a peak behavior and displays an approximately linear temperature dependence with a negative slope of −0.033 μV/K2 for 120 ≤ T ≤ 240 K. These features resemble those of other known high-Tc cuprate superconductors, in particular Sa in the a direction for an untwinned YBa2Cu3O7−δ single crystal and polycrystalline Tl-2201 samples. A brief discussion is given on the TEP behavior in comparison with CuBa2YCu2O7−δ cuprate superconductors by considering their similar structure of building blocks and type of charge reservoir. © 1996 The American Physical Society.
Aprea, Giuseppe; Mullan, William Michael; Murru, Nicoletta; Fitzgerald, Gerald; Buonanno, Marialuisa; Cortesi, Maria Luisa; Prencipe, Vincenza Annunziata; Migliorati, Giacomo
2017-09-30
This work investigated bacteriophage induced starter failures in artisanal buffalo Mozzarella production plants in Southern Italy. Two hundred and ten samples of whey starter cultures were screened for bacteriophage infection. Multiplex polymerase chain reaction (PCR) revealed phage infection in 28.56% of samples, all showing acidification problems during cheese making. Based on DNA sequences, bacteriophages for Lactococcus lactis (L. lactis), Lactobacillus delbruekii (L. delbruekii) and Streptococcus thermophilus (S. thermophilus) were detected. Two phages active against L. lactis, ΦApr-1 and ΦApr-2, were isolated and characterised. The genomes, approximately 31.4 kb and 31 kb for ΦApr-1 and ΦApr-2 respectively, consisted of double-stranded linear DNA with pac-type system. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS‑PAGE) showed one major structural protein of approximately 32.5 kDa and several minor proteins. This is the first report of phage isolation in buffalo milk and of the use of multiplex PCR to screen and study the diversity of phages against Lactic Acid Bacteria (LAB) strains in artisanal Water Buffalo Mozzarella starters.
Peng, G W; Sood, V K; Rykert, U M
1985-03-01
Bromadoline and its two N-demethylated metabolites were extracted into ether:butyl chloride after the addition of internal standard and basification of the various biological fluids (blood, plasma, serum, and urine). These compounds were then back-extracted into dilute phosphoric acid from the organic phase and separated on a reversed-phase chromatographic system using a mobile phase containing acetonitrile and a buffer of 1,4-dimethylpiperazine and perchloric acid. The overall absolute extraction recoveries of these compounds were approximately 50-80%. Background interference from the biological fluids was negligible and allowed quantitative determination of bromadoline and the metabolites at levels as low as 2-5 ng/mL. At a mobile phase flow rate of 1 mL/min, the sample components and the internal standard eluted at retention times within approximately 7-12 min. The drug- and metabolite-to-internal standard peak height ratios showed excellent linear relationships with the corresponding concentrations. The analytical method showed satisfactory within- and between-run assay precision and accuracy, and has been utilized in the simultaneous determination of bromadoline and its two N-demethylated metabolites in biological fluids collected from humans and from dogs after administration of bromadoline maleate.
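Quantitation via drug-to-internal-standard peak-height ratios rests on an ordinary linear calibration. A sketch with hypothetical calibration data (the actual ratios and concentrations are not given in the abstract):

```python
def linear_fit(x, y):
    """Least-squares slope and intercept, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: drug-to-internal-standard peak-height
# ratios measured at spiked plasma concentrations (ng/mL).
conc = [2, 5, 10, 25, 50, 100]
ratio = [0.021, 0.052, 0.10, 0.25, 0.50, 1.00]
m, b = linear_fit(conc, ratio)

# Back-calculate an unknown from its measured peak-height ratio.
unknown_ratio = 0.30
est_conc = (unknown_ratio - b) / m
```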
Ober, Allison J; Sussell, Jesse; Kilmer, Beau; Saunders, Jessica; Heckathorn, Douglas D
2016-04-01
Violent drug markets are not as prominent as they once were in the United States, but they still exist and are associated with significant crime and lower quality of life. The drug market intervention (DMI) is an innovative strategy that uses focused deterrence, community engagement, and incapacitation to reduce crime and disorder associated with these markets. Although studies show that DMI can reduce crime and overt drug activity, one perspective is prominently missing from these evaluations: those who purchase drugs. This study explores the use of respondent-driven sampling (RDS), a statistical sampling method, to approximate a representative sample of drug users who purchased drugs in a targeted DMI market, to gain insight into the effect of a DMI on market dynamics. Using RDS, we recruited individuals who reported hard drug use (crack or powder cocaine, heroin, methamphetamine, or illicit use of prescription opioids) in the last month to participate in a survey. The main survey asked about drug use, drug purchasing, and drug market activity before and after DMI; a secondary survey asked about network characteristics and recruitment. Our sample of 212 respondents met key RDS assumptions, suggesting that the characteristics of our weighted sample approximate the characteristics of the drug user network. The weighted estimates for market purchasers are generally valid for inferences about the aggregate population of customers, but a larger sample size is needed to make stronger inferences about the effects of a DMI on drug market activity. © The Author(s) 2016.
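One widely used RDS estimator, the RDS-II (Volz-Heckathorn) estimator, weights each respondent by the inverse of their reported network degree to correct for degree-biased chain recruitment; whether this exact estimator was used in the study is our assumption. A minimal sketch with hypothetical respondents:

```python
def rds_ii_proportion(traits, degrees):
    """RDS-II estimate of a trait proportion: each respondent is
    weighted by 1/degree, since high-degree people are more likely
    to be reached by peer recruitment."""
    weights = [1.0 / d for d in degrees]
    num = sum(w for w, t in zip(weights, traits) if t)
    return num / sum(weights)

# Hypothetical respondents: trait = purchased in the targeted market,
# degree = number of other hard-drug users personally known.
traits = [True, True, False, True, False]
degrees = [10, 2, 5, 4, 20]
p_hat = rds_ii_proportion(traits, degrees)
```

With equal degrees the estimator reduces to the plain sample proportion; here the naive proportion is 0.6, while down-weighting the two well-connected respondents shifts the estimate upward.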
The Apollo 17 mare basalts: Serenely sampling Taurus-Littrow
NASA Technical Reports Server (NTRS)
Neal, Clive R.; Taylor, Lawrence A.
1992-01-01
As we are all aware, the Apollo 17 mission marked the final manned lunar landing of the Apollo program. The lunar module (LM) landed approximately 0.7 km due east of Camelot Crater in the Taurus-Littrow region on the southwestern edge of Mare Serenitatis. Three extravehicular activities (EVAs) were performed, the first concentrating around the LM and including station 1, approximately 1.1 km south-southeast of the LM at the northwestern edge of Steno Crater. The second traversed approximately 8 km west of the LM to include stations 2, 3, 4, and 5, and the third EVA traversed approximately 4.5 km to the northwest of the LM to include stations 6, 7, 8, and 9. This final manned mission returned the largest quantity of lunar rock samples, 110.5 kg (243.7 lb), and included soils, breccias, highland samples, and mare basalts. This abstract concentrates upon the Apollo 17 mare basalt samples.
The Apollo 17 mare basalts: Serenely sampling Taurus-Littrow
NASA Astrophysics Data System (ADS)
Neal, Clive R.; Taylor, Lawrence A.
1992-12-01
Modelling the light-scattering properties of a planetary-regolith analog sample
NASA Astrophysics Data System (ADS)
Vaisanen, T.; Markkanen, J.; Hadamcik, E.; Levasseur-Regourd, A. C.; Lasue, J.; Blum, J.; Penttila, A.; Muinonen, K.
2017-12-01
Solving the scattering properties of asteroid surfaces can be made cheaper, faster, and more accurate with reliable physics-based electromagnetic scattering programs for large and dense random media. Existing exact methods fail to produce solutions for such large systems, so it is essential to develop approximate methods. Radiative transfer (RT) is an approximate method that works for sparse random media such as atmospheres but fails when applied to dense media. In order to make the method applicable to dense media, we have developed a radiative-transfer coherent-backscattering method (RT-CB) with incoherent interactions. To show the current progress with the RT-CB, we have modeled a planetary-regolith analog sample. The analog sample is a low-density agglomerate produced by random ballistic deposition of almost equisized silicate spheres, studied using the PROGRA2-surf experiment. The scattering properties were then computed with the RT-CB, assuming that the silicate spheres were equisized and that there was a Gaussian particle size distribution. The results were then compared to the measured data. The phase functions are normalized to unity at the 40-deg phase angle. The tentative intensity modeling shows a good match with the measured data, whereas the polarization modeling shows discrepancies. In summary, the current RT-CB modeling is promising, but more work needs to be carried out, in particular, for modeling the polarization. Acknowledgments. Research supported by European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.
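Normalizing phase functions to unity at a 40-degree phase angle, as done for the comparison above, can be sketched as follows. The data points are hypothetical, and linear interpolation between sampled angles is our assumption:

```python
def normalize_at(phase_angles_deg, intensity, anchor_deg=40.0):
    """Scale a phase curve so its (linearly interpolated) value at
    the anchor phase angle equals 1."""
    pts = sorted(zip(phase_angles_deg, intensity))
    for (a0, i0), (a1, i1) in zip(pts, pts[1:]):
        if a0 <= anchor_deg <= a1:
            f = (anchor_deg - a0) / (a1 - a0)
            ref = i0 + f * (i1 - i0)
            return [i / ref for _, i in pts]
    raise ValueError("anchor angle outside sampled range")

angles = [10, 30, 50, 70]       # hypothetical phase angles (deg)
meas = [5.0, 3.0, 2.0, 1.6]     # hypothetical relative intensities
norm = normalize_at(angles, meas)
```

Anchoring both the measured and modeled curves at the same phase angle removes the unknown absolute calibration, so only the shapes of the curves are compared.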
Thermal probe design for Europa sample acquisition
NASA Astrophysics Data System (ADS)
Horne, Mera F.
2018-01-01
The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to a depth of 10 cm and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and the energy to melt 7 cm3 of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same 16 cm length could be utilized; it would require approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect a 0.1 g sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
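The quoted melting energy can be checked with a back-of-envelope calculation. The sketch below uses our own assumed property values (ice density 0.917 g/cm3, specific heat ~2.0 J/(g K), latent heat of fusion 334 J/g, starting temperature ~100 K for a cold Europan subsurface), not values from the paper:

```python
def melt_energy_wh(volume_cm3, t_start_k=100.0):
    """Energy (Wh) to warm ice from t_start_k to 273 K and melt it.
    Property values are rough assumptions, see comments."""
    rho = 0.917      # ice density, g/cm3
    c_ice = 2.0      # specific heat of ice, J/(g K), approximate
    latent = 334.0   # latent heat of fusion, J/g
    mass = rho * volume_cm3                                # g
    joules = mass * (c_ice * (273.0 - t_start_k) + latent)
    return joules / 3600.0                                 # J -> Wh

e = melt_energy_wh(7.0)   # close to the ~1.2 Wh quoted for 7 cm3
```

Sensible heating from ~100 K contributes roughly as much as the latent heat itself, which is why the estimate roughly doubles the melt-only figure.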
Permeability During Magma Expansion and Compaction
NASA Astrophysics Data System (ADS)
Gonnermann, Helge. M.; Giachetti, Thomas; Fliedner, Céline; Nguyen, Chinh T.; Houghton, Bruce F.; Crozier, Joshua A.; Carey, Rebecca J.
2017-12-01
Plinian lapilli from the 1060 Common Era Glass Mountain rhyolitic eruption of Medicine Lake Volcano, California, were collected and analyzed for vesicularity and permeability. A subset of the samples was deformed at a temperature of 975 °C, under shear and normal stress, and post-deformation porosities and permeabilities were measured. Almost all undeformed samples fall within a narrow range of vesicularity (0.7-0.9), encompassing permeabilities between approximately 10^-15 m2 and 10^-10 m2. A percolation threshold of approximately 0.7 is required to fit the data by a power law, whereas a percolation threshold of approximately 0.5 is estimated by fitting connected and total vesicularity using percolation modeling. The Glass Mountain samples completely overlap with a range of explosively erupted silicic samples, and it remains unclear whether the erupting magmas became permeable at porosities of approximately 0.7 or at lower values. Sample deformation resulted in compaction, and vesicle connectivity either increased or decreased. At small strains the permeability of some samples increased, but at higher strains permeability decreased. Samples remain permeable down to vesicularities of less than 0.2, consistent with a potential hysteresis in permeability-porosity between expansion (vesiculation) and compaction (outgassing). We attribute this to retention of vesicle interconnectivity, albeit at reduced vesicle size, as well as bubble coalescence during shear deformation. We provide an equation that approximates the change in permeability during compaction. Based on a comparison with data from effusively erupted silicic samples, we propose that this equation can be used to model the change in permeability during compaction of effusively erupting magmas.
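Fitting permeability-porosity data by a power law above a percolation threshold, k = A(φ − φc)^n, can be sketched as a least-squares fit in log space. The data below are synthetic and the procedure is our assumption; the paper's actual fitting method may differ:

```python
import math

def fit_percolation_power_law(phi, k, phi_c):
    """Given a percolation threshold phi_c, fit
    log10 k = log10 A + n * log10(phi - phi_c)
    by least squares; returns (A, n)."""
    x = [math.log10(p - phi_c) for p in phi]
    y = [math.log10(v) for v in k]
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    a0 = 10 ** (my - slope * mx)
    return a0, slope

# Synthetic data drawn from k = 1e-11 * (phi - 0.5)^3 as a self-check.
phi = [0.6, 0.7, 0.8, 0.9]
k = [1e-11 * (p - 0.5) ** 3 for p in phi]
A, n = fit_percolation_power_law(phi, k, phi_c=0.5)
```

In practice φc is not known a priori; one would repeat the fit over candidate thresholds and keep the value that minimizes the residual, which is why the abstract can report different thresholds from different fitting strategies.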
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, A. J.; Tourret, D.; Song, Y.
We study microstructure selection during directional solidification of a thin metallic sample. We combine in situ X-ray radiography of dilute Al-Cu alloy solidification experiments with three-dimensional phase-field simulations. Here we explore a range of temperature gradient G and growth velocity V and build a microstructure selection map for this alloy. We investigate the selection of the primary dendritic spacing Λ and tip radius ρ. While ρ shows a good agreement between experimental measurements and dendrite growth theory, with ρ ~ V^-1/2, Λ is observed to increase with V (∂Λ/∂V > 0), in apparent disagreement with classical scaling laws for primary dendritic spacing, which predict that ∂Λ/∂V < 0. We show through simulations that this trend inversion for Λ(V) is due to liquid convection in our experiments, despite the thin sample configuration. We use a classical diffusion boundary-layer approximation to semi-quantitatively incorporate the effect of liquid convection into phase-field simulations. This approximation is implemented by assuming complete solute mixing outside a purely diffusive zone of constant thickness that surrounds the solid-liquid interface. This simple method enables us to quantitatively match experimental measurements of the planar morphological instability threshold and primary spacings over an order of magnitude in V. Lastly, we explain the observed inversion of ∂Λ/∂V by a combination of slow transient dynamics of microstructural homogenization and the influence of the sample thickness.
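The ρ ~ V^-1/2 tip-radius scaling can be checked on measured data with a log-log fit: the slope of log ρ versus log V is the scaling exponent. A sketch with synthetic values constructed to follow the scaling (the numbers and units are hypothetical):

```python
import math

def scaling_exponent(v, rho):
    """Slope of log(rho) vs log(V); ~ -0.5 if rho ~ V^(-1/2)."""
    x = [math.log(a) for a in v]
    y = [math.log(r) for r in rho]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# Hypothetical tip radii (um) at four growth velocities (um/s),
# generated to follow the rho ~ V^(-1/2) dendrite-growth scaling.
v = [5.0, 10.0, 20.0, 40.0]
rho = [20.0 / math.sqrt(x) for x in v]
exponent = scaling_exponent(v, rho)
```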
Clarke, A. J.; Tourret, D.; Song, Y.; ...
2017-05-01
Oxychlorine Species in Gale Crater and Broader Implications for Mars
NASA Technical Reports Server (NTRS)
Ming, Douglas W.; Sutter, Brad; Morris, Richard V.; Clark, B. C.; Mahaffy, P. H.; Archilles, C.; Wray, J. J.; Fairen, A. G.; Gellert, Ralf; Yen, Albert;
2017-01-01
Of 15 samples analyzed to date, the Sample Analysis at Mars (SAM) instrument on the Mars Science Laboratory (MSL) has detected oxychlorine compounds (perchlorate or chlorate) in 12 samples. The presence of oxychlorine species is inferred from the release of oxygen at temperatures less than 600 °C and HCl between 350-850 °C when a sample is heated to 850 °C. The O2 release temperature varies with sample, likely caused by different cations, grain size differences, or catalytic effects of other minerals. In the oxychlorine-containing samples, perchlorate abundances range from 0.06 +/- 0.03 to 1.15 +/- 0.5 wt% Cl2O7 equivalent. Comparing these results to the elemental Cl concentration measured by the Alpha Particle X-ray Spectrometer (APXS) instrument, oxychlorine species account for 5-40% of the total Cl present. The variation in oxychlorine abundance has implications for their production and preservation over time. For example, the John Klein (JK) and Cumberland (CB) samples were acquired within a few meters of each other, yet CB contained approximately 1.2 wt% Cl2O7 equivalent while JK had approximately 0.1 wt%. One difference between the two samples is that JK has a large number of veins visible in the drill hole wall, indicating more post-deposition alteration and removal. Finally, despite Cl concentrations similar to previous samples, the last three Murray formation samples (Oudam, Marimba, and Quela) had no detectable oxygen released during pyrolysis. This could be a result of oxygen reacting with other species in the sample during pyrolysis. Lab work has shown this is likely to have occurred in SAM, but it is unlikely to have consumed all the O2 released. Another explanation is that the Cl is present as chlorides, which is consistent with data from the ChemCam (Chemical Camera) and CheMin (Chemistry and Mineralogy) instruments on MSL.
For example, the Quela sample has approximately 1 wt% elemental Cl detected by APXS, had no detectable O2 release, and halite (NaCl) has been tentatively identified in its CheMin X-ray diffraction data. These data show that oxychlorines are likely globally distributed on Mars, but the distribution is heterogeneous, depending on the perchlorate formation mechanism (production rate), burial, and subsequent diagenesis.
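Comparing SAM oxychlorine abundances (reported as wt% Cl2O7 equivalent) with APXS elemental Cl requires converting one to the other via molar masses; a sketch of that arithmetic:

```python
def cl_from_cl2o7(wt_pct_cl2o7):
    """Elemental Cl (wt%) contributed by a given wt% of Cl2O7.
    Molar masses: Cl 35.45 g/mol, O 16.00 g/mol."""
    m_cl2o7 = 2 * 35.45 + 7 * 16.00    # 182.9 g/mol
    cl_fraction = 2 * 35.45 / m_cl2o7  # ~0.39 of the mass is Cl
    return wt_pct_cl2o7 * cl_fraction

# e.g. the ~1.2 wt% Cl2O7 equivalent at Cumberland corresponds to
# roughly 0.47 wt% elemental Cl, which can then be compared against
# the total Cl measured by APXS to get the oxychlorine share.
cl_cb = cl_from_cl2o7(1.2)
```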
X-Ray Properties of Lyman Break Galaxies in the Hubble Deep Field North Region
NASA Technical Reports Server (NTRS)
Nandra, K.; Mushotzky, R. F.; Arnaud, K.; Steidel, C. C.; Adelberger, K. L.; Gardner, J. P.; Teplitz, H. I.; Windhorst, R. A.; White, Nicholas E. (Technical Monitor)
2002-01-01
We describe the X-ray properties of a large sample of z approximately 3 Lyman Break Galaxies (LBGs) in the region of the Hubble Deep Field North, derived from the 1 Ms public Chandra observation. Of our sample of 148 LBGs, four are detected individually. This immediately gives a measure of the bright AGN (active galactic nuclei) fraction in these galaxies of approximately 3 per cent, which is in agreement with that derived from the UV (ultraviolet) spectra. The X-ray color of the detected sources indicates that they are probably moderately obscured. Stacking of the remainder shows a significant detection (6 sigma) with an average luminosity of 3.5 x 10^41 erg/s per galaxy in the rest frame 2-10 keV band. We have also studied a comparison sample of 95 z approximately 1 "Balmer Break" galaxies. Eight of these are detected directly, with at least two clear AGN based on their high X-ray luminosity and very hard X-ray spectra, respectively. The remainder are of relatively low luminosity (less than 10^42 erg/s), and the X-rays could arise from either AGN or rapid star formation. The X-ray colors and evidence from other wavebands favor the latter interpretation. Excluding the clear AGN, we deduce a mean X-ray luminosity of 6.6 x 10^40 erg/s, a factor of approximately 5 lower than the LBGs. The average ratio of the UV and X-ray luminosities of these star-forming galaxies, L_UV/L_X, however, is approximately the same at z = 1 as it is at z = 3. This scaling implies that the X-ray emission follows the current star formation rate, as measured by the UV luminosity. We use our results to constrain the star formation rate at z approximately 3 from an X-ray perspective. Assuming the locally established correlation between X-ray and far-IR (infrared) luminosity, the average inferred star formation rate in each Lyman break galaxy is found to be approximately 60 solar masses/yr, in excellent agreement with the extinction-corrected UV estimates.
This provides an external check on the UV estimates of the star formation rates, and on the use of X-ray luminosities to infer these rates in rapidly star-forming galaxies at high redshift.
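Star formation rates inferred from hard X-ray luminosity rely on a locally calibrated linear relation. One commonly used calibration (Ranalli, Comastri and Setti 2003) is sketched below; the paper quotes an X-ray/far-IR correlation rather than this coefficient, so treat the numerical factor as an illustrative assumption:

```python
def sfr_from_lx(lx_erg_s):
    """Star formation rate (Msun/yr) from rest-frame 2-10 keV
    luminosity (erg/s), using the Ranalli et al. (2003) calibration
    SFR ~ 2.0e-40 * L_X. Illustrative, not the paper's exact relation."""
    return 2.0e-40 * lx_erg_s

# Stacked LBG luminosity from the abstract -> ~70 Msun/yr, the same
# order as the ~60 Msun/yr quoted above.
sfr_lbg = sfr_from_lx(3.5e41)
```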
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Astrophysics Data System (ADS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-08-01
We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n approximately -2.1 on small scales (lambda less than or equal to 25 h^-1 Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (LambdaCDM) model (OMEGA h = 0.24, lambda0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma8 (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer Satellite (COBE) normalization and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum.
Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have Mlim greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).
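The direct power-spectrum computation described above can be illustrated in one dimension with an FFT-based estimator. This is a simplified 1D analogue of the survey's 3D method, with our own normalization convention, checked against a single plane wave whose power is known analytically:

```python
import numpy as np

def power_spectrum_1d(delta, box_size):
    """Direct power-spectrum estimate for a 1D overdensity field
    sampled on a regular grid: P(k) = |delta_k|^2 * L / N^2 for the
    FFT convention used by numpy.fft.rfft."""
    n = len(delta)
    dk = np.fft.rfft(delta)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    pk = (np.abs(dk) ** 2) * box_size / n ** 2
    return k[1:], pk[1:]          # drop the k = 0 (mean density) mode

# A plane wave of amplitude A contributes P = A^2 * L / 4 at its k.
L = 100.0
x = np.linspace(0, L, 512, endpoint=False)
delta = 0.2 * np.cos(2 * np.pi * 5 * x / L)   # mode number 5
k, pk = power_spectrum_1d(delta, L)            # peak at k = 2*pi*5/L
```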
Investigation of He-W interactions using DiMES on DIII-D
NASA Astrophysics Data System (ADS)
Doerner, R. P.; Rudakov, D. L.; Chrobak, C. P.; Briesemeister, A. R.; Corr, C.; De Temmerman, G.; Kluth, P.; Lasnier, C. J.; McLean, A. G.; Pace, D. C.; Pitts, R. A.; Schmitz, O.; Thompson, M.; Winters, V.
2016-02-01
Tungsten button samples were exposed to He ELMing H-mode plasma in DIII-D using 2.3 MW of electron cyclotron heating power. Prior to the exposures, the W buttons were exposed to either He or D plasma in PISCES-A for 2000 s at surface temperatures of 225-850 °C to create a variety of surfaces (surface blisters, subsurface nano-bubbles, fuzz). Erosion was spectroscopically measured from each DiMES sample, with the exception of the fuzzy W samples, which showed almost undetectable WI emission. Post-exposure grazing incidence small angle x-ray scattering surface analysis showed the formation of 1.5 nm diameter He bubbles in the surface of W buttons after only a single DIII-D (3 s, ~150 ELMs) discharge, similar to the bubble layer resulting from the 2000 s exposure in PISCES-A. No surface roughening, or damage, was detected on the samples after approximately 600 ELMs with energy density between 0.04-0.1 MJ m^-2.
Structural analysis of as-deposited and annealed low-temperature gallium arsenide
NASA Astrophysics Data System (ADS)
Matyi, R. J.; Melloch, M. R.; Woodall, J. M.
1993-04-01
The structure of GaAs grown at low substrate temperatures (LT-GaAs) by molecular beam epitaxy has been studied using high-resolution X-ray diffraction methods. Double crystal rocking curves from the as-deposited LT-GaAs show well-defined interference fringes, indicating a high level of structural perfection. Triple crystal diffraction analysis of the as-deposited sample showed significantly less diffuse scattering near the LT-GaAs 004 reciprocal lattice point compared with the substrate 004 reciprocal lattice point, suggesting that despite the incorporation of approximately 1% excess arsenic, the epitaxial layer had greater crystalline perfection than the GaAs substrate. Triple crystal scans of annealed LT-GaAs showed an increase in the integrated diffuse intensity by approximately a factor of three as the anneal temperature was increased from 700 to 900 °C. Analogous to the effects of SiO2 precipitates in annealed Czochralski silicon, the diffuse intensity is attributed to distortions of the epitaxial LT-GaAs lattice by arsenic precipitates.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, the latter first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
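The recursive-subdivision idea behind the adaptive strategy can be sketched for the simple case of a height-field surface: a triangle patch is kept if linear interpolation of its corners matches the true surface within tolerance, and is otherwise split into four sub-triangles. This is a loose illustration, not the paper's algorithm; the function names and the flatness test are assumptions.

```python
import numpy as np

def adaptive_sample(f, tri, tol, depth=0, max_depth=7, out=None):
    """Recursively subdivide a triangle over a height field z = f(x, y),
    emitting one sample (the centroid) per sufficiently flat patch."""
    if out is None:
        out = []
    a, b, c = tri
    # Midpoints of the three edges (in the xy-plane).
    mab, mbc, mca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    # Flatness test: compare the true surface height at each edge midpoint
    # with the linear interpolation of the corner heights.
    err = max(
        abs(f(*mab) - (f(*a) + f(*b)) / 2),
        abs(f(*mbc) - (f(*b) + f(*c)) / 2),
        abs(f(*mca) - (f(*c) + f(*a)) / 2),
    )
    if err <= tol or depth >= max_depth:
        out.append((a + b + c) / 3)  # keep the centroid as a sample point
        return out
    # Curved region: split into four sub-triangles and recurse, so samples
    # cluster near edges and high-curvature areas as described above.
    for sub in ((a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)):
        adaptive_sample(f, sub, tol, depth + 1, max_depth, out)
    return out

# Example: a surface with a sharp ridge near x = 0.5 concentrates samples there.
tri0 = (np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0]))
pts = adaptive_sample(lambda x, y: np.tanh(20 * (x - 0.5)), tri0, tol=1e-2)
```

A flat surface yields a single sample for the whole patch, while the ridge forces deep subdivision only where curvature is high.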
ERIC Educational Resources Information Center
Meyer, J. Patrick; Seaman, Michael A.
2013-01-01
The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
NASA Technical Reports Server (NTRS)
Allen, Carlton C.; Anderson, David; Bastien, Ron K.; Brenker, Frank E.; Flynn, George J.; Frank, David; Gainsforth, Zack; Sandford, Scott A.; Simionovici, Alexandre S.; Zolensky, Michael E.
2014-01-01
The NASA Stardust spacecraft exposed an aerogel collector to the interstellar dust passing through the solar system. We performed X-ray fluorescence element mapping and abundance measurements, for elements 19 < or = Z < or = 30, on six "interstellar candidates," potential interstellar impacts identified by Stardust@Home and extracted for analyses in picokeystones. One, I1044,3,33, showed no element hot-spots within the designated search area. However, we identified a nearby surface feature, consistent with the impact of a weak, high-speed particle having an approximately chondritic (CI) element abundance pattern, except for factor-of-ten enrichments in K and Zn and an S depletion. This hot-spot, containing approximately 10 fg of Fe, corresponds to an approximately 350 nm chondritic particle, small enough to be missed by Stardust@Home, indicating that other techniques may be necessary to identify all interstellar candidates. Only one interstellar candidate, I1004,1,2, showed a track. The terminal particle has large enrichments in S, Ti, Cr, Mn, Ni, Cu, and Zn relative to Fe-normalized CI values. It has high Al/Fe, but does not match the Ni/Fe range measured for samples of Al-deck material from the Stardust sample return capsule, which was within the field-of-view of the interstellar collector. A third interstellar candidate, I1075,1,25, showed an Al-rich surface feature that has a composition generally consistent with the Al-deck material, suggesting that it is a secondary particle. The other three interstellar candidates, I1001,1,16, I1001,2,17, and I1044,2,32, showed no impact features or tracks, but allowed assessment of submicron contamination in this aerogel, including Fe hot-spots having CI-like Ni/Fe ratios, complicating the search for CI-like interstellar/interplanetary dust.
Ma, Jing; Cheng, Jinping; Wang, Wenhua; Kunisue, Tatsuya; Wu, Minghong; Kannan, Kurunthachalam
2011-02-28
Hair samples collected from e-waste recycling workers (n=23 males, n=4 females) were analyzed to assess occupational exposures to polybrominated diphenyl ethers (PBDEs) and polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) at a large e-waste recycling facility in Taizhou, eastern China. Hair samples from a reference population composed of residents of Shanghai (n=11) were analyzed for comparison. The mean concentration of ∑PBDEs (range, 22.8-1020 ng/g dw; mean, 157 ng/g dw) found in hair samples from e-waste recycling workers was approximately 3 times higher than the mean determined for the reference samples. The congener profiles of PBDEs in hair from e-waste recycling workers were dominated by BDE 209, whereas the profiles in the reference-population samples showed comparable levels of BDE 47 and BDE 209. Total PCDD/F concentrations in hair from e-waste workers (range, 126-5820 pg/g dw; mean, 1670 pg/g dw) were approximately 18-fold greater than the concentrations measured in hair from the reference population. Concentrations of PCDFs were greater than concentrations of PCDDs, in all of the hair samples analyzed (samples from e-waste and non-e-waste sites). Tetrachlorodibenzofurans (TCDFs) were the major homologues in hair samples. Overall, e-waste recycling workers had elevated concentrations of both PBDEs and PCDD/Fs, indicating that they are exposed to high levels of multiple persistent organic pollutants. Copyright © 2010 Elsevier B.V. All rights reserved.
Microwave surface resistance of MgB2
NASA Astrophysics Data System (ADS)
Zhukov, A. A.; Purnell, A.; Miyoshi, Y.; Bugoslavsky, Y.; Lockman, Z.; Berenov, A.; Zhai, H. Y.; Christen, H. M.; Paranthaman, M. P.; Lowndes, D. H.; Jo, M. H.; Blamire, M. G.; Hao, Ling; Gallop, J.; MacManus-Driscoll, J. L.; Cohen, L. F.
2002-04-01
The microwave power and frequency dependence of the surface resistance of MgB2 films and powder samples were studied. Sample quality is relatively easy to identify by the breakdown in the ω2 law for poor-quality samples at all temperatures. The performance of MgB2 at 10 GHz and 21 K was compared directly with that of high-quality YBCO films. The surface resistance of MgB2 was found to be approximately three times higher at low microwave power and showed an onset of nonlinearity at microwave surface fields ten times lower than the YBCO film. It is clear that MgB2 films are not yet optimized for microwave applications.
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
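The core idea, fitting a spline to the empirical quantile function and then sampling by inverse transform, can be sketched as follows. This uses SciPy's monotone cubic interpolant (`PchipInterpolator`) as a stand-in for the paper's B-spline and rational spline formulations; the plotting positions and distribution are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=2000)   # sample from a known distribution

# Fit a monotone cubic spline to the empirical quantile function:
# plotting positions p_i = (i - 0.5)/n against the sorted sample.
xs = np.sort(data)
ps = (np.arange(1, xs.size + 1) - 0.5) / xs.size
quantile_spline = PchipInterpolator(ps, xs)    # monotone, so Q(p) is nondecreasing

# Generate new random samples by evaluating Q at uniform variates
# (inverse transform sampling), as in the simulation-study use case.
u = rng.uniform(ps[0], ps[-1], size=5000)
new_samples = quantile_spline(u)
```

Monotonicity of the interpolant matters here: a non-monotone cubic fit could produce a quantile function that decreases locally, which would distort the generated distribution.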
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
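The abstract does not give TEAD's score function, but the general shape of a Taylor expansion-based adaptive design can be sketched in 1-D: repeatedly add the candidate point that maximizes a hybrid score combining distance to existing samples (exploration) with the disagreement between the surrogate and a first-order Taylor extrapolation from the nearest sample (exploitation). All names, the toy model, and the particular score are assumptions, not the published method.

```python
import numpy as np

def expensive_model(x):                # stand-in for a costly simulator
    return np.sin(3 * x) + 0.5 * x**2

def adaptive_design(f, n_init=4, n_add=16, grid=None):
    """Greedy adaptive design: add the candidate with the highest hybrid
    score = (distance to nearest sample) * (first-order Taylor residual)."""
    if grid is None:
        grid = np.linspace(0.0, 2.0, 401)          # candidate pool
    X = np.linspace(grid[0], grid[-1], n_init)     # initial space-filling design
    y = f(X)
    for _ in range(n_add):
        # Cheap surrogate: piecewise-linear interpolation of current samples.
        surrogate = np.interp(grid, X, y)
        # Finite-difference gradient at each sample point.
        g = np.gradient(y, X)
        # Nearest existing sample for every candidate.
        d = np.abs(grid[:, None] - X[None, :])
        j = d.argmin(axis=1)
        dist = d[np.arange(grid.size), j]
        # First-order Taylor prediction from the nearest sample.
        taylor = y[j] + g[j] * (grid - X[j])
        score = dist * np.abs(surrogate - taylor)  # exploration * exploitation
        k = score.argmax()
        X = np.sort(np.append(X, grid[k]))
        y = f(X)                                   # re-evaluate (cheap toy model)
    return X, y

X, y = adaptive_design(expensive_model)
```

The score concentrates new samples where the function is both poorly covered and strongly nonlinear, which is the qualitative behavior the abstract attributes to TEAD.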
A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.
Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo
2018-04-01
Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines trained on the selected approximate convex hull vertices and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
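The sample-reduction effect of hull-vertex selection is easy to demonstrate in 2-D, where the convex hull can be computed exactly (here with Andrew's monotone chain, not the paper's projection-based algorithm, which is designed for higher dimensions where exact hulls are intractable). The class distribution below is an illustrative assumption.

```python
import numpy as np

def hull_vertices(points):
    """Keep only the convex-hull vertices of a 2-D point set
    (Andrew's monotone chain algorithm)."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        # Positive for a counter-clockwise turn o -> a -> b.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    # Lower hull plus upper hull (built over the reversed sequence).
    return np.array(half(pts) + half(pts[::-1]))

# One "class" of 500 samples: CHVS-style reduction keeps only the hull
# vertices, since interior points cannot change a max-margin separator.
rng = np.random.default_rng(1)
class_a = rng.normal(loc=(-2.0, 0.0), size=(500, 2))
kept_a = hull_vertices(class_a)      # typically an order of magnitude fewer points
```

The paper's contribution is doing this selection fast (and approximately) in high dimension, where the number of exact hull vertices and the cost of point-to-hull distance computations both explode.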
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between the goodness-of-fit and the model complexity. Yet estimating BME is challenging, especially for high-dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g., Laplace approximations, BIC, and KIC). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating-point number that a computer can represent) and corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, namely the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, which are the prior sampling arithmetic mean (AM) and posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, the HM, turned out to be the least vulnerable.
While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
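The underflow failure of the arithmetic-mean estimator, and the standard log-sum-exp rescue, can be shown in a few lines. The log-likelihood magnitudes below are hypothetical, chosen only so that every exponential underflows in double precision (anything below about -745 maps to 0.0).

```python
import numpy as np

rng = np.random.default_rng(2)

# With many data points, log-likelihoods sit far below log of the smallest
# representable double (~ -745), so exponentiating underflows to exactly 0.0.
loglik = rng.normal(loc=-1500.0, scale=5.0, size=10000)  # prior-sample log-likelihoods

# Naive prior-sampling arithmetic-mean estimator of BME: every term
# underflows, so the estimate collapses to 0 and its log to -inf.
naive_bme = np.mean(np.exp(loglik))

# Stable estimate of log(BME): factor out the maximum log-likelihood
# (the log-sum-exp trick) before averaging, so the largest term is exp(0) = 1.
m = loglik.max()
log_bme = m + np.log(np.mean(np.exp(loglik - m)))
```

This is exactly the "threshold on likelihood values" described above: below the smallest floating-point number, all information about the relative likelihoods is lost unless the computation is rescaled in log space.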
Approximating the Generalized Voronoi Diagram of Closely Spaced Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, John; Daniel, Eric; Pascucci, Valerio
2015-06-22
We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.
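For intuition, a brute-force GVD on a uniform 2-D grid labels each cell with its nearest object and records the distance; the bisectors between labels approximate the GVD. This is the naive baseline that the paper's octree and adaptive distance transform replace, since a uniform grid fine enough for closely spaced objects is exactly what becomes intractable. The objects and grid size are illustrative assumptions.

```python
import numpy as np

def gvd_on_grid(objects, n=200, extent=1.0):
    """Brute-force generalized Voronoi labeling on an n x n grid over
    [0, extent]^2. Each object is a point set; returns, per grid cell,
    the id of the nearest object and the distance to it."""
    ys, xs = np.mgrid[0:n, 0:n] * (extent / (n - 1))
    grid = np.stack([xs, ys], axis=-1)                 # (n, n, 2) cell centers
    dists = []
    for pts in objects:
        # Distance from every cell to the nearest point of this object.
        d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
        dists.append(d.min(axis=2))
    dists = np.stack(dists)                            # (num_objects, n, n)
    label = dists.argmin(axis=0)                       # nearest-object id per cell
    dist = dists.min(axis=0)                           # unsigned distance field
    return label, dist

# Three small 2-D objects represented as point sets.
objs = [np.array([[0.2, 0.2]]),
        np.array([[0.8, 0.3]]),
        np.array([[0.5, 0.8], [0.6, 0.8]])]
label, dist = gvd_on_grid(objs)
```

The GVD surface is where `label` changes between neighboring cells; the paper's contribution is concentrating resolution near those regions instead of refining everywhere.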
Trace copper measurements and electrical effects in LPE HgCdTe
NASA Astrophysics Data System (ADS)
Tower, J. P.; Tobin, S. P.; Norton, P. W.; Bollong, A. B.; Socha, A.; Tregilgas, J. H.; Ard, C. K.; Arlinghaus, H. F.
1996-08-01
Recent improvements in sputter initiated resonance ionization spectroscopy (SIRIS) have now made it possible to measure copper in HgCdTe films into the low 10^13 cm^-3 range. We have used this technique to show that copper is responsible for type conversion in n-type HgCdTe films. Good n-type LPE films were found to have less than 1 x 10^14 cm^-3 copper, while converted p-type samples were found to have copper concentrations approximately equal to the hole concentrations. Some compensated n-type samples with low mobilities have copper concentrations too low to account for the amount of compensation and the presence of a deep acceptor level is suggested. In order to study diffusion of copper from substrates into LPE layers, a CdTe boule was grown intentionally spiked with copper at approximately 3 x 10^16 cm^-3. Annealing HgCdTe films at 360°C was found to greatly increase the amount of copper that diffuses out of the substrates and a substrate screening technique was developed based on this phenomenon. SIRIS depth profiles showed much greater copper in HgCdTe films than in the substrates, indicating that copper is preferentially attracted to HgCdTe over Cd(Zn)Te. SIRIS spatial mapping showed that copper is concentrated in substrate tellurium inclusions 5-25 times greater than in the surrounding CdZnTe matrix.
NASA Technical Reports Server (NTRS)
Ahn, Myong K.; Eaton, Sandra S.; Eaton, Gareth R.; Meador, Mary Ann B.
1997-01-01
Prior studies have shown that free radicals generated by heating polyimides above 300 C are stable at room temperature and are involved in thermo-oxidative degradation in the presence of oxygen gas. Electron paramagnetic resonance imaging (EPRI) is a technique to determine the spatial distribution of free radicals. X-band (9.5 GHz) EPR images of PMR-15 polyimide were obtained with a spatial resolution of approximately 0.18 mm along a 2-mm dimension of the sample. In a polyimide sample that was not thermocycled, the radical distribution was uniform along the 2-mm dimension of the sample. For a polyimide sample that was exposed to thermocycling in air for 300 1-h cycles at 335 C, one-dimensional EPRI showed a higher concentration of free radicals in the surface layers than in the bulk sample. A spectral-spatial two-dimensional image showed that the EPR lineshape of the surface layer remained the same as that of the bulk. These EPRI results suggest that the thermo-oxidative degradation of PMR-15 resin involves free radicals present in the oxygen-rich surface layer.
NASA Astrophysics Data System (ADS)
Roback, R. C.; Jones, C. L.; Hull, L. C.; McLing, T. L.; Baker, K. E.; Abdel-Fattah, A. I.; Adams, J. D.; Nichols, E. M.
2003-12-01
The Vadose Zone Research Park (VZRP) provides a unique opportunity to investigate flow and transport in a thick, fractured and layered vadose zone. The VZRP includes two newly constructed percolation ponds each approximately 160000 square ft in area, which receive roughly 1.0 to 1.5 million gallons/day of uncontaminated process water. Monitoring wells and instrumented boreholes surround the percolation ponds. These are distributed in nested sets that allow continuous monitoring and sample collection along two important hydrologic contacts; one located at roughly 60' bls along a contact between alluvium and basalt and the other at 125' bls, along a sedimentary interbed in basalt. Both of these contacts support perched water zones. Hydraulic data have been collected nearly continuously since the first use of the percolation ponds in August 2002. Samples for geochemical studies were also collected during the first few weeks of discharge to the south pond to observe geochemical trends during initial wetting of the subsurface. During the summer of 2003, two tracer tests were performed. The first test consisted of injecting a conservative tracer (2,4,5-trifluorobenzoic acid) into the south pond, which had been receiving water for almost 10 months prior and for which hydraulic data indicated a steady state hydraulic system. The second tracer test was conducted in the north pond and consisted of simultaneous injection of two conservative tracers with different diffusion coefficients (2,4-difluorobenzoic acid, and Br- ion). Tracer injection coincided with the switching of water from the south to the north pond, which had been dry for 10 months prior. Thus, this test afforded us the opportunity to evaluate transport behavior in a relatively dry vadose zone, and to compare this to observed transport behavior under the earlier steady state, more saturated flow condition. 
Results from the first tracer test show tracer breakthrough in a shallow well, close to the south pond within approximately 30 hours with the peak at approximately 70 hours. In an adjacent, though deeper well located in a perched water zone at the 125' interbed, two tracer peaks were observed, one at approximately 50 hours and the other at approximately 200 hours, indicating multiple flow pathways and different travel times. Flow velocities calculated from this test are on the order of 100 m/day, in good agreement with velocities determined through hydraulic data. Initial results from the second tracer test show tracer recovery in at least four of the sampled wells. During this test, the discharge and four wells were also sampled for colloid concentration and particle size distribution. Colloid concentrations in the wells are roughly equivalent to, or larger than, those from the discharge and show sharp peaks up to an order of magnitude above background values. Comparison of colloid concentration data from the discharge, shallow wells located in the alluvium, and deeper wells in fractured basalt suggest that colloids are liberated in the alluvium and that advection through the fractured basalt does not affect the stability of the colloids. Preliminary tracer data show that tracer breakthrough in the monitoring wells occurred at similar times to colloid peaks. Further analytical work will yield breakthrough curves for the 2,4-tFBA that will be quantitatively compared with the colloid peaks.
Zeng, Cheng; Liang, Shan; Xiang, Shuwen
2017-05-01
Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. However, to use these models in practice, or to utilize, analyze, or transmit data from such systems, the models must first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than the previous similar results presented. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the novel condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is surprisingly associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends the existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
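The gap between the Euler approximation and a higher-order approximate sampled-data model can be illustrated on a scalar toy system (the dynamics, sampling period, and finite-difference Jacobian below are assumptions for illustration, not the paper's controller design).

```python
import numpy as np

def f(x):
    """Example nonlinear continuous-time dynamics dx/dt = f(x)."""
    return -x + 0.5 * np.sin(x)

def euler_step(x, T):
    # First-order (Euler) approximate sampled-data model.
    return x + T * f(x)

def taylor2_step(x, T, eps=1e-6):
    # Second-order Taylor sampled-data model: x(T) ~ x + T f + (T^2/2) f' f,
    # with df/dx approximated by a central finite difference.
    dfdx = (f(x + eps) - f(x - eps)) / (2 * eps)
    return x + T * f(x) + 0.5 * T**2 * dfdx * f(x)

# Compare both discretizations against a finely resolved reference solution.
T, steps = 0.2, 25
x_euler = x_taylor = x_ref = 2.0
for _ in range(steps):
    x_euler = euler_step(x_euler, T)
    x_taylor = taylor2_step(x_taylor, T)
    for _ in range(1000):                  # reference: many tiny Euler sub-steps
        x_ref += (T / 1000) * f(x_ref)

err_euler = abs(x_euler - x_ref)           # O(T) global error
err_taylor = abs(x_taylor - x_ref)         # O(T^2) global error
```

At the same sampling period the second-order model tracks the exact sampled-data model far more closely, which is why controller designs based on higher-order approximate models can tolerate larger sampling periods than Euler-based ones.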
Charcoal as a capture material for silver nanoparticles in the aquatic environment
NASA Astrophysics Data System (ADS)
McGillicuddy, Eoin; Morrison, Liam; Cormican, Martin; Morris, Dearbháile
2017-04-01
Background: The reported antibacterial activity of silver nanoparticles (AgNPs) has led to their incorporation into numerous consumer products including textiles, domestic appliances, food containers, cosmetics, paints, and medical and medicinal products. The AgNPs incorporated into these products can be released into the environment and aquatic system during their production, use and end-of-life disposal. In the aquatic environment, uncertainties surround the concentration, fate and effects of AgNPs. The aim of this project is to examine charcoal as a potential material for capturing silver nanoparticles from the aquatic environment. Material/methods: Activated charcoal is a commonly used filter material and was selected for this project to determine its suitability as a capture material for AgNPs in water samples. Activated charcoal (Norit® CA1 (Sigma-Aldrich)) was exposed to 100 ppb, 25 nm PVP-coated AgNPs (nanoComposix) prepared in Milli-Q water. These solutions were exposed to unaltered charcoal granules for 20 hours, after which the decrease of silver in the solution was measured using ICP-MS. In order to improve the removal, the surface area of the charcoal was increased, firstly by grinding with a pestle and mortar and secondly by milling the charcoal. The milled charcoal was prepared using an agate ball mill running at 500 rpm for 5 minutes. The activated charcoal was then exposed to samples containing 10 ppb AgNPs. Results: In the initial tests, approximately 10% of the silver was removed from the water samples using the unaltered activated charcoal granules. Further experiments were carried out to compare the unaltered granules with the ground and milled charcoal. These tests were carried out similarly to the previous test; however, a lower concentration of 10 ppb was used.
After 20 hours of exposure, the granule samples, as previously, showed approximately a 10% reduction in silver content, while the ground charcoal gave approximately a 30% reduction in silver concentration, and the sample exposed to milled charcoal showed approximately a 60% reduction. These tests found that increasing the surface area of the charcoal increased the silver removal from the solution. Conclusions: Data suggest that charcoal may be a suitable material for use in the capture of AgNPs from water samples.
NASA Astrophysics Data System (ADS)
Harrington, David M.; Sueoka, Stacey R.
2017-01-01
We outline polarization performance calculations and predictions for the Daniel K. Inouye Solar Telescope (DKIST) optics and show Mueller matrices for two of the first light instruments. Telescope polarization is due to polarization-dependent mirror reflectivity and rotations between groups of mirrors as the telescope moves in altitude and azimuth. The Zemax optical modeling software has polarization ray-trace capabilities and predicts system performance given a coating prescription. We develop a model coating formula that approximates measured witness sample polarization properties. We estimate the DKIST telescope Mueller matrix as a function of wavelength, azimuth, elevation, and field angle for the cryogenic near infra-red spectro-polarimeter (CryoNIRSP) and visible spectro-polarimeter. Footprint variation is substantial and shows vignetted field points will have strong polarization effects. We estimate 2% variation of some Mueller matrix elements over the 5-arc min CryoNIRSP field. We validate the Zemax model by showing limiting cases for flat mirrors in collimated and powered designs that compare well with theoretical approximations and are testable with lab ellipsometers.
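The basic mechanism described above, polarization-dependent mirror reflectivity combined with rotations between mirror groups, amounts to composing Mueller matrices with rotation matrices. A minimal sketch follows; the reflectance and retardance numbers are placeholders, not DKIST coating values.

```python
import numpy as np

def rot_mueller(theta):
    """Mueller matrix for a rotation of the polarization frame by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s, c, 0],
                     [0, 0, 0, 1.0]])

def mirror_mueller(p, q, delta):
    """Idealized mirror: amplitude reflectances p and q along the s and p
    directions, and retardance delta (radians) between them."""
    a = 0.5 * (p**2 + q**2)
    b = 0.5 * (p**2 - q**2)
    c, s = p * q * np.cos(delta), p * q * np.sin(delta)
    return np.array([[a, b, 0, 0],
                     [b, a, 0, 0],
                     [0, 0, c, s],
                     [0, 0, -s, c]])

# System Mueller matrix: two mirror groups with a frame rotation between
# them, as happens between altitude and azimuth groups while tracking.
M1 = mirror_mueller(0.95, 0.93, np.deg2rad(170.0))
M2 = mirror_mueller(0.95, 0.93, np.deg2rad(170.0))
M_sys = M2 @ rot_mueller(np.deg2rad(30.0)) @ M1

# Unpolarized input acquires polarization from the mirrors' diattenuation.
stokes_out = M_sys @ np.array([1.0, 0.0, 0.0, 0.0])
```

Because the rotation angle changes with telescope pointing, the system Mueller matrix is a function of azimuth and elevation even when the individual mirror matrices are fixed, which is what drives the tabulation over pointing described above.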
NASA Technical Reports Server (NTRS)
Rabenberg, Ellen; Kaukler, William; Grugel, Richard
2015-01-01
Two sets of epoxy mixtures, both containing the same ionic liquid (IL) based resin but utilizing two different curing agents, were evaluated after spending more than two years of continual space exposure outside the International Space Station on the MISSE-8 sample rack. During this period the samples, positioned on the nadir side, also experienced some 12,500 thermal cycles between approximately -40 °C and +40 °C. Initial examination showed some color change, a minuscule weight variance, and no cracks or de-bonding from the sample substrate. Microscopic examination of the surface revealed some slight deformities and pitting. These observations, and others, are discussed in view of the ground-based control samples. Finally, the impetus of this study in terms of space applications is presented.
Evaluation of a standardized micro-vacuum sampling method for collection of surface dust.
Ashley, Kevin; Applegate, Gregory T; Wise, Tamara J; Fernback, Joseph E; Goldcamp, Michael J
2007-03-01
A standardized procedure for collecting dust samples from surfaces using a micro-vacuum sampling technique was evaluated. Experiments were carried out to investigate the collection efficiency of the vacuum sampling method described in ASTM Standard D7144, "Standard Practice for Collection of Surface Dust by Micro-Vacuum Sampling for Subsequent Metals Determination." Weighed masses (approximately 5, 10, and 25 mg) of three NIST Standard Reference Materials (SRMs) were spiked onto surfaces of various substrates. The SRMs used were: (1) Powdered Lead-Based Paint; (2) Urban Particulate Matter; and (3) Trace Elements in Indoor Dust. Twelve different substrate materials were chosen to be representative of surfaces commonly encountered in occupational and/or indoor settings: (1) wood, (2) tile, (3) linoleum, (4) vinyl, (5) industrial carpet, (6) plush carpet, (7,8) concrete block (painted and unpainted), (9) car seat material, (10) denim, (11) steel, and (12) glass. Samples of SRMs originally spiked onto these surfaces were collected using the standardized micro-vacuum sampling procedure. Gravimetric analysis of material collected within preweighed Accucap inserts (housed within the samplers) was used to measure SRM recoveries. Recoveries ranged from 21.6% (+/- 10.4%, 95% confidence limit [CL]) for SRM 1579 from industrial carpet to 59.2% (+/- 11.0%, 95% CL) for SRM 1579 from glass. For most SRM/substrate combinations, recoveries ranged from approximately 25% to approximately 50%; variabilities differed appreciably. In general, SRM recoveries were higher from smooth and hard surfaces and lower from rough and porous surfaces. Material captured within collection nozzles attached to the sampler inlets was also weighed. A significant fraction of SRM originally spiked onto substrate surfaces was captured within collection nozzles.
Percentages of SRMs captured within collection nozzles ranged from approximately 13% (+/- 4 to +/- 5%, 95% CLs) for SRMs 1579 and 2583 from industrial carpet to approximately 45% (+/- 7 to +/- 26%, 95% CLs) for SRM 1648 from glass, tile and steel. For some substrates, loose material from the substrate itself (i.e., substrate particles and fibers) was sometimes collected along with the SRM, within both Accucaps and collection nozzles. Co-collection of substrate material can bias results and contribute to sampling variability. The results of this work have provided performance data on the standardized micro-vacuum sampling procedure.
Characterization of air contaminants formed by the interaction of lava and sea water.
Kullman, G J; Jones, W G; Cornwell, R J; Parker, J E
1994-05-01
We made environmental measurements to characterize contaminants generated when basaltic lava from Hawaii's Kilauea volcano enters sea water. This interaction of lava with sea water produces large clouds of mist (LAZE). Island winds occasionally directed the LAZE toward the adjacent village of Kalapana and the Hawaii Volcanoes National Park, creating health concerns. Environmental samples were taken to measure airborne concentrations of respirable dust, crystalline silica and other mineral compounds, fibers, trace metals, inorganic acids, and organic and inorganic gases. The LAZE contained quantifiable concentrations of hydrochloric acid (HCl) and hydrofluoric acid (HF); HCl was predominant. HCl and HF concentrations were highest in dense plumes of LAZE near the sea. The HCl concentration at this sampling location averaged 7.1 ppm; this exceeds the current occupational exposure ceiling of 5 ppm. HF was detected in nearly half the samples, but all concentrations were <1 ppm. Sulfur dioxide was detected in one of four short-term indicator tube samples at approximately 1.5 ppm. Airborne particulates were composed largely of chloride salts (predominantly sodium chloride). Crystalline silica concentrations were below detectable limits, less than approximately 0.03 mg/m3 of air. Settled dust samples showed a predominance of glass flakes and glass fibers. Airborne fibers were detected at quantifiable levels in 1 of 11 samples. These fibers were composed largely of hydrated calcium sulfate. These findings suggest that individuals should avoid concentrated plumes of LAZE near its origin to prevent overexposure to inorganic acids, specifically HCl.
Cutaway line drawing of STS-34 middeck experiment Polymer Morphology (PM)
NASA Technical Reports Server (NTRS)
1989-01-01
Cutaway line drawing shows components of STS-34 middeck experiment Polymer Morphology (PM). Components include the EAC, heat exchanger, sample cell control (SCC), sample cells, source, interferometer, electronics, carousel drive, infrared (IR) beam, and carousel. PM, a 3M-developed organic materials processing experiment, is designed to explore the effects of microgravity on polymeric materials as they are processed in space. The samples of polymeric materials being studied in the PM experiment are thin films (25 microns or less) approximately 25mm in diameter. The samples are mounted between two infrared transparent windows in a specially designed infrared cell that provides the capability of thermally processing the samples to 200 degrees Celsius with a high degree of thermal control. The samples are mounted on a carousel that allows them to be positioned, one at a time, in the infrared beam where spectra may be acquired. The Generic Electronics Module (GEM) provides all carousel and
Low, Dennis J.; Chichester, Douglas C.
2006-01-01
This study, by the U.S. Geological Survey (USGS) in cooperation with the Pennsylvania Department of Environmental Protection (PADEP), provides a compilation of ground-water-quality data for a 25-year period (January 1, 1979, through August 11, 2004) based on water samples from wells. The data are from eight source agencies: Borough of Carroll Valley, Chester County Health Department, Pennsylvania Department of Environmental Protection-Ambient and Fixed Station Network, Montgomery County Health Department, Pennsylvania Drinking Water Information System, Pennsylvania Department of Agriculture, Susquehanna River Basin Commission, and the U.S. Geological Survey. The ground-water-quality data from the different source agencies varied in type and number of analyses; however, the analyses are represented by 12 major analyte groups: biological (bacteria and viruses), fungicides, herbicides, insecticides, major ions, minor ions (including trace elements), nutrients (dominantly nitrate and nitrite as nitrogen), pesticides, radiochemicals (dominantly radon or radium), volatile organic compounds, wastewater compounds, and water characteristics (dominantly field pH, field specific conductance, and hardness). A summary map shows the areal distribution of wells with ground-water-quality data statewide and by major watersheds and source agency. Maps of 35 watersheds within Pennsylvania are used to display the areal distribution of water-quality information. Additional maps emphasize the areal distribution with respect to 13 major geolithologic units in Pennsylvania and concentration ranges of nitrate (as nitrogen). Summary data tables by source agency provide information on the number of wells and samples collected for each of the 35 watersheds and analyte groups. The number of wells sampled for ground-water-quality data varies considerably across Pennsylvania.
Of the 8,012 wells sampled, the greatest concentration of wells is in the southeast (Berks, Bucks, Chester, Delaware, Lancaster, Montgomery, and Philadelphia Counties), in the vicinity of Pittsburgh, and in the northwest (Erie County). The number of wells sampled is relatively sparse in south-central (Adams, Cambria, Cumberland, and Franklin Counties), central (Centre, Indiana, and Snyder Counties), and north-central (Bradford, Potter, and Tioga Counties) Pennsylvania. Little to no data are available for approximately one-third of the state. Water characteristics and nutrients were the most frequently sampled major analyte groups; approximately 21,000 samples were collected for each group. Major and minor ions were the next most-frequently sampled major analyte groups; approximately 17,000 and 12,000 samples were collected, respectively. For the remaining eight major analyte groups, the number of samples collected ranged from a low of 307 samples (wastewater compounds) to a high of approximately 3,000 samples (biological). The number of samples that exceeded a maximum contaminant level (MCL) or secondary maximum contaminant level (SMCL) by major analyte group also varied. Of the 2,988 samples in the biological analyte group, 53 percent had water that exceeded an MCL. Almost 2,500 samples were collected and analyzed for volatile organic compounds; 14 percent exceeded an MCL. Other major analyte groups that frequently exceeded MCLs or SMCLs included major ions (17,465 samples and a 33.9 percent exceedance), minor ions (11,905 samples and a 17.1 percent exceedance), and water characteristics (21,183 samples and a 20.3 percent exceedance). Samples collected and analyzed for fungicides, herbicides, insecticides, and pesticides (4,062 samples), radiochemicals (1,628 samples), wastewater compounds (307 samples), and nutrients (20,822 samples) had the lowest exceedances of 0.3, 8.4, 0.0, and 8.8 percent, respectively.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
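The two-stage estimator described above can be sketched in a few lines. The failure region, bounding set, and distributions below are hypothetical stand-ins (a disk bounded by a square under uniform parameters), not the paper's examples: if the failure set F lies inside a set B whose probability is known analytically, then P(F) = P(F | B) * P(B), and only the conditional factor needs Monte Carlo samples, all drawn inside B.

```python
import random

def estimate_failure_probability(is_failure, sample_in_box, p_box, n=100_000, seed=0):
    """Estimate P(failure) = P(failure | box) * P(box) by sampling only
    inside a bounding set whose probability p_box is known analytically."""
    rng = random.Random(seed)
    hits = sum(is_failure(sample_in_box(rng)) for _ in range(n))
    return p_box * hits / n

# Hypothetical example: uncertain parameters uniform on [0, 1]^2; the system
# fails inside a disk of radius 0.05 centred at (0.5, 0.5).  The failure set
# is bounded by the square [0.45, 0.55]^2, whose probability is 0.01.
center, radius = (0.5, 0.5), 0.05
is_failure = lambda p: (p[0] - center[0])**2 + (p[1] - center[1])**2 <= radius**2
sample_in_box = lambda rng: (rng.uniform(0.45, 0.55), rng.uniform(0.45, 0.55))

p_fail = estimate_failure_probability(is_failure, sample_in_box, p_box=0.01)
# Exact value is pi * 0.05**2 ~ 0.00785; naive sampling over the whole unit
# square would waste ~99% of its points outside the bounding set.
```

Because every sample lands inside the bounding set, the conditional estimator attains a given relative accuracy with roughly 1/P(B) fewer model evaluations than naive Monte Carlo.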
Surface-hopping dynamics and decoherence with quantum equilibrium structure.
Grunwald, Robbie; Kim, Hyojoon; Kapral, Raymond
2008-04-28
In open quantum systems, decoherence occurs through interaction of a quantum subsystem with its environment. The computation of expectation values requires a knowledge of the quantum dynamics of operators and sampling from initial states of the density matrix describing the subsystem and bath. We consider situations where the quantum evolution can be approximated by quantum-classical Liouville dynamics and examine the circumstances under which the evolution can be reduced to surface-hopping dynamics, where the evolution consists of trajectory segments exclusively evolving on single adiabatic surfaces, with probabilistic hops between these surfaces. The justification for the reduction depends on the validity of a Markovian approximation on a bath averaged memory kernel that accounts for quantum coherence in the system. We show that such a reduction is often possible when initial sampling is from either the quantum or classical bath initial distributions. If the average is taken only over the quantum dispersion that broadens the classical distribution, then such a reduction is not always possible.
Chemical reactions and morphological stability at the Cu/Al2O3 interface.
Scheu, C; Klein, S; Tomsia, A P; Rühle, M
2002-10-01
The microstructures of diffusion-bonded Cu/(0001)Al2O3 bicrystals annealed at 1000 degrees C at oxygen partial pressures of 0.02 or 32 Pa have been studied with various microscopy techniques ranging from optical microscopy to high-resolution transmission electron microscopy. The studies revealed that for both oxygen partial pressures a 20-35 nm thick interfacial CuAlO2 layer formed, which crystallises in the rhombohedral structure. However, the CuAlO2 layer is not continuous, but interrupted by many pores. In the samples annealed in the higher oxygen partial pressure an additional reaction phase with a needle-like structure was observed. The needles are several millimetres long, approximately 10 microm wide and approximately 1 microm thick. They consist of CuAlO2 with alternating rhombohedral and hexagonal structures. Solid-state contact angle measurements were performed to derive values for the work of adhesion. The results show that the adhesion is twice as good for the annealed specimen compared to the as-bonded sample.
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
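A minimal sketch of how such a Win-Stay, Lose-Sample scheme might look, assuming a toy two-hypothesis causal task and one particular stay rule (keep the current hypothesis with probability equal to the likelihood of the new datum). The task, likelihood values, and helper names are illustrative, not the authors' implementation:

```python
import random

def prod_likelihood(data, h, likelihood):
    """Product of per-datum likelihoods under hypothesis h."""
    p = 1.0
    for d in data:
        p *= likelihood(d, h)
    return p

def win_stay_lose_sample(hypotheses, prior, likelihood, data, rng):
    """Win-Stay, Lose-Sample sketch: keep the current hypothesis with
    probability equal to the likelihood of the new datum ("win-stay");
    otherwise resample from the posterior over all data seen so far
    ("lose-sample")."""
    h = rng.choices(hypotheses, weights=[prior[x] for x in hypotheses])[0]
    seen = []
    for d in data:
        seen.append(d)
        if rng.random() > likelihood(d, h):  # "lose": resample from posterior
            weights = [prior[x] * prod_likelihood(seen, x, likelihood)
                       for x in hypotheses]
            h = rng.choices(hypotheses, weights=weights)[0]
    return h

# Hypothetical blicket-detector task: either block 'A' or block 'B' activates
# the machine.  Each datum is (block placed, machine lit); outcomes agree
# with the true cause with probability 0.9.
def likelihood(d, h):
    block, lit = d
    return 0.9 if lit == (block == h) else 0.1

prior = {'A': 0.5, 'B': 0.5}
data = [('A', True), ('B', False), ('A', True)]
rng = random.Random(1)
answers = [win_stay_lose_sample(['A', 'B'], prior, likelihood, data, rng)
           for _ in range(200)]
# Most runs settle on 'A', tracking the near-certain Bayesian posterior.
```

The appeal of the scheme is that full posterior computation happens only on "lose" steps, so an agent that is usually right rarely needs to do the expensive update.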
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling; the second is a multiple-cut Benders decomposition, which enhances computational performance. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of the multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
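Sample average approximation itself is generic; a minimal sketch on a stand-in problem (a newsvendor model with invented prices and demand distribution, rather than the paper's hub-location formulation) shows the core idea of replacing the expectation by an average over sampled scenarios:

```python
import random

def saa_newsvendor(price, cost, demand_sampler, n_scenarios, seed=0):
    """Sample average approximation (SAA) sketch: replace the expectation
    over random demand by an average over n_scenarios sampled scenarios,
    then pick the order quantity maximizing the sampled average profit.
    A stand-in for the paper's two-stage stochastic program, not its model."""
    rng = random.Random(seed)
    demands = [demand_sampler(rng) for _ in range(n_scenarios)]

    def avg_profit(x):
        # Second-stage recourse: sell min(x, d) at `price`, having paid
        # `cost` per unit ordered.
        return sum(price * min(x, d) - cost * x for d in demands) / n_scenarios

    return max(range(201), key=avg_profit)

# Invented data: demand uniform on {0, ..., 100}, sell at 4, buy at 1.
# The true optimum is the (p - c)/p = 0.75 quantile of demand, so the
# SAA solution should land near 75.
x_star = saa_newsvendor(price=4.0, cost=1.0,
                        demand_sampler=lambda rng: rng.randint(0, 100),
                        n_scenarios=5000)
```

As the abstract notes for the hub model, the quality of the SAA solution is judged by an optimality gap estimated from the sampled scenarios; here the closed-form quantile solution plays that role.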
Limitations of Poisson statistics in describing radioactive decay.
Sitek, Arkadiusz; Celler, Anna M
2015-12-01
The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than or even comparable with the half-lives of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. The formalism for the statistics of radioactive decay based on the binomial distribution is derived. The theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. The variance of the number of decays estimated using the Poisson distribution is significantly different from the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. Durations of nuclear medicine studies often are relatively long; they may be even a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations Poisson statistics are unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, the above statement does not directly apply to counting statistics at the level of event detection. The low sensitivities of detectors used in imaging studies make the Poisson approximation near perfect. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
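The binomial-versus-Poisson deviation described here has a simple closed form: with N0 atoms observed for time t, the number of decays is Binomial(N0, p) with p = 1 - 2^(-t/T_half), so the true variance N0*p*(1-p) falls below the Poisson value N0*p by the factor (1 - p). A small sketch (the 122 s half-life is the approximate value for (15)O):

```python
def decay_variance_ratio(half_life, acquisition_time):
    """Ratio of the true (binomial) variance of the number of decays to the
    variance assumed under the Poisson approximation.  With decay
    probability p = 1 - 2**(-t/T_half), Var_true = N0*p*(1-p) while the
    Poisson model assumes N0*p, so the ratio is simply (1 - p)."""
    p = 1.0 - 2.0 ** (-acquisition_time / half_life)
    return 1.0 - p

# (15)O has a half-life of roughly 122 s; over a 5-minute (300 s)
# acquisition the Poisson model overstates the variance by a factor ~5.5.
ratio = decay_variance_ratio(122.0, 300.0)   # ~0.18
```

For acquisitions much shorter than the half-life the ratio is near 1 and the Poisson approximation is safe; it degrades monotonically as the acquisition time grows.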
Sample distribution in peak mode isotachophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il
We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes which better represent real buffer systems and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model for studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective on-rate which depends on reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for the coalescent with large sample sizes. The obtained results are based on computational methodologies which combine coalescence time scale changes with techniques of integral transformations and analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluating the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analyzing a large human mitochondrial DNA dataset. PMID:28170404
Furuta, Etsuko; Ohyama, Ryu-ichiro; Yokota, Shigeaki; Nakajo, Toshiya; Yamada, Yuka; Kawano, Takao; Uda, Tatsuhiko; Watanabe, Yasuo
2014-11-01
The detection efficiency of tritium samples measured using a liquid scintillation counter with a hydrophilic plastic scintillator (PS) was approximately 48% when a 20 μL sample was held between two plasma-treated PS sheets. The activity and count rates showed a good relationship between 400 Bq mL(-1) and 410 kBq mL(-1). The calculated detection limit of a 2 min measurement by the PS was 13 Bq mL(-1) at a 95% confidence level. The plasma method for PS produces no radioactive waste. Copyright © 2014 Elsevier Ltd. All rights reserved.
Two dimensional eye tracking: Sampling rate of forcing function
NASA Technical Reports Server (NTRS)
Hornseth, J. P.; Monk, D. L.; Porterfield, J. L.; Mcmurry, R. L.
1978-01-01
A study was conducted to determine the minimum update rate of a forcing function display required for the operator to approximate the tracking performance obtained on a continuous display. In this study, frequency analysis was used to determine whether there was an associated change in the transfer function characteristics of the operator. It was expected that as the forcing function display update rate was reduced, from 120 to 15 samples per second, the operator's response to the high frequency components of the forcing function would show a decrease in gain, an increase in phase lag, and a decrease in coherence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baur, N.; Liew, T.C.; Todt, W.
1991-07-01
The authors present U-Pb zircon isotopic data from locally restricted prograde (arrested in situ charnockitization) and retrograde metamorphic transition zones, which are well exposed in Proterozoic orthogneisses tectonically interbanded with granulite facies supracrustal rocks of the Highland Group in Sri Lanka. These granitoid rocks yield apparent ages of 1942 ± 22 Ma, ~770 Ma, ~660 Ma, and ~560 Ma. All samples show severe Pb-loss some 550-560 Ma ago. The main phase of granulite-formation could not be dated unambiguously but is bracketed between ~660 Ma and ~550 Ma. The pervasive Pb-loss event around 550-560 Ma reflects the end of this period of high-grade metamorphism and was associated with widespread igneous activity and retrogression. This is constrained by the 550 ± 3 Ma intrusion age for a post-tectonic granite. They relate this late phase of thermal activity to crustal uplift of the Sri Lankan granulites. These data unambiguously prove the high-grade history of the Sri Lanka gneisses to be a late Precambrian event that may be related to the Pan-African evolution along the eastern part of Africa.
Polyurethane foam (PUF) disks passive air samplers: wind effect on sampling rates.
Tuduri, Ludovic; Harner, Tom; Hung, Hayley
2006-11-01
Different passive sampler housings were evaluated for their wind dampening ability and how this might translate into variability in sampler uptake rates. Polyurethane foam (PUF) disk samplers were used as the sampling medium and were exposed to a PCB-contaminated atmosphere in a wind tunnel. The effect of outside wind speed on PUF disk sampling rates was evaluated by exposing the disks to the PCB-contaminated air stream at air velocities in the range 0 to 1.75 m s-1. PUF disk sampling rates increased gradually from approximately 4.5 to 14.6 m3 d-1 over the range 0-0.9 m s-1 and then increased sharply to approximately 42 m3 d-1 at approximately 1.75 m s-1 (sum of PCBs). The results indicate that for most field deployments the conventional 'flying saucer' housing adequately dampens the wind effect and will yield approximately time-weighted air concentrations.
How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.
Hittner, James B; May, Kim
2012-01-01
The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
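The formula under evaluation is simply r ≈ Z/√N; a minimal sketch (function and variable names are illustrative) that converts a reported one-tailed p-value into the approximated effect size:

```python
import math
from statistics import NormalDist

def r_from_z(p_one_tailed, n_total):
    """Pearson r-from-Z sketch: convert a one-tailed p-value into the
    standard normal deviate Z and divide by the square root of the total
    (pooled) sample size, as in the formula evaluated by the abstract."""
    z = NormalDist().inv_cdf(1.0 - p_one_tailed)
    return z / math.sqrt(n_total)

# Example: a report gives only a one-tailed p = 0.025 and N = 100;
# Z is about 1.96, so the approximated effect size is r of about 0.196.
r = r_from_z(0.025, 100)
```

The simulations summarized above caution that this shortcut can be very inaccurate when N is as small as 10 and the underlying effect is small, so it is best reserved for moderate-to-large samples.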
Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology
Murakami, Yohei
2014-01-01
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework known as approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the un-identifiability of the representative values of parameters, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection based on the Bayes factor. PMID:25089832
ERIC Educational Resources Information Center
Guo, Jiin-Huarng; Luh, Wei-Ming
2008-01-01
This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolov, E.L.; Yeh, F.; Khokhlov, A.
1996-12-25
Studies of slightly cross-linked polycationic gels interacting with anionic surfactants have been performed by using random copolymers of poly(diallyldimethylammonium chloride) (PDADMACl) and polyacrylamide (PAAm) with varying content of PDADMACl and degree of cross-linking. Gel samples which had been fully swollen in water were placed in aqueous solutions of sodium alkyl sulfates (octyl (SOS), decyl (SDCS), dodecyl (SDS), tetradecyl (STS), and hexyl (SHS) sulfates). The degree of the sample volume contraction depends on the PDADMACl content. The collapsed gel-surfactant complexes were studied using synchrotron small-angle X-ray scattering. All studied samples containing PDADMACl exhibited pronounced supramolecular nanostructures. The gel-SDCS complex exhibited a cubic structure with a periodicity (7.75 nm) of approximately 4 times the surfactant molecular length, while the gel-SDS, gel-STS, and gel-SHS complexes showed hexagonal supramolecular ordering with a periodicity of approximately 2 times the surfactant molecular length. The d spacing of the longest periodicity in the complexes was dependent on the PDADMACl content and the surfactant. The d spacing generally increased with decreasing PDADMACl (charge) content and increasing number of carbon atoms in the surfactant alkyl chain. 20 refs., 11 figs., 5 tabs.
Wilbe Ramsay, Karin; Alaeus, Annette; Albert, Jan; Leitner, Thomas
2011-01-01
The molecular evolution of HIV-1 is characterized by frequent substitutions, indels and recombination events. In addition, a HIV-1 population may adapt through frequency changes of its variants. To reveal such population dynamics we analyzed HIV-1 subpopulation frequencies in an untreated patient with stable, low plasma HIV-1 RNA levels and close to normal CD4+ T-cell levels. The patient was intensively sampled during a 32-day period as well as approximately 1.5 years before and after this period (days −664, 1, 2, 3, 11, 18, 25, 32 and 522). 77 sequences of HIV-1 env (approximately 3100 nucleotides) were obtained from plasma by limiting dilution with 7–11 sequences per time point, except day −664. Phylogenetic analysis using maximum likelihood methods showed that the sequences clustered in six distinct subpopulations. We devised a method that took into account the relatively coarse sampling of the population. Data from days 1 through 32 were consistent with constant within-patient subpopulation frequencies. However, over longer time periods, i.e. between days 1…32 and 522, there were significant changes in subpopulation frequencies, which were consistent with evolutionarily neutral fluctuations. We found no clear signal of natural selection within the subpopulations over the study period, but positive selection was evident on the long branches that connected the subpopulations, which corresponds to >3 years as the subpopulations already were established when we started the study. Thus, selective forces may have been involved when the subpopulations were established. Genetic drift within subpopulations caused by de novo substitutions could be resolved after approximately one month. Overall, we conclude that subpopulation frequencies within this patient changed significantly over a time period of 1.5 years, but that this does not imply directional or balancing selection. 
We show that the short-term evolution we study here is likely representative for many patients of slow and normal disease progression. PMID:21829600
Okimoto, Gordon; Zeinalzadeh, Ashkan; Wenska, Tom; Loomis, Michael; Nation, James B; Fabre, Tiphaine; Tiirikainen, Maarit; Hernandez, Brenda; Chan, Owen; Wong, Linda; Kwee, Sandi
2016-01-01
Technological advances enable the cost-effective acquisition of Multi-Modal Data Sets (MMDS) composed of measurements for multiple, high-dimensional data types obtained from a common set of bio-samples. The joint analysis of the data matrices associated with the different data types of a MMDS should provide a more focused view of the biology underlying complex diseases such as cancer that would not be apparent from the analysis of a single data type alone. As multi-modal data rapidly accumulate in research laboratories and public databases such as The Cancer Genome Atlas (TCGA), the translation of such data into clinically actionable knowledge has been slowed by the lack of computational tools capable of analyzing MMDSs. Here, we describe the Joint Analysis of Many Matrices by ITeration (JAMMIT) algorithm that jointly analyzes the data matrices of a MMDS using sparse matrix approximations of rank-1. The JAMMIT algorithm jointly approximates an arbitrary number of data matrices by rank-1 outer-products composed of "sparse" left-singular vectors (eigen-arrays) that are unique to each matrix and a right-singular vector (eigen-signal) that is common to all the matrices. The non-zero coefficients of the eigen-arrays identify small subsets of variables for each data type (i.e., signatures) that in aggregate, or individually, best explain a dominant eigen-signal defined on the columns of the data matrices. The approximation is specified by a single "sparsity" parameter that is selected based on false discovery rate estimated by permutation testing. Multiple signals of interest in a given MMDS are sequentially detected and modeled by iterating JAMMIT on "residual" data matrices that result from a given sparse approximation. We show that JAMMIT outperforms other joint analysis algorithms in the detection of multiple signatures embedded in simulated MMDSs.
On real multimodal data for ovarian and liver cancer we show that JAMMIT identified multi-modal signatures that were clinically informative and enriched for cancer-related biology. Sparse matrix approximations of rank-1 provide a simple yet effective means of jointly reducing multiple, big data types to a small subset of variables that characterize important clinical and/or biological attributes of the bio-samples from which the data were acquired.
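A hedged sketch of a JAMMIT-style joint rank-1 sparse approximation on synthetic data, assuming a soft-thresholding update for the per-matrix eigen-arrays and alternating refinement of the shared eigen-signal (the update rule, threshold, and stopping criterion are illustrative assumptions, not the published algorithm):

```python
import numpy as np

def jammit_rank1(matrices, threshold, n_iter=100):
    """Joint rank-1 sparse approximation sketch: alternate between sparse
    per-matrix left-singular vectors ("eigen-arrays", soft-thresholded) and
    one shared, unit-norm right-singular vector (the "eigen-signal")."""
    v = np.linalg.svd(np.vstack(matrices), full_matrices=False)[2][0]
    for _ in range(n_iter):
        # Eigen-array for each data type: correlate rows with the shared
        # eigen-signal, then soft-threshold to zero out weak variables.
        us = [np.sign(X @ v) * np.maximum(np.abs(X @ v) - threshold, 0.0)
              for X in matrices]
        v_new = sum(X.T @ u for X, u in zip(matrices, us))
        norm = np.linalg.norm(v_new)
        if norm == 0.0:
            break
        v = v_new / norm
    return us, v

# Toy MMDS: two "data types" over 30 common bio-samples; only a handful of
# rows in each matrix carry the shared signal.
rng = np.random.default_rng(0)
signal = rng.standard_normal(30)
signal /= np.linalg.norm(signal)
u1 = np.zeros(40); u1[:3] = 5.0      # 3 active variables in data type 1
u2 = np.zeros(60); u2[:4] = -4.0     # 4 active variables in data type 2
X1 = np.outer(u1, signal) + 0.05 * rng.standard_normal((40, 30))
X2 = np.outer(u2, signal) + 0.05 * rng.standard_normal((60, 30))

eigen_arrays, v_hat = jammit_rank1([X1, X2], threshold=1.0)
# v_hat aligns with the planted signal (up to sign), and each eigen-array is
# sparse: only the planted rows survive the soft threshold.
```

Iterating this on the residuals X - u v^T, as the abstract describes, would peel off further signals one at a time.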
Galář, Pavel; Khun, Josef; Kopecký, Dušan; Scholtz, Vladimír; Trchová, Miroslava; Fučíková, Anna; Jirešová, Jana; Fišer, Ladislav
2017-11-08
Non-thermal plasma has proved its benefits in medicine, plasma-assisted polymerization, the food industry and many other fields. Although the ability of non-thermal plasma to modify the surface properties of various materials is generally known, only limited attention has been given to applying this treatment to conductive polymers. Here, we present a study of non-thermal plasma treatment of globular and nanostructured polypyrrole in distilled water. We observe that plasma above the suspension level does not change the morphology (shape) of the polymer, but significantly influences its elemental composition and physical properties. After 60 min of treatment, the relative concentration of chloride counter-ions decreased approximately 3 and 4 times for the nanostructured and globular forms, respectively, and the concentration of oxygen increased approximately 3 times for both forms. Simultaneously, a conductivity decrease (14 times for the globular form and 2 times for the nanostructured one) and changes in the zeta-potential characteristics of both samples were observed. The evolution of the modification was dominated by a multi-exponential function with time constants of approximately 1 and 10 min for both samples. It is expected that these time constants are related to two modification processes connected to the direct presence of the spark and to long-lived species generated by the plasma.
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian
2016-01-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
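The single-plus-pairwise regression model the study evaluates can be made concrete on the smallest case: with two loci, the model f(x1, x2) = w0 + a·x1 + b·x2 + c·x1·x2 has exactly four parameters and is fit exactly by the four genotype fitnesses. This sketch is an illustrative toy, not the study's regression pipeline (which fits many loci from sparse samples):

```python
def pairwise_model_coeffs(f00, f10, f01, f11):
    # Fit f(x1, x2) = w0 + a*x1 + b*x2 + c*x1*x2 exactly from the four
    # genotype fitnesses of a two-locus landscape.
    w0 = f00
    a = f10 - f00              # single-mutation effect at locus 1
    b = f01 - f00              # single-mutation effect at locus 2
    c = f11 - f10 - f01 + f00  # pairwise interaction (epistasis)
    return w0, a, b, c

def predict(coeffs, x1, x2):
    w0, a, b, c = coeffs
    return w0 + a * x1 + b * x2 + c * x1 * x2
```

With more loci than measurements the fit is no longer exact, which is precisely where the choice of sampling regime discussed above starts to matter.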
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
Is stock market volatility asymmetric? A multi-period analysis for five countries
NASA Astrophysics Data System (ADS)
Bentes, Sonia R.
2018-06-01
This study examines the asymmetry in the volatility of the returns of five indices, namely, PSI 20 (Portugal), ISEQ 20 (Ireland), MIB 30 (Italy), ATHEX 30 (Greece) and IBEX 35 (Spain), using daily data from 2004 to 2016. For this purpose, we estimate the GJR and EGARCH asymmetric models for the whole sample and then split it into three subperiods of approximately four years each to examine how the asymmetry coefficient behaves over time. Our results for the full sample show that all indices exhibit different levels of asymmetry. When we consider the subsample analysis, however, the results show that while there is mixed evidence from the first to the second subperiod, all returns show an increase in asymmetry from the second to the last subperiod.
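The asymmetry captured by the GJR model enters through a single extra term in the variance recursion: negative shocks carry an additional coefficient. A minimal one-step sketch of the GJR-GARCH(1,1) recursion, with illustrative parameter values (not estimates from the paper):

```python
def gjr_garch_next_var(sigma2_prev, eps_prev,
                       omega=1e-5, alpha=0.05, gamma=0.10, beta=0.85):
    # One step of the GJR-GARCH(1,1) conditional-variance recursion.
    # gamma > 0 raises next-period variance more after negative shocks
    # (the leverage effect, i.e. asymmetric volatility).
    leverage = 1.0 if eps_prev < 0 else 0.0
    return omega + (alpha + gamma * leverage) * eps_prev ** 2 + beta * sigma2_prev
```

EGARCH captures the same asymmetry by modeling log-variance instead, which removes the positivity constraints on the parameters.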
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation. PMID:26203657
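The sampling scheme described above rests on the EIF dynamics: a leak term plus an exponential sodium term that triggers a spike once the membrane potential escapes past the soft threshold. A minimal Euler-integration sketch that collects ISIs as samples; all parameter values (units mV and ms) are illustrative, not those of the paper:

```python
import math
import random

def eif_isis(i_input, t_max=2000.0, dt=0.1, noise=1.0, seed=1):
    # Euler simulation of the exponential integrate-and-fire (EIF) neuron;
    # each interspike interval (ISI) is treated as one sample.
    rng = random.Random(seed)
    e_l, v_t, d_t, tau = -65.0, -50.0, 2.0, 10.0  # rest, soft threshold, slope factor, time const
    v_spike, v_reset = -30.0, -65.0               # numerical spike cutoff and reset
    v, t, t_last, isis = e_l, 0.0, 0.0, []
    while t < t_max:
        # leak + exponential sodium term + input current (in voltage units)
        drive = -(v - e_l) + d_t * math.exp((v - v_t) / d_t) + i_input
        v += dt / tau * drive + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_spike:                          # spike: record ISI, reset
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return isis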
Volatiles in High-K Lunar Basalts
NASA Technical Reports Server (NTRS)
Barnes, Jessica J.; McCubbin, Francis M.; Messenger, Scott R.; Nguyen, Ann; Boyce, Jeremy
2017-01-01
Chlorine is an unusual isotopic system, being essentially unfractionated (δ37Cl approximately 0 per mille) between bulk terrestrial samples and chondritic meteorites, yet showing large variations in lunar (approximately -4 to +81 per mille), martian, and vestan (HED) samples. Among lunar samples, the volatile-bearing mineral apatite (Ca5(PO4)3[F,Cl,OH]) has been studied for volatiles in KREEP (K-, REE-, and P-rich), very high potassium (VHK), low-Ti and high-Ti basalts, as well as samples from the lunar highlands. These studies revealed a positive correlation between in-situ δ37Cl measurements and bulk incompatible trace elements (ITEs) and ratios. Such trends were interpreted to originate from Cl isotopic fractionation during the degassing of metal chlorides during or shortly after the differentiation of the Moon via a magma ocean. In this study, we investigate the volatile inventories of a group of samples for which new-era volatile data have yet to be reported - the high-K (greater than 2000 ppm bulk K2O), high-Ti, trace element-rich mare basalts. We used isotope imaging on the Cameca NanoSIMS 50L at JSC to obtain the Cl isotopic composition, δ37Cl = [((37Cl/35Cl)sample / (37Cl/35Cl)standard) - 1] × 1000, expressed in per mille, which ranges from approximately -2.7 +/- 2 per mille to +16.1 +/- 2 per mille (2σ), as well as volatile abundances (F and Cl) of apatite in samples 10017, 10024 and 10049. Simply following prior models, as lunar rocks with high bulk-rock abundances of ITEs, we might expect the high-K, high-Ti basalts to contain apatite characterized by heavily fractionated δ37Cl values, i.e., Cl obtained from mixing between unfractionated mantle Cl (approximately 0 per mille) and the urKREEP reservoir (possibly fractionated to greater than +25 per mille). However, the data obtained for the studied samples do not conform to either the early degassing or mixing models.
Existing petrogenetic models for the origin of the high-K, high-Ti basalts do not include urKREEP assimilation into their LMO cumulate sources. Therefore, Cl in these basalts either originated from source-region heterogeneity or through assimilation or metasomatism by volatile- and incompatible-trace-element-rich materials. The new data presented here could provide evidence for the existence of region(s) in the lunar interior that are ITE-enriched and contain Cl that does not share isotopic affinities with lunar urKREEP, possibly representing the composition of the purported 'neuKREEP'.
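The delta notation used above is a simple deviation of an isotope ratio from a standard, in per mille. A minimal helper; the numerical ratios in the check are illustrative, not measured values from the study:

```python
def delta_per_mil(ratio_sample, ratio_standard):
    # Standard delta notation: delta = ((R_sample / R_standard) - 1) * 1000,
    # giving the deviation from the standard in per mille.
    return (ratio_sample / ratio_standard - 1.0) * 1000.0
```

A sample ratio 1.63% above the standard thus reads as δ ≈ +16.3 per mille, comparable to the upper end of the range reported for these basalts.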
Waters, Brian W; Hung, Yen-Con
2014-04-01
Chlorinated water and electrolyzed oxidizing (EO) water solutions were made to compare the free chlorine stability and microbicidal efficacy of chlorine-containing solutions with different properties. Reduction of Escherichia coli O157:H7 was greatest in fresh samples (approximately 9.0 log CFU/mL reduction). Chlorine loss in "aged" samples (samples left in open bottles) was greatest (approximately 40 mg/L free chlorine loss in 24 h) in low pH (approximately 2.5) and high chloride (Cl(-) ) concentrations (greater than 150 mg/L). Reduction of E. coli O157:H7 was also negatively impacted (<1.0 log CFU/mL reduction) in aged samples with a low pH and high Cl(-) . Higher pH values (approximately 6.0) did not appear to have a significant effect on free chlorine loss or numbers of surviving microbial cells when fresh and aged samples were compared. This study found chloride levels in the chlorinated and EO water solutions had a reduced effect on both free chlorine stability and its microbicidal efficacy in the low pH solutions. Greater concentrations of chloride in pH 2.5 samples resulted in decreased free chlorine stability and lower microbicidal efficacy. © 2014 Institute of Food Technologists®
Herich, Hanna; Tritscher, Torsten; Wiacek, Aldona; Gysel, Martin; Weingartner, Ernest; Lohmann, Ulrike; Baltensperger, Urs; Cziczo, Daniel J
2009-09-28
Airborne mineral dust particles serve as cloud condensation nuclei (CCN), thereby influencing the formation and properties of warm clouds. It is therefore of atmospheric interest how dust aerosols with different mineralogy behave when exposed to high relative humidity (RH) or supersaturation (SS) with respect to liquid water. In this study the subsaturated hygroscopic growth and the supersaturated cloud condensation nucleus activity of pure clays and real desert dust aerosols were determined using a hygroscopicity tandem differential mobility analyzer (HTDMA) and a cloud condensation nuclei counter (CCNC), respectively. Five different illite, montmorillonite and kaolinite clay samples as well as three desert dust samples (Saharan dust (SD), Chinese dust (CD) and Arizona test dust (ATD)) were investigated. Aerosols were generated both with a wet and a dry disperser. The water uptake was parameterized via the hygroscopicity parameter kappa. The hygroscopicity of dry generated dust aerosols was found to be negligible when compared to processed atmospheric aerosols, with CCNC derived kappa values between 0.00 and 0.02 (the latter corresponds to a particle consisting of 96.7% by volume insoluble material and approximately 3.3% ammonium sulfate). Pure clay aerosols were generally found to be less hygroscopic than natural desert dust particles. The illite and montmorillonite samples had kappa approximately 0.003. The kaolinite samples were less hygroscopic and had kappa=0.001. SD (kappa=0.023) was found to be the most hygroscopic dry-generated desert dust followed by CD (kappa=0.007) and ATD (kappa=0.003). Wet-generated dust showed an increased water uptake when compared to dry-generated samples. This is considered to be an artifact introduced by redistribution of soluble material between the particles. Thus, the generation method is critically important when presenting such data. 
These results indicate any atmospheric processing of a fresh mineral dust particle which leads to the addition of more than approximately 3% soluble material will significantly enhance its hygroscopicity and CCN activity.
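The quoted correspondence between κ = 0.02 and a 3.3% ammonium sulfate volume fraction follows from the volume-weighted κ mixing rule for internal mixtures. A minimal sketch; the value κ ≈ 0.61 for ammonium sulfate is a commonly used literature value (Petters & Kreidenweis parameterization) and is an assumption here:

```python
def kappa_mixture(volume_fractions, kappas):
    # Volume-weighted kappa mixing rule: the hygroscopicity of an internal
    # mixture is the volume-fraction-weighted mean of the component kappas.
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(f * k for f, k in zip(volume_fractions, kappas))
```

With 96.7% insoluble material (κ = 0) and 3.3% ammonium sulfate (κ ≈ 0.61), the mixture κ comes out at about 0.02, matching the abstract's example.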
ERIC Educational Resources Information Center
Reitzle, Matthias; Silbereisen, Rainer K.
A study was conducted to show that economic and societal differences between the former Eastern and Western parts of Germany had produced differences in the timing of young people's school-to-work transitions. Data were collected from samples of approximately 350 participants from the West and 380 participants from the East conducted in 1991 and…
On the improvement of signal repeatability in laser-induced air plasmas
NASA Astrophysics Data System (ADS)
Zhang, Shuai; Sheta, Sahar; Hou, Zong-Yu; Wang, Zhe
2018-04-01
The relatively low repeatability of laser-induced breakdown spectroscopy (LIBS) severely hinders its wide commercialization. In the present work, we investigate the optimization of a LIBS system to improve repeatability in both signal generation (plasma evolution) and signal collection. Time-integrated spectra and images were obtained under different laser energies and focal lengths to investigate the optimum configuration for stable plasmas and repeatable signals. Using our experimental setup, the optimum conditions were found to be a laser energy of 250 mJ and a focal length of 100 mm. A stable and homogeneous plasma with the largest hot core area under the optimum conditions yielded the most stable LIBS signal. Time-resolved images showed that rebounding processes during the air plasma evolution caused the relative standard deviation (RSD) to increase at laser energies above 250 mJ. In addition, the emission collection was improved by using a concave spherical mirror: the line intensities doubled while their RSDs decreased by approximately 25%. When signal generation and collection were optimized simultaneously, the pulse-to-pulse RSDs were reduced to approximately 3% for the O(I), N(I), and H(I) lines, which is better than the RSDs reported for solid samples and shows great potential for LIBS quantitative analysis by gasifying solid or liquid samples.
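The repeatability figure of merit used throughout, the pulse-to-pulse RSD, is simply the sample standard deviation expressed as a percentage of the mean. A one-line helper for clarity (the example intensities are invented, not data from the paper):

```python
from statistics import mean, stdev

def rsd_percent(values):
    # Relative standard deviation (pulse-to-pulse repeatability) in percent.
    return 100.0 * stdev(values) / mean(values)
```

Doubling the collected line intensity while keeping the absolute shot-to-shot spread fixed halves the RSD, which is why improved light collection directly improves this metric.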
Energy-saving method for technogenic waste processing
Dikhanbaev, Bayandy; Dikhanbaev, Aristan Bayandievich
2017-01-01
Dumps of the mining-metallurgical complexes of post-Soviet republics have accumulated a huge amount of technogenic waste products; Kazakhstan alone holds about 20 billion tons. In the field of technogenic waste treatment, there is still no technical solution that makes processing profitable. Recent global trends have prompted scientists to focus on developing an energy-saving, highly efficient melting unit that can significantly reduce specific fuel consumption. This paper reports the development of a new technological method: the smelt layer of inversion phase. The method is characterized by a combination of ideal-stirring and ideal-displacement regimes. Using affine modelling, the pilot plant's test results were recalculated for an industrial-scale unit. Experiments show that, in comparison with bubbling and boiling layers of smelt, the degree of zinc recovery increases in the layer of inversion phase, indicating a reduced possibility of re-formation of zinc silicates and ferrites from recombined molecules of ZnO, SiO2, and Fe2O3. Calculations show that in the industrial-scale version of the pilot plant, natural gas consumption is reduced by approximately a factor of two compared with the fuming furnace, and specific fuel consumption by approximately a factor of four compared with the Waelz kiln. PMID:29281646
Streicher, Jeffrey W; Schulte, James A; Wiens, John J
2016-01-01
Targeted sequence capture is becoming a widespread tool for generating large phylogenomic data sets to address difficult phylogenetic problems. However, this methodology often generates data sets in which increasing the number of taxa and loci increases amounts of missing data. Thus, a fundamental (but still unresolved) question is whether sampling should be designed to maximize sampling of taxa or genes, or to minimize the inclusion of missing data cells. Here, we explore this question for an ancient, rapid radiation of lizards, the pleurodont iguanians. Pleurodonts include many well-known clades (e.g., anoles, basilisks, iguanas, and spiny lizards) but relationships among families have proven difficult to resolve strongly and consistently using traditional sequencing approaches. We generated up to 4921 ultraconserved elements with sampling strategies including 16, 29, and 44 taxa, from 1179 to approximately 2.4 million characters per matrix and approximately 30% to 60% total missing data. We then compared mean branch support for interfamilial relationships under these 15 different sampling strategies for both concatenated (maximum likelihood) and species tree (NJst) approaches (after showing that mean branch support appears to be related to accuracy). We found that both approaches had the highest support when including loci with up to 50% missing taxa (matrices with ~40-55% missing data overall). Thus, our results show that simply excluding all missing data may be highly problematic as the primary guiding principle for the inclusion or exclusion of taxa and genes. The optimal strategy was somewhat different for each approach, a pattern that has not been shown previously. For concatenated analyses, branch support was maximized when including many taxa (44) but fewer characters (1.1 million). For species-tree analyses, branch support was maximized with minimal taxon sampling (16) but many loci (4789 of 4921). 
We also show that the choice of these sampling strategies can be critically important for phylogenomic analyses, since some strategies lead to demonstrably incorrect inferences (using the same method) that have strong statistical support. Our preferred estimate provides strong support for most interfamilial relationships in this important but phylogenetically challenging group. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Nutritional characteristics of moon dust for soil microorganisms
NASA Technical Reports Server (NTRS)
Ito, T.
1983-01-01
Approximately 46% of the lunar sample (10084,151), 125.42 mg, was solubilized in 680 ml of 0.01 M salicylic acid. Atomic absorption spectroscopic analysis of the solubilized lunar sample showed the following amounts of metal ions (all in ppm): Ca, 3.1; Mg, 4.0; K, 0.09; Na, 0.67; Fe, 7.3; Mn, 1.6; Cu, Ni, Cr, less than 0.1 each. The salicylic acid used to solubilize the lunar sample was highly inhibitory to the growth of mixed soil microbes; however, the mineral part of the lunar extract stimulated growth. For optimal growth of the soil microbes, the following nutrients must be added to the moon extract: sources of carbon, nitrogen, sulfur, phosphorus, and magnesium, in addition to water.
Roos, Johannes Lodewikus; Pretorius, Herman Walter; Karayiorgou, Maria
2009-01-01
The clinical characteristics of an Afrikaner founder population sample recruited for a schizophrenia genetic study are described. Comparisons on several clinical characteristics between this sample and a U.S. sample of schizophrenia patients show that generalization of findings in a founder population to the population at large is applicable. The assessment of the frequency of the 22q11 deletion in Afrikaner schizophrenia patients is approximately 2%, similar to findings in a U.S. sample. Results of analysis of early non-psychotic deviant behavior in subjects under the age of 10 years in the Afrikaner population broadly replicated findings in a U.S. sample. Approximately half of male schizophrenia patients and a quarter of female patients in the Afrikaner schizophrenia database used or abused cannabis. Male users of cannabis with severe early deviant behavior had the lowest mean age of criteria onset, namely 18.4 years. These findings confirm previous findings, indicating that early deviance is linked to later outcome of disease. The clinical characteristics and premorbid variables in 12 childhood-onset Afrikaner schizophrenia patients thus far recruited in this study compare favorably with what is known about childhood-onset schizophrenia in a U.S. sample. The prevalence of co-morbid OCD/OCS in this Afrikaner schizophrenia founder sample was 13.2% which is in keeping with that of co-morbid OCD in schizophrenia, estimated at 12.2% by the U.S. National Institute of Mental Health. These findings confirm that the clinical characteristics of a schizophrenia sample drawn from the Afrikaner founder population can be generalized to the schizophrenia population at large when compared to findings reported in the literature.
Cai, Yi-Hong; Wang, Yi-Sheng
2018-04-01
This work discusses the correlation between the mass resolving power of matrix-assisted laser desorption/ionization time-of-flight mass analyzers and the extraction condition with an uneven sample morphology. Previous theoretical calculations show that the optimum extraction condition for flat samples involves an ideal ion source design and extraction delay. A general expression for the spectral feature, taking into account ion initial velocity and extraction delay, is derived in the current study. The new expression extends the comprehensive calculation to uneven sample surfaces and to above 90% of the Maxwell-Boltzmann initial velocity distribution of ions, to account for imperfect ionization conditions. Calculation shows that the impact of an uneven sample surface, or of the initial spatial spread of ions, is negligible when the extraction delay is away from the ideal value. When the extraction delay approaches the optimum value, the flight-time topology shows a characteristic curve shape, and the time-domain mass spectral feature broadens with an increase in the initial spatial spread of ions. For protonated 2,5-dihydroxybenzoic acid, the mass resolving power obtained from a sample of 3-μm surface roughness is approximately 3.3 times lower than that of flat samples. For ions of m/z 3000 co-expanded with 2,5-dihydroxybenzoic acid, the mass resolving power in the 3-μm surface roughness case is reduced by only roughly 7%. Comprehensive calculations also show that the mass resolving power of lighter ions is more sensitive to the accuracy of the extraction delay than that of heavier ions. Copyright © 2018 John Wiley & Sons, Ltd.
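The connection between time-domain peak broadening and mass resolving power comes from the standard TOF relation m/z ∝ t², so that R = m/Δm = t/(2Δt). A minimal helper (the flight times and peak widths in the check are illustrative, not values from the paper):

```python
def tof_resolving_power(flight_time, peak_width):
    # Time-of-flight mass resolving power: since m/z is proportional to t^2,
    # R = m / delta_m = t / (2 * delta_t) for a time-domain peak width delta_t.
    return flight_time / (2.0 * peak_width)
```

For a fixed timing spread, a lighter ion (shorter flight time) suffers a proportionally larger loss of resolving power, consistent with the abstract's observation that lighter ions are more sensitive to extraction-delay errors.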
Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration
Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.
2009-01-01
This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change with sampling intervals from 30 m to 600 m for every 30 m increase for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright ?? 2009 by Bellwether Publishing, Ltd. All right reserved.
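The variogram analysis above is built from the empirical semivariance, γ(h) = Σ(z_i − z_{i+h})² / (2N(h)), computed over all point pairs separated by lag h. A minimal 1-D sketch (a transect rather than the study's 2-D 30 m grid):

```python
def semivariance(values, lag):
    # Empirical semivariance of a 1-D transect at an integer lag (in samples):
    # gamma(h) = sum over pairs of (z_i - z_{i+h})^2 / (2 * N(h)).
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2.0 * len(pairs))
```

Evaluating γ at increasing lags traces out the variogram; its range and sill are the statistics whose stability under coarser sampling intervals the study examines.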
Velay, Aurélie; Solis, Morgane; Barth, Heidi; Sohn, Véronique; Moncollin, Anne; Neeb, Amandine; Wendling, Marie-Josée; Fafi-Kremer, Samira
2018-04-01
Tick-borne encephalitis virus (TBEV) diagnosis is mainly based on the detection of viral-specific antibodies in serum. Several commercial assays are available, but published data on their performance remain unclear. We assessed six IgM and six IgG commercial enzyme-linked immunosorbent assay (ELISA) kits (ELISA-1 through ELISA-6) using 94 samples, including precharacterized TBEV-positive samples (n=50) and -negative samples (n=44). The six manufacturers showed satisfactory sensitivity and specificity and high overall agreement for both IgM and IgG. Three manufacturers showed better reproducibility and were the most sensitive (100%) and specific (95.5-98.1%) for both IgM and IgG. Two of them were also in agreement with the clinical interpretation in more than 90% of the cases. All the assays use inactivated virus as antigen, with strains showing approximately 94% homology at the amino acid level. The antigenic format of the assays was discussed to further improve this TBEV diagnostic tool. Copyright © 2017 Elsevier Inc. All rights reserved.
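The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions against the pre-characterized panel. A minimal helper; the counts in the check are hypothetical (consistent with the panel sizes of 50 positive and 44 negative samples, but not the study's actual tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Diagnostic performance from assay counts against a characterized panel:
    # sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)
```

For example, one false positive out of 44 negatives would already cap specificity at about 97.7%, illustrating how tightly the reported 95.5-98.1% range constrains the error counts.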
Automated sample exchange and tracking system for neutron research at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Rix, J. E.; Weber, J. K. R.; Santodonato, L. J.; Hill, B.; Walker, L. M.; McPherson, R.; Wenzel, J.; Hammons, S. E.; Hodges, J.; Rennich, M.; Volin, K. J.
2007-01-01
An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programmed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW™ program that can be operated locally or over a network. The samples are contained in vanadium cans of 6-10 mm in diameter, equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location at a temperature of ~25 K; after several minutes, it was moved onto a "landing pad" at ~10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and the exchange operation takes approximately 2 min. The time to cool the sample from ambient temperature to ~10 K was approximately 7 min including precooling time; the cooling time increases to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low-pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to ~350 K can be accessed and controlled using a proportional-integral-derivative (PID) control loop. The time for the landing pad to cool to ~10 K after it has been heated to ~240 K was approximately 20 min.
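The temperature-control scheme above, resistive heating offsetting constant refrigeration under a PID loop, can be sketched with a discrete PI controller (the derivative term omitted for brevity) driving a first-order thermal model. All parameters, gains, and the plant model are illustrative assumptions, not the instrument's actual tuning:

```python
def run_pi_temperature_loop(setpoint=300.0, t_env=10.0, tau=10.0,
                            kp=2.0, ki=0.5, dt=0.1, steps=3000):
    # Discrete PI loop: the refrigerator relaxes the stage toward t_env with
    # time constant tau, while heater power (kp*e + ki*integral(e)) offsets it.
    temp, integral = t_env, 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        heater = kp * error + ki * integral            # control effort (K per time unit)
        temp += dt * ((t_env - temp) / tau + heater)   # first-order plant update
    return temp
```

The integral term is what removes the steady-state offset: a pure proportional controller would settle below the setpoint, since constant heater power is needed just to balance the refrigeration.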
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction.
Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
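The key property APM exploits is that Gaussian uncertainties propagated through Gaussian pencil-beam dose profiles yield moments in closed form: the expected profile under a Gaussian setup error is again Gaussian, with the variances added. A simplified 1-D sketch comparing the closed-form expectation against a Monte Carlo benchmark (all parameters are illustrative; the real APM handles range errors, correlations, and full 3-D dose):

```python
import math
import random

def expected_dose_closed_form(x, amp, sigma_beam, sigma_setup):
    # Closed-form expectation of a Gaussian pencil-beam profile under a
    # Gaussian setup error: the convolution of two Gaussians is Gaussian
    # with variance sigma_beam^2 + sigma_setup^2.
    s2 = sigma_beam ** 2 + sigma_setup ** 2
    return amp * (sigma_beam / math.sqrt(s2)) * math.exp(-x * x / (2.0 * s2))

def expected_dose_sampled(x, amp, sigma_beam, sigma_setup, n=20000, seed=7):
    # Scenario-sampling benchmark: average the shifted profile over random
    # setup errors, analogous to the 5000-sample reference computations.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        shift = rng.gauss(0.0, sigma_setup)
        total += amp * math.exp(-((x - shift) ** 2) / (2.0 * sigma_beam ** 2))
    return total / n
```

The closed-form result needs one evaluation where the sampling benchmark needs thousands, which is the speed/accuracy trade-off the abstract quantifies.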
Wahl, N; Hennig, P; Wieser, H P; Bangert, M
2017-06-26
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU [Formula: see text] min). The resulting standard deviation (expectation value) of dose show average global [Formula: see text] pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. 
Considering fractionation, APM uncertainty propagation and treatment plan optimization were shown to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of quality similar to that obtained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
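The closed-form propagation that APM exploits can be illustrated in the simplest possible setting: a Gaussian lateral pencil-beam profile under a Gaussian setup error, where the expected dose is again Gaussian because a convolution of Gaussians is Gaussian. The sketch below is illustrative only (function names, parameters, and geometry are assumptions, not the paper's model); it checks the closed form against brute-force sampling:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def expected_dose_closed_form(x, x0, sigma_beam, sigma_setup):
    """Pencil-beam lateral profile of width sigma_beam centered at x0, shifted
    by a Gaussian setup error of std sigma_setup: the expected dose is again
    Gaussian with variance sigma_beam^2 + sigma_setup^2 (no sampling needed)."""
    return gauss(x, x0, np.sqrt(sigma_beam**2 + sigma_setup**2))

def expected_dose_sampled(x, x0, sigma_beam, sigma_setup, n=200_000, seed=0):
    """Monte Carlo reference: average the shifted profiles over n scenarios."""
    rng = np.random.default_rng(seed)
    shifts = rng.normal(0.0, sigma_setup, size=n)
    return gauss(x[:, None], x0 + shifts[None, :], sigma_beam).mean(axis=1)

x = np.linspace(-10.0, 10.0, 41)
cf = expected_dose_closed_form(x, 0.0, 2.0, 1.0)
mc = expected_dose_sampled(x, 0.0, 2.0, 1.0)
print(np.max(np.abs(cf - mc)))  # small discrepancy, shrinking with n
```

The closed form costs the same regardless of how many scenarios the sampling reference would need, which is the essence of the speed advantage reported above.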
Severini, C; Gomes, T; De Pilli, T; Romani, S; Massini, R
2000-10-01
Shelled almonds of two Italian varieties, Romana and Pizzuta, peeled and unpeeled, were roasted and packed under different conditions: air (control), vacuum, and Maillard reaction volatile compounds (MRVc) derived from the roasting process. Samples were stored for approximately 8 months at room temperature, without light, and, at regular intervals, were collected and analyzed to evaluate the progress of lipid oxidation. Peroxide values, triglyceride oligopolymers, and oxidized triglycerides were evaluated during the storage time. Results showed that, although the MRVc atmosphere did not protect the lipid fraction of the almonds as well as the vacuum condition, it was nevertheless more protective than the control atmosphere, showing an antioxidant effect. The natural coating provided strong protection against lipid oxidation; in fact, only the unpeeled samples showed peroxide values lower than the threshold of acceptability (25 milliequiv of O(2)/kg of oil). Moreover, at the end of the storage period, Pizzuta almonds showed greater deterioration than those of the Romana variety.
Wetherbee, Gregory A.; Latysh, Natalie E.; Burke, Kevin P.
2005-01-01
Six external quality-assurance programs were operated by the U.S. Geological Survey (USGS) External Quality-Assurance (QA) Project for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) from 2002 through 2003. Each program measured specific components of the overall error inherent in NADP/NTN wet-deposition measurements. The intersite-comparison program assessed the variability and bias of pH and specific conductance determinations made by NADP/NTN site operators twice per year with respect to accuracy goals. The percentage of site operators that met the pH accuracy goals decreased from 92.0 percent in spring 2002 to 86.3 percent in spring 2003. In these same four intersite-comparison studies, the percentage of site operators that met the accuracy goals for specific conductance ranged from 94.4 to 97.5 percent. The blind-audit program and the sample-handling evaluation (SHE) program evaluated the effects of routine sample handling, processing, and shipping on the chemistry of weekly NADP/NTN samples. The blind-audit program data indicated that the variability introduced by sample handling might be environmentally significant to data users for sodium, potassium, chloride, and hydrogen ion concentrations during 2002. In 2003, the blind-audit program was modified and replaced by the SHE program. The SHE program was designed to control the effects of laboratory-analysis variability. The 2003 SHE data had less overall variability than the 2002 blind-audit data. The SHE data indicated that sample handling buffers the pH of the precipitation samples and, in turn, results in slightly lower conductivity. Otherwise, the SHE data provided error estimates that were not environmentally significant to data users. The field-audit program was designed to evaluate the effects of onsite exposure, sample handling, and shipping on the chemistry of NADP/NTN precipitation samples. 
Field-audit results indicated that exposure of NADP/NTN wet-deposition samples to onsite conditions tended to neutralize the acidity of the samples by less than 1.0 microequivalent per liter. Onsite exposure of the sampling bucket appeared to slightly increase the concentration of most of the analytes but not to an extent that was environmentally significant to NADP data users. An interlaboratory-comparison program was used to estimate the analytical variability and bias of the NADP Central Analytical Laboratory (CAL) during 2002-03. Bias was identified in the CAL data for calcium, magnesium, sodium, potassium, ammonium, chloride, nitrate, sulfate, hydrogen ion, and specific conductance, but the absolute value of the bias was less than analytical minimum detection limits for all constituents except magnesium, nitrate, sulfate, and specific conductance. Control charts showed that CAL results were within statistical control approximately 90 percent of the time. Data for the analysis of ultrapure deionized-water samples indicated that CAL did not have problems with laboratory contamination. During 2002-03, the overall variability of data from the NADP/NTN precipitation-monitoring system was estimated using data from three collocated monitoring sites. Measurement differences of constituent concentration and deposition for paired samples from the collocated samplers were evaluated to compute error terms. The medians of the absolute percentage errors (MAEs) for the paired samples generally were larger for cations (approximately 8 to 50 percent) than for anions (approximately 3 to 33 percent). MAEs were approximately 16 to 30 percent for hydrogen-ion concentration, less than 10 percent for specific conductance, less than 5 percent for sample volume, and less than 8 percent for precipitation depth. The variability attributed to each component of the sample-collection and analysis processes, as estimated by USGS quality-assurance programs, varied among analytes. 
Laboratory analysis variability accounted for approximately 2 percent of the
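The collocated-sampler comparison above rests on medians of absolute percentage errors over paired measurements. The sketch below assumes the error for each pair is the absolute difference relative to the pair mean, expressed as a percentage; the report's exact formula may differ, and the paired concentrations are hypothetical:

```python
import numpy as np

def median_absolute_percentage_error(primary, collocated):
    """MAE for collocated-sampler comparisons: median over pairs of
    |a - b| / mean(a, b) * 100 (relative percent difference)."""
    a = np.asarray(primary, dtype=float)
    b = np.asarray(collocated, dtype=float)
    pct = np.abs(a - b) / ((a + b) / 2.0) * 100.0
    return float(np.median(pct))

# Hypothetical paired sulfate concentrations (mg/L) from two collocated samplers.
site_a = [1.10, 0.95, 2.40, 1.80]
site_b = [1.05, 1.00, 2.30, 1.95]
print(round(median_absolute_percentage_error(site_a, site_b), 2))  # 4.89
```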
The Last Interglacial sea level change: new evidence from the Abrolhos islands, West Australia
NASA Astrophysics Data System (ADS)
Eisenhauer, A.; Zhu, Z. R.; Collins, L. B.; Wyrwoll, K. H.; Eichstätter, R.
U-series ages measured by thermal ionisation mass spectrometry (TIMS) are reported for a Last Interglacial (LI) fossil coral core from Turtle Bay, Houtman Abrolhos islands, western Australia. The core is 33.4 m long, and its top lies approximately 5 m a.p.s.l. (above present sea level). From the 232Th concentrations and the reliability of the U-series ages, two sections in the core can be distinguished. Calculated U/Th ages in core section I (3.3 m a.p.s.l. to 11 m b.p.s.l.) vary between 124+/-1.7 ka BP (3.3 m a.p.s.l.) and 132.5+/-1.8 ka (4 m b.p.s.l., i.e. below present sea level), and those of section II (11-23 m b.p.s.l.) between 140+/-3 and 214+/-5 ka BP, respectively. The ages of core section I are in almost perfect chronological order, whereas for section II no clear age-depth relationship of the samples can be recognised. Further assessments based on the δ234U(T) criteria reveal that none of the samples of core section II give reliable ages, whereas for core section I several samples can be considered moderately reliable within 2 ka. The data of the Turtle Bay core complement and extend our previous work from the Houtman Abrolhos, showing that the sea level reached a height of approximately 4 m b.p.s.l. at approximately 134 ka BP and a highstand of at least 3.3 m a.p.s.l. at approximately 124 ka BP. Sea level dropped below its present position at approximately 116 ka BP. Although the new data are in general accord with the Milankovitch theory of climate change, a detailed comparison reveals considerable differences between the Holocene and LI sea-level rise as monitored relative to the Houtman Abrolhos islands. These observations apparently add further evidence to the growing set of data indicating that the LI sea-level rise started earlier than recognised by the SPECMAP chronology. A reconciliation of these contradictory observations, following the line of arguments presented by Crowley (1994), is discussed with respect to the Milankovitch theory.
Raman analysis of an impacted α-GeO2-H2O mixture
NASA Astrophysics Data System (ADS)
Rosales, Ivonne; Thions-Renero, Claude; Martinez, Erendira; Agulló-Rueda, Fernando; Bucio, Lauro; Orozco, Eligio
2012-09-01
Through Raman analysis, we detected polymorphism at high pressure in mixtures of α-GeO2 microcrystalline powder and water under impact experiments with a single-stage gas gun. The Raman measurements taken from recovered samples show two vibrational modes associated with water-related species. After the impact, the α-GeO2 crystallites were approximately 10 times larger, showing molten zones and many porous faces. Raman examination showed some unknown peaks, possibly associated with other GeO2 polymorphs detected by X-ray diffraction experiments and perhaps stabilized in the pores of the α-GeO2 crystallites.
Glass frit nebulizer for atomic spectrometry
Layman, L.R.
1982-01-01
The nebulization of sample solutions is a critical step in most flame or plasma atomic spectrometric methods. A novel nebulization technique, based on a porous glass frit, has been investigated. Basic operating parameters and characteristics have been studied to determine how this new nebulizer may be applied to atomic spectrometric methods. The results of preliminary comparisons with pneumatic nebulizers indicate several notable differences. The frit nebulizer produces a smaller droplet size distribution and has a higher sample transport efficiency. The mean droplet size is approximately 0.1 µm, and up to 94% of the sample is converted to usable aerosol. The most significant limitations in the performance of the frit nebulizer are the slow sample equilibration time and the requirement for wash cycles between samples. Loss of solute by surface adsorption and contamination of samples by leaching from the glass were both found to be limitations only in unusual cases. This nebulizer shows great promise where sample volume is limited or where measurements require long nebulization times.
England, Glenn C; Watson, John G; Chow, Judith C; Zielinska, Barbara; Chang, M C Oliver; Loos, Karl R; Hidy, George M
2007-01-01
With the recent focus on fine particulate matter (PM2.5), new, self-consistent data are needed to characterize emissions from combustion sources. Such data are necessary for health assessment and air quality modeling. To address this need, emissions data for gas-fired combustors are presented here, using dilution sampling as the reference. The dilution method allows for collection of emitted particles under conditions simulating cooling and dilution during entry from the stack into the air. The sampling and analysis of the collected particles in the presence of the precursor gases SO2, nitrogen oxides, volatile organic compounds, and NH3 is discussed; the results include data from eight gas-fired units, including a dual-fuel institutional boiler and a diesel-engine-powered electricity generator. These data are compared with results in the literature for heavy-duty diesel vehicles and stationary sources using coal or wood as fuels. The results show that the gas-fired combustors have very low PM2.5 mass emission rates, in the range of approximately 10(-4) lb/million Btu (MMBTU), compared with the diesel backup generator with particle filter, at approximately 5 x 10(-3) lb/MMBTU. Even higher mass emission rates are found in coal-fired systems, with rates of approximately 0.07 lb/MMBTU for a bag-filter-controlled pilot unit burning eastern bituminous coal. The characterization of PM2.5 chemical composition from the gas-fired units indicates that much of the measured primary particle mass in PM2.5 samples is organic or elemental carbon and, to a much lesser extent, sulfate. Metal emissions are quite low compared with the diesel engines and the coal- or wood-fueled combustors. The metals found in the gas-fired combustor particles are low in concentration, similar in concentration to ambient particles. 
The interpretation of the particulate carbon emissions is complicated by the fact that an approximately equal amount of particulate carbon (mainly organic carbon) is found on the particle collector and a backup filter. It is likely that measurement artifacts, mostly adsorption of volatile organic compounds on quartz filters, are positively biasing "true" particulate carbon emission results.
Differential privacy based on importance weighting
Ji, Zhanglong
2014-01-01
This paper analyzes a novel method for publishing data while still protecting privacy. The method is based on computing weights that make an existing dataset, for which there are no confidentiality issues, analogous to the dataset that must be kept private. The existing dataset may be genuine but public already, or it may be synthetic. The weights are importance sampling weights, but to protect privacy, they are regularized and have noise added. The weights allow statistical queries to be answered approximately while provably guaranteeing differential privacy. We derive an expression for the asymptotic variance of the approximate answers. Experiments show that the new mechanism performs well even when the privacy budget is small, and when the public and private datasets are drawn from different populations. PMID:24482559
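The mechanism described above can be sketched in a few lines: compute importance weights that reweight a public sample toward the private population, clip (regularize) them to bound each record's influence, and add Laplace noise to the weighted query answer. Everything below is an illustrative toy, not the paper's algorithm: the densities are assumed known in closed form (the paper estimates and regularizes the weights from data), and the sensitivity bound is deliberately simplistic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Private population ~ N(1, 1); public data ~ N(0, 1).  Importance weight
# w(x) = p_private(x) / p_public(x) = exp(x - 0.5) in closed form here
# (assumption for the sketch; in practice the ratio must be estimated).
public = rng.normal(0.0, 1.0, size=1000)
w = np.exp(public - 0.5)

# Regularize: clip weights to bound any single record's influence.
B = 5.0
w = np.clip(w, 0.0, B)

def private_mean(query_values, weights, epsilon):
    """Weighted query answer plus Laplace noise.  The sensitivity bound
    B * max|q| / n is a toy; a real mechanism needs a data-independent bound."""
    n = len(weights)
    answer = np.sum(weights * query_values) / n
    scale = B * np.max(np.abs(query_values)) / (n * epsilon)
    return answer + rng.laplace(0.0, scale)

est = private_mean(public, w, epsilon=1.0)
print(est)  # should land near the private-population mean of 1.0
```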
Computer image analysis of etched tracks from ionizing radiation
NASA Technical Reports Server (NTRS)
Blanford, George E.
1994-01-01
I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.
Hyperhaploid and tetraploid sperm detected in men who ingested ultra-high doses of diazepam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumgartner, A.; Adler, I.D.; Schmid, T.E.
Diazepam is widely administered as a sedative, muscle relaxant, and anxiolytic drug. Five young non-smoking men who were hospitalized after a suicide attempt using diazepam, approximately 1-7 mg/kg (oral intake), provided semen samples 40-50 days and approximately 100 days after exposure to assess drug effects on meiotic cells and to evaluate their persistence. Five healthy men served as local clinical controls. A multicolor FISH assay was applied to detect aneuploidy for chromosomes X, Y, and 21 in sperm. Sex ratios were not significantly different from 1:1 among 133,143 cells analyzed. The 40-day samples showed an increase in several sperm aneuploidy groups: disomy 21 (1.5-fold, p=0.04), disomy X (2.7-fold, p=0.0006), and XY aneuploidy (1.6-fold, p=0.017). The results for approximately 100 days after exposure were similar to controls, suggesting that hyperhaploidy effects may not persist. Phase contrast microscopy was used to identify flagellated tetraploid sperm, i.e., X-X-Y-Y-21-21-21-21. Tetraploid sperm were found among 8 semen samples provided by five patients (1.4 +/- 1.2 per 10,000 cells; >80,000 cells), while none were detected among >50,000 cells from healthy men. Our findings are consistent with a possible aneuploidy-inducing effect of diazepam during male meiosis, but further studies are needed before these results can be extrapolated to therapeutic dosing, because suicide patients are a highly exposed cohort and other confounding factors (alcohol, drugs, antidotes) cannot be ruled out.
Lan, Nguyen Thi Phong; Dalsgaard, Anders; Cam, Phung Dac; Mara, Duncan
2007-06-01
Mean water quality in two wastewater-fed ponds and one non-wastewater-fed pond in Hanoi, Vietnam was approximately 10(6) and approximately 10(4) presumptive thermotolerant coliforms (pThC) per 100 ml, respectively. Fish (common carp, silver carp and Nile tilapia) grown in these ponds were sampled at harvest and in local retail markets. Bacteriological examination of the fish sampled at harvest from both types of pond showed that they were of very good quality (2 - 3 pThC g(-1) fresh muscle weight), despite the skin and gut contents being very contaminated (10(2) - 10(3) pThC g(-1) fresh weight and 10(4) - 10(6) pThC g(-1) fresh weight, respectively). These results indicate that the WHO guideline quality of < or = 1000 faecal coliforms per 100 ml of pond water in wastewater-fed aquaculture is quite restrictive and represents a safety factor of approximately 3 orders of magnitude. However, when the fish from both types of pond were sampled at the point of retail sale, quality deteriorated to 10(2) - 10(5) pThC g(-1) of chopped fresh fish (mainly flesh and skin contaminated with gut contents); this was due to the practice of the local fishmongers in descaling and chopping up the fish from both types of pond with the same knife and on the same chopping block. Fishmonger education is required to improve their hygienic practices; this should be followed by regular hygiene inspections.
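The quoted "safety factor of approximately 3 orders of magnitude" is a back-of-the-envelope log-ratio of the observed wastewater-fed pond concentration to the WHO guideline, using the figures given in the abstract:

```python
import math

pond_pthc_per_100ml = 1e6   # presumptive thermotolerant coliforms per 100 ml
who_guideline = 1e3         # WHO guideline, faecal coliforms per 100 ml

# Orders of magnitude by which the pond water exceeded the guideline
# while still yielding fish of very good bacteriological quality.
safety_factor_orders = math.log10(pond_pthc_per_100ml / who_guideline)
print(safety_factor_orders)  # 3.0
```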
Sample entropy analysis of cervical neoplasia gene-expression signatures
Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R
2009-01-01
Background We introduce approximate entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene-expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set, we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the Approximate Sample Entropy approach in discerning alterations in complexity in a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
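The underlying statistic can be illustrated with a minimal sample entropy implementation: count template matches of length m and m+1 (Chebyshev distance, tolerance r, self-matches excluded) and take -log of their ratio. This is a generic textbook sketch, not the authors' code; the tolerance r is usually taken as a fraction of the series' standard deviation (here the noisy series has std near 1, so a fixed r = 0.2 suffices for illustration):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D sequence: -log(A/B), where B counts pairs of
    length-m templates within tolerance r and A the same for length m+1."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

regular = np.tile([0.0, 1.0], 50)                     # perfectly repetitive
noisy = np.random.default_rng(0).normal(size=100)     # disordered
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```

A repetitive series scores near zero; a disordered one scores high, which is the ordering the gene-selection criterion above relies on.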
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Technical Reports Server (NTRS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-01-01
We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc(exp -1). The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h(exp -1) Mpc. The power spectrum has slope n approximately equal to -2.1 on small scales (lambda less than or equal 25 h(exp -1) Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h(exp -1) Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (CDM) model (OMEGA h = 0.24, lambda(sub zero) = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h(exp -1) Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma(sub 8) (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both Cosmic Background Explorer Satellite (COBE) and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h(exp -1) Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum. 
Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M(sub lim) greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).
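Slopes such as n approximately -2.1 are read off as power-law fits P(k) proportional to k^n, which reduce to straight-line fits in log-log space. A minimal sketch with synthetic data (the amplitude and k range below are illustrative, not survey values):

```python
import numpy as np

# Synthetic power spectrum P(k) = A * k^n with a known spectral index,
# then recover n by least squares on log10 P vs log10 k.
k = np.logspace(-1.5, 0.0, 20)   # wavenumbers (illustrative units, h/Mpc)
true_n = -2.1
P = 10.0 * k ** true_n

slope, intercept = np.polyfit(np.log10(k), np.log10(P), 1)
print(round(slope, 3))  # -2.1
```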
Average probability that a "cold hit" in a DNA database search results in an erroneous attribution.
Song, Yun S; Patil, Anand; Murphy, Erin E; Slatkin, Montgomery
2009-01-01
We consider a hypothetical series of cases in which the DNA profile of a crime-scene sample is found to match a known profile in a DNA database (i.e., a "cold hit"), resulting in the identification of a suspect based only on genetic evidence. We show that the average probability that there is another person in the population whose profile matches the crime-scene sample but who is not in the database is approximately 2(N - d)p(A), where N is the number of individuals in the population, d is the number of profiles in the database, and p(A) is the average match probability (AMP) for the population. The AMP is estimated by computing the average of the probabilities that two individuals in the population have the same profile. We show further that if a priori each individual in the population is equally likely to have left the crime-scene sample, then the average probability that the database search attributes the crime-scene sample to a wrong person is (N - d)p(A).
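The two approximations derived above can be evaluated directly; the numbers below for N, d, and p(A) are hypothetical, chosen only to show the arithmetic:

```python
# Illustration of the two approximations with hypothetical values.
N = 10_000_000   # individuals in the population
d = 1_000_000    # profiles in the database
p_A = 1e-9       # average match probability (AMP)

# P(another person outside the database also matches) ~= 2 (N - d) p_A
p_other_match = 2 * (N - d) * p_A
# P(the cold hit attributes the sample to the wrong person) ~= (N - d) p_A
p_wrong_attribution = (N - d) * p_A
print(p_other_match, p_wrong_attribution)  # 0.018 0.009
```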
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
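The sample-wise DPCM idea can be sketched for the horizontal-prediction case only (H.264/AVC defines several directional modes; this toy, with made-up names, shows just one): each sample is predicted from its immediate left neighbour rather than from the block-edge samples, and since coding is lossless the reconstruction round-trips exactly:

```python
import numpy as np

def samplewise_dpcm_residual(block, left_col):
    """Horizontal sample-wise DPCM: predict each sample from its immediate
    left neighbour (losslessly reconstructed, hence identical to the
    original), not from the block-edge samples alone."""
    padded = np.hstack([left_col.reshape(-1, 1), block])
    return block - padded[:, :-1]     # residual = sample - left neighbour

def samplewise_dpcm_reconstruct(residual, left_col):
    """Invert the prediction column by column (the pipeline dependency
    mentioned above: column j needs column j-1 reconstructed first)."""
    rec = np.empty_like(residual)
    prev = left_col.astype(residual.dtype)
    for j in range(residual.shape[1]):
        prev = prev + residual[:, j]
        rec[:, j] = prev
    return rec

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(4, 4))   # 4x4 luma block, 8-bit samples
left = rng.integers(0, 256, size=4)         # reconstructed left neighbours

res = samplewise_dpcm_residual(block, left)
print(np.array_equal(samplewise_dpcm_reconstruct(res, left), block))  # True
```

The residuals tend to be smaller than block-based prediction residuals on smooth content, which is where the reported bit-rate reduction comes from.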
van den Beld, Maaike J C; Friedrich, Alexander W; van Zanten, Evert; Reubsaet, Frans A G; Kooistra-Smid, Mirjam A M D; Rossen, John W A
2016-12-01
An inter-laboratory collaborative trial for the evaluation of diagnostics for detection and identification of Shigella species and entero-invasive Escherichia coli (EIEC) was performed. Sixteen Medical Microbiological Laboratories (MMLs) participated. MMLs were interviewed about their diagnostic methods, and a sample panel, consisting of DNA extracts and spiked stool samples with different concentrations of Shigella flexneri, was provided to each MML. The results of the trial showed an enormous variety in the culture-dependent and molecular diagnostic techniques currently used among MMLs. Despite the various molecular procedures, 15 out of 16 MMLs were able to detect Shigella species or EIEC in all the samples provided, showing that the diversity of methods has no effect on the qualitative detection of Shigella flexneri. In the semi-quantitative analysis, in contrast, the minimum and maximum values per sample differed by approximately five threshold cycles (Ct-values) between the MMLs included in the study. This indicates that defining a uniform Ct-value cut-off for notification to health authorities is not advisable. Copyright © 2016 Elsevier B.V. All rights reserved.
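The practical weight of a five-cycle spread can be made concrete: under the usual idealization of perfect PCR efficiency (one doubling per cycle, an assumption not stated in the paper), each Ct unit corresponds to a factor of two in apparent template amount:

```python
# A spread of ~5 threshold cycles between laboratories corresponds, under
# ideal doubling-per-cycle amplification, to a 2**5 = 32-fold apparent
# difference in target concentration for the same sample.
delta_ct = 5
fold_difference = 2 ** delta_ct
print(fold_difference)  # 32
```

A 32-fold apparent spread for identical samples is why a single Ct cut-off across laboratories is judged inadvisable above.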
Ostra, Miren; Ubide, Carlos; Zuriarrain, Juan
2007-02-12
The determination of atrazine in real samples (commercial pesticide preparations and water matrices) shows how Fenton's reagent can be used for analytical purposes when kinetic methodology and multivariate calibration methods are applied. Binary mixtures of atrazine-alachlor and atrazine-bentazone in pesticide preparations have also been resolved. The work shows the way in which interferences and the matrix effect can be modelled. Experimental design has been used to optimize experimental conditions, including the effect of the solvent (methanol) used for extraction of atrazine from the sample. The determination of pesticides in commercial preparations was accomplished without any pre-treatment of the sample apart from evaporation of the solvent; the calibration model was developed for concentration ranges between 0.46 and 11.6 x 10(-5) mol L(-1), with mean relative errors under 4%. Solid-phase extraction through C(18) disks is used for pre-concentration of atrazine in water samples, and the concentration range for determination was established at approximately 4 to 115 microg L(-1). Satisfactory recoveries of atrazine were always obtained.
Investigation of He–W interactions using DiMES on DIII-D
Doerner, R. P.; Rudakov, D. L.; Chrobak, C. P.; ...
2016-01-22
Here, tungsten button samples were exposed to He ELMing H-mode plasma in DIII-D using 2.3 MW of electron cyclotron heating power. Prior to the exposures, the W buttons were exposed to either He or D plasma in PISCES-A for 2000 s at surface temperatures of 225–850 °C to create a variety of surfaces (surface blisters, subsurface nano-bubbles, fuzz). Erosion was spectroscopically measured from each DiMES sample, with the exception of the fuzzy W samples, which showed almost undetectable WI emission. Post-exposure grazing incidence small angle x-ray scattering surface analysis showed the formation of 1.5 nm diameter He bubbles in the surface of W buttons after only a single DIII-D (3 s, ~150 ELMs) discharge, similar to the bubble layer resulting from the 2000 s exposure in PISCES-A. No surface roughening or damage was detected on the samples after approximately 600 ELMs with energy density between 0.04–0.1 MJ m-2.
Genotoxicity and osteogenic potential of sulfated polysaccharides from Caulerpa prolifera seaweed.
Chaves Filho, Gildácio Pereira; de Sousa, Angélica Fernandes Gurgel; Câmara, Rafael Barros Gomes; Rocha, Hugo Alexandre Oliveira; de Medeiros, Silvia Regina Batistuzzo; Moreira, Susana Margarida Gomes
2018-07-15
Marine algae are sources of novel bioactive molecules and present great potential for biotechnological and biomedical applications. Although green algae are the least studied type of seaweed, several of their biological activities have already been described. Here, we investigated the osteogenic potential of sulfated polysaccharide (SP)-enriched samples extracted from the green seaweed Caulerpa prolifera on human mesenchymal stem cells isolated from Wharton's jelly (hMSC-WJ). In addition, the potential genotoxicity of these SPs was determined by the cytokinesis-block micronucleus (CBMN) assay. SP-enriched samples did not show significant cytotoxicity towards hMSC-WJ at concentrations of up to 10 μg/mL after 72 h of exposure. SP enrichment also significantly increased alkaline phosphatase (ALP) activity, promoting calcium accumulation in the extracellular matrix. Among the SP-enriched samples, the CP0.5 subfraction (at 5 μg/mL) presented the most promising results. In this sample, ALP activity was increased by approximately 60%, and calcium accumulation was approximately 6-fold above the negative control, indicating high osteogenic potential. This subfraction also proved to be non-genotoxic according to the CBMN assay, as it did not induce micronuclei. The results of this study highlight, for the first time, the potential of these SPs for the development of new therapies for bone regeneration. Copyright © 2018 Elsevier B.V. All rights reserved.
PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Gorti, Sarma B; Peter, William H
A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since the powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms were developed for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a variation smaller than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
Dosimetric measurements of Onyx embolization material for stereotactic radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Donald A.; Balter, James M.; Chaudhary, Neeraj
2012-11-15
Purpose: Arteriovenous malformations are often treated with a combination of embolization and stereotactic radiosurgery. Concern has been expressed in the past regarding the dosimetric properties of materials used in embolization and the effects that the introduction of these materials into the brain may have on the quality of the radiosurgery plan. To quantify these effects, the authors have taken large volumes of Onyx 34 and Onyx 18 (ethylene-vinyl alcohol copolymer doped with tantalum) and measured the attenuation and interface effects of these embolization materials. Methods: The manufacturer provided large cured volumes (~28 cc) of both Onyx materials. These samples were 8.5 cm in diameter with a nominal thickness of 5 mm. The samples were placed on a block tray above a stack of solid water with an Attix chamber at a depth of 5 cm within the stack. The Attix chamber was used to measure the attenuation. These measurements were made for both 6 and 16 MV beams. Interface effects were measured by placing the sample directly on the solid water stack and varying the thickness of solid water between the sample and the Attix chamber. The computed tomography (CT) numbers for bulk material were measured in a phantom using a wide-bore CT scanner. Results: The transmission through the Onyx materials relative to solid water was approximately 98% and 97% for 16 and 6 MV beams, respectively. The interface effect shows an enhancement of approximately 2% and 1% downstream for 16 and 6 MV beams. CT numbers of approximately 2600-3000 were measured for both materials, which corresponded to an apparent relative electron density (RED) to water, ρ_e^w, of approximately 2.7-2.9 if calculated from the commissioning data of the CT scanner. Conclusions: We performed direct measurements of attenuation and interface effects of Onyx 34 and Onyx 18 embolization materials with large samples.
The introduction of embolization materials affects the dose distribution of a MV therapeutic beam, but should be of negligible consequence for effective thicknesses of less than 8 mm. The measured interface effects are also small, particularly at 6 MV. Large areas of high-density and low-density artifacts can cause errors in dose calculations and need to be identified and resolved during planning.
Unexpected Magnetic Domain Behavior in LTP-MnBi
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, PK; Jin, S; Berkowitz, AE
2013-07-01
Low-temperature-phase MnBi (LTP-MnBi) has attracted much interest as a potential rare-earth-free permanent magnet material because of its high uniaxial magnetocrystalline anisotropy at room temperature, K ≈ 10^7 ergs/cc, and the unusual increase of anisotropy with increasing temperature, with an accompanying increase of coercive force (H_C) with temperature. However, due to the complex Mn-Bi phase diagram, bulk samples of LTP-MnBi with the optimum saturation moment, ~75-76 emu/g, have been achieved only with zone-refined single crystals. We have prepared polycrystalline samples of LTP-MnBi by induction melting and annealing at 300 °C. The moment in 70 kOe is 73.5 emu/g, but H_C is only 50 Oe. This is quite surprising: the high saturation moment indicates the dominating presence of LTP-MnBi, so an H_C of some significant fraction of 2K/M_S ≈ 30 kOe would seem reasonable in this polycrystalline sample. By examining "Bitter" patterns, we show that the sample is composed of ~50-100 μm crystallites. The randomly oriented crystallites exhibit the variety of magnetic domain structures and orientations expected from hexagonal-structured MnBi with its strong uniaxial anisotropy. Clearly, the reversal of magnetization in the sample proceeds by the low-field nucleation of reversed magnetization in each crystallite, rather than by a wall-pinning mechanism. When the annealed sample was milled into fine particles, H_C increased by several orders of magnitude, as expected.
Effect of antimony-oxide on the shielding properties of some sodium-boro-silicate glasses.
Zoulfakar, A M; Abdel-Ghany, A M; Abou-Elnasr, T Z; Mostafa, A G; Salem, S M; El-Bahnaswy, H H
2017-09-01
Some sodium-silicate-boro-antimonate glasses having the molecular composition [(20)Na2O - (20)SiO2 - (60-x)B2O3 - (x)Sb2O3 (where x takes the values 0, 5, … or 20)] have been prepared by the melt quenching method. The melting and annealing temperatures were 1500 and 650 K, respectively. The amorphous nature of the prepared samples was confirmed by X-ray diffraction analysis. Both the experimental and empirical density and molar volume values showed a gradual increase with increasing Sb2O3 content. The empirical densities showed higher values than those obtained experimentally, while the empirical molar volume values appeared lower than those obtained experimentally, which confirms the amorphous nature and randomness character of the studied samples. The experimentally obtained shielding parameters were approximately coincident with those obtained theoretically by applying the WinXCom program. At low gamma-ray energies (0.356 and 0.662 MeV), Sb2O3 has approximately no effect on the total mass attenuation coefficient, while at high energies it acts to increase the total mass attenuation coefficient gradually. The obtained half value layer and mean free path values showed a gradual decrease as Sb2O3 was gradually increased. Also, the total mass attenuation coefficient values obtained between about 0.8 and 3.0 MeV gamma-ray energy showed a slight decrease as gamma-ray photon energy increased. This may be due to the differences between the attenuation coefficients of antimony and boron oxides at various gamma-ray photon energies. However, it can be stated that the addition of Sb2O3 into sodium-boro-silicate glasses increases the gamma-ray attenuation coefficient, and the best sample is the one containing 20 mol% Sb2O3, which performs well at 0.356 and 0.662 MeV gamma-ray energies. Copyright © 2017 Elsevier Ltd. All rights reserved.
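The shielding quantities in the abstract above are related by standard textbook formulas: the half value layer is HVL = ln 2 / μ and the mean free path is MFP = 1/μ, where the linear attenuation coefficient μ is the tabulated mass attenuation coefficient (e.g., from WinXCom) times the glass density. A minimal sketch of these relations; the numerical inputs below (0.077 cm²/g and 3.1 g/cm³) are purely illustrative, not values from the paper:

```python
import math

def linear_attenuation(mass_atten_cm2_per_g, density_g_per_cm3):
    """Linear attenuation coefficient: mu = (mu/rho) * rho, in cm^-1."""
    return mass_atten_cm2_per_g * density_g_per_cm3

def half_value_layer(mu_per_cm):
    """Thickness that halves the photon intensity: HVL = ln(2) / mu."""
    return math.log(2) / mu_per_cm

def mean_free_path(mu_per_cm):
    """Average distance between photon interactions: MFP = 1 / mu."""
    return 1.0 / mu_per_cm

# Hypothetical glass at 0.662 MeV: mu/rho = 0.077 cm^2/g, rho = 3.1 g/cm^3
mu = linear_attenuation(0.077, 3.1)
print(half_value_layer(mu), mean_free_path(mu))
```

Because both HVL and MFP are inversely proportional to μ, any additive (such as Sb2O3) that raises the attenuation coefficient lowers both, consistent with the trend reported above.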
NASA Astrophysics Data System (ADS)
Kolmas, Joanna; Groszyk, Ewa; Piotrowska, Urszula
2015-07-01
In this work, we used the co-precipitation method to synthesize hydroxyapatite (Mn-SeO3-HA) containing both selenium IV (approximately 3.60 wt.%) and manganese II (approximately 0.29 wt.%). Pure hydroxyapatite (HA), hydroxyapatite containing manganese(II) ions (Mn-HA), and hydroxyapatite containing selenite ions alone (SeO3-HA), prepared with the same method, were used as reference materials. The structures and physicochemical properties of all the obtained samples were investigated. PXRD studies showed that the obtained materials were homogeneous and consisted of an apatite phase. Introducing selenites into the hydroxyapatite crystals considerably affects the crystal size and degree of ordering. Experiments with transmission electron microscopy (TEM) showed that Mn-SeO3-HA crystals are very small, needle-like, and tend to form agglomerates. Fourier transform infrared spectroscopy (FT-IR) and solid-state nuclear magnetic resonance (ssNMR) were used to analyze the structure of the obtained material. Preliminary microbiological tests showed that the material demonstrated antibacterial activity against Staphylococcus aureus, yet such properties were not confirmed regarding Escherichia coli. PACS codes: 61, 76, 81
Lesure, Frank G.
1981-01-01
Traces of gold and molybdenum are widely disseminated in an area approximately 35 km long and 10 km wide in northwestern Moore County, N.C. At least 2540 oz. of gold were recovered from 16 or more mines and prospects between 1880 and 1910. One hundred and ninety rock samples out of 244 collected from old gold mines, pyrophyllite deposits and along roads contain gold quantities ranging from 0.02 to 2.4 parts per million. In addition, 43 samples out of the 244 taken contain molybdenum in amounts ranging from 4 to 500 parts per million.
Insight into structural phase transitions from the decoupled anharmonic mode approximation
NASA Astrophysics Data System (ADS)
Adams, Donat J.; Passerone, Daniele
2016-08-01
We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite, and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K. This can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase of its volume for fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect to reciprocal-space sampling is investigated using the polarizable-ion model.
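The core bookkeeping of a decoupled-mode approach can be illustrated in a few lines: once the energy levels of each (possibly anharmonic) normal mode are known, each mode contributes F = -k_B T ln Σ_n exp(-E_n / k_B T), and the decoupling assumption means the mode free energies simply add. This is a schematic of the general idea only, not the authors' implementation; the level ladders below are hypothetical:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def mode_free_energy(levels_eV, T):
    """Helmholtz free energy of one decoupled mode from its level ladder:
    F = -kT * ln( sum_n exp(-E_n / kT) )."""
    z = sum(math.exp(-e / (K_B * T)) for e in levels_eV)
    return -K_B * T * math.log(z)

def total_vibrational_free_energy(modes, T):
    """Decoupled modes are statistically independent, so their free
    energies add."""
    return sum(mode_free_energy(levels, T) for levels in modes)

# Hypothetical level ladders (eV): one harmonic mode, one weakly anharmonic
modes = [[0.01 * (n + 0.5) for n in range(20)],
         [0.03 * (n + 0.5) - 1e-4 * (n + 0.5) ** 2 for n in range(20)]]
print(total_vibrational_free_energy(modes, 300.0))
```

For a purely harmonic ladder this reduces to the familiar closed form ħω/2 + kT ln(1 - e^(-ħω/kT)), which provides a quick sanity check of the summation.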
NASA Technical Reports Server (NTRS)
Berger, Eve L.; Keller, Lindsay P.; Christoffersen, Roy
2016-01-01
Samples returned from the moon and Asteroid Itokawa by NASA's Apollo Missions and JAXA's Hayabusa Mission, respectively, provide a unique record of their interaction with the space environment. Space weathering effects result from micrometeorite impact activity and interactions with the solar wind. While the effects of solar wind interactions, ion implantation and solar flare particle track accumulation, have been studied extensively, the rate at which these effects accumulate in samples on airless bodies has not been conclusively determined. Results of numerical modeling and experimental simulations do not converge with observations from natural samples. We measured track densities and rim thicknesses of three olivine grains from Itokawa and multiple olivine and anorthite grains from lunar soils of varying exposure ages. Samples were prepared for analysis using a Leica EM UC6 ultramicrotome and an FEI Quanta 3D dual beam focused ion beam scanning electron microscope (FIB-SEM). Transmission electron microscope (TEM) analyses were performed on the JEOL 2500SE 200kV field emission STEM. The solar wind damaged rims on lunar anorthite grains are amorphous, lack inclusions, and are compositionally similar to the host grain. The rim width increases as a smooth function of exposure age until it levels off at approximately 180 nm after approximately 20 My (Fig. 1). While solar wind ion damage can only accumulate while the grain is in a direct line of sight to the Sun, solar flare particles can penetrate to mm-depths. To assess whether the track density accurately predicts surface exposure, we measured the rim width and track density in olivine and anorthite from the surface of rock 64455, which was never buried and has a surface exposure age of 2 My based on isotopic measurements. The rim width from 64455 (60-70nm) plots within error of the well-defined trend for solar wind amorphized rims in Fig. 1. 
Measured solar flare track densities accurately reflect the surface exposure of the grains. Track densities correlate with the amorphous rim thicknesses. While the space-weathered rims of anorthite grains are amorphous, the space-weathered rims on both Itokawa and lunar olivine grains show solar wind damaged rims that are not amorphous. Instead, the rims are nanocrystalline with high dislocation densities and sparse inclusions of nanophase Fe metal. The rim thicknesses on the olivine grains also correlate with track density. The Itokawa olivine grains have track densities that indicate surface exposures of approximately 10^5 years. Longer exposures (up to approximately 10^7 years) do not amorphize the rims, as evidenced by lunar soil olivines with high track densities (approximately 10^11 cm^-2). From the combined data, shown in Fig. 1, it is clear that olivine is damaged (but not amorphized) more rapidly by the solar wind compared to anorthite. The olivine damaged rim forms quickly (in approximately 10^6 y) and saturates at approximately 120 nm with longer exposure time. The anorthite damaged rims form more slowly, amorphize, and grow thicker than the olivine rims. This is in agreement with numerical modeling data which predict that solar wind damaged rims on anorthite will be thicker than those on olivine. However, the models predict that both olivine and anorthite rims will amorphize and reach equilibrium widths in less than 10^3 y, in contrast to what is observed for natural samples. Laboratory irradiation experiments, which show rapid formation of fully amorphous and blistered surfaces from simulated solar wind exposures, are also in contrast to observations of natural samples. These results suggest that there is a flux dependence on the type and extent of irradiation damage that develops in olivine.
This flux dependence suggests that great caution be used in extrapolating between high-flux laboratory experiments and the natural case, as demonstrated by. We constrain the space weathering rate through analysis of returned samples. Provided that the track densities and the solar wind damaged rim widths exhibited by the Itokawa grains are typical of the fine-grained regions of Itokawa, then the space weathering rate is on the order of 10^5 y. Space weathering effects in lunar soils saturate within a few My of exposure while those in Itokawa regolith grains formed in approximately 10^5 y. Olivine and anorthite respond differently to solar wind irradiation. The space weathering effects in olivine are particularly difficult to reconcile with laboratory irradiation studies and numerical models. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind amorphization of different minerals on airless bodies.
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
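The building block of CTP sampling, the zeros of the one-dimensional Chebyshev polynomial of the first kind, and their tensor-product grid can be sketched as follows. This is a minimal illustration of the sampling construction only, not the authors' metamodelling code:

```python
import math
from itertools import product

def chebyshev_zeros(n):
    """Zeros of the degree-n Chebyshev polynomial of the first kind on
    [-1, 1]: x_k = cos((2k - 1) * pi / (2n)), k = 1..n."""
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

def ctp_samples(n, dim):
    """Chebyshev tensor-product (CTP) grid: the Cartesian product of the
    one-dimensional Chebyshev zeros in each of `dim` dimensions."""
    nodes = chebyshev_zeros(n)
    return list(product(nodes, repeat=dim))

grid = ctp_samples(4, 2)
print(len(grid))  # 4^2 = 16 sample points
```

The CCM samples described above would then be a random subset of this CTP grid, e.g. `random.sample(grid, k)`, which is what allows the 'simplex' polynomial metamodels to be built from fewer points than the full tensor product.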
Enrichment of Thorium (Th) and Lead (Pb) in the early Galaxy
NASA Astrophysics Data System (ADS)
Aoki, Wako; Honda, Satoshi
2010-03-01
We have been determining abundances of Th, Pb and other neutron-capture elements in metal-deficient cool giant stars to constrain the enrichment of heavy elements by the r- and s-processes. Our current sample covers the metallicity range between [Fe/H] = -2.5 and -1.0. (1) The abundance ratios of Pb/Fe and Pb/Eu of most of our stars are approximately constant, and no increase of these ratios with increasing metallicity is found. This result suggests that the Pb abundances of our sample are determined by the r-process with no or little contribution of the s-process. (2) The Th/Eu abundance ratios of our sample show no significant scatter, and the average is lower by 0.2 dex in the logarithmic scale than the solar-system value. This result indicates that the actinides production by the r-process does not show large dispersion, even though r-process models suggest high sensitivity of the actinides production to the nucleosynthesis environment.
Capaldi, Deborah M; Pears, Katherine C; Patterson, Gerald R; Owen, Lee D
2003-04-01
A prospective model of parenting and externalizing behavior spanning 3 generations (G1, G2, and G3) was examined for young men from an at-risk sample of young adult men (G2) who were in approximately the youngest one third of their cohort to become fathers. It was first predicted that the young men in G2 who had children the earliest would show high levels of antisocial behavior. Second, it was predicted that G1 poor parenting practices would show both a direct association with the G2 son's subsequent parenting and a mediated effect via his development of antisocial and delinquent behavior by adolescence. The young fathers had more arrests and were less likely to have graduated from high school than the other young men in the sample. Findings were most consistent with the interpretation that there was some direct effect of parenting from G1 to G2 and some mediated effect via antisocial behavior in G2.
Schulz, R; Newsom, J; Mittelmark, M; Burton, L; Hirsch, C; Jackson, S
1997-01-01
We propose that two related sources of variability in studies of caregiving health effects contribute to an inconsistent pattern of findings: the sampling strategy used and the definition of what constitutes caregiving. Samples are often recruited through self-referral and are typically comprised of caregivers experiencing considerable distress. In this study, we examine the health effects of caregiving in large population-based samples of spousal caregivers and controls using a wide array of objective and self-report physical and mental health outcome measures. By applying different definitions of caregiving, we show that the magnitude of health effects attributable to caregiving can vary substantially, with the largest negative health effects observed among caregivers who characterize themselves as being strained. From an epidemiological perspective, our data show that approximately 80% of persons living with a spouse with a disability provide care to their spouse, but only half of care providers report mental or physical strain associated with caregiving.
Elasticity of microscale volumes of viscoelastic soft matter by cavitation rheometry
NASA Astrophysics Data System (ADS)
Pavlovsky, Leonid; Ganesan, Mahesh; Younger, John G.; Solomon, Michael J.
2014-09-01
Measurement of the elastic modulus of soft, viscoelastic liquids with cavitation rheometry is demonstrated for specimens as small as 1 μl by application of elasticity theory and experiments on semi-dilute polymer solutions. Cavitation rheometry is the extraction of the elastic modulus of a material, E, by measuring the pressure necessary to create a cavity within it [J. A. Zimberlin, N. Sanabria-DeLong, G. N. Tew, and A. J. Crosby, Soft Matter 3, 763-767 (2007)]. This paper extends cavitation rheometry in three ways. First, we show that viscoelastic samples can be approximated with the neo-Hookean model provided that the time scale of the cavity formation is measured. Second, we extend the cavitation rheometry method to accommodate cases in which the sample size is no longer large relative to the cavity dimension. Finally, we implement cavitation rheometry to show that the theory accurately measures the elastic modulus of viscoelastic samples with volumes ranging from 4 ml to as low as 1 μl.
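For a sample that is large relative to the cavity, the neo-Hookean cavitation relation of Zimberlin et al. gives a critical pressure P_c = (5/6)E + 2γ/r, where γ is the surface tension and r the needle radius; inverting it recovers E from a measured cavitation pressure. A minimal sketch of this large-sample limit only (the paper's finite-sample and viscoelastic extensions are not included, and the numbers below are hypothetical):

```python
def elastic_modulus_from_cavitation(p_critical, surface_tension, needle_radius):
    """Invert the large-sample neo-Hookean cavitation relation
    P_c = (5/6) * E + 2 * gamma / r to recover the elastic modulus E.
    All quantities in SI units (Pa, N/m, m)."""
    return (p_critical - 2.0 * surface_tension / needle_radius) * 6.0 / 5.0

# Hypothetical measurement: 2.5 kPa critical pressure, water-like surface
# tension (0.070 N/m), 100-micron needle radius
E = elastic_modulus_from_cavitation(2500.0, 0.070, 100e-6)
print(E)
```

The surface-tension term matters most for small needles: at r = 100 μm it already contributes 1.4 kPa of the hypothetical 2.5 kPa critical pressure above, which is why soft microscale specimens require the correction.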
The distribution of galaxies within the 'Great Wall'
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1992-01-01
The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension, is examined. The Great Wall is extracted from the sample and analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed, and the local correlation length is estimated to be approximately 15/h Mpc, about 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated by large 'voids', at least to a redshift of about 0.3. The peaks might represent the intersections of their approximately 5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Wall shows that l ≈ 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.
Late-Quaternary recharge determined from chloride in shallow groundwater in the central Great Plains
Macfarlane, P.A.; Clark, J.F.; Davisson, M.L.; Hudson, G.B.; Whittemore, Donald O.
2000-01-01
An extensive suite of isotopic and geochemical tracers in groundwater has been used to provide hydrologic assessments of the hierarchy of flow systems in aquifers underlying the central Great Plains (southeastern Colorado and western Kansas) of the United States and to determine the late Pleistocene and Holocene paleotemperature and paleorecharge record. Hydrogeologic and geochemical tracer data permit classification of the samples into late Holocene, late Pleistocene-early Holocene, and much older Pleistocene groups. Paleorecharge rates calculated from the Cl concentration in the samples show that recharge rates were at least twice the late Holocene rate during late Pleistocene-early Holocene time, which is consistent with their relative depletion in 16O and D. Noble gas (Ne, Ar, Kr, Xe) temperature calculations confirm that these older samples represent a recharge environment approximately 5 °C cooler than late Holocene values. These results are consistent with the global climate models that show a trend toward a warmer, more arid climate during the Holocene. © 2000 University of Washington.
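Paleorecharge estimation from chloride typically uses the chloride mass balance: at steady state the chloride delivered by precipitation is concentrated by evapotranspiration, so R = P · Cl_p / Cl_gw, and lower groundwater chloride implies higher recharge. A hedged sketch of this standard relation with hypothetical inputs (the paper's actual values are not reproduced here); halving the groundwater chloride doubles the inferred recharge, as in the late Pleistocene-early Holocene samples:

```python
def recharge_cmb(precip_mm_per_yr, cl_precip_mg_per_L, cl_gw_mg_per_L):
    """Chloride mass balance: recharge R = P * Cl_p / Cl_gw (mm/yr).
    Lower groundwater chloride implies higher recharge."""
    return precip_mm_per_yr * cl_precip_mg_per_L / cl_gw_mg_per_L

# Hypothetical values: 450 mm/yr precipitation, 0.5 mg/L Cl in rainfall
late_holocene = recharge_cmb(450.0, 0.5, 18.0)  # higher Cl -> lower recharge
late_pleist   = recharge_cmb(450.0, 0.5, 9.0)   # half the Cl -> double recharge
print(late_pleist / late_holocene)  # 2.0
```

The method assumes chloride is conservative and that atmospheric deposition is the only chloride source, assumptions that hold only approximately in real aquifer systems.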
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2015-05-01
Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ∼ N(0,W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
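The fixed-W Wakefield approximation that the paper generalizes has a simple closed form: with an estimated logOR β̂ and standard error √V, the approximate BF for H0 (β = 0) against H1 (β ~ N(0, W)) is √((V+W)/V) · exp(-z²W / (2(V+W))), where z = β̂/√V. A minimal sketch (the association numbers below are illustrative only; the paper's extension replaces the fixed W with a prior distribution and averages this quantity over it):

```python
import math

def wakefield_abf(beta_hat, se, W):
    """Wakefield's approximate Bayes factor for H0 (beta = 0) against
    H1 (beta ~ N(0, W)), given a logOR estimate and its standard error.
    ABF = sqrt((V + W)/V) * exp(-z^2/2 * W/(V + W)), with V = se^2.
    Values below 1 favor the alternative H1."""
    V = se * se
    z = beta_hat / se
    return math.sqrt((V + W) / V) * math.exp(-0.5 * z * z * W / (V + W))

# Hypothetical association: logOR = 0.10, SE = 0.03, prior variance W = 0.04
print(wakefield_abf(0.10, 0.03, 0.04))
```

Equivalently, the ABF is the ratio of the two marginal densities of β̂, N(β̂; 0, V) / N(β̂; 0, V+W), which makes its sensitivity to the choice of W explicit and motivates placing a distribution on W as the paper does.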
NASA Astrophysics Data System (ADS)
Tasić, Viša; Jovašević-Stojanović, Milena; Vardoulakis, Sotiris; Milošević, Novica; Kovačević, Renata; Petrović, Jelena
2012-07-01
Accurate monitoring of indoor mass concentrations of particulate matter is very important for health risk assessment, as people in developed countries spend approximately 90% of their time indoors. The direct-reading aerosol monitoring device Turnkey OSIRIS Particle Monitor (Model 2315) and the European reference low-volume sampler LVS3 (Sven/Leckel LVS3), with size-selective inlets for PM10 and PM2.5 fractions, were used to assess the comparability of available optical and gravimetric methods for particulate matter characterization in indoor air. Simultaneous 24-hour samples were collected in an indoor environment over 60 sampling periods in the town of Bor, Serbia. The 24-hour mean PM10 levels from the OSIRIS monitor were well correlated with the LVS3 levels (R2 = 0.87) and did not show statistically significant bias. The 24-hour mean PM2.5 levels from the OSIRIS monitor were moderately correlated with the LVS3 levels (R2 = 0.71), but showed statistically significant bias. The results suggest that the OSIRIS monitor provides sufficiently accurate measurements for PM10. The OSIRIS monitor underestimated the indoor PM10 concentrations by approximately 12% relative to the reference LVS3 sampler. The accuracy of PM10 measurements could be further improved through empirical adjustment. For the fine fraction of particulate matter, PM2.5, it was found that the OSIRIS monitor underestimated indoor concentrations by approximately 63% relative to the reference LVS3 sampler. This could lead to exposure misclassification in health effects studies relying on PM2.5 measurements collected with this instrument in indoor environments.
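The "empirical adjustment" mentioned above is commonly a linear calibration of the optical readings against the gravimetric reference, fitted by least squares to co-located measurements. A dependency-free sketch with hypothetical co-located data (not the study's measurements):

```python
def ols_fit(x, y):
    """Ordinary least-squares line y ≈ a + b*x, returned as (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def adjust(raw_readings, a, b):
    """Apply the empirical calibration to raw optical readings."""
    return [a + b * r for r in raw_readings]

# Hypothetical co-located 24-h means (ug/m^3): optical monitor vs reference
optical   = [20.0, 30.0, 40.0, 50.0, 60.0]
reference = [23.0, 34.5, 46.0, 57.5, 69.0]
a, b = ols_fit(optical, reference)
print(adjust([45.0], a, b))
```

A slope b > 1 with intercept near zero corresponds to the monitor systematically underestimating the reference, the pattern reported for PM10 above; the much larger PM2.5 bias would likely also need a fraction-specific calibration rather than a single correction factor.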
Sun, Xiangyu; Ma, Tingting; Yu, Jing; Huang, Weidong; Fang, Yulin; Zhan, Jicheng
2018-02-15
The copper contents in vineyard soil, grape must and wine, and the relationship among them, in the Huaizhuo Basin Region, China, were investigated. The results showed that the copper pollution status in vineyard soils, grapes and wines in the investigated area is under control, with only 4 surface soil (0-20 cm) samples over the maximum residue limit (MRL) and no grape or wine samples over the MRL. Different vineyards, grape varieties, vine ages, and training systems all significantly influenced the copper contents in the vineyard soils, grapes and wines. Additionally, the copper levels in the vineyard soils, grapes and wines were all correlated to some degree. In wine samples, the copper contents ranged from 0.52 to 663 μg/L, which is only approximately one percent of the level found in grapes and one ten-thousandth of that found in soils. Of the wine samples, red wines showed a significantly higher copper content than white wines, while in the red/white grape and soil samples, no significant differences were observed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Initiation and propagation of mixed mode fractures in granite and sandstone
NASA Astrophysics Data System (ADS)
Rück, Marc; Rahner, Roman; Sone, Hiroki; Dresen, Georg
2017-10-01
We investigate mixed mode fracture initiation and propagation in experimentally deformed granite and sandstone. We performed a series of asymmetric loading tests to induce fractures in cylindrical specimens at confining pressures up to 20 MPa. Loading was controlled using acoustic emission (AE) feedback control, which allows studying quasi-static fracture propagation for several hours. Location of acoustic emissions reveals distinct differences in spatial-temporal fracture evolution between granite and sandstone samples. Before reaching peak stress in experiments performed on granite, axial fractures initiate first at the edge of the indenter and then propagate through the entire sample. Secondary inclined fractures develop during softening of the sample. In sandstone, inclined shear fractures nucleate at peak stress and propagate through the specimen. AE source type analysis shows complex fracturing in both materials with pore collapse contributing significantly to fracture growth in sandstone samples. We compare the experimental results with numerical models to analyze stress distribution and energy release rate per unit crack surface area in the samples at different stages during fracture growth. We thereby show that for both rock types the energy release rate increases approximately linearly during fracture propagation. The study illuminates how different material properties modify fracture initiation direction under similar loading conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevcik, R. S.; Hyman, D. A.; Basumallich, L.
2013-01-01
A technique for carbohydrate analysis of bioprocess samples has been developed, providing reduced analysis time compared to current practice in the biofuels R&D community. The Thermo Fisher CarboPac SA10 anion-exchange column enables isocratic separation of monosaccharides, sucrose and cellobiose in approximately 7 minutes. Additionally, use of a low-volume (0.2 mL) injection valve in combination with a high-volume detection cell minimizes the extent of sample dilution required to bring sugar concentrations into the linear range of the pulsed amperometric detector (PAD). Three laboratories, representing academia, industry, and government, participated in an interlaboratory study that analyzed twenty-one opportunistic samples representing biomass pretreatment, enzymatic saccharification, and fermentation samples. The technique's robustness, linearity, and interlaboratory reproducibility were evaluated and showed excellent-to-acceptable characteristics. Additionally, quantitation by the CarboPac SA10/PAD was compared with the current practice method utilizing an HPX-87P/RID. While these two methods showed good agreement, a statistical comparison found significant quantitation differences between them, highlighting the difference between selective and universal detection modes.
NASA Astrophysics Data System (ADS)
Mesbah, Mohsen; Faraji, Ghader; Bushroa, A. R.
2016-03-01
Microstructural evolution and mechanical properties of nanostructured 1060 aluminum alloy tubes processed by the tubular-channel angular pressing (TCAP) process were investigated using electron back-scattered diffraction (EBSD), transmission electron microscopy (TEM) and nanoindentation analyses. EBSD scans revealed a homogeneous ultrafine-grained microstructure after the third pass of the TCAP process. The mean grain sizes of the TCAP-processed tubes were refined to 566 nm, 500 nm and 480 nm after the first, second and third passes, respectively. The results showed that after three TCAP passes, high-angle grain boundaries (HAGBs) comprised 78% of all boundaries, compared with approximately 20% HAGBs in the sample processed by a single pass. The TEM inspection afforded an appreciation of the role of very low-angle misorientation boundaries in the process of microstructural refinement. Nanoindentation results showed that hardness was lowest for the unprocessed sample and highest for the sample processed by three TCAP passes, indicating the highest resistance of the material. In addition, the modulus of elasticity of the TCAP-processed samples was greater than that of the unprocessed sample.
NASA Technical Reports Server (NTRS)
Federspiel, Martin; Sandage, Allan; Tammann, G. A.
1994-01-01
The observational selection bias properties of the large Mathewson-Ford-Buchhorn (MFB) sample of galaxies are demonstrated by showing that the apparent Hubble constant incorrectly increases outward when determined using Tully-Fisher (TF) photometric distances that are uncorrected for bias. It is further shown that the value of H(sub 0) so determined is also multivalued at a given redshift when it is calculated by the TF method using galaxies with different line widths. The method of removing this unphysical contradiction is developed following the model of the bias set out in Paper II. The model developed further here shows that the appropriate TF magnitude of a galaxy that is drawn from a flux-limited catalog not only is a function of line width but, even in the most idealistic cases, requires a triple-entry correction depending on line width, apparent magnitude, and catalog limit. Using the distance-limited subset of the data, it is shown that the mean intrinsic dispersion of a bias-free TF relation is high. The dispersion depends on line width, decreasing from sigma(M) = 0.7 mag for galaxies with rotational velocities less than 100 km s(exp -1) to sigma(M) = 0.4 mag for galaxies with rotational velocities greater than 250 km s(exp -1). These dispersions are so large that the random errors of the bias-free TF distances are too gross to detect any peculiar motions of individual galaxies, but taken together the data show again the offset of 500 km s(exp -1) found both by Dressler & Faber and by MFB for galaxies in the direction of the putative Great Attractor, but described now in a different way. The maximum amplitude of the bulk streaming motion at the Local Group is approximately 500 km s(exp -1), but the perturbation dies out, approaching the Machian frame defined by the CMB at a distance of approximately 80 Mpc (v approximately 4000 km s(exp -1)).
This decay to zero perturbation at v approximately 4000 km s(exp -1) argues against existing models with a single attractor at approximately 4500 km s(exp -1) (the Great Attractor model) pulling the local region. Rather, the cause of the perturbation appears to be the well-known clumpy mass distribution within 4000 km s(exp -1) in the busy directions of Hydra, Centaurus, Antlia and Dorado, as postulated earlier (Tammann & Sandage 1985).
Aberer, Andre J; Stamatakis, Alexandros; Ronquist, Fredrik
2016-01-01
Sampling tree space is the most challenging aspect of Bayesian phylogenetic inference. The sheer number of alternative topologies is problematic by itself. In addition, the complex dependency between branch lengths and topology increases the difficulty of moving efficiently among topologies. Current tree proposals are fast but sample new trees using primitive transformations or re-mappings of old branch lengths. This reduces acceptance rates and presumably slows down convergence and mixing. Here, we explore branch proposals that do not rely on old branch lengths but instead are based on approximations of the conditional posterior. Using a diverse set of empirical data sets, we show that most conditional branch posteriors can be accurately approximated via a Γ distribution. We empirically determine the relationship between the logarithmic conditional posterior density, its derivatives, and the characteristics of the branch posterior. We use these relationships to derive an independence sampler for proposing branches with an acceptance ratio of ~90% on most data sets. This proposal samples branches between 2× and 3× more efficiently than traditional proposals with respect to the effective sample size per unit of runtime. We also compare the performance of standard topology proposals with hybrid proposals that use the new independence sampler to update those branches that are most affected by the topological change. Our results show that hybrid proposals can sometimes noticeably decrease the number of generations necessary for topological convergence. Inconsistent performance gains indicate that branch updates are not the limiting factor in improving topological convergence for the currently employed set of proposals. However, our independence sampler might be essential for the construction of novel tree proposals that apply more radical topology changes. © The Author(s) 2015.
Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
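The independence sampler described above can be sketched in miniature: a branch-length proposal is drawn from a Γ approximation of the conditional posterior and accepted with the usual Metropolis-Hastings ratio for independence proposals. This is a hedged illustration, not the authors' implementation; the function names and the shape/rate parameterization are assumptions.

```python
import math
import random

def log_gamma_pdf(x, shape, rate):
    # Log density of a Gamma(shape, rate) distribution at x > 0.
    return (shape * math.log(rate) - math.lgamma(shape)
            + (shape - 1) * math.log(x) - rate * x)

def independence_sampler_step(branch_len, log_posterior, shape, rate):
    """One Metropolis-Hastings step proposing a new branch length from a
    Gamma approximation of the conditional posterior, independent of the
    current value. Returns the (possibly unchanged) branch length."""
    proposal = random.gammavariate(shape, 1.0 / rate)
    # MH log-ratio for an independence proposal: posterior ratio times
    # the reversed ratio of proposal densities.
    log_alpha = (log_posterior(proposal) - log_posterior(branch_len)
                 + log_gamma_pdf(branch_len, shape, rate)
                 - log_gamma_pdf(proposal, shape, rate))
    if math.log(random.random()) < log_alpha:
        return proposal
    return branch_len
```

When the Γ approximation matches the conditional posterior well, the log-ratio is near zero and almost every proposal is accepted, which is the mechanism behind the ~90% acceptance rates reported above.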
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernanda Sakamoto, Apostolos Doukas, William Farinelli, Zeina Tannous, Michelle D. Shinn, Stephen Benson, Gwyn P. Williams, H. Dylla, Richard Anderson
2011-12-01
The success of permanent laser hair removal suggests that selective photothermolysis (SP) of sebaceous glands, another part of hair follicles, may also have merit. About 30% of sebum consists of fats with copious CH2 bond content. SP was studied in vitro, using free electron laser (FEL) pulses at an infrared CH2 vibrational absorption wavelength band. Absorption spectra of natural and artificially prepared sebum were measured from 200 nm to 3000 nm, to determine wavelengths potentially able to target sebaceous glands. The Jefferson National Accelerator superconducting FEL was used to measure photothermal excitation of aqueous gels, artificial sebum, pig skin, and human scalp and forehead skin (sebaceous sites). In vitro skin samples were exposed to FEL pulses from 1620 to 1720 nm, spot diameter 7-9.5 mm, with exposure through a cold (4 °C) sapphire window in contact with the skin. Exposed and control tissue samples were stained using hematoxylin and eosin (H&E), and nitroblue tetrazolium chloride (NBTC) staining was used to detect thermal denaturation. Natural and artificial sebum both had absorption peaks near 1210, 1728, 1760, 2306 and 2346 nm. Laser-induced heating of artificial sebum was approximately twice that of water at 1710 and 1720 nm, and about 1.5x higher in human sebaceous glands than in water. Thermal camera imaging showed transient focal heating near sebaceous hair follicles. Histologically, skin samples exposed to approximately 1700 nm, approximately 100-125 ms pulses showed evidence of selective thermal damage to sebaceous glands. Sebaceous glands were positive for NBTC staining, without evidence of selective loss in samples exposed to the laser. Epidermis was undamaged in all samples. Conclusions: SP of sebaceous glands appears to be feasible. Potentially, optical pulses at approximately 1720 nm or approximately 1210 nm, delivered with a large beam diameter and appropriate skin cooling in approximately 0.1 s, may provide an alternative treatment for acne.
Variation in Major Depressive Disorder Onset by Place of Origin Among U.S. Latinos.
Lee, Sungkyu; Park, Yangjin
2017-09-01
Using a nationally representative sample of 2514 U.S. Latinos, this study examined the extent to which major depressive disorder (MDD) onset differs by place of origin and the factors associated with it. The Kaplan-Meier method estimated the survival and hazard functions for MDD onset by place of origin, and Cox proportional hazards models identified its associative factors. Approximately 13% of the sample had experienced MDD in their lifetimes. Cuban respondents showed the highest survival function, while Puerto Ricans showed the lowest. With the entire sample, the smoothed hazard function showed that the risk of MDD onset peaked in the late 20s and early 80s. Puerto Rican respondents showed the highest risk of MDD during their 20s and 30s, whereas Cuban respondents showed a relatively stable pattern over time. The results from the Cox proportional hazards model indicated that age, sex, and marital status were significantly related to MDD onset (p < .05). In addition, the effect of U.S.-born status on MDD onset was greater among Mexican respondents than among Puerto Ricans. Findings from the present study demonstrate that different Latino subgroups experience different and unique patterns of MDD onset over time. Future research should account for the role of immigration status in examining MDD onset.
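The Kaplan-Meier estimator behind the survival functions above has a compact product-limit form: at each observed onset age, the survival estimate is multiplied by one minus the fraction of still-at-risk respondents experiencing onset at that age. A minimal stdlib sketch (not the study's code; variable names are illustrative):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are ages at MDD onset or
    censoring; `events` marks onset (1) versus censoring (0). Returns
    (time, estimated survival probability) pairs at onset times."""
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        onsets = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        at_risk = sum(1 for ti in times if ti >= t)
        if onsets:
            surv *= 1.0 - onsets / at_risk  # product-limit update
            curve.append((t, surv))
    return curve
```

Censored respondents (no lifetime MDD at interview) leave the risk set without lowering the curve, which is how the estimator accommodates the ~87% of the sample with no onset.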
Coastal wave measurements during passage of tropical storm Amy
NASA Technical Reports Server (NTRS)
Morris, W. D.
1977-01-01
Aerial photographic and laser profilometer data of waves generated by tropical storm Amy are presented. The data mission consisted primarily of two legs, one in the direction of the wind waves, and the second along the direction of swell propagation, using Jennette's Pier at Nags Head, North Carolina, as a focal point. At flight time, Amy's center was 512 nmi from shore and had maximum winds of 60 knots. The storm's history is presented, along with a satellite photograph, showing the extent of the storm on the day of the flight. Flight ground tracks are presented along with sample aerial photographs of the wave conditions showing approximate wavelength and direction. Sample wave energy spectra are presented both from the laser profilometer onboard the aircraft, and from the Corps of Engineers Research Center (CERC) shore gauge at Nags Head, North Carolina.
Stress-strain behavior under static loading in Gd123 high-temperature superconductors at 77 K
NASA Astrophysics Data System (ADS)
Fujimoto, Hiroyuki; Murakami, Akira; Teshima, Hidekazu; Morita, Mitsuru
2013-10-01
Mechanical properties of melt-grown GdBa2Cu3Ox (Gd123) superconducting samples with 10 wt.% Ag2O and 0.5 wt.% Pt were evaluated at 77 K through flexural tests on specimens cut from the samples, in order to estimate the mechanical properties of the Gd123 material without metal substrates, buffer layers or stabilization layers. We discuss the mechanical properties, namely the Young's modulus and flexural strength, together with the stress-strain behavior at 77 K. The results show that the flexural strength and fracture strain of Gd123 at 77 K are approximately 100 MPa and 0.1%, respectively, and that the origins of fracture are defects such as pores, impurities and non-superconducting compounds. We also show that the Young's modulus of Gd123 is estimated to be 160-165 GPa.
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In this paper, a polynomial approximation is presented by which Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated by this polynomial. The input space is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not be identified in the case of the nonlinear FEM model directly.
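Latin Hypercube Sampling itself is simple to sketch: each input variable's range is cut into as many equal strata as there are simulation runs, each stratum is sampled exactly once, and the strata are randomly permuted so pairings across variables vary. A stdlib-only illustration on the unit hypercube (an assumption for illustration, not the paper's implementation):

```python
import random

def latin_hypercube(n_runs, n_vars, seed=0):
    """Latin Hypercube Sample on [0, 1]^n_vars: every variable is drawn
    exactly once from each of its n_runs equal-width strata, and the
    strata are shuffled independently per variable."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # One uniform point per stratum [k/n_runs, (k+1)/n_runs).
        col = [(k + rng.random()) / n_runs for k in range(n_runs)]
        rng.shuffle(col)
        columns.append(col)
    # Transpose columns into a list of n_runs sample points.
    return [[columns[j][i] for j in range(n_vars)] for i in range(n_runs)]
```

The stratification guarantees that every one-dimensional marginal is covered evenly even for modest run counts, which is why LHS is preferred over plain Monte Carlo when each FEM simulation run is expensive.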
Online adaptive decision trees: pattern classification and function approximation.
Basak, Jayanta
2006-09-01
Recently we have shown that decision trees can be trained in the online adaptive (OADT) mode (Basak, 2004), leading to better generalization scores. OADTs were bottlenecked by the fact that they can handle only two-class classification tasks with a given structure. In this article, we provide an architecture based on OADT, ExOADT, which can handle multiclass classification tasks and is able to perform function approximation. ExOADT is structurally similar to OADT, extended with a regression layer. We also show that ExOADT is not only capable of adapting the local decision hyperplanes in the nonterminal nodes but also has the potential to smoothly change the structure of the tree depending on the data samples. We provide the learning rules, based on steepest gradient descent, for the new model ExOADT. Experimentally we demonstrate the effectiveness of ExOADT in pattern classification and function approximation tasks. Finally, we briefly discuss the relationship of ExOADT with other classification models.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We believe that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
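The virtual-sample step reduces to perturbing each training image with zero-mean Gaussian noise while keeping pixel values in range. A hedged stdlib sketch (the function name, noise level, and flat grayscale-list representation are illustrative assumptions, not the paper's code):

```python
import random

def make_virtual_samples(images, sigma=10.0, copies=2, seed=0):
    """Generate virtual training samples by adding i.i.d. Gaussian pixel
    noise to each original image, a crude stand-in for unseen variation
    in illumination, expression, and posture. Images are flat lists of
    grayscale values in [0, 255]."""
    rng = random.Random(seed)
    virtual = []
    for img in images:
        for _ in range(copies):
            # Perturb every pixel and clip back into the valid range.
            noisy = [min(255.0, max(0.0, p + rng.gauss(0.0, sigma)))
                     for p in img]
            virtual.append(noisy)
    return virtual
```

The augmented set (originals plus virtual samples) would then feed the collaborative-representation solver; only the augmentation step is sketched here.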
Chiew, Mark; Graedel, Nadine N; Miller, Karla L
2018-07-01
Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Falqui, Andrea; Corrias, Anna; Gass, Mhairi; Mountjoy, Gavin
2009-04-01
Magnetic nanocomposite materials consisting of 5.5 wt% Fe-Co alloy nanoparticles in a silica aerogel matrix, with compositions Fe(x)Co(1-x) of x = 0.50 and 0.67, have been synthesized by the sol-gel method. The high-resolution transmission electron microscopy images show nanoparticles consisting of single crystal grains of body-centered cubic Fe-Co alloy, with typical crystal grain diameters of approximately 4 and 7 nm for Fe(0.5)Co(0.5) and Fe(0.67)Co(0.33) samples, respectively. The energy dispersive X-ray (EDX) spectra summed over areas of the samples gave compositions Fe(x)Co(1-x) with x = 0.48 +/- 0.06 and 0.68 +/- 0.05. The EDX spectra obtained with the 1.5 nm probe positioned at the centers of approximately 20 nanoparticles gave slightly lower concentrations of Fe, with means of x = 0.43 +/- 0.01 and x = 0.64 +/- 0.02, respectively. The Fe(0.5)Co(0.5) sample was studied using electron energy loss spectroscopy (EELS), and EELS spectra summed over whole nanoparticles gave x = 0.47 +/- 0.06. The EELS spectra from analysis profiles of nanoparticles show a distribution of Fe and Co that is homogeneous, i.e., x = 0.5, within a precision of at best +/-0.05 in x and +/-0.4 nm in position. The present microscopy results have not shown the presence of a thin layer of iron oxide, but this might be at the limit of detectability of the methods.
ERIC Educational Resources Information Center
Edwards, Lynne K.; Meyers, Sarah A.
Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…
Resonance Shift of Single-Axis Acoustic Levitation
NASA Astrophysics Data System (ADS)
Xie, Wen-Jun; Wei, Bing-Bo
2007-01-01
The resonance shift due to the presence and movement of a rigid spherical sample in a single-axis acoustic levitator is studied with the boundary element method on the basis of a two-cylinder model of the levitator. The introduction of a sample into the sound pressure nodes, where it is usually levitated, reduces the resonant interval Hn (n is the mode number) between the reflector and emitter. The larger the sample radius, the greater the resonance shift. When the sample moves along the symmetry axis, the resonance interval Hn varies in an approximately periodic manner, reaching minima near the pressure nodes and maxima near the pressure antinodes. This suggests an oscillation of the resonance interval around its minimum if the stably levitated sample is slightly perturbed. The dependence of the resonance shift on the sample radius R and position h for the single-axis acoustic levitator is compared with Leung's theory for a closed rectangular chamber, and shows good agreement.
Biodiversity of benthic macroinvertebrates in Air Terjun Asahan, Asahan, Melaka, Malaysia
NASA Astrophysics Data System (ADS)
Nurhafizah-Azwa, S.; Ahmad A., K.
2016-11-01
A study of benthic macroinvertebrate diversity was conducted at Air Terjun Asahan, Asahan, Melaka. Five stations were selected with distance intervals of approximately 500 metres. Three replicates of benthic macroinvertebrate and water samples were taken. The results classified Air Terjun Asahan into class II, which indicates good water quality based on the WQI recommended by the Department of Environment. A total of 1 phylum, 2 classes, 6 orders, 30 families, and 2183 individuals were successfully sampled and recorded. The average values of the Shannon Diversity Index, H' (2.19), Pielou Evenness Index, J' (0.30), and Margalef Richness Index, DMG (3.77) indicated that Air Terjun Asahan was in moderate condition and that the distribution of macroinvertebrates was uniform between stations. A correlation test showed that the WQI had a strong relationship with the diversity indices involved. The BMWP and FBI scores also indicated good water quality. A CCA test was conducted to relate environmental factors to benthic macroinvertebrate distribution. The presence of Leptophlebiidae, Baetidae, Heptageniidae and Chironomidae in high abundance showed their potential as biological indicators of a clean ecosystem.
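The three indices reported above have closed forms: Shannon diversity H' = -Σ p_i ln p_i, Pielou evenness J' = H'/ln S, and Margalef richness D_MG = (S - 1)/ln N, where p_i are relative abundances, S the number of taxa, and N the total individual count. A minimal sketch of the computation (not the study's code):

```python
import math

def diversity_indices(counts):
    """Shannon diversity H', Pielou evenness J', and Margalef richness
    D_MG from a list of per-taxon abundance counts."""
    n = sum(counts)                          # total individuals, N
    s = sum(1 for c in counts if c > 0)      # taxa present, S
    h = -sum((c / n) * math.log(c / n) for c in counts if c > 0)
    j = h / math.log(s) if s > 1 else 0.0    # evenness undefined for S <= 1
    d_mg = (s - 1) / math.log(n)
    return h, j, d_mg
```

Perfectly even abundances give J' = 1; the study's J' of 0.30 against an H' of 2.19 reflects a few dominant families among the 30 recorded.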
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
The effects of hurricane Rita and subsequent drought on alligators in southwest Louisiana.
Lance, Valentine A; Elsey, Ruth M; Butterstein, George; Trosclair, Phillip L; Merchant, Mark
2010-02-01
Hurricane Rita struck the coast of southwest Louisiana in September 2005. The storm generated an enormous tidal surge of approximately four meters in height that inundated many thousands of acres of the coastal marsh with full strength seawater. The initial surge resulted in the deaths of a number of alligators and severely stressed those who survived. In addition, a prolonged drought (the lowest rainfall in 111 years of recorded weather data) following the hurricane resulted in highly saline conditions that persisted in the marsh for several months. We had the opportunity to collect 11 blood samples from alligators located on Holly Beach less than a month after the hurricane, but were unable to collect samples from alligators on Rockefeller Wildlife Refuge until February 2006. Conditions at Rockefeller Refuge did not permit systematic sampling, but a total of 201 samples were collected on the refuge up through August 2006. The blood samples were analyzed for sodium, potassium, chloride, osmolality, and corticosterone. Blood samples from alligators sampled on Holly Beach in October 2005, showed a marked elevation in plasma osmolality, sodium, chloride, potassium, corticosterone, and an elevated heterophil/lymphocyte ratio. Blood samples from alligators on Rockefeller Refuge showed increasing levels of corticosterone as the drought persisted and elevated osmolality and electrolytes. After substantial rainfall in July and August, these indices of osmotic stress returned to within normal limits. (c) 2009 Wiley-Liss, Inc.
Thormar, Hans G; Gudmundsson, Bjarki; Eiriksdottir, Freyja; Kil, Siyoen; Gunnarsson, Gudmundur H; Magnusson, Magnus Karl; Hsu, Jason C; Jonsson, Jon J
2013-04-01
The causes of imprecision in microarray expression analysis are poorly understood, limiting the use of this technology in molecular diagnostics. Two-dimensional strandness-dependent electrophoresis (2D-SDE) separates nucleic acid molecules on the basis of length and strandness, i.e., double-stranded DNA (dsDNA), single-stranded DNA (ssDNA), and RNA·DNA hybrids. We used 2D-SDE to measure the efficiency of cDNA synthesis and its importance for the imprecision of an in vitro transcription-based microarray expression analysis. The relative amount of double-stranded cDNA formed in replicate experiments that used the same RNA sample template was highly variable, ranging between 0% and 72% of the total DNA. Microarray experiments showed an inverse relationship between the difference between sample pairs in probe variance and the relative amount of dsDNA. Approximately 15% of probes showed between-sample variation (P < 0.05) when the dsDNA percentage was between 12% and 35%. In contrast, only 3% of probes showed between-sample variation when the dsDNA percentage was 69% and 72%. Replication experiments of the 35% dsDNA and 72% dsDNA samples were used to separate sample variation from probe replication variation. The estimated SD of the sample-to-sample variation and of the probe replicates was lower in 72% dsDNA samples than in 35% dsDNA samples. Variation in the relative amount of double-stranded cDNA synthesized can be an important component of the imprecision in T7 RNA polymerase-based microarray expression analysis. © 2013 American Association for Clinical Chemistry
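Separating sample-to-sample variation from probe-replicate variation, as the replication experiments above do, amounts to a one-way variance decomposition per probe. A simplified stdlib sketch (illustrative only; the abstract does not specify the paper's exact estimator):

```python
import statistics

def variance_components(replicates_per_sample):
    """Split probe intensity variance into a between-sample component and
    a within-sample (replicate) component, given a mapping of sample id
    to its list of replicate measurements."""
    sample_means = {s: statistics.mean(v)
                    for s, v in replicates_per_sample.items()}
    grand = statistics.mean(sample_means.values())
    # Spread of the per-sample means around the grand mean.
    between = statistics.mean((m - grand) ** 2
                              for m in sample_means.values())
    # Average spread of replicates within each sample.
    within = statistics.mean(statistics.pvariance(v)
                             for v in replicates_per_sample.values())
    return between, within
```

In the paper's terms, a large within component tracks probe-replicate imprecision, while the between component tracks true sample-to-sample differences such as the cDNA strandness effect.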
Boiling point measurement of a small amount of brake fluid by thermocouple and its application.
Mogami, Kazunari
2002-09-01
This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles had dropped from about 230 °C to approximately 140 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.
NASA Technical Reports Server (NTRS)
Naranong, N.
1980-01-01
The flexural strength and average modulus of graphite fiber reinforced composites were tested before and after exposure to 0.5 MeV electron radiation and 1.33 MeV gamma radiation using a three-point bending test (ASTM D-790). The irradiation was conducted on vacuum-treated samples. Graphite fiber/epoxy (T300/5208), graphite fiber/polyimide (C6000/PMR 15) and graphite fiber/polysulfone (C6000/P1700) composites, after being irradiated with 0.5 MeV electron radiation in vacuum up to 5000 Mrad, show increases in stress and modulus of approximately 12% compared with the controls. Graphite fiber/epoxy (T300/5208 and AS/3501-6), after being irradiated with 1.33 MeV gamma radiation up to 360 Mrad, shows increases in stress and modulus of approximately 6% at 167 Mrad compared with the controls. The results suggest that the graphite fiber composites studied should withstand the high-energy radiation in a space environment for a considerable time, e.g., over 30 years.
A scanning tunneling microscope for a dilution refrigerator.
Marz, M; Goll, G; Löhneysen, H v
2010-04-01
We present the main features of a home-built scanning tunneling microscope that has been attached to the mixing chamber of a dilution refrigerator. It allows scanning tunneling microscopy and spectroscopy measurements down to the base temperature of the cryostat, T approximately 30 mK, and in applied magnetic fields up to 13 T. The topography of both highly ordered pyrolytic graphite and the dichalcogenide superconductor NbSe(2) has been imaged with atomic resolution down to T approximately 50 mK as determined from a resistance thermometer adjacent to the sample. As a test for successful operation in magnetic fields, the flux-line lattice of superconducting NbSe(2) in low magnetic fields has been studied. The lattice constant of the Abrikosov lattice shows the expected field dependence proportional to 1/√B, and measurements in the scanning tunneling spectroscopy mode clearly show the superconducting density of states with Andreev bound states in the vortex core.
Differential expression profiling of serum proteins and metabolites for biomarker discovery
NASA Astrophysics Data System (ADS)
Roy, Sushmita Mimi; Anderle, Markus; Lin, Hua; Becker, Christopher H.
2004-11-01
A liquid chromatography-mass spectrometry (LC-MS) proteomics and metabolomics platform is presented for quantitative differential expression analysis. Proteome profiles obtained from 1.5 μL of human serum show ~5000 de-isotoped and quantifiable molecular ions. Approximately 1500 metabolites are observed from 100 μL of serum. Quantification is based on reproducible sample preparation and linear signal intensity as a function of concentration. The platform is validated using human serum, but is generally applicable to all biological fluids and tissues. The median coefficient of variation (CV) for ~5000 proteomic and ~1500 metabolomic molecular ions is approximately 25%. For the case of C-reactive protein, results agree with quantification by immunoassay. The independent contributions of two sources of variance, namely sample preparation and LC-MS analysis, are respectively quantified as 20.4 and 15.1% for the proteome, and 19.5 and 13.5% for the metabolome, for median CV values. Furthermore, biological diversity for ~20 healthy individuals is estimated by measuring the variance of ~6500 proteomic and metabolomic molecular ions in sera for each sample; the median CV is 22.3% for the proteome and 16.7% for the metabolome. Finally, quantitative differential expression profiling is applied to a clinical study comparing healthy individuals and rheumatoid arthritis (RA) patients.
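The reproducibility figure of merit used throughout, the median coefficient of variation across molecular ions, is straightforward to compute. A sketch under the assumption that each ion's replicate intensities are stored as one row (names are illustrative, not the platform's software):

```python
import statistics

def median_cv(intensity_table):
    """Median coefficient of variation across molecular ions. Each row of
    `intensity_table` holds the replicate intensity measurements for one
    ion; CV = sample standard deviation / mean."""
    cvs = [statistics.stdev(row) / statistics.mean(row)
           for row in intensity_table]
    return statistics.median(cvs)
```

Reporting the median rather than the mean CV keeps the summary robust to the minority of low-abundance ions with very noisy intensities.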
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
Paleobiology and comparative morphology of a late Neandertal sample from El Sidron, Asturias, Spain.
Rosas, Antonio; Martínez-Maza, Cayetana; Bastir, Markus; García-Tabernero, Antonio; Lalueza-Fox, Carles; Huguet, Rosa; Ortiz, José Eugenio; Julià, Ramón; Soler, Vicente; de Torres, Trinidad; Martínez, Enrique; Cañaveras, Juan Carlos; Sánchez-Moral, Sergio; Cuezva, Soledad; Lario, Javier; Santamaría, David; de la Rasilla, Marco; Fortea, Javier
2006-12-19
Fossil evidence from the Iberian Peninsula is essential for understanding Neandertal evolution and history. Since 2000, a new sample approximately 43,000 years old has been systematically recovered at the El Sidrón cave site (Asturias, Spain). Human remains almost exclusively compose the bone assemblage. All of the skeletal parts are preserved, and there is a moderate occurrence of Middle Paleolithic stone tools. A minimum number of eight individuals are represented, and ancient mtDNA has been extracted from dental and osteological remains. Paleobiology of the El Sidrón archaic humans fits the pattern found in other Neandertal samples: a high incidence of dental hypoplasia and interproximal grooves, yet no traumatic lesions are present. Moreover, unambiguous evidence of human-induced modifications has been found on the human remains. Morphologically, the El Sidrón humans show a large number of Neandertal lineage-derived features even though certain traits place the sample at the limits of Neandertal variation. Integrating the El Sidrón human mandibles into the larger Neandertal sample reveals a north-south geographic patterning, with southern Neandertals showing broader faces with increased lower facial heights. The large El Sidrón sample therefore augments the European evolutionary lineage fossil record and supports ecogeographical variability across Neandertal populations.
Ghysels, Pieter; Li, Xiaoye S.; Rouet, François-Henry; ...
2016-10-27
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups of up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.
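The randomized sampling used for HSS construction is in the spirit of the randomized range finder: probe the matrix with random vectors, orthonormalize the result, and factor the small projected matrix. A generic NumPy sketch of that idea (not the STRUMPACK algorithm itself, which works blockwise on frontal matrices with interpolative decompositions):

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, rng=None):
    """Randomized range finder + SVD: compress a (nearly) low-rank matrix
    from a small number of random matrix-vector products."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for the sample
    # Project A onto the basis and factor the small matrix.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-5 matrix: the compression should be near machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_lowrank(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For a matrix of exact rank r, the random sketch captures the column space almost surely, so the relative error is at the level of floating-point roundoff.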
Effects of salt loading and flow blockage on the WIPP shrouded probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandra, S.; Ortiz, C.A.; McFarland, A.R.
1993-08-01
The shrouded probes at the WIPP site operate in a salt aerosol environment that can cause a buildup of salt deposits on exposed surfaces of the probes that, in turn, could produce changes in the sampling performance of the probes. At Station A, three probes had been operated for a period of approximately 2 1/2 years when they were inspected with a remote television camera. There were visible deposits of unknown thickness on the probes, so WIPP removed the probes for inspection and cleanup. Measurements were made on the probes and they showed the buildups to be approximately 2.5 mm thick on the most critical dimension of a shrouded probe, which is the inside diameter of the inner probe. For reference, the diameter of a clean probe is 30 mm. The sampling performance of this particular shrouded probe had been previously evaluated in a wind tunnel at the Aerosol Technology Laboratory (ATL) of Texas A&M University for two free stream velocities (14 and 21 m/s) and three particle sizes (5, 10 and 15 µm AED).
Ehala, S; Vassiljeva, I; Kuldvee, R; Vilu, R; Kaljurand, M
2001-09-01
Capillary electrophoresis (CE) can be a valuable tool for on-line monitoring of bioprocesses. Production of organic acids by phosphorus-solubilizing bacteria and fermentation of UHT milk were monitored and controlled by use of a membrane-interfaced dialysis device and a home-made microsampler for a capillary electrophoresis unit. Use of this specially designed sampling device enabled rapid consecutive injections without interruption of the high voltage. No additional sample preparation was required. The time resolution of monitoring in this particular work was approximately 2 h, but could be reduced to 2 min. Analytes were detected at low µg mL⁻¹ levels with a reproducibility of approximately 10%. To demonstrate the potential of CE in processes of biotechnological interest, results from monitoring phosphate solubilization by bacteria were submitted to qualitative and quantitative analysis. Fermentation experiments on UHT milk showed that monitoring of the processes by CE can provide good resolution of complex mixtures, although for more specific, detailed characterization the identification of individual substances is needed.
NASA Technical Reports Server (NTRS)
Landahl, Marten T.
1988-01-01
Experiments on wall-bounded shear flows (channel flows and boundary layers) have indicated that the turbulence in the region close to the wall exhibits a characteristic intermittently formed pattern of coherent structures. For a quantitative study of coherent structures it is necessary to make use of conditional sampling. One particularly successful sampling technique is the Variable Integration Time Averaging technique (VITA) first explored by Blackwelder and Kaplan (1976). In this, an event is assumed to occur when the short time variance exceeds a certain threshold multiple of the mean square signal. The analysis presented removes some assumptions in the earlier models in that the effects of pressure and viscosity are taken into account in an approximation based on the assumption that the near-wall structures are highly elongated in the streamwise direction. The appropriateness of this is suggested by the observations but is also self consistent with the results of the model which show that the streamwise dimension of the structure grows with time, so that the approximation should improve with the age of the structure.
Further evidence for cosmological evolution of the fine structure constant.
Webb, J K; Murphy, M T; Flambaum, V V; Dzuba, V A; Barrow, J D; Churchill, C W; Prochaska, J X; Wolfe, A M
2001-08-27
We describe the results of a search for time variability of the fine structure constant alpha using absorption systems in the spectra of distant quasars. Three large optical data sets and two 21 cm and mm absorption systems provide four independent samples, spanning approximately 23% to 87% of the age of the universe. Each sample yields a smaller alpha in the past and the optical sample shows a 4 sigma deviation: Δα/α = (-0.72 ± 0.18) × 10⁻⁵ over the redshift range 0.5
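Combining independent samples of Δα/α is usually done with an inverse-variance weighted mean, and the quoted optical result by itself already sits 4σ from zero (0.72/0.18 = 4). A minimal sketch of the arithmetic:

```python
import math

def inverse_variance_mean(values, sigmas):
    """Inverse-variance weighted combination of independent measurements."""
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    sigma = 1.0 / math.sqrt(sum(w))
    return mean, sigma

# The quoted optical result alone: Delta alpha/alpha = (-0.72 +/- 0.18) x 1e-5,
# i.e. |mean| / sigma = 4 standard deviations from zero.
significance = abs(-0.72e-5) / 0.18e-5
print(significance)
```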
Using nearly full-genome HIV sequence data improves phylogeny reconstruction in a simulated epidemic
Yebra, Gonzalo; Hodcroft, Emma B.; Ragonnet-Cronin, Manon L.; Pillay, Deenan; Brown, Andrew J. Leigh; Fraser, Christophe; Kellam, Paul; de Oliveira, Tulio; Dennis, Ann; Hoppe, Anne; Kityo, Cissy; Frampton, Dan; Ssemwanga, Deogratius; Tanser, Frank; Keshani, Jagoda; Lingappa, Jairam; Herbeck, Joshua; Wawer, Maria; Essex, Max; Cohen, Myron S.; Paton, Nicholas; Ratmann, Oliver; Kaleebu, Pontiano; Hayes, Richard; Fidler, Sarah; Quinn, Thomas; Novitsky, Vladimir; Haywards, Andrew; Nastouli, Eleni; Morris, Steven; Clark, Duncan; Kozlakidis, Zisis
2016-01-01
HIV molecular epidemiology studies analyse viral pol gene sequences because of their availability, but whole-genome sequencing allows the use of other genes. We aimed to determine which gene(s) provide the best approximation to the real phylogeny by analysing a simulated epidemic (created as part of the PANGEA_HIV project) with a known transmission tree. We sub-sampled a simulated dataset of 4662 sequences into different combinations of genes (gag-pol-env, gag-pol, gag, pol, env and partial pol) and sampling depths (100%, 60%, 20% and 5%), generating 100 replicates for each case. We built maximum-likelihood trees for each combination using RAxML (GTR + Γ), and compared their topologies to that of the corresponding true tree using CompareTree. The accuracy of the trees increased significantly with the length of the sequences used, with the gag-pol-env datasets showing the best performance and the gag and partial pol sequences the worst. The lowest sampling depths (20% and 5%) greatly reduced the accuracy of tree reconstruction and showed high variability among replicates, especially when using the shortest gene datasets. In conclusion, using longer sequences derived from nearly whole genomes will improve the reliability of phylogenetic reconstruction. With low sample coverage, results can be highly variable, particularly when based on short sequences. PMID:28008945
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, Eugene H.
2008-10-01
The origin of the approximate 24-hour urine sampling protocol used at Hanford for routine bioassay is attributed to an informal study done in the mid-1940s. While the actual data were never published and have been lost, anecdotal recollections by staff involved in the initial bioassay program design and administration suggest that the sampling protocol had a solid scientific basis. Numerous alternate methods for normalizing partial day samples to represent a total 24-hour collection have since been proposed and used, but no one method is obviously preferred.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
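For the ordinary Pearson correlation, the classic Fisher-transformation sample-size formula is a close analogue of the ICC procedure evaluated here, and illustrates why such approximations can fall short of an exact calculation. A sketch of that textbook version (not the paper's exact ICC method), using only the Python standard library:

```python
import math
from statistics import NormalDist

def fisher_z(r):
    """Fisher z transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def n_for_correlation(r0, r1, alpha=0.05, power=0.8):
    """Approximate sample size to detect correlation r1 against null r0
    (two-sided test), using Var(z) ~ 1/(n - 3) under the transformation."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    n = ((za + zb) / (fisher_z(r1) - fisher_z(r0))) ** 2 + 3
    return math.ceil(n)

print(n_for_correlation(0.0, 0.3))  # classic textbook case
```

The ICC analogue replaces the transformation and its variance with forms that depend on the group size, which is one reason the approximation degrades in some configurations.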
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. M. Capron
2008-08-08
The 100-F-46 french drain consisted of a 1.5 to 3 m long, vertically buried, gravel-filled pipe that was approximately 1 m in diameter. Also included in this waste site was a 5 cm cast-iron pipeline that drained condensate from the 119-F Stack Sampling Building into the 100-F-46 french drain. In accordance with this evaluation, the confirmatory sampling results support a reclassification of this site to No Action. The current site conditions achieve the remedial action objectives and the corresponding remedial action goals established in the Remaining Sites ROD. The results of confirmatory sampling show that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also demonstrate that residual contaminant concentrations are protective of groundwater and the Columbia River.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
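A minimal modern sketch of the same idea, assuming a Chebyshev (minimax) linear program, uniform clamped knots, and SciPy's linprog in place of the 1972 implementation: the coefficients are constrained nonnegative, and the exact B-spline basis integrals enforce unit area.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

def smooth_histogram(sample, n_bins=20, n_knots=10, k=3):
    """Fit a nonnegative, area-one B-spline to a histogram by a minimax LP."""
    density, edges = np.histogram(sample, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Uniform knot vector, clamped at both ends of the data range.
    inner = np.linspace(edges[0], edges[-1], n_knots)
    t = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]
    n_coef = len(t) - k - 1
    # Design matrix: B[j, i] = B_i(centers[j]).
    B = np.column_stack([
        BSpline(t, np.eye(n_coef)[i], k)(centers) for i in range(n_coef)
    ])
    # Exact integral of each basis function: (t[i+k+1] - t[i]) / (k + 1).
    areas = (t[k + 1:] - t[:n_coef]) / (k + 1)
    # Variables: coefficients c >= 0 and the Chebyshev error bound u.
    # minimize u  subject to  -u <= B c - density <= u  and  areas . c = 1
    c_obj = np.r_[np.zeros(n_coef), 1.0]
    A_ub = np.block([[B, -np.ones((n_bins, 1))],
                     [-B, -np.ones((n_bins, 1))]])
    b_ub = np.r_[density, -density]
    A_eq = np.r_[areas, 0.0].reshape(1, -1)
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_coef + 1))
    return BSpline(t, res.x[:n_coef], k), res

rng = np.random.default_rng(0)
spl, res = smooth_histogram(rng.normal(size=500))
```

Nonnegativity comes for free from the nonnegative coefficients, since B-spline basis functions are themselves nonnegative.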
Proton-Induced X-Ray Emission Analysis of Crematorium Emissions
NASA Astrophysics Data System (ADS)
Ali, Salina; Nadareski, Benjamin; Yoskowitz, Joshua; Labrake, Scott; Vineyard, Michael
2014-09-01
There has been considerable debate in recent years about possible mercury emissions from crematoria due to amalgam tooth restorations. We have performed a proton-induced X-ray emission (PIXE) analysis of aerosol and soil samples taken near the Vale Cemetery Crematorium in Schenectady, NY, to address this concern. The aerosol samples were collected on the roof of the crematorium using a nine-stage cascade impactor that separates the particulate matter by aerodynamic diameter and deposits it onto thin Kapton foils. The soil samples were collected at several different distances from the crematorium and compressed into pellets with a hydraulic press. The Kapton foils containing the aerosol samples and the soil pellets were bombarded with 2.2-MeV protons from the 1.1-MV tandem Pelletron accelerator in the Union College Ion-Beam Analysis Laboratory. We measured significant concentrations of sulfur, phosphorus, potassium, calcium, and iron, but essentially no mercury in the aerosol samples. The lower limit of detection for airborne mercury in this experiment was approximately 0.2 ng/m³. The PIXE analysis of the soil samples showed the presence of elements commonly found in soil (Si, K, Ca, Ti, Mn, Fe), but no trace of mercury. Union College Department of Physics and Astronomy.
Herschel Discovery of a New class of Cold, Faint Debris Discs
NASA Technical Reports Server (NTRS)
Eiroa, C.; Marshall, J. P.; Mora, A.; Krivov, A. V.; Montesinos, B.; Absil, O.; Ardila, D.; Arevalo, M.; Augereau, J. -Ch.; Bayo, A.;
2012-01-01
We present Herschel PACS 100 and 160 micron observations of the solar-type stars alpha Men, HD 88230 and HD 210277, which form part of the FGK stars sample of the Herschel Open Time Key Programme (OTKP) DUNES (DUst around NEarby Stars). Our observations show small infrared excesses at 160 micron for all three stars. HD 210277 also shows a small excess at 100 micron, while the 100 micron fluxes of alpha Men and HD 88230 agree with the stellar photospheric predictions. We attribute these infrared excesses to a new class of cold, faint debris discs. alpha Men and HD 88230 are spatially resolved in the PACS 160 micron images, while HD 210277 is point-like at that wavelength. The projected linear sizes of the extended emission lie in the range from approximately 115 to ≤250 AU. The blackbody temperatures estimated from the 100 and 160 micron fluxes are below approximately 22 K, while the fractional luminosity of the cold dust is L(dust)/L(star) ≈ 10⁻⁶, close to the luminosity of the Solar System's Kuiper belt. These debris discs are the coldest and faintest discs discovered so far around mature stars and cannot easily be explained by invoking "classical" debris disc models.
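A two-band blackbody temperature of the kind quoted here can be recovered by numerically inverting the ratio of Planck functions at the two wavelengths. A sketch, assuming pure blackbody emission (no dust emissivity law) and rounded physical constants:

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI units, rounded

def planck_nu(wavelength_m, temperature_k):
    """Planck spectral radiance B_nu at the given wavelength and temperature."""
    nu = C / wavelength_m
    return (2 * H * nu ** 3 / C ** 2) / np.expm1(H * nu / (KB * temperature_k))

def color_temperature(f100_over_f160, lam1=100e-6, lam2=160e-6):
    """Blackbody temperature implied by a 100/160 micron flux ratio.
    The ratio is monotone in T, so a bracketing root find suffices."""
    g = lambda T: planck_nu(lam1, T) / planck_nu(lam2, T) - f100_over_f160
    return brentq(g, 2.0, 2000.0)
```

A disc near 22 K has a 100/160 micron blackbody flux ratio below one, i.e. rising toward 160 micron, which is why these cold discs show up mainly in the longer band.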
Valero, E; Sanz, J; Martínez-Castro, I
2001-06-01
Direct thermal desorption (DTD) has been used as a technique for extracting volatile components of cheese as a preliminary step to their gas chromatographic (GC) analysis. In this study, it is applied to different cheese varieties: Camembert, blue, Chaumes, and La Serena. Volatiles are also extracted using other techniques such as simultaneous distillation-extraction and dynamic headspace. Separation and identification of the cheese components are carried out by GC-mass spectrometry. Approximately 100 compounds are detected in the examined cheeses. The described results show that DTD is fast, simple, and easy to automate; requires only a small amount of sample (approximately 50 mg); and affords quantitative information about the main groups of compounds present in cheeses.
Examination of the Chayes-Kruskal procedure for testing correlations between proportions
Kork, J.O.
1977-01-01
The Chayes-Kruskal procedure for testing correlations between proportions uses a linear approximation to the actual closure transformation to provide a null value, pij, against which an observed closed correlation coefficient, rij, can be tested. It has been suggested that a significant difference between pij and rij would indicate a nonzero covariance relationship between the ith and jth open variables. In this paper, the linear approximation to the closure transformation is described in terms of a matrix equation. Examination of the solution set of this equation shows that estimation of, or even the identification of, significant nonzero open correlations is essentially impossible even if the number of variables and the sample size are large. The method of solving the matrix equation is described in the appendix. ?? 1977 Plenum Publishing Corporation.
[Stimulation of labour with oxytocin and ventouse deliveries are inadequately documented].
Lindved, Birgitte Freilev; Kierkegaard, Ole; Anhøj, Jacob
2014-09-15
A retrospective sample of 180 records from four regional hospitals and five university hospitals in Denmark was collected and the documentation for use of oxytocin in augmentation of labour and ventouse deliveries according to the national guidelines was registered. Only approximately half of the elements in the national guidelines were documented. This shows that there is a potential for improvement in the ongoing Danish national quality improvement project Safe Deliveries (Sikre Fødsler).
Survey of pesticide poisoning in Sri Lanka
Jeyaratnam, J.; Seneviratne, R. S. de Alwis; Copplestone, J. F.
1982-01-01
This study included a sample survey of the clinical records of patients admitted to the different hospitals in Sri Lanka, and showed that approximately 13 000 patients are admitted to hospital annually for pesticide poisoning and that each year 1000 of them die. Suicidal attempts account for 73% of the total, and occupational and accidental poisoning accounts for 24.9%. It is recommended that urgent action be taken to minimize the extent of the problem. PMID:6982784
Nutrient Enhancement of Fruit and Effects of Storage Conditions
1998-05-01
Cubed, peeled apples, approximately 0.25 in (0.6 cm). Ingredients: apples, high fructose corn syrup, ascorbic and citric acid, (to... addition, a field study of the products also showed high acceptability. Samples stored at 40, 70 and 100 °F were tested at zero time, 1, 3, 6, 9, and 12... foods of good nutrition and high sensory acceptability with a reduced cost of processing. Such foods will deliver a high density of nutrients
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Acronyms: NUC, Non-Uniformity Correction; RMSE, Root Mean Squared Error; RSD, Relative Standard Deviation; S3NUC, Static Scene Statistical Non-Uniformity... Deviation (RSD), which normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b). The RSD plot shows that after a sample size of approximately 10, the different photocount values and the inclusion
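The RSD defined above is a one-liner; a small helper for reference (the sample standard deviation is assumed here, which the report may or may not use):

```python
import numpy as np

def rsd_percent(x):
    """Relative standard deviation: RSD = (sigma / mu) * 100."""
    x = np.asarray(x, dtype=float)
    return float(np.std(x, ddof=1) / np.mean(x) * 100.0)

print(rsd_percent([9.0, 11.0]))  # sigma = sqrt(2), mu = 10
```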
Tanada, H; Ikemoto, T; Masutani, R; Tanaka, H; Takubo, T
2014-02-01
In this study, we evaluated the performance of the ADVIA 120 hematology system for cerebrospinal fluid (CSF) assay. Cell counts and leukocyte differentials in CSF were examined with the ADVIA 120 hematology system, while simultaneously confirming an effective hemolysis agent for automated CSF cell counts. The detection limits of both white blood cell (WBC) counts and red blood cell (RBC) counts measured by the ADVIA 120 hematology system were as low as 2 cells/µL (10⁻⁶ L). The WBC count was linear up to 9850 cells/µL, and the RBC count was linear up to approximately 20 000 cells/µL. The intrarun reproducibility indicated good precision. The leukocyte differential of CSF cells, performed by the ADVIA 120 hematology system, showed good correlation with the microscopic procedure. The VersaLyse hemolysis solution efficiently lysed the samples without interfering with cell counts and leukocyte differentials, even in a sample that included approximately 50 000/µL RBC. These data show that the ADVIA 120 hematology system correctly measured the WBC count and leukocyte differential in CSF. The VersaLyse hemolysis solution is considered optimal for hemolysis treatment of CSF when measuring cell counts and differentials with the ADVIA 120 hematology system. © 2013 John Wiley & Sons Ltd.
Evolution of nitrogen species in landfill leachates under various stabilization states.
Zhao, Renzun; Gupta, Abhinav; Novak, John T; Goldsmith, C Douglas
2017-11-01
In this study, nitrogen species in landfill leachates under various stabilization states were investigated with emphasis on organic nitrogen. Ammonium nitrogen was found to be approximately 1300 mg/L in leachates from younger landfill units (less than 10 years old), and approximately 500 mg/L in leachates from older landfill units (up to 30 years old). The concentration and aerobic biodegradability of organic nitrogen decreased with landfill age. A size distribution study showed that most organic nitrogen in landfill leachates is <1 kDa. The Lowry protein concentration (mg/L-N) was analyzed and showed a strong correlation with the total organic nitrogen (TON, mg/L-N; R² = 0.88 and 0.98 for untreated and treated samples, respectively). The slopes of the regression curves of untreated (protein = 0.45 × TON) and treated (protein = 0.31 × TON) leachates indicated that the protein is more biodegradable than the other organic nitrogen species in landfill leachates. XAD-8 resin was employed to isolate the hydrophilic fraction of leachate samples, and it was found that the proportion of the hydrophilic fraction, in terms of organic nitrogen, decreased with landfill age. Solid-state ¹⁵N nuclear magnetic resonance (NMR) was utilized to identify the nitrogen species. Proteinaceous materials were found to be readily biodegradable, while heterocyclic nitrogen species were found to be resistant to biodegradation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Online Reinforcement Learning Using a Probability Density Estimation.
Agostini, Alejandro; Celaya, Enric
2017-01-01
Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
Brevet, Julien; Claret, Francis; Reiller, Pascal E
2009-10-01
Although high compositional heterogeneity is expected for humic substances, their complexation properties do not seem to depend greatly on their origins. Information on differences in the structure of these complexes is scarce. To help fill this gap, a study of the spectral and temporal evolution of the Eu(III) luminescence in humic substance (HS) complexes is presented. Seven different extracts, namely Suwannee River fulvic acid (SRFA) and humic acid (SRHA), and Leonardite HA (LHA) from the International Humic Substances Society (USA), humic acid from Gorleben (GohyHA) and from the Kleiner Kranichsee bog (KFA, KHA) in Germany, and purified commercial Aldrich HA (PAHA), were put in contact with Eu(III). Eu(III)-HS time-resolved luminescence properties were compared with those of aqueous Eu³⁺ at pH 5. Using an excitation wavelength of 394 nm, the typical bi-exponential luminescence decay for Eu(III)-HS complexes is common to all the samples. The components τ₁ and τ₂ are of the same order of magnitude for all the samples, i.e., 40
NASA Technical Reports Server (NTRS)
Treiman, Allan H.; Bish, David L.; Vaniman, David T.; Chipera, Steve J.; Blake, David F.; Ming, Doug W.; Morris, Richard V.; Bristow, Thomas F.; Morrison, Shaunna M.; Baker, Michael B.;
2016-01-01
The Windjana drill sample, a sandstone of the Dillinger member (Kimberley formation, Gale Crater, Mars), was analyzed by CheMin X-ray diffraction (XRD) in the MSL Curiosity rover. From Rietveld refinements of its XRD pattern, Windjana contains the following: sanidine (21 wt%, approximately Or95); augite (20%); magnetite (12%); pigeonite; olivine; plagioclase; amorphous and smectitic material (approximately 25%); and percent levels of others including ilmenite, fluorapatite, and bassanite. From mass balance on the Alpha Proton X-ray Spectrometer (APXS) chemical analysis, the amorphous material is Fe rich with nearly no other cations, like ferrihydrite. The Windjana sample shows little alteration and was likely cemented by its magnetite and ferrihydrite. From ChemCam Laser-Induced Breakdown Spectrometer (LIBS) chemical analyses, Windjana is representative of the Dillinger and Mount Remarkable members of the Kimberley formation. LIBS data suggest that the Kimberley sediments include at least three chemical components. The most K-rich targets have 5.6% K2O, approximately 1.8 times that of Windjana, implying a sediment component with greater than 40% sanidine, e.g., a trachyte. A second component is rich in mafic minerals, with little feldspar (like a shergottite). A third component is richer in plagioclase and in Na2O, and is likely to be basaltic. The K-rich sediment component is consistent with APXS and ChemCam observations of K-rich rocks elsewhere in Gale Crater. The source of this sediment component was likely volcanic. The presence of sediment from many igneous sources, in concert with Curiosity's identifications of other igneous materials (e.g., mugearite), implies that the northern rim of Gale Crater exposes a diverse igneous complex, at least as diverse as that found in similar-age terranes on Earth.
Senger, Jenna-Lynn; Chandran, Geethan; Kanthan, Rani
2014-01-01
To reconsider the routine plastic surgical practice of requesting histopathological evaluation of tissue from gynecomastia. The present study was a retrospective histopathological review (15-year period [1996 to 2012]) involving gynecomastia tissue samples received at the pathology laboratory in the Saskatoon Health Region (Saskatchewan). The Laboratory Information System (LIS) identified all specimens using the key search words "gynecomastia", "gynaecomastia", "gynecomazia" and "gynaecomazia". A literature review to identify all cases of incidentally discovered malignancies in gynecomastia tissue specimens over a 15-year period (1996 to present) was undertaken. The 15-year LIS search identified a total of 452 patients, including two cases of pseudogynecomastia (0.4%). Patients' ages ranged from five to 92 years and 43% of the cases were bilateral (28% left sided, 29% right sided). The weight of the specimens received ranged from 0.2 g to 1147.2 g. All cases showed no significant histopathological concerns. The number of tissue blocks sampled ranged from one to 42, averaging four blocks/case (approximately $105/case), resulting in a cost of approximately $3,200/year, with a 15-year expenditure of approximately $48,000. The literature review identified a total of 15 incidental findings: ductal carcinoma in situ (12 cases), atypical ductal hyperplasia (two cases) and infiltrating ductal carcinoma (one case). In the context of evidence-based literature, and because no significant pathological findings were detected in this particular cohort of 452 cases with 2178 slides, the authors believe it is time to re-evaluate whether routine histopathological examination of tissue from gynecomastia remains necessary. The current climate of health care budget fiscal restraints warrants reassessment of the current policies and practices of sending gynecomastia tissue samples for routine histopathological examination, which incurs costs with little diagnostic yield.
Super-sample covariance approximations and partial sky coverage
NASA Astrophysics Data System (ADS)
Lacasa, Fabien; Lima, Marcos; Aguena, Michel
2018-04-01
Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% level, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e. redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and find that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.
White, Helen E; Hall, Victoria J; Cross, Nicholas C P
2007-11-01
Angelman syndrome (AS) and Prader-Willi syndrome (PWS) are 2 distinct neurodevelopmental disorders caused primarily by deficiency of specific parental contributions at an imprinted domain within the chromosomal region 15q11.2-13. Lack of paternal contribution results in PWS either by paternal deletion (approximately 70%) or maternal uniparental disomy (UPD) (approximately 25%). Most cases of AS result from the lack of a maternal contribution from this same region, by maternal deletion (70%) or paternal UPD (approximately 5%). Analysis of allelic methylation differences at the small nuclear ribonucleoprotein polypeptide N (SNRPN) locus differentiates the maternally and paternally inherited chromosome 15 and can be used as a diagnostic test for AS and PWS. Methylation-sensitive high-resolution melting-curve analysis (MS-HRM) using the DNA binding dye EvaGreen was used to analyze methylation differences at the SNRPN locus in anonymized DNA samples from individuals with PWS (n = 39) or AS (n = 31) and from healthy control individuals (n = 95). Results from the MS-HRM assay were compared to those obtained by use of a methylation-specific PCR (MSP) protocol that is used commonly in diagnostic practice. With the MS-HRM assay 97.6% of samples were unambiguously assigned to the 3 diagnostic categories (AS, PWS, normal) by use of automated calling with an 80% confidence percentage threshold, and the failure rate was 0.6%. One PWS sample showed a discordant result for the MS-HRM assay compared to MSP data. MS-HRM is a simple, rapid, and robust method for screening methylation differences at the SNRPN locus and could be used as a diagnostic screen for PWS and AS.
Phase contrast STEM for thin samples: Integrated differential phase contrast.
Lazić, Ivan; Bosch, Eric G T; Lazar, Sorin
2016-01-01
It has been known since the 1970s that the movement of the center of mass (COM) of a convergent beam electron diffraction (CBED) pattern is linearly related to the (projected) electrical field in the sample. We re-derive a contrast transfer function (CTF) for a scanning transmission electron microscopy (STEM) imaging technique based on this movement from the point of view of image formation and continue by performing a two-dimensional integration on the two images based on the two components of the COM movement. The resulting integrated COM (iCOM) STEM technique yields a scalar image that is linear in the phase shift caused by the sample and therefore also in the local (projected) electrostatic potential field of a thin sample. We confirm that the differential phase contrast (DPC) STEM technique using a segmented detector with 4 quadrants (4Q) yields a good approximation for the COM movement. Performing a two-dimensional integration, just as for the COM, we obtain an integrated DPC (iDPC) image which is approximately linear in the phase of the sample. Besides deriving the CTFs of iCOM and iDPC, we clearly point out the objects of the two corresponding imaging techniques, and highlight the differences from the objects corresponding to COM-, DPC-, and (HA) ADF-STEM. The theory is validated with simulations and we present the first experimental results of the iDPC-STEM technique, showing its capability for imaging both light and heavy elements with atomic resolution and a good signal-to-noise ratio (SNR). Copyright © 2015 Elsevier B.V. All rights reserved.
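The two-dimensional integration step that turns the two COM/DPC component images into a single scalar image can be carried out in Fourier space. The sketch below is a generic gradient-field integration of this kind (a common way to implement the step; the paper's actual implementation details may differ): given the two components (gx, gy) of the measured vector field, it recovers the scalar field whose gradient matches them.

```python
import numpy as np

def integrate_gradient(gx, gy):
    """Recover a scalar field phi from its gradient components (gx, gy)
    by inverting the gradient operator in Fourier space.
    gx varies along axis 1 (columns), gy along axis 0 (rows)."""
    ny, nx = gx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)[None, :]   # spectral d/dx
    ky = 2j * np.pi * np.fft.fftfreq(ny)[:, None]   # spectral d/dy
    # If FT(gx) = kx*Phi and FT(gy) = ky*Phi, then
    # kx*FT(gx) + ky*FT(gy) = (kx^2 + ky^2)*Phi.
    num = kx * np.fft.fft2(gx) + ky * np.fft.fft2(gy)
    denom = kx ** 2 + ky ** 2                       # zero only at DC
    with np.errstate(divide="ignore", invalid="ignore"):
        phi_k = np.where(denom != 0, num / denom, 0.0)
    return np.fft.ifft2(phi_k).real
```

Because the DC term is undetermined, the recovered image is defined only up to an additive constant (fixed here by zeroing the mean).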
Basunia, S; Landsberger, S
2001-10-01
Pantex firing range soil samples were analyzed for Pb, Cu, Sb, Zn, and As. One hundred ninety-seven samples were collected from the firing range and vicinity. Because the distribution of Pb in the firing range was unknown, random sampling with proportional allocation was chosen. Concentration levels of Pb and Cu in the firing range were found to be in the ranges of 11-4675 and 13-359 mg/kg, respectively. Concentration levels of Sb were found to be in the range of 1-517 mg/kg. However, the Zn and As concentration levels were close to average soil background levels. The Sn concentration level was expected to be higher in the Pantex firing range soil samples; however, it was found to be below the neutron activation analysis (NAA) detection limit of 75 mg/kg. Enrichment factor analysis showed that Pb and Sb were highly enriched in the firing range, with average magnitudes of 55 and 90, respectively. Cu was enriched approximately 6-fold relative to usual soil concentration levels. The toxicity characteristic leaching procedure (TCLP) was carried out on size-fractionated homogeneous soil samples. The concentration levels of Pb in leachates were found to be approximately 12 times higher than the U.S. Environmental Protection Agency (EPA) regulatory concentration level of 5 mg/L. Sequential extraction (SE) was also performed to partition Pb and other trace elements into five different fractions. The highest Pb fraction was found with organic matter in the soil.
Critical time scales for advection-diffusion-reaction processes.
Ellery, Adam J; Simpson, Matthew J; McCue, Scott W; Baker, Ruth E
2012-04-01
The concept of local accumulation time (LAT) was introduced by Berezhkovskii and co-workers to give a finite measure of the time required for the transient solution of a reaction-diffusion equation to approach the steady-state solution [A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Biophys. J. 99, L59 (2010); A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Phys. Rev. E 83, 051906 (2011)]. Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb [A. McNabb and G. C. Wake, IMA J. Appl. Math. 47, 193 (1991)]. Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection-diffusion-reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic time scale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (pde). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing pde directly.
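As a concrete check of the claim that MAT can be evaluated without solving the time-dependent PDE, consider pure diffusion on 0 ≤ x ≤ 1 with c(0,t) = 1, a no-flux boundary at x = 1, and c(x,0) = 0 (our own illustrative example, not one taken from the paper). Writing T(x) = ∫₀^∞ [1 − c(x,t)/c_s(x)] dt with steady state c_s = 1, integrating the diffusion equation in time gives the boundary value problem D T''(x) = −1, T(0) = 0, T'(1) = 0, whose solution T(x) = (x − x²/2)/D requires no time stepping. The sketch below verifies this against a direct finite-difference integration of the PDE:

```python
import numpy as np

D, n = 1.0, 101
dx = 1.0 / (n - 1)
dt = 0.4 * dx ** 2 / D                 # stable explicit time step
x = np.linspace(0.0, 1.0, n)
c = np.zeros(n)
c[0] = 1.0                             # fixed boundary value c(0,t) = 1
T_num = np.zeros(n)                    # accumulates int_0^t (1 - c) dt'
for _ in range(int(5.0 / dt)):         # integrate to t = 5, well past the transient
    T_num += (1.0 - c) * dt
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    lap[-1] = 2 * (c[-2] - c[-1]) / dx ** 2   # no-flux (reflecting) boundary at x = 1
    c[1:] += dt * D * lap[1:]          # c[0] stays pinned at 1

# MAT from the ODE, with no PDE solve at all:
T_ode = (x - 0.5 * x ** 2) / D
```

The two profiles agree to within discretization error, illustrating why MAT is attractive in practice: the boundary value problem is far cheaper than resolving the full transient.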
NASA Astrophysics Data System (ADS)
Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.
2017-08-01
In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities in three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective medium modeling was performed using different inclusion methods and averaging procedures. A nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of the elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective medium modeling and the ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to satisfy the long-wavelength approximation. Furthermore, small differences were observed between the elastic moduli predicted by the different theoretical models, which include specific fabric characteristics such as crystallographic texture, grain shape, and layering.
It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPOs of biotite and muscovite and their volume proportions in the layers dominated by phyllosilicate minerals.
Pituitary gland volumes in bipolar disorder.
Clark, Ian A; Mackay, Clare E; Goodwin, Guy M
2014-12-01
Bipolar disorder has been associated with increased hypothalamic-pituitary-adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV), and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold: first, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls; second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm3, or 4%) but non-significant increase in PGV in patients. Combining the two novel samples showed a significant association between age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: .23, CI: -.14, .59). While the results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes even greater than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. The results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
Characterization of air contaminants formed by the interaction of lava and sea water.
Kullman, G J; Jones, W G; Cornwell, R J; Parker, J E
1994-01-01
We made environmental measurements to characterize contaminants generated when basaltic lava from Hawaii's Kilauea volcano enters sea water. This interaction of lava with sea water produces large clouds of mist (LAZE). Island winds occasionally directed the LAZE toward the adjacent village of Kalapana and Hawaii Volcanoes National Park, creating health concerns. Environmental samples were taken to measure airborne concentrations of respirable dust, crystalline silica and other mineral compounds, fibers, trace metals, inorganic acids, and organic and inorganic gases. The LAZE contained quantifiable concentrations of hydrochloric acid (HCl) and hydrofluoric acid (HF); HCl was predominant. HCl and HF concentrations were highest in dense plumes of LAZE near the sea. The HCl concentration at this sampling location averaged 7.1 ppm; this exceeds the current occupational exposure ceiling of 5 ppm. HF was detected in nearly half the samples, but all concentrations were <1 ppm. Sulfur dioxide was detected in one of four short-term indicator tube samples at approximately 1.5 ppm. Airborne particulates were composed largely of chloride salts (predominantly sodium chloride). Crystalline silica concentrations were below detectable limits, less than approximately 0.03 mg/m3 of air. Settled dust samples showed a predominance of glass flakes and glass fibers. Airborne fibers were detected at quantifiable levels in 1 of 11 samples; these fibers were composed largely of hydrated calcium sulfate. These findings suggest that individuals should avoid concentrated plumes of LAZE near its origin to prevent overexposure to inorganic acids, specifically HCl. PMID: 8593853
NDA issues with RFETS vitrified waste forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.; Veazey, G.
1998-12-31
A study was conducted at Los Alamos National Laboratory (LANL) for the purpose of determining the feasibility of using a segmented gamma scanner (SGS) to accurately perform non-destructive analysis (NDA) on certain Rocky Flats Environmental Technology Site (RFETS) vitrified waste samples. This study was performed on a full-scale vitrified ash sample prepared at LANL according to a procedure similar to that anticipated to be used at RFETS. This sample was composed of a borosilicate-based glass frit, blended with ash to produce a Pu content of approximately 1 wt%. The glass frit was taken to a degree of melting necessary to achieve a full encapsulation of the ash material. The NDA study performed on this sample showed that SGSs with either 1/2- or 2-inch collimation can achieve an accuracy better than 6% relative to calorimetry and gamma-ray isotopics. This accuracy is achievable, after application of appropriate bias corrections, for transmissions of about 1/2% through the waste form and counting times of less than 30 minutes. These results are valid for ash material and graphite fines with the same degree of plutonium particle size, homogeneity, sample density, and sample geometry as the waste form used to obtain the results in this study. A drum-sized thermal neutron counter (TNC) was also included in the study to provide an alternative in the event the SGS failed to meet the required level of accuracy. The preliminary indications are that this method will also achieve the required accuracy with counting times of approximately 30 minutes and appropriate application of bias corrections. The bias corrections can be avoided in all cases if the instruments are calibrated on standards matching the items.
Preliminary post-tsunami water quality survey in Phang-Nga province, southern Thailand.
Tharnpoophasiam, Prapin; Suthisarnsuntorn, Usanee; Worakhunpiset, Suwalee; Charoenjai, Prasasana; Tunyong, Witawat; Phrom-In, Suvannee; Chattanadee, Siriporn
2006-01-01
This preliminary water quality survey was performed eight weeks after the tsunami hit Phang-Nga Province on 26 December 2004. Water samples collected from the affected area, 10 km parallel to the seaside, were compared with water samples from a control area approximately 4 km from the seaside, which the tsunami waves could not reach. These samples comprised 18 surface-water samples, 37 well-water samples, and 8 drinking-water samples, which were examined for microbiological and physical-chemical properties. The microbiological examinations focused on enteric bacteria, isolated by culture, while the physical-chemical properties comprised on-site testing for pH, salinity, dissolved oxygen (DO), conductivity, and total dissolved solids (TDS) with a portable electrochemical meter (Sens Ion 156). The microbiological examinations showed that water samples in the affected area were more contaminated with enteric bacteria than those in the control area: 45.4% of surface-water samples in the affected area versus 40.0% in the control, and 19.0% of well-water samples in the affected area versus 7.7% in the control. All eight drinking-water samples were free of enteric bacteria. Tests of physical-chemical properties showed that the salinity, pH, conductivity, and TDS of surface-water samples from the affected area were significantly higher than in the control. The salinity, conductivity, and TDS of the well-water samples from the affected area were also significantly greater than those from the control area. The surface and well water in the tsunami-affected area were greatly altered and need improvement.
Li, Siqi; Zheng, Xunhua; Liu, Chunyan; Yao, Zhisheng; Zhang, Wei; Han, Shenghui
2018-08-01
Quantifications of soil dissolvable organic carbon concentrations, together with other relevant variables, are needed to understand the carbon biogeochemistry of terrestrial ecosystems. Soil dissolvable organic carbon can generally be grouped into two incomparable categories. One is soil extractable organic carbon (EOC), which is measured by extracting with an aqueous extractant (distilled water or a salt solution). The other is soil dissolved organic carbon (DOC), which is measured by sampling soil water using tension-free lysimeters or tension samplers. The influences of observation methods, natural factors, and management practices on the measured concentrations, which ranged from 2.5-3970 (mean: 69) mg/kg for EOC and 0.4-200 (mean: 12) mg/L for DOC, were investigated through a meta-analysis. The observation methods (e.g., extractant, extractant-to-soil ratio, and pre-treatment) had significant effects on EOC concentrations; the largest divergence (approximately 109%) occurred for potassium sulfate (K2SO4) extractant solutions compared to distilled water. As EOC concentrations were significantly different (approximately 47%) between non-cultivated and cultivated soils, they were more suitable than DOC concentrations for assessing the influence of land use on soil dissolvable organic carbon levels. While season did not significantly affect EOC concentrations, DOC concentrations showed significant differences (approximately 50%) in summer and autumn compared to spring. For management practices, applications of crop residues and nitrogen fertilizers showed positive effects (approximately 23% to 91%) on soil EOC concentrations, while tillage displayed negative effects (approximately -17%), compared to no straw, no nitrogen fertilizer, and no tillage. Compared to no nitrogen, applications of synthetic nitrogen also appeared to significantly enhance DOC concentrations (approximately 32%).
However, further studies are needed in the future to confirm/investigate the effects of ecosystem management practices using standardized EOC measurement protocols or more DOC cases of field experiments. Copyright © 2018 Elsevier B.V. All rights reserved.
Ivarsson, M; Lindblom, S; Broman, C; Holm, N G
2008-03-01
In this paper we describe carbon-rich filamentous structures observed in association with the zeolite mineral phillipsite in sub-seafloor samples drilled and collected during Ocean Drilling Program (ODP) Leg 197 at the Emperor Seamounts. The filamentous structures are approximately 5 μm thick and approximately 100-200 μm in length. They are found attached to phillipsite surfaces in veins and entombed in vein-filling carbonates. The carbon content of the filaments ranges between approximately 10 wt% C and 55 wt% C. They further bind propidium iodide (PI), a dye that binds to damaged cell membranes and remnants of DNA. Carbon-rich globular microstructures, 1-2 μm in diameter, are also found associated with the phillipsite surfaces as well as within wedge-shaped cavities in phillipsite assemblages. The globules have a carbon content that ranges between approximately 5 wt% C and 55 wt% C, and they bind PI. Ordinary globular iron oxides found throughout the samples differ in that they contain no carbon and do not bind PI. The carbon-rich globules are mostly concentrated in a film-like structure attached to the phillipsite surfaces. This film has a carbon content that ranges between approximately 25 wt% C and 75 wt% C and partially binds PI. EDS analyses show that the carbon in all of the structures described is not associated with calcium and is therefore not bound in carbonates. The carbon content and the binding of PI may indicate that the filamentous structures represent fossilized filamentous microorganisms, the globules fossilized microbial cells, and the film-like structures a microbially produced biofilm.
Our results extend the knowledge of possible habitable niches for a deep biosphere in sub-seafloor environments and suggest, as phillipsite is one of the most common zeolite minerals in volcanic rocks of the oceanic crust, that such structures could be a common feature of the oceanic crust elsewhere.
NASA Astrophysics Data System (ADS)
Smilowitz, L.; Henson, B. F.; Romero, J. J.; Asay, B. W.; Schwartz, C. L.; Saunders, A.; Merrill, F. E.; Morris, C. L.; Kwiatkowski, K.; Hogan, G.; Nedrow, P.; Murray, M. M.; Thompson, T. N.; McNeil, W.; Rightley, P.; Marr-Lyon, M.
2008-06-01
We present a new phenomenology for burn propagation inside a thermal explosion based on dynamic radiography. Radiographic images were obtained of an aluminum-cased solid cylindrical sample of a plastic-bonded formulation of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine. The phenomenology observed is ignition followed by cracking in the solid, accompanied by the propagation of a radially symmetric front of increasing proton transmission. This is followed by a further increase in transmission through the sample, ending after approximately 100 μs. We show that these processes are consistent with the propagation of a convective burn front followed by consumption of the remaining solid by conductive particle burning.
NASA Technical Reports Server (NTRS)
Davis, S. H.; Kissinger, L. D.
1978-01-01
The effect of humidity on the CO2 removal efficiency of small beds of anhydrous LiOH has been studied. Experimental data taken in this small bed system clearly show that there is an optimum humidity for beds loaded with LiOH from a single lot. The CO2 efficiency falls rapidly under dry conditions, but this behavior is approximately the same in all samples. The behavior of the bed under wet conditions is quite dependent on material size distribution. The presence of large particles in a sample can lead to rapid fall off in the CO2 efficiency as the humidity increases.
Selenium, fluorine, and arsenic in surficial materials of the conterminous United States
Shacklette, Hansford T.; Boerngen, Josephine G.; Keith, John R.
1974-01-01
Concentrations of selenium, fluorine, and arsenic in 912, 911, and 910 samples, respectively, of soils and other regoliths from sites approximately 50 miles (80 km) apart throughout the United States are represented on maps by symbols showing five ranges of values. Histograms of the concentrations of these elements are also given. The geometric-mean concentrations (ppm) in the samples, grouped by area, are as follows: selenium: entire United States, 0.31; western United States, 0.25; eastern United States, 0.39. Fluorine: entire United States, 180; western United States, 250; eastern United States, 115. Arsenic: entire United States, 5.8; western United States, 6.1; eastern United States, 5.4.
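Geometric means like those quoted above are the exponential of the arithmetic mean of the log-concentrations, a common summary for lognormally distributed trace-element data. A minimal sketch (the input values are hypothetical, not the survey data):

```python
import math

def geometric_mean(values):
    # exp of the mean log-concentration; unlike the arithmetic mean,
    # this is not dominated by a few high-concentration sites.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical selenium concentrations (ppm) at four sites:
print(geometric_mean([0.1, 0.2, 0.4, 0.8]))
```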
Spacelab J air filter debris analysis
NASA Technical Reports Server (NTRS)
Obenhuber, Donald C.
1993-01-01
Filter debris from the Spacelab module SLJ of STS-49 was analyzed for microbial contamination. Debris from cabin and avionics filters was collected by Kennedy Space Center personnel on 1 Oct. 1992, approximately 5 days postflight. The concentration of microorganisms found was similar to previous Spacelab missions, averaging 7.4E+4 CFU/mL for avionics filter debris and 4.5E+6 CFU/mL for cabin filter debris. A similar diversity of bacterial types was found in the two filters: of the 13 different bacterial types identified from the cabin and avionics samples, 6 were common to both. The overall analysis of these samples as compared to those of previous missions shows no significant differences.
Ferromagnetic and superparamagnetic contamination in pulverized coal
Senftle, F.E.; Thorpe, A.N.; Alexander, C.C.; Finkelman, R.B.
1982-01-01
Although no significant major-element contamination is introduced by grinding coal in a steel pulverizer, abraded steel particles can conceivably affect the magnetic properties of pulverized coal. Magnetic and scanning-electron-microscope analyses of pulverized coal and coal fragments from the Herrin No. 6 seam in Illinois showed ferromagnetic and superparamagnetic contamination from the grinder. Significant changes in the magnetic properties of the coal were noted, indicating a total steel contamination of approximately 0.02 wt%. When coal samples were vibrated in the magnetic field of the vibrating-sample magnetometer, the superparamagnetic steel particles moved through the pulverized coal, and participated in the formation of multidomain clusters that in turn substantially affected the magnetization of the coal. ?? 1982.
Tracing nuclear elements released by Fukushima Nuclear Power Plant accident
NASA Astrophysics Data System (ADS)
Tsujimura, M.; Onda, Y.; Abe, Y.; Hada, M.; Pun, I.
2011-12-01
Radioactive contamination has been detected in Fukushima and the neighboring regions due to the nuclear accident at the Fukushima Daiichi Nuclear Power Plant (NPP) following the earthquake and tsunami that occurred on 11 March 2011. Small experimental catchments have been established in the Yamakiya district, Kawamata Town, Fukushima Prefecture, located approximately 35 km west of the Fukushima NPP. The tritium (3H) concentration and the stable isotopic compositions of deuterium and oxygen-18 have been determined for water samples of precipitation, soil water at depths of 10 to 30 cm, groundwater at depths of 5 m to 50 m, spring water, and stream water taken in the recharge and discharge zones of the watersheds, from the viewpoint of the groundwater flow system. The tritium concentration of rain water that fell just a few days after the earthquake was approximately 17 Tritium Units (T.U.), whereas the average tritium concentration in precipitation was less than 5 T.U. before the Fukushima accident. Spring water in the recharge zone showed a relatively high tritium concentration of approximately 12 T.U., whereas that of the discharge zone was less than 5 T.U. Thus, artificial tritium was apparently injected into the groundwater flow system by the Fukushima NPP accident, but it has not yet reached the discharge zone. Monitoring of these nuclear elements is ongoing from the viewpoints of the hydrological cycle and drinking water security.
NASA Technical Reports Server (NTRS)
Livermore, R. C.; Jones, T.; Richard, J.; Bower, R. G.; Ellis, R. S.; Swinbank, A. M.; Rigby, J. R.; Smail, Ian; Arribas, S.; Rodriguez-Zaurin, J.;
2013-01-01
We present Hubble Space Telescope/Wide Field Camera 3 narrow-band imaging of the Hα emission in a sample of eight gravitationally lensed galaxies at z = 1-1.5. The magnification caused by the foreground clusters enables us to obtain a median source-plane spatial resolution of 360 pc, as well as providing magnifications in flux ranging from approximately 10× to approximately 50×. This enables us to identify resolved star-forming HII regions at this epoch and therefore study their Hα luminosity distributions for comparison with equivalent samples at z approximately 2 and in the local Universe. We find evolution in both the luminosity and surface brightness of HII regions with redshift. The distribution of clump properties can be quantified with an HII region luminosity function, which can be fit by a power law with an exponential break at some cut-off, and we find that the cut-off evolves with redshift. We therefore conclude that 'clumpy' galaxies are seen at high redshift because of the evolution of the cut-off mass; the galaxies themselves follow similar scaling relations to those at z = 0, but their HII regions are larger and brighter and thus appear as clumps which dominate the morphology of the galaxy. A simple theoretical argument based on gas collapsing on scales of the Jeans mass in a marginally unstable disc shows that the clumpy morphologies of high-z galaxies are driven by the competing effects of higher gas fractions causing perturbations on larger scales, partially compensated by higher epicyclic frequencies which stabilize the disc.
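The "power law with an exponential break" described above can be written in the standard Schechter-like form (our notation; the slope α and cut-off luminosity L₀ are generic symbols, not values taken from the paper):

```latex
\frac{dN}{dL} \propto \left(\frac{L}{L_0}\right)^{\alpha}\exp\!\left(-\frac{L}{L_0}\right)
```

where the redshift evolution of the luminosity function reported above enters through the cut-off L₀.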
Li, Yuanxin; Zhou, Jianer; Dong, Dehua; Wang, Yan; Jiang, J Z; Xiang, Hongfa; Xie, Kui
2012-11-28
Composite Ni-YSZ fuel electrodes are able to operate only under strongly reducing conditions for the electrolysis of CO(2) in oxygen-ion conducting solid oxide electrolysers. In an atmosphere without a flow of reducing gas (i.e., carbon monoxide), a composite fuel electrode based on redox-reversible La(0.2)Sr(0.8)TiO(3+δ) (LSTO) provides a promising alternative. The Ti(3+) content was approximately 0.3% in the oxidized LSTO (La(0.2)Sr(0.8)TiO(3.1)), whereas it reached approximately 8.0% in the reduced sample (La(0.2)Sr(0.8)TiO(3.06)). The strong adsorption of atmospheric oxygen in the form of superoxide ions led to the absence of Ti(3+) on the surfaces of both the oxidized and the reduced LSTO samples. Reduced LSTO showed typical metallic behaviour from 50 to 700 °C in wet H(2), and its electrical conductivity reached approximately 30 S cm(-1) at 700 °C. The dependence of the Ti(3+) concentration in LSTO on P(O(2)) was correlated with the applied potentials when the electrolysis of CO(2) was performed with the LSTO composite electrode. The electrochemical reduction of La(0.2)Sr(0.8)TiO(3+δ) was the main process at low applied voltages and was still present up to 2 V at 700 °C during the electrolysis of CO(2); however, the electrolysis of CO(2) at the fuel electrode became dominant at high applied voltages. The current efficiency was approximately 36% for the electrolysis of CO(2) at 700 °C and a 2 V applied potential.
NASA Astrophysics Data System (ADS)
Chalise, Sajju; Conlan, Skye; Porat, Zachary; Labrake, Scott; Vineyard, Michael
2017-09-01
The Union College Ion-Beam Analysis Lab's 1.1 MV tandem Pelletron accelerator is used to determine the presence of heavy trace metals in Queens, NY, between Astoria Park and Gantry State Park, approximately 3.5 miles to the south. A PIXE analysis was performed on 0.5 g pelletized soil samples with a 2.2 MeV proton beam. The results show the presence of elements ranging from Ti to Pb, with the concentration of Pb in Astoria Park (2200 +/- 200 ppm) approximately ten times that at Gantry State Park. We hypothesize that the high lead concentration at Astoria Park is due to the nearby Hell Gate Bridge, painted in 1916 with lead-based paint, then sandblasted and repainted in the 1990s. If the lead is from the repair of the bridge, then the concentration should decrease with distance from the bridge. To test this, soil samples were collected and analyzed from seven different locations north and south of the bridge. The concentrations of lead decreased drastically within a 500 m radius and were approximately constant at greater distances. More soil samples need to be collected within the 500 m radius of the bridge to identify the potential source of Pb. We will describe the experimental procedure, the PIXE analysis of the soil samples, and present preliminary results on the distribution of heavy trace metals.
Lateral Gene Transfer from the Dead
Szöllősi, Gergely J.; Tannier, Eric; Lartillot, Nicolas; Daubin, Vincent
2013-01-01
In phylogenetic studies, the evolution of molecular sequences is assumed to have taken place along the phylogeny traced by the ancestors of extant species. In the presence of lateral gene transfer, however, this may not be the case, because the species lineage from which a gene was transferred may have gone extinct or not have been sampled. Because it is not feasible to specify or reconstruct the complete phylogeny of all species, we must describe the evolution of genes outside the represented phylogeny by modeling the speciation dynamics that gave rise to the complete phylogeny. We demonstrate that if the number of sampled species is small compared with the total number of existing species, the overwhelming majority of gene transfers involve speciation to and evolution along extinct or unsampled lineages. We show that the evolution of genes along extinct or unsampled lineages can to good approximation be treated as those of independently evolving lineages described by a few global parameters. Using this result, we derive an algorithm to calculate the probability of a gene tree and recover the maximum-likelihood reconciliation given the phylogeny of the sampled species. Examining 473 near-universal gene families from 36 cyanobacteria, we find that nearly a third of transfer events (28%) appear to have topological signatures of evolution along extinct species, but only approximately 6% of transfers trace their ancestry to before the common ancestor of the sampled cyanobacteria. [Gene tree reconciliation; lateral gene transfer; macroevolution; phylogeny.] PMID:23355531
Designing Case-Control Studies: Decisions About the Controls
Hodge, Susan E.; Subaran, Ryan L.; Weissman, Myrna M.; Fyer, Abby J.
2014-01-01
The authors quantified, first, the effect of misclassified controls (i.e., individuals who are affected with the disease under study but who are classified as controls) on the ability of a case-control study to detect an association between a disease and a genetic marker, and second, the effect of leaving misclassified controls in the study, as opposed to removing them (thus decreasing sample size). The authors developed an informativeness measure of a study's ability to identify real differences between cases and controls. They then examined this measure's behavior when there are no misclassified controls, when there are misclassified controls, and when there are misclassified controls but they have been removed from the study. The results show that if, for example, 10% of controls are misclassified, the study's informativeness is reduced to approximately 81% of what it would have been in a sample with no misclassified controls, whereas if these misclassified controls are removed from the study, the informativeness is only reduced to about 90%, despite the reduced sample size. If 25% are misclassified, those figures become approximately 56% and 75%, respectively. Thus, leaving the misclassified controls in the control sample is worse than removing them altogether. Finally, the authors illustrate how insufficient power is not necessarily circumvented by having an unlimited number of controls. The formulas provided by the authors enable investigators to make rational decisions about removing misclassified controls or leaving them in. PMID:22854929
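The reported percentages are consistent with a simple dilution model (a reconstruction from the abstract's numbers, not the authors' actual formulas): leaving a fraction m of misclassified controls in dilutes the case-control difference by (1 - m), so an informativeness measure proportional to the squared difference falls to (1 - m)²; removing them instead costs only the lost controls, roughly a factor of (1 - m).

```python
# Hypothetical sketch of informativeness under control misclassification.
# This dilution model is our reconstruction, consistent with the abstract's
# 81%/90% and 56%/75% figures; it is not the authors' exact formula.

def informativeness_left_in(m):
    """Misclassified controls left in: the effect is diluted by (1 - m),
    and informativeness scales with the squared effect."""
    return (1 - m) ** 2

def informativeness_removed(m):
    """Misclassified controls removed: only the control sample shrinks,
    to a fraction (1 - m) of its original size."""
    return 1 - m

for m in (0.10, 0.25):
    print(m, informativeness_left_in(m), informativeness_removed(m))
```

With m = 0.10 this reproduces the 81% (left in) versus 90% (removed) comparison, and with m = 0.25 the 56% versus 75% comparison.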
Biomass fuel use and the exposure of children to particulate air pollution in southern Nepal
Devakumar, D.; Semple, S.; Osrin, D.; Yadav, S.K.; Kurmi, O.P.; Saville, N.M.; Shrestha, B.; Manandhar, D.S.; Costello, A.; Ayres, J.G.
2014-01-01
The exposure of children to air pollution in low-resource settings is believed to be high because of the common use of biomass fuels for cooking. We used microenvironment sampling to estimate the respirable fraction of air pollution (particles with median diameter less than 4 μm) to which 7–9 year old children in southern Nepal were exposed. Sampling was conducted for a total of 2649 h in 55 households, 8 schools and 8 outdoor locations of rural Dhanusha. We conducted gravimetric and photometric sampling in a subsample of the children in our study in the locations in which they usually resided (bedroom/living room, kitchen, veranda, in school and outdoors), repeated three times over one year. Using time-activity information, a 24-hour time-weighted average was modeled for all the children in the study. Approximately two-thirds of homes used biomass fuels, with the remainder mostly using gas. The exposure of children to air pollution was very high. The 24-hour time-weighted average over the whole year was 168 μg/m3. The non-kitchen-related samples tended to show approximately double the concentration in winter compared with spring/autumn, and four times that of the monsoon season. There was no difference between the exposure of boys and girls. Air pollution in rural households was much higher than the World Health Organization and the National Ambient Air Quality Standards for Nepal recommendations for particulate exposure. PMID:24533994
Radiation Induced Degradation of the White Thermal Control Paints Z-93 and Z-93P
NASA Technical Reports Server (NTRS)
Edwards, D. L.; Zwiener, J. M.; Wertz, G. E.; Vaughn, J. A.; Kamenetzky, R. R.; Finckenor, M. M.; Meshishnek, M. J.
1996-01-01
This paper details a comparison analysis of the zinc oxide pigmented white thermal control paints Z-93 and Z-93P. Both paints were simultaneously exposed to combined space environmental effects and analyzed using an in-vacuo reflectance technique. The dose applied to the paints was approximately equivalent to 5 years in a geosynchronous orbit. This comparison analysis showed that Z-93P is an acceptable substitute for Z-93. Irradiated samples of Z-93 and Z-93P were subjected to additional exposures of ultraviolet (UV) radiation and analyzed using the in-vacuo reflectance technique to investigate UV activated reflectance recovery. Both samples showed minimal UV activated reflectance recovery after an additional 190 equivalent sun hour (ESH) exposure. Reflectance response utilizing nitrogen as a repressurizing gas instead of air was also investigated. This investigation found the rates of reflectance recovery when repressurized with nitrogen are slower than when repressurized with air.
Radiation Induced Degradation of White Thermal Control Paint
NASA Technical Reports Server (NTRS)
Edwards, D. L.; Zwiener, J. M.; Wertz, G. E.; Vaughn, Jason A.; Kamenetzky, Rachel R.; Finckenor, M. M.; Meshishnek, M. J.
1999-01-01
This paper details a comparison analysis of the zinc-oxide pigmented white thermal control paints Z-93 and Z-93P. Both paints were simultaneously exposed to combined space environmental effects and analyzed using an in-vacuo reflectance technique. The dose applied to the paints was approximately equivalent to 5 yr in a geosynchronous orbit. This comparison analysis showed that Z-93P is an acceptable substitute for Z-93. Irradiated samples of Z-93 and Z-93P were subjected to additional exposures of ultraviolet (UV) radiation and analyzed using the in-vacuo reflectance technique to investigate UV activated reflectance recovery. Both samples showed minimal UV activated reflectance recovery after an additional 190 equivalent Sun hour (ESH) exposure. Reflectance response utilizing nitrogen as a repressurizing gas instead of air was also investigated. This investigation found the rates of reflectance recovery when repressurized with nitrogen are slower than when repressurized with air.
Hippelein, M; Matthiessen, A; Kolychalow, O; Ostendorp, G
2012-12-01
In rural areas of Schleswig-Holstein, Germany, drinking water for about 37 000 people is provided by approximately 10 000 small-scale water supplies. For these wells, data on pesticides in the drinking water are rare. In this study, 100 small-scale water supplies, mainly situated in areas with intensive agriculture, fruit-growing, or tree nurseries, were selected and the drinking water was analysed for pesticides. In 68 samples at least one agent or metabolite was detectable; 38 samples showed multiple contaminants. The metabolites dimethylsulfamide and chloridazone-desphenyl were found in nearly 40% of the wells in concentrations up to 42 µg/L. Bentazone was the most frequently detected biocidal agent. These data show that pesticides in drinking water from small-scale supplies are a notable issue in preventive public health. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Mojica-Ramirez, E.; Monreal-Gomez, M. A.; Salas-de-Leon, D. A.
2007-05-01
Physical and biological data were gathered in the Bay of La Paz, southern Gulf of California, in summer 2004. These include hydrographic data, ADP currents, backscattering signals, in vivo natural fluorescence, and zooplankton samples. The topography of the 15°C isotherm shows a dome in the central part of the bay that deepens towards the periphery, suggesting the existence of a cyclonic eddy. The topography of the 35 isohaline shows an uplift of 35 m. The eddy has a north-south diameter of approximately 35 km and covers almost the entire bay. The zooplankton samples reveal the existence of 23 groups; the most abundant were cladocerans, copepods, siphonophores, chaetognaths, and crustacean larvae. The zooplankton biomass presented higher values at the periphery of the eddy, indicating an influence of the cyclonic circulation on its distribution.
Reproduction and optical analysis of Morpho-inspired polymeric nanostructures
NASA Astrophysics Data System (ADS)
Tippets, Cary A.; Fu, Yulan; Jackson, Anne-Martine; Donev, Eugenii U.; Lopez, Rene
2016-06-01
The brilliant blue coloration of the Morpho rhetenor butterfly originates from complex nanostructures found on the surface of its wings. The Morpho butterfly exhibits strong short-wavelength reflection and a unique two-lobe optical signature in the incident (θ) and reflected (ϕ) angular space. Here, we report the large-area fabrication of a Morpho-like structure and its reproduction in perfluoropolyether. Reflection comparisons of periodic and quasi-random ‘polymer butterfly’ nanostructures show similar normal-incidence spectra but differ in the angular θ-ϕ dependence. The periodic sample shows strong specular reflection and simple diffraction. However, the quasi-random sample produces a two-lobe angular reflection pattern with minimal specular reflection, approximating the real butterfly’s optical behavior. Finite-difference time-domain simulations confirm that this pattern results from the quasi-random periodicity and highlight the significance of the inherent randomness in the Morpho’s photonic structure.
Thermoluminescence of irradiated herbs and spices
NASA Astrophysics Data System (ADS)
Mamoon, A.; Abdul-Fattah, A. A.; Abulfaraj, W. H.
1994-07-01
Several types of herbs and spices from the local market were irradiated with different doses of γ radiation, varying from a few kilograys to 10 kGy. The thermoluminescence of the irradiated samples and their controls was investigated. For the same type of herb or spice, glow curves of different magnitudes, corresponding roughly to the doses given, were obtained from the irradiated samples. Most control samples gave little or insignificant glow. Glow curves from different herbs and spices irradiated with the same doses were not similar in the strength of the glow signal. Samples of black pepper obtained from different packages sometimes gave glow curves of very different intensities. Samples of irradiated black pepper showed little fading of their glow curves even at 9 months postirradiation. All irradiations were done under the same experimental conditions at a dose rate of approximately 1 kGy h⁻¹. The glow curves were obtained using a heating rate of about 9°C s⁻¹ and a constant nitrogen flow rate.
Cutaway line drawing of STS-34 middeck experiment Polymer Morphology (PM)
NASA Technical Reports Server (NTRS)
1989-01-01
Cutaway line drawing shows components of STS-34 middeck experiment Polymer Morphology (PM). Generic Electronics Module (GEM) components include the control housing, circulating fans, hard disk, tape drives, computer boards, and heat exchanger. PM, a 3M-developed organic materials processing experiment, is designed to explore the effects of microgravity on polymeric materials as they are processed in space. The samples of polymeric materials being studied in the PM experiment are thin films (25 microns or less) approximately 25mm in diameter. The samples are mounted between two infrared transparent windows in a specially designed infrared cell that provides the capability of thermally processing the samples to 200 degrees Celsius with a high degree of thermal control. The samples are mounted on a carousel that allows them to be positioned, one at a time, in the infrared beam where spectra may be acquired. The GEM provides all carousel and sample cell control (SCC). The first flight of P
Exploiting Multi-Step Sample Trajectories for Approximate Value Iteration
2013-09-01
Air Force Research Laboratory/Information Directorate, Rome Research Site (AFRL/RISC), 525 Brooks Road, Rome, NY 13441-4505; Binghamton University. Approximate value iteration methods for reinforcement learning (RL) generalize experience from limited samples across large state-action spaces. The function approximators
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all built on the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the representation result of each class to perform classification. However, this deviation does not always reflect well the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates the conventional and the inverse representation-based classification to better recognize faces. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper presents the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, their use enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms the naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
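As background, the baseline LRC step that CIRLRC builds on can be sketched as follows (a minimal illustration with synthetic data; the dimensions and toy example are ours, not the paper's experimental setup): each class's training samples form the columns of a matrix, the test vector is regressed onto each class's columns by least squares, and the class with the smallest reconstruction residual wins.

```python
import numpy as np

# Minimal sketch of plain linear regression classification (LRC), the
# baseline that CIRLRC improves on. Synthetic data for illustration only.

def lrc_classify(class_mats, y):
    """class_mats: list of (d, n_c) arrays of training samples as columns;
    y: (d,) test vector. Returns the index of the class whose column span
    reconstructs y with the smallest residual."""
    residuals = []
    for X in class_mats:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares coefficients
        residuals.append(np.linalg.norm(y - X @ beta))  # deviation from the class
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
c0 = rng.normal(0.0, 0.1, (8, 3))        # class 0: samples near the origin
c1 = 1.0 + rng.normal(0.0, 0.1, (8, 3))  # class 1: samples near the all-ones vector
test = 1.0 + rng.normal(0.0, 0.1, 8)     # a new vector drawn from class 1
print(lrc_classify([c0, c1], test))
```

CIRLRC then combines this conventional score with an inverse-representation score, per the abstract; the sketch above shows only the conventional half.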
NASA Technical Reports Server (NTRS)
Ryan, R. E., Jr.; Mccarthy, P.J.; Cohen, S. H.; Yan, H.; Hathi, N. P.; Koekemoer, A. M.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A.; O’Connell, R. W.;
2012-01-01
We present the size evolution of passively evolving galaxies at z ≈ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in approximately 40 arcmin² to H < 25 mag. By fitting the 10-band Hubble Space Telescope photometry from 0.22 μm ≲ λ_obs ≲ 1.6 μm with stellar population synthesis models, we simultaneously determine photometric redshift, stellar mass, and a bevy of other population parameters. Based on the six galaxies with published spectroscopic redshifts, we estimate a typical redshift uncertainty of approximately 0.033(1+z). We determine effective radii from Sersic profile fits to the H-band image using an empirical point-spread function. By supplementing our data with published samples, we propose a mass-dependent size evolution model for passively evolving galaxies, where the most massive galaxies (M* ≈ 10^11 solar masses) undergo the strongest evolution from z ≈ 2 to the present. Parameterizing the size evolution as (1 + z)^(-α), we find a tentative scaling of α ≈ (-0.6 ± 0.7) + (0.9 ± 0.4) log(M*/10^9 solar masses), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low numbers of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M*-R_e relation for red galaxies.
(Fe II) 1.53 and 1.64 micron emission from pre-main-sequence stars
NASA Technical Reports Server (NTRS)
Hamann, Fred; Simon, Michal; Carr, John S.; Prato, Lisa
1994-01-01
We present flux-calibrated profiles of the (Fe II) 1.53 and 1.64 micron lines in five pre-main-sequence stars, PV Cep, V1331 Cyg, R Mon, and DG and HL Tau. The line centroids are blueshifted in all five sources, and four of the five have only blueshifted flux. In agreement with previous studies, we attribute the line asymmetries to local obscuration by dusty circumstellar disks. The absence of redshifted flux implies a minimum column density of obscuring material. The largest limit, N_H > 3 × 10^22 cm^-2, derived for V1331 Cyg, suggests disk surface densities greater than 0.05 g cm^-2 and disk masses greater than 0.001 solar mass within a radius of approximately 200 AU. The narrow high-velocity lines in PV Cep, V1331 Cyg, and HL Tau require formation in well-collimated winds. The maximum full opening angles of their winds range from less than 20 deg in V1331 Cyg to less than 40 deg in HL Tau. The (Fe II) data also yield estimates of the electron densities (n_e ≈ 10^4 cm^-3), hydrogen ionization fractions (f_H+ ≈ 1/3), mass-loss rates (approximately 10^-7 to 2 × 10^-6 solar masses per year), and characteristic radii of the emitting regions (approximately 32 to 155 AU). The true radial extents will be larger, and the mass-loss rates smaller, by factors of a few for the outflows with limited opening angles. In our small sample the higher-mass stars have stronger lines, larger emitting regions, and greater mass-loss rates. These differences are probably limited to the scale and energetics of the envelopes, because the inferred geometries, kinematics, and physical conditions are similar. The measured (Fe II) profiles sample both 'high'- and 'low'-velocity environments. Recent studies indicate that these regions have some distinct physical properties and may be spatially separate. The (Fe II) data show that similar sizes and densities can occur in both environments.
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
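The review's point that the MBI "chances" are P values for shifted null hypotheses can be illustrated with a normal-approximation sketch (the summary statistics and the smallest-worthwhile-change threshold below are invented for illustration, not taken from the paper):

```python
import math

# Sketch (normal approximation; all numbers invented for illustration):
# the probabilities reported by magnitude-based inference are one-sided
# tail probabilities under shifted nulls, i.e., nonstandard P values,
# which is the interpretation the review argues for. 'swc' is an assumed
# smallest-worthwhile-change threshold.

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

diff, se, swc = 1.2, 0.5, 0.2   # observed mean difference, its SE, threshold

# Conventional 90% confidence interval for the true difference
z90 = 1.6448536269514722        # 95th percentile of the standard normal
ci = (diff - z90 * se, diff + z90 * se)

# MBI-style "chances" = P values against shifted null hypotheses
p_beneficial = 1.0 - norm_cdf((swc - diff) / se)   # P(true effect > +swc)
p_harmful = norm_cdf((-swc - diff) / se)           # P(true effect < -swc)
print(ci, p_beneficial, p_harmful)
```

Note that p_beneficial is exactly one minus the P value of a one-sided test of the null hypothesis "true effect = swc", which is the sense in which the MBI quantities are nonstandard tests rather than new inferential machinery.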
On spatial coalescents with multiple mergers in two dimensions.
Heuer, Benjamin; Sturm, Anja
2013-08-01
We consider the genealogy of a sample of individuals taken from a spatially structured population when the variance of the offspring distribution is relatively large. The space is structured into discrete sites of a graph G. If the population size at each site is large, spatial coalescents with multiple mergers, so-called spatial Λ-coalescents, for which ancestral lines migrate in space and coalesce according to some Λ-coalescent mechanism, are shown to be appropriate approximations to the genealogy of a sample of individuals. We then consider as the graph G the two-dimensional torus with side length 2L+1 and show that as L tends to infinity, and time is rescaled appropriately, the partition structure of spatial Λ-coalescents of individuals sampled far enough apart converges to the partition structure of a non-spatial Kingman coalescent. From a biological point of view this means that in certain circumstances both the spatial structure and the larger variance of the underlying offspring distribution are harder to detect from the sample. However, supplemental simulations show that for moderately large L the different structure is still evident. Copyright © 2012 Elsevier Inc. All rights reserved.
Geochemical Comparison of Four Cores from the Manson Impact Structure
NASA Technical Reports Server (NTRS)
Korotev, Randy L.; Rockow, Kaylynn M.; Jolliff, Bradley L.; Haskin, Larry A.; McCarville, Peter; Crossey, Laura J.
1996-01-01
Concentrations of 33 elements were determined in relatively unaltered, matrix-rich samples of impact breccia at approximately 3-m-depth intervals in the M-1 core from the Manson impact structure, Iowa. In addition, 46 matrix-rich samples from visibly altered regions of the M-7, M-8, and M-10 cores were studied, along with 42 small clasts from all four cores. Major element compositions were determined for a subset of impact breccias from the M-1 core, including matrix-rich impact-melt breccia. Major- and trace-element compositions were also determined for a suite of likely target rocks. In the M-1 core, different breccia units identified from lithologic examination of cores are compositionally distinct. There is a sharp compositional discontinuity at the boundary between the Keweenawan-shale-clast breccia and the underlying unit of impact-melt breccia (IMB) for most elements, suggesting minimal physical mixing between the two units during emplacement. Samples from the 40-m-thick IMB (M-1) are all similar to each other in composition, although there are slight increases in concentration with depth for those elements that have high concentrations in the underlying fragmental-matrix suevite breccia (SB) (e.g., Na, Ca, Fe, Sc), presumably as a result of greater clast proportions at the bottom margin of the unit of impact-melt breccia. The high degree of compositional similarity we observe in the impact-melt breccias supports the interpretation that the matrix of this unit represents impact melt. That our analyses show such compositional similarity results in part from our technique for sampling these breccias: for each sample we analyzed a few small fragments (total mass: approximately 200 mg) selected to be relatively free of large clasts and visible signs of alteration instead of subsamples of powders prepared from a large mass of breccia. 
The mean composition of the matrix-rich part of impact-melt breccia from the M-1 core can be modeled as a mixture of approximately 35% shale and siltstone (Proterozoic "Red Clastics"), 23% granite, 40% hornblende-biotite gneiss, and a small component (less than 2%) of mafic-dike rocks.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
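The Bernstein-based interval described above can be sketched in a few lines. This is an illustrative implementation only, assuming i.i.d. observations bounded in [0, M] and plugging in the worst-case variance bound M²/4; the paper's actual intervals are tighter and cover more general estimation problems, and the function name is ours, not the authors'.

```python
import math

def bernstein_ci(xs, M, alpha=0.05):
    """Conservative two-sided confidence interval for the mean of i.i.d.
    observations in [0, M], valid for every sample size, derived from
    Bernstein's inequality with the worst-case variance bound M^2/4."""
    n = len(xs)
    mean = sum(xs) / n
    L = math.log(2.0 / alpha)            # from setting the tail bound equal to alpha
    var_bound = M * M / 4.0              # worst-case variance for values in [0, M]
    b = (2.0 * M / 3.0) * L
    # positive root of n*t^2 - b*t - 2*var_bound*L = 0
    t = (b + math.sqrt(b * b + 8.0 * n * var_bound * L)) / (2.0 * n)
    return max(0.0, mean - t), min(M, mean + t)
```

The interval has guaranteed coverage for any n, at the price of width: unlike normal-approximation intervals, t shrinks like 1/sqrt(n) but with larger constants.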
Long-term fate of depleted uranium at Aberdeen and Yuma Proving Grounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebinger, M.H.; Essington, E.H.; Gladney, E.S.
1990-06-01
The environmental fate of fragments of depleted uranium (DU) penetrators in soils and waters at Aberdeen Proving Ground (APG) and Yuma Proving Ground (YPG) is a concern to the Testing and Evaluation Command (TECOM) of the US Army. This report presents the information from preliminary soil and water samples that were collected from the humid woodlands of APG and the arid Sonoran Desert of YPG. Soil samples collected beneath a penetrator fragment of the firing range at APG showed approximately 12% DU by weight in the surface horizon and DU significantly above background to a depth of about 20 cm. Samples of surface water at APG showed U only at background levels, and bottom sediments showed background U levels but with isotopic ratios of DU instead of natural U. Soil samples beneath a penetrator fragment at YPG showed about 0.5% by weight U in the surface horizon, but only background concentrations and isotopic ratios of U between 8 and 20 cm depth. Results from this preliminary study indicate that DU at APG was redistributed primarily by dissolution and transport with water and possibly by migration of DU colloids or DU attached to small particles. Redistribution at YPG, however, was mainly due to erosion of DU fragments from the impact area and redeposition in washes that drain the area. Proposed work for FY90-FY92 includes additional field sampling, laboratory column studies, and the development of a computer model of DU redistribution at both sites. 39 refs., 11 figs., 5 tabs.
The Radial Distribution of Star Formation in Galaxies at Z approximately 1 from the 3D-HST Survey
NASA Technical Reports Server (NTRS)
Nelson, Erica June; vanDokkum, Pieter G.; Momcheva, Ivelina; Brammer, Gabriel; Lundgren, Britt; Skelton, Rosalind E.; Whitaker, Katherine E.; DaCunha, Elisabete; Schreiber, Natascha Foerster; Franx, Marijn;
2013-01-01
The assembly of galaxies can be described by the distribution of their star formation as a function of cosmic time. Thanks to the WFC3 grism on the Hubble Space Telescope (HST) it is now possible to measure this beyond the local Universe. Here we present the spatial distribution of H-alpha emission for a sample of 54 strongly star-forming galaxies at z approximately 1 in the 3D-HST Treasury survey. By stacking the H-alpha emission, we find that star formation occurred in approximately exponential distributions at z approximately 1, with a median Sersic index of n = 1.0 +/- 0.2. The stacks are elongated with median axis ratios of b/a = 0.58 +/- 0.09 in H-alpha, consistent with (possibly thick) disks at random orientation angles. Keck spectra obtained for a subset of eight of the galaxies show clear evidence for rotation, with inclination-corrected velocities of 90-330 km s(exp -1). The most straightforward interpretation of our results is that star formation in strongly star-forming galaxies at z approximately 1 generally occurred in disks. The disks appear to be scaled-up versions of nearby spiral galaxies: they have EW(H alpha) of approximately 100 A out to the solar orbit and they have star formation surface densities above the threshold for driving galactic scale winds.
Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud
2018-02-07
With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSL) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected; approximal WSLs were created with 15% HCl gel, and standardized photographs were taken. The premolars were mounted in plaster-models in contact with a neighbouring molar with Class II/I-II restoration (Sample 1) or approximal, cavitated dentin lesion (Sample 2). The restorations or the lesion were removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster-model and photographed. Adobe Photoshop was used to measure the size of WSL and sealed area. The degree of match between the areas was determined in Photoshop. Interclass agreement for the WSL, sealed, and matched areas was excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL-area (median = 93%) in Sample 1 and 68-100% of the WSL-area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL-area between groups with and without using magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision. Use of magnifying glasses did not improve the precision.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
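As a rough illustration of the combination strategy named above, the balance heuristic of multiple importance sampling can be sketched on a one-dimensional toy integral. This is not the authors' SSAO renderer; the two densities (uniform and p(x) = 2x) and the function names are assumptions for the example.

```python
import math
import random

def mis_estimate(f, n1, n2, seed=1):
    """Estimate the integral of f over [0, 1] by multiple importance sampling:
    a uniform sampler p1(x) = 1 combined with an importance sampler p2(x) = 2x,
    weighted by the balance heuristic w_i(x) = n_i p_i(x) / sum_j n_j p_j(x)."""
    rng = random.Random(seed)
    p1 = lambda x: 1.0
    p2 = lambda x: 2.0 * x
    total = 0.0
    for _ in range(n1):                     # draws from the uniform technique
        x = rng.random()
        w = n1 * p1(x) / (n1 * p1(x) + n2 * p2(x))
        total += w * f(x) / (n1 * p1(x))
    for _ in range(n2):                     # draws from p2 via inverse-CDF sampling
        x = math.sqrt(rng.random())
        w = n2 * p2(x) / (n1 * p1(x) + n2 * p2(x))
        total += w * f(x) / (n2 * p2(x))
    return total
```

The balance heuristic keeps the combined estimator unbiased while down-weighting samples drawn where their own technique has low density, which is the mechanism the paper exploits to reduce the sample count.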
Wang, Xujun; Wan, Yong; Wang, Ruiqi; Xu, Xiantang; Wang, He; Chang, Mingning; Yuan, Feng; Ge, Xiaohui; Shao, Weiquan; Xu, Sheng
2018-04-01
LiNi1/3ZnxCo1/3-xMn1/3O2 (0.000 ≤ x ≤ 0.133) hollow microspheres are synthesized using MnO2 hollow microspheres both as a self-template and Mn source. These hollow microspheres, ~4 μm in diameter, are composed of approximately 300 nm basic nanoparticles. The XRD patterns of LiNi1/3ZnxCo1/3-xMn1/3O2 were analyzed by the RIETAN-FP program, and the obtained samples have a layered α-NaFeO2 structure. Electrochemical performance of the samples was measured between 2.5 V and 4.5 V. The behavior of the lattice parameters is consistent with the changes in cycling performance and rate performance as x increases. Compared with the others, the x = 0.133 sample exhibits relatively superior electrochemical performance; its specific capacity was 10.7% higher than that of the undoped sample. In addition, the cyclic voltammogram curves of the second cycle show no significant alteration compared with the first cycle, and the zinc-doped sample showed smaller transfer resistance in electrochemical impedance measurements than the undoped sample.
In Situ Neutron Scattering Study of Nanostructured PbTe-PbS Bulk Thermoelectric Material
NASA Astrophysics Data System (ADS)
Ren, Fei; Schmidt, Robert; Case, Eldon D.; An, Ke
2017-05-01
Nanostructures play an important role in thermoelectric materials. Their thermal stability, such as phase change and evolution at elevated temperatures, is thus of great interest to the thermoelectric community. In this study, in situ neutron diffraction was used to examine the phase evolution of nanostructured bulk PbTe-PbS materials fabricated using hot pressing and pulsed electrical current sintering (PECS). The PbS second phase was observed in all samples in the as-pressed condition. The temperature dependent lattice parameter and phase composition data show an initial formation of PbS precipitates followed by a redissolution during heating. The redissolution process started around 570-600 K, and completed at approximately 780 K. During cooling, the PECS sample followed a reversible curve while the heating/cooling behavior of the hot pressed sample was irreversible.
Melting a Gold Sample within TEMPUS
NASA Technical Reports Server (NTRS)
2003-01-01
A gold sample is heated by the TEMPUS electromagnetic levitation furnace on STS-94, 1997, MET:10/09:20 (approximate). The sequence shows the sample being positioned electromagnetically and starting to be heated to melting. TEMPUS stands for Tiegelfreies Elektromagnetisches Prozessieren unter Schwerelosigkeit (containerless electromagnetic processing under weightlessness). It was developed by the German Space Agency (DARA) for flight aboard Spacelab. The DARA project scientist was Igon Egry. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). DARA and NASA are exploring the possibility of flying an advanced version of TEMPUS on the International Space Station. (378KB JPEG, 2380 x 2676 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300191.html.
Effects of three phosphate industrial sites on ground-water quality in central Florida, 1979 to 1980
Miller, R.L.; Sutcliffe, Horace
1984-01-01
Geologic, hydrologic, and water quality data and information on test holes collected in the vicinity of gypsum stack complexes at two phosphate chemical plants and one phosphatic clayey waste disposal pond at a phosphate mine and beneficiation plant in central Florida are presented. The data were collected from September 1979 to October 1980 at the AMAX Phosphate, Inc. chemical plant, Piney Point; the USS Agri-Chemicals chemical plant, Bartow; and the International Minerals and Chemical Corporation Clear Springs mine, Bartow. Approximately 5,400 field and laboratory water quality determinations on water samples collected from about 100 test holes and 28 surface-water, 5 rainfall, and other sampling sites at phosphate industry beneficiation and chemical plant waste disposal operations are tabulated. Maps are included to show sampling sites. (USGS)
Lake Superior water quality near Duluth from analysis of aerial photos and ERTS imagery
NASA Technical Reports Server (NTRS)
Scherz, J. P.; Van Domelen, J. F.
1973-01-01
ERTS imagery of Lake Superior in the late summer of 1972 shows dirty water near the city of Duluth. Water samples and simultaneous photographs were taken on three separate days following a heavy storm which caused muddy runoff water. The water samples were analyzed for turbidity, color, and solids. Reflectance and transmittance characteristics of the water samples were determined with a spectrophotometer apparatus. This same apparatus attached to a microdensitometer was used to analyze the photographs for the approximate colors or wavelengths of reflected energy that caused the exposure. Although other parameters do correlate for any one particular day, it is only the water quality parameter of turbidity that correlates with the aerial imagery on all days, as the character of the dirty water changes due to settling and mixing.
Carbon System Dynamics within the Papahānaumokuākea Marine National Monument
NASA Astrophysics Data System (ADS)
Kealoha, A. K.; Winn, C. D.; Kahng, S.; Alin, S. R.; Mackenzie, F. T.; Kosaki, R.
2013-12-01
Continuous underway measurements of atmospheric CO2, oceanic pCO2, pH, salinity, temperature, and oxygen were collected in surface waters within Papahānaumokuākea Marine National Monument (PMNM). Transects were conducted in the summers of 2011 and 2012 and encompassed the entire length of monument waters from approximately 21° to 28°N. Discrete samples were obtained from the underway system for the determination of spectrophotometric pH and titration alkalinity. The discrete pH samples were used to assess the consistency of the underway pH electrode and indicate that the electrode generated consistent and precise data over the duration of each cruise. The underway data collected over the entire transects show considerable variability in carbon parameters and reflect mainly the intense biological activity that occurs within coral reef ecosystems in and around the atolls comprising the Northwestern Hawaiian Archipelago. The impact of organic and inorganic metabolism on the carbon system in nearshore water was assessed primarily from measurements taken at French Frigate Shoals (FFS), where our most intense sampling occurred. For this analysis, all of the data collected within the area encompassed by the atoll and the surrounding ocean roughly 10 km from the 50-meter depth contour were included. These data, which span an approximately 300-km2 area, clearly show that nearshore metabolic processes influence surface water chemistry out to at least 10 km away from the shallow-water environment. Our data also show that, while the spatio-temporal complexities associated with analyzing underway data can complicate the interpretation of pCO2 and pH variability, an obvious diel trend in total alkalinity (TA) was apparent. In addition, plotting temporal changes in total dissolved inorganic carbon (DIC) and TA revealed the relative contributions of organic and inorganic metabolism to net reef metabolism.
Electron Emission Observations from As-Grown and Vacuum-Coated Chemical Vapor Deposited Diamond
NASA Technical Reports Server (NTRS)
Lamouri, A.; Wang, Yaxin; Mearini, G. T.; Krainsky, I. L.; Dayton, J. A., Jr.; Mueller, W.
1996-01-01
Field emission has been observed from chemical vapor deposited diamond grown on Mo and Si substrates. Emission was observed at fields as low as 20 kV/cm. The samples were tested in the as-grown form, and after coating with thin films of Au, CsI, and Ni. The emission current was typically maximum at the onset of the applied field, but was unstable, and decreased rapidly with time from the as-grown films. Thin Au layers, approximately 15 nm thick, vacuum deposited onto the diamond samples significantly improved the stability of the emission current at values approximately equal to those from uncoated samples at the onset of the applied field. Thin layers of CsI, approximately 5 nm thick, were also observed to improve the stability of the emission current but at values less than those from the uncoated samples at the onset of the applied field. While Au and CsI improved the stability of the emission, Ni was observed to have no effect.
Rotation periods of open-cluster stars, 3
NASA Technical Reports Server (NTRS)
Prosser, Charles F.; Shetrone, Matthew D.; Dasgupta, Amil; Backman, Dana E.; Laaksonen, Bentley D.; Baker, Shawn W.; Marschall, Laurence A.; Whitney, Barbara A.; Kuijken, Konrad; Stauffer, John R.
1995-01-01
We present the results from a photometric monitoring program of 15 open cluster stars and one weak-lined T Tauri star during late 1993/early 1994. Several slow rotators which are members of the Alpha Persei, Pleiades, and Hyades open clusters have been monitored and period estimates derived. Using all available Pleiades stars with photometric periods together with current X-ray flux measurements, we illustrate the X-ray activity/rotation relation among Pleiades late-G/K dwarfs. The data show a clear break in the rotation-activity relation around P approximately 6-7 days -- in general accordance with previous results using more heterogeneous samples of G/K stars.
Hostinar, Camelia E.; McQuillan, Mollie T.; Mirous, Heather J.; Grant, Kathryn E.; Adam, Emma K.
2014-01-01
Laboratory social stress tests involving public speaking challenges are widely used for eliciting an acute stress response in older children, adolescents, and adults. Recently, a group protocol for a social stress test (the Trier Social Stress Test for Groups, TSST-G) was shown to be effective in adults and is dramatically less time-consuming and resource-intensive compared to the single-subject version of the task. The present study sought to test the feasibility and effectiveness of an adapted group public speaking task conducted with a racially diverse, urban sample of U.S. adolescents (N = 191; 52.4% female) between the ages of 11 and 18 (M = 14.4 years, SD = 1.93). Analyses revealed that this Group Public Speaking Task for Adolescents (GPST-A) provoked a significant increase in cortisol production (on average, approximately 60% above baseline) and in self-reported negative affect, while at the same time avoiding excessive stress responses that would raise ethical concerns or provoke substantial participant attrition. Approximately 63.4% of participants exhibited an increase in cortisol levels in response to the task, with 59.2% of the total sample showing a 10% or greater increase from baseline. Results also suggested that groups of 5 adolescents might be ideal for achieving more uniform cortisol responses across various serial positions for speech delivery. Basal cortisol levels increased with age and participants belonging to U.S. national minorities tended to have either lower basal cortisol or diminished cortisol reactivity compared to non-Hispanic Whites. This protocol facilitates the recruitment of larger sample sizes compared to prior research and may show great utility in answering new questions about adolescent stress reactivity and development. PMID:25218656
Melancon, M.J.; Kutay, A.L.; Woodin, Bruce R.; Stegeman, John J.
2006-01-01
Six-month-old lesser scaup (Aythya affinis) and nestling tree swallows (Tachycineta bicolor) were injected intraperitoneally with beta-naphthoflavone (BNF) in corn oil or in vehicle alone. Liver samples were taken and stored at -80 degrees C until microsome preparation and monooxygenase assay. Skin samples were placed in buffered formalin for subsequent immunohistochemical (IHC) analysis for cytochrome P4501A (CYP1A). Lesser scaup treated with BNF at 20 or 100 mg/kg body weight showed approximately 6- to 18-fold increases in four monooxygenases (benzyloxyresorufin-O-dealkylase, ethoxyresorufin-O-dealkylase, methoxyresorufin-O-dealkylase, and pentoxyresorufin-O-dealkylase). No IHC response was observed for CYP1A in the skin of vehicle-injected ducks, whereas in the skin from BNF-treated ducks, the positive IHC response was of similar magnitude for both dose levels of BNF. Tree swallows injected with BNF at 100 mg/kg, but not at 20 mg/kg, showed significant increases (approximately fivefold) in hepatic microsomal O-dealkylase activities. Cytochrome P4501A was undetectable by IHC response in skin from corn oil-treated swallows, but positive IHC responses were observed in the skin of one of five swallows at 20 mg/kg and four of five swallows at 100 mg/kg. Although these data do not allow construction of significant dose-response curves, the IHC responses for CYP1A in skin support the possible use of this nonlethal approach for biomonitoring contaminant exposure of birds. In addition, the CYP1A signal observed at the bases of emerging feathers suggests that these might provide less invasive sampling sites for IHC analysis of CYP1A.
NASA Astrophysics Data System (ADS)
Deng, Shijie; McAuliffe, Michael A. P.; Salaj-Kosla, Urszula; Wolfe, Raymond; Lewis, Liam; Huyet, Guillaume
2017-02-01
In this work, a low-cost optical pH sensing system that allows for small-volume sample measurements was developed. The system operates without laboratory instruments (e.g. laser source, spectrometer, and CCD camera), which lowers the cost and enhances portability. In the system, an optical arrangement employing a dichroic filter was used, which allows the excitation and emission light to be transmitted through a single fibre, improving the collection efficiency of the fluorescence signal and enabling insertion measurements. The pH sensor in the system uses bromocresol purple as the indicator, which is immobilised by sol-gel technology through a dip-coating process. The sensor material was coated on the tip of a 1 mm diameter optical fibre, which makes it possible to insert the probe into very small volume samples to measure the pH. In the system, an LED with a peak emission wavelength of 465 nm is used as the light source and a silicon photo-detector is used to detect the fluorescence signal. Optical filters are applied after the LED and in front of the photo-detector to separate the excitation and emission light. The fluorescence signal collected is transferred to a PC through a DAQ and processed by a Labview-based graphic-user-interface (GUI). Experimental results show that the system is capable of sensing pH values from 5.3 to 8.7 with a linear response of R2 = 0.969. Results also show that the response time for a pH change from 5.3 to 8.7 is approximately 150 s, and for a 0.5 pH change it is approximately 50 s.
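For illustration, the kind of linear calibration behind a reported "linear response of R2 = 0.969" can be computed with an ordinary least-squares fit of detector signal against known buffer pH. This sketch is not the system's actual Labview processing; the function name and sample data in the usage note are ours.

```python
def linear_fit_r2(xs, ys):
    """Ordinary least-squares line fit (y = intercept + slope * x) together with
    the coefficient of determination R^2, as used to characterize how linearly
    a sensor signal tracks a calibration variable such as pH."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

Feeding it pairs of (buffer pH, measured fluorescence) over the 5.3-8.7 range yields the calibration slope and the R² figure of merit.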
Fine tuning classical and quantum molecular dynamics using a generalized Langevin equation
NASA Astrophysics Data System (ADS)
Rossi, Mariana; Kapil, Venkat; Ceriotti, Michele
2018-03-01
Generalized Langevin Equation (GLE) thermostats have been used very effectively as a tool to manipulate and optimize the sampling of thermodynamic ensembles and the associated static properties. Here we show that a similar, exquisite level of control can be achieved for the dynamical properties computed from thermostatted trajectories. We develop quantitative measures of the disturbance induced by the GLE to the Hamiltonian dynamics of a harmonic oscillator, and show that these analytical results accurately predict the behavior of strongly anharmonic systems. We also show that it is possible to correct, to a significant extent, the effects of the GLE term onto the corresponding microcanonical dynamics, which puts on more solid grounds the use of non-equilibrium Langevin dynamics to approximate quantum nuclear effects and could help improve the prediction of dynamical quantities from techniques that use a Langevin term to stabilize dynamics. Finally we address the use of thermostats in the context of approximate path-integral-based models of quantum nuclear dynamics. We demonstrate that a custom-tailored GLE can alleviate some of the artifacts associated with these techniques, improving the quality of results for the modeling of vibrational dynamics of molecules, liquids, and solids.
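A minimal sketch of the mechanism, in the white-noise limit where the GLE reduces to ordinary Langevin dynamics (the paper's colored-noise thermostats generalize this with auxiliary momenta): the Ornstein-Uhlenbeck part of the dynamics can be propagated exactly over one time step. The function name and parameter choices are illustrative assumptions, not the authors' code.

```python
import math
import random

def langevin_o_step(v, mass, gamma, dt, kT, rng):
    """Exact one-step update of the Ornstein-Uhlenbeck ('O') part of Langevin
    dynamics: dv = -gamma*v dt + sqrt(2*gamma*kT/m) dW.  The two coefficients
    are chosen so the stationary distribution is exactly Maxwell-Boltzmann."""
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)
    return c1 * v + c2 * rng.gauss(0.0, 1.0)
```

Iterating this map drives the velocity distribution to <v²> = kT/m regardless of the starting point; a GLE thermostat replaces the single friction gamma with a matrix acting on extended momenta, giving the frequency-dependent control the abstract describes.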
High-throughput analysis of spatio-temporal dynamics in Dictyostelium
Sawai, Satoshi; Guan, Xiao-Juan; Kuspa, Adam; Cox, Edward C
2007-01-01
We demonstrate a time-lapse video approach that allows rapid examination of the spatio-temporal dynamics of Dictyostelium cell populations. Quantitative information was gathered by sampling life histories of more than 2,000 mutant clones from a large mutagenesis collection. Approximately 4% of the clonal lines showed a mutant phenotype at one stage. Many of these could be ordered by clustering into functional groups. The dataset allows one to search and retrieve movies on a gene-by-gene and phenotype-by-phenotype basis. PMID:17659086
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erchinger, J. L.; Orrell, John L.; Aalseth, C. E.
The Ultra-Low Background Liquid Scintillation Counter developed by Pacific Northwest National Laboratory will expand the application of liquid scintillation counting by enabling lower detection limits and smaller sample volumes. By reducing the overall count rate of the background environment approximately 2 orders of magnitude below that of commercially available systems, backgrounds on the order of tens of counts per day over an energy range of ~3–3600 keV can be realized. Finally, initial test results of the ULB LSC show promising results for ultra-low background detection with liquid scintillation counting.
Kittawornrat, Apisit; Panyasing, Yaowalak; Goodell, Christa; Wang, Chong; Gauger, Phillip; Harmon, Karen; Rauh, Rolf; Desfresne, Luc; Levis, Ian; Zimmerman, Jeffrey
2014-01-31
Oral fluid samples collected from litters of piglets (n=600) one day prior to weaning were evaluated as a method to surveil for porcine reproductive and respiratory syndrome virus (PRRSV) infections in four sow herds of approximately 12,500 sows each. Serum samples from the litters' dam (n=600) were included for comparison. All four herds were endemically infected with PRRSV and all sows had been vaccinated ≥ 2 times with PRRSV modified-live virus vaccines. After all specimens had been collected, samples were randomized and assayed by PRRSV real-time reverse transcription polymerase chain reaction (RT-qPCR) and four PRRSV antibody ELISA assays (IgM, IgA, IgG, and Commercial Kit). All sow serum samples were negative by PRRSV RT-qPCR, but 9 of 600 oral fluid samples tested positive at two laboratories. Open reading frame 5 (ORF5) sequencing of 2 of the 9 positive oral fluid samples identified wild-type viruses as the source of the infection. A comparison of antibody responses in RT-qPCR positive vs. negative oral fluid samples showed significantly higher IgG S/P ratios in RT-qPCR-positive oral fluid samples (mean S/P 3.46 vs. 2.36; p=0.02). Likewise, sow serum samples from RT-qPCR-positive litter oral fluid samples showed significantly higher serum IgG (mean S/P 1.73 vs. 0.98; p<0.001) and Commercial Kit (mean S/P 1.97 vs. 0.98; p<0.001) S/P ratios. Overall, the study showed that pre-weaning litter oral fluid samples could provide an efficient and sensitive approach to surveil for PRRSV in infected, vaccinated, or presumed-negative pig breeding herds. Copyright © 2014 Elsevier B.V. All rights reserved.
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N^2, where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
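The 1/N^2 scaling for population samples can be made concrete with the commonly quoted approximation var(h2_hat) ≈ 2 / (N^2 · var(relatedness)). The sketch below is illustrative only: the default relatedness variance (~2e-5, a typical value for nominally unrelated humans) and the function name are our assumptions, not figures from this abstract.

```python
import math

def h2_standard_error(n, var_relatedness=2e-5):
    """Approximate standard error of a marker-based heritability estimate for a
    sample of n nominally unrelated individuals, using the approximation
    var(h2_hat) ~= 2 / (n^2 * var(relatedness)).  The default variance of
    genome-wide relatedness is an assumed, typical illustrative value."""
    return math.sqrt(2.0 / var_relatedness) / n
```

With this value, the standard error falls off as ~316/n, so roughly 3,000 unrelated individuals are needed before the standard error drops near 0.1, which is why marker-based designs demand large N.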
Epsky, Nancy D; Espinoza, Hernán R; Kendra, Paul E; Abernathy, Robert; Midgarden, David; Heath, Robert R
2010-10-01
Studies were conducted in Honduras to determine effective sampling range of a female-targeted protein-based synthetic attractant for the Mediterranean fruit fly, Ceratitis capitata (Wiedemann) (Diptera: Tephritidae). Multilure traps were baited with ammonium acetate, putrescine, and trimethylamine lures (three-component attractant) and sampled over eight consecutive weeks. Field design consisted of 38 traps (over 0.5 ha) placed in a combination of standard and high-density grids to facilitate geostatistical analysis, and tests were conducted in coffee (Coffea arabica L.), mango (Mangifera indica L.), and orthanique (Citrus sinensis X Citrus reticulata). Effective sampling range, as determined from the range parameter obtained from experimental variograms that fit a spherical model, was approximately 30 m for flies captured in tests in coffee or mango and approximately 40 m for flies captured in orthanique. For comparison, a release-recapture study was conducted in mango using wild (field-collected) mixed sex C. capitata and an array of 20 baited traps spaced 10-50 m from the release point. Contour analysis was used to document spatial distribution of fly recaptures and to estimate effective sampling range, defined by the area that encompassed 90% of the recaptures. With this approach, effective range of the three-component attractant was estimated to be approximately 28 m, similar to results obtained from variogram analysis. Contour maps indicated that wind direction had a strong influence on sampling range, which was approximately 15 m greater upwind compared with downwind from the release point. Geostatistical analysis of field-captured insects in appropriately designed trapping grids may provide a supplement or alternative to release-recapture studies to estimate sampling ranges for semiochemical-based trapping systems.
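The range parameter read off a spherical-model variogram fit, as described above, can be sketched as follows. This is an illustrative grid-search fit with hypothetical helper names, not the authors' geostatistical software: for each candidate range a, the nugget and sill enter the spherical model linearly, so they follow by ordinary least squares.

```python
def spherical_basis(h, a):
    """Unit-sill spherical variogram structure: rises as 1.5(h/a) - 0.5(h/a)^3
    and plateaus at 1 once the lag h reaches the range a."""
    return 1.0 if h >= a else 1.5 * h / a - 0.5 * (h / a) ** 3

def fit_spherical_range(lags, gammas, a_grid):
    """Fit gamma(h) = nugget + sill * spherical_basis(h, a) to an empirical
    variogram by grid-searching a; nugget and sill are solved by OLS for each
    candidate.  Returns (range, nugget, sill) with the smallest squared error."""
    n = len(lags)
    best = None
    for a in a_grid:
        g = [spherical_basis(h, a) for h in lags]
        sg, sgg = sum(g), sum(x * x for x in g)
        sy, sgy = sum(gammas), sum(x * y for x, y in zip(g, gammas))
        det = n * sgg - sg * sg
        if abs(det) < 1e-12:
            continue
        sill = (n * sgy - sg * sy) / det
        nugget = (sy - sill * sg) / n
        sse = sum((y - (nugget + sill * x)) ** 2 for x, y in zip(g, gammas))
        if best is None or sse < best[0]:
            best = (sse, a, nugget, sill)
    return best[1], best[2], best[3]
```

The fitted range a is the "effective sampling range" reported in the abstract: the lag distance beyond which trap counts are spatially uncorrelated.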
mBEEF-vdW: Robust fitting of error estimation density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes
2016-06-15
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
ERIC Educational Resources Information Center
Murray, Frank
2013-01-01
This article is a report of the findings from a sample of approximately 2,700 students and 1,000 faculty in the first 50 Teacher Education Accreditation Council (TEAC)-accredited programs for which the online surveys were used. The sample represents nearly all the full-time faculty members surveyed and approximately 30% of the students. On the…
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point-by-point state comparison. The technique used is that of approximating a continuous data system by a sampled data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of the Skylab is presented.
Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument
NASA Astrophysics Data System (ADS)
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.
2015-06-01
The J-PET scanner, which allows for single bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling in the voltage domain of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed form analytical solution that does not require iterative processing. Moreover, from the Bayes theory the properties of regularized solution, especially its covariance matrix, may be easily derived. This is the key to introduce and prove the formula for calculations of the signal recovery error. In this paper we show that an average recovery error is approximately inversely proportional to the number of acquired samples.
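The closed-form recovery the paper exploits is Tikhonov (ridge) regularization, x_hat = (A^T A + lambda*I)^(-1) A^T y, which indeed needs no iterative processing. A minimal sketch under the assumption of a generic sampling matrix A (the actual J-PET method restricts the solution to a PCA basis of training signals, which is omitted here):

```python
def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def tikhonov_recover(A, y, lam):
    """Closed-form x_hat = (A^T A + lam*I)^(-1) A^T y, no iterations."""
    n = len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Aty = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(n)]
    return solve(AtA, Aty)
```

With lam = 0 this is ordinary least squares; lam > 0 shrinks the solution, trading bias for noise robustness when only a few samples are acquired.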
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural logarithm transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports a negative binomial ICC estimate that includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- and square root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
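For orientation, the LMM-based ICC that the negative binomial version is compared against is simply the between-source share of total variance. A minimal sketch, assuming the variance components have already been estimated (e.g., by REML):

```python
def icc_lmm(var_between, var_within):
    """ICC = sigma_b^2 / (sigma_b^2 + sigma_w^2): the proportion of
    total variance attributable to differences between sources, i.e.
    the expected correlation of two measurements from one source."""
    total = var_between + var_within
    return var_between / total
```

The negative binomial ICC of the paper generalizes this ratio to counts on the original scale, avoiding the transformation step entirely.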
NASA Astrophysics Data System (ADS)
López-Sánchez, M.; Mansilla-Plaza, L.; Sánchez-de-laOrden, M.
2017-10-01
Prior to field scale research, soil samples are analysed on a laboratory scale for electrical resistivity calibrations. Currently, there are a variety of field instruments to estimate the water content in soils using different physical phenomena. These instruments can be used to develop moisture-resistivity relationships on the same soil samples. This assures that measurements are performed on the same material and under the same conditions (e.g., humidity and temperature). A geometric factor, which depends on the location of the electrodes, is applied in order to calculate the apparent electrical resistivity of the laboratory test cells. This geometric factor can be determined in three different ways: by an analytical approximation, by laboratory trials (experimental approximation), or by the analysis of a numerical model. The analytical approximation is not appropriate for complex cells or arrays, and both the experimental and numerical approximations can lead to inaccurate results. Therefore, we propose a novel approach to obtain a compromise solution between both techniques, providing a more precise determination of the geometric factor.
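For reference, the analytical approximation follows from superposing point-source potentials on a homogeneous half-space; for a Wenner array with spacing a it reduces to k = 2*pi*a. A sketch under that idealized half-space assumption (which is precisely what breaks down for complex laboratory cells):

```python
import math

def geometric_factor_general(AM, AN, BM, BN):
    """General four-electrode geometric factor on a homogeneous
    half-space: k = 2*pi / (1/AM - 1/AN - 1/BM + 1/BN), where the
    arguments are current-to-potential electrode distances."""
    return 2 * math.pi / (1 / AM - 1 / AN - 1 / BM + 1 / BN)

def apparent_resistivity(k, delta_v, current):
    """rho_a = k * V / I (ohm-m if distances are in m, V in volts,
    I in amperes)."""
    return k * delta_v / current

# Wenner array with spacing a: AM = a, AN = 2a, BM = 2a, BN = a,
# which collapses to the familiar k = 2*pi*a.
a = 0.5
k_wenner = geometric_factor_general(a, 2 * a, 2 * a, a)
```

For a confined test cell the boundaries violate the half-space assumption, which is why the paper resorts to experimental and numerical determinations of k.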
Amann, Rupert P; Chapman, Phillip L
2009-01-01
We retrospectively mined and modeled data to answer 3 questions. 1) Relative to an estimate based on approximately 20 semen samples, how imprecise is an estimate of an individual's total sperm per ejaculate (TSperm) based on 1 sample? 2) What is the impact of abstinence interval on TSperm and TSperm/h? 3) How many samples are needed to provide a meaningful estimate of an individual's mean TSperm or TSperm/h? Data were for 18-20 consecutive masturbation samples from each of 48 semen donors. Modeling exploited the gamma distribution of values for TSperm and a unique approach to project to future samples. Answers: 1) Within-individual coefficients of variation were similar for TSperm or TSperm/h abstinence and ranged from 17% to 51%; averaging approximately 34%. TSperm or TSperm/h in any individual sample from a given donor was between -20% and +20% of the mean value in 48% of 18-20 samples per individual. 2) For a majority of individuals, TSperm increased in a nearly linear manner through approximately 72 hours of abstinence. TSperm and TSperm/h after 18-36 hours' abstinence are high. To obtain meaningful values for diagnostic purposes and maximize distinction of individuals with relatively low or high sperm production, the requested abstinence should be 42-54 hours with an upper limit of 64 hours. For individuals producing few sperm, 7 days or more of abstinence might be appropriate to obtain sperm for insemination. 3) At least 3 samples from a hypothetical future subject are recommended for most applications. Assuming 60 hours' abstinence, 80% confidence limits for TSperm/h for 1, 3, or 6 samples would be 70%-163%, 80%-130%, or 85%-120% of the mean for observed values. In only approximately 50% of cases would TSperm/h for a single sample be within -16% and +30% of the true mean value for that subject. Pooling values for TSperm in samples obtained after 18-36 or 72-168 hours' abstinence with values for TSperm obtained after 42-64 hours is inappropriate.
Reliance on TSperm for a single sample per subject is unwise.
NASA Technical Reports Server (NTRS)
Eriksson, S.; Wilder, F. D.; Ergun, R. E.; Schwartz, S. J.; Cassak, P. A.; Burch, J. L.; Chen, Li-Jen; Torbert, R. B.; Phan, T. D.; Lavraud, B.;
2016-01-01
We report observations from the Magnetospheric Multiscale (MMS) satellites of a large guide field magnetic reconnection event. The observations suggest that two of the four MMS spacecraft sampled the electron diffusion region, whereas the other two spacecraft detected the exhaust jet from the event. The guide magnetic field amplitude is approximately 4 times that of the reconnecting field. The event is accompanied by a significant parallel electric field (E∥) that is larger than predicted by simulations. The high-speed (approximately 300 km/s) crossing of the electron diffusion region limited the data set to one complete electron distribution inside of the electron diffusion region, which shows significant parallel heating. The data suggest that E∥ is balanced by a combination of electron inertia and a parallel gradient of the gyrotropic electron pressure.
Evidence of iridescence in TiO2 nanostructures: An approximation in plane wave expansion method
NASA Astrophysics Data System (ADS)
Quiroz, Heiddy P.; Barrera-Patiño, C. P.; Rey-González, R. R.; Dussan, A.
2016-11-01
Titanium dioxide nanotubes, TiO2 NTs, can be obtained by electrochemical anodization of titanium sheets. After the nanotubes are removed by mechanical stress, residual structures or traces can be observed on the surface of the titanium sheets. These traces show iridescent effects. In this paper we carry out both an experimental and a theoretical study of these interesting and novel optical properties. For the experimental analysis we use angle-resolved UV-vis spectroscopy, while in the theoretical study we evaluate the photonic spectra using numerical simulations in the frequency domain within the framework of the plane wave approximation. The iridescent effect is robust and independent of the sample. This behavior can be important for designing new materials or compounds for several applications, such as the cosmetic industry, optoelectronic devices, photocatalysis, and sensors, among others.
Magnetostatic modes in ferromagnetic samples with inhomogeneous internal fields
NASA Astrophysics Data System (ADS)
Arias, Rodrigo
2015-03-01
Magnetostatic modes in ferromagnetic samples are very well characterized and understood in samples with uniform internal magnetic fields. More recently, interest has shifted to the study of magnetization modes in ferromagnetic samples with inhomogeneous internal fields. The present work shows that under the magnetostatic approximation, and for samples of arbitrary shape and/or arbitrary inhomogeneous internal magnetic fields, the modes can be classified as elliptic or hyperbolic, and their associated frequency spectrum can be delimited. This results from the analysis of the character of the second order partial differential equation for the magnetostatic potential under these general conditions. In general, a sample with an inhomogeneous internal field and at a given frequency may have regions of elliptic and hyperbolic character separated by a boundary. In the elliptic regions the magnetostatic modes have a smooth monotonic character (generally decaying from the surfaces, a "tunneling" behavior), and in hyperbolic regions an oscillatory wave-like character. A simple local criterion distinguishes hyperbolic from elliptic regions: the sign of a susceptibility parameter. This study shows that one may control magnetostatic modes to some extent via external fields or geometry. R.E.A. acknowledges Financiamiento Basal para Centros Cientificos y Tecnologicos de Excelencia under Project No. FB 0807 (Chile), Grant No. ICM P10-061-F by Fondo de Innovacion para la Competitividad-MINECON, and Proyecto Fondecyt 1130192.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" of similar model parametrizations replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Goldstein, Steven J; Abdel-Fattah, Amr I; Murrell, Michael T; Dobson, Patrick F; Norman, Deborah E; Amato, Ronald S; Nunn, Andrew J
2010-03-01
Uranium-series data for groundwater samples from the Nopal I uranium ore deposit were obtained to place constraints on radionuclide transport and hydrologic processes for a nuclear waste repository located in fractured, unsaturated volcanic tuff. Decreasing uranium concentrations for wells drilled in 2003 are consistent with a simple physical mixing model that indicates that groundwater velocities are low (approximately 10 m/y). Uranium isotopic constraints, well productivities, and radon systematics also suggest limited groundwater mixing and slow flow in the saturated zone. Uranium isotopic systematics for seepage water collected in the mine adit show a spatial dependence which is consistent with longer water-rock interaction times and higher uranium dissolution inputs at the front adit where the deposit is located. Uranium-series disequilibria measurements for mostly unsaturated zone samples indicate that (230)Th/(238)U activity ratios range from 0.005 to 0.48 and (226)Ra/(238)U activity ratios range from 0.006 to 113. (239)Pu/(238)U mass ratios for the saturated zone are <2 x 10(-14), and Pu mobility in the saturated zone is >1000 times lower than the U mobility. Saturated zone mobility decreases in the order (238)U approximately (226)Ra > (230)Th approximately (239)Pu. Radium and thorium appear to have higher mobility in the unsaturated zone based on U-series data from fractures and seepage water near the deposit.
Schenk, Liam N.; Bragg, Heather M.
2014-01-01
The drawdown of Fall Creek Lake resulted in the net transport of approximately 50,300 tons of sediment from the lake during a 6-day drawdown operation, based on computed daily values of suspended-sediment load downstream of Fall Creek Dam and the two main tributaries to Fall Creek Lake. A suspended-sediment budget calculated for 72 days of the study period indicates that as a result of drawdown operations, there was approximately 16,300 tons of sediment deposition within the reaches of Fall Creek and the Middle Fork Willamette River between Fall Creek Dam and the streamgage on the Middle Fork Willamette River at Jasper, Oregon. Bedload samples collected at the station downstream of Fall Creek Dam during the drawdown were primarily composed of medium to fine sands and accounted for an average of 11 percent of the total instantaneous sediment load (also termed sediment discharge) during sample collection. Monitoring of dissolved oxygen at the station downstream of Fall Creek Dam showed an initial decrease in dissolved oxygen concurrent with the sediment release over the span of 5 hours, though the extent of dissolved oxygen depletion is unknown because of extreme and rapid fouling of the probe by the large amount of sediment in transport. Dissolved oxygen returned to background levels downstream of Fall Creek Dam on December 18, 2012, approximately 1 day after the end of the drawdown operation.
NASA Astrophysics Data System (ADS)
Kreisberg, N. M.; Worton, D. R.; Zhao, Y.; Isaacman, G.; Goldstein, A. H.; Hering, S. V.
2014-07-01
A reliable method of sample introduction is presented for on-line gas chromatography, with a special application to in-situ field-portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a controlled pressure switching device, a valveless interface (VLI), that offers the advantage of long-term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows customizing the interface for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient detector and 15% accurate when applied to a vacuum based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show approximately three times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface used in the in-situ instrument demonstrate minimal trending and a zero failure rate during field deployments ranging up to four weeks of continuous sampling. Extension of the VLI to dual collection cells is presented, with less than 3% cell-to-cell carry-over.
Rapid Determination of Salmonella in Samples of Egg Noodles, Cake Mixes, and Candies
Banwart, George J.; Kreitzer, Madeleine J.
1969-01-01
A glass apparatus system was compared with a standard enrichment broth-selective agar method to test samples of egg noodles, cake mixes, and candy for the presence or absence of salmonellae. The glass apparatus system used fermentation of mannitol, production of H2S, or motility, in conjunction with a serological test of flagellar antigens, to detect salmonellae. No salmonellae were detected in 173 samples of food products. Of these samples, 171 were found to be Salmonella-negative after 48 hr with the glass apparatus system. After 72 hr, the standard Salmonella procedure yielded 38 samples which produced Salmonella false-positive results on selective agars. Inoculation of samples with cultures of Salmonella showed that approximately one inoculated cell could be detected after 48 hr of incubation with the glass apparatus. The standard Salmonella test requires a minimum of 72 hr for completion. Compared with the standard Salmonella test, the glass apparatus system is a more rapid and simple system that can be used to determine the presence or absence of Salmonella in these food products. PMID:5370460
Liu, Heping; Zhang, Qianyu; Katul, Gabriel G.; ...
2016-05-24
CO2 emissions from inland waters are commonly determined by indirect methods that are based on the product of a gas transfer coefficient and the concentration gradient at the air water interface (e.g., wind-based gas transfer models). The measurements of concentration gradient are typically collected during the day in fair weather throughout the course of a year. Direct measurements of eddy covariance CO2 fluxes from a large inland water body (Ross Barnett reservoir, Mississippi, USA) show that CO2 effluxes at night are approximately 70% greater than those during the day. At longer time scales, frequent synoptic weather events associated with extratropical cyclones induce CO2 flux pulses, resulting in further increase in annual CO2 effluxes by 16%. Therefore, CO2 emission rates from this reservoir, if these diel and synoptic processes are under-sampled, are likely to be underestimated by approximately 40%. Our results also indicate that the CO2 emission rates from global inland waters reported in the literature, when based on indirect methods, are likely underestimated. Field samplings and indirect modeling frameworks that estimate CO2 emissions should account for both daytime-nighttime efflux difference and enhanced emissions during synoptic weather events. Furthermore, the analysis here can guide carbon emission sampling to improve regional carbon estimates.
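The approximately 40% figure is consistent with back-of-envelope arithmetic. This sketch assumes day and night contribute equally to the diel mean, an assumption made here only for illustration:

```python
# Normalize the daytime CO2 efflux to 1 and apply the reported ratios.
day_flux = 1.0
night_flux = 1.7 * day_flux              # night ~70% greater than day
diel_mean = (day_flux + night_flux) / 2  # assumes equal day/night length
annual_mean = diel_mean * 1.16           # synoptic pulses add a further ~16%

# Day-only fair-weather sampling measures roughly day_flux, so the
# fractional underestimate of the annual mean is:
underestimate = 1 - day_flux / annual_mean  # ~0.36, i.e. roughly 40%
```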
Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Grasso, Salvatore; Sakka, Yoshio; Tok, Alfred; Su, Liap Tat; Bosman, Michael; Ma, Jan
2012-02-01
Boron carbide B4C powders were subjected to reactive spark plasma sintering (also known as field assisted sintering, pulsed current sintering or plasma assisted sintering) under nitrogen atmosphere. For an optimum hexagonal BN (h-BN) content estimated from X-ray diffraction measurements at approximately 0.4 wt%, the as-prepared B4C-(BxOy/BN) ceramic shows values of Berkovich and Vickers hardness of 56.7 +/- 3.1 GPa and 39.3 +/- 7.6 GPa, respectively. These values are higher than for the vacuum SPS processed B4C pristine sample and the samples with mechanically added h-BN. XRD and electron microscopy data suggest that in the samples produced by reactive SPS in N2 atmosphere, and containing an estimated amount of 0.3-1.5% h-BN, the crystallite size of the boron carbide grains decreases with the increasing amount of N2, while for the newly formed lamellar h-BN the crystallite size is almost constant (approximately 30-50 nm). BN is located at the grain boundaries between the boron carbide grains, where it is wrapped and intercalated by a thin layer of boron oxide. BxOy/BN forms a fine and continuous 3D mesh-like structure that is a possible reason for the good mechanical properties.
Computational tools for exact conditional logistic regression.
Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P
Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally infeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
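A simplified sketch of the idea, run in its deterministic special case on a small quadratic. This uses plain BFGS curvature updates with a regularized descent direction d = -(B + delta*I)^(-1) g, not the full RES update, and the step size and delta are illustrative:

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def outer(u, v, s=1.0):
    return [[s * u[i] * v[j] for j in range(len(v))] for i in range(len(u))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def inv2(M):
    """Inverse of a 2x2 matrix (enough for this toy problem)."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def bfgs_quadratic(A, b, delta=0.1, eps=0.1, iters=200):
    """Minimize 0.5 w^T A w - b^T w with a BFGS curvature estimate B
    and regularized steps w <- w - eps * (B + delta*I)^(-1) grad."""
    w, B = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
    g = [gi - bi for gi, bi in zip(mat_vec(A, w), b)]  # grad = A w - b
    for _ in range(iters):
        Breg = mat_add(B, [[delta, 0.0], [0.0, delta]])
        d = [-x for x in mat_vec(inv2(Breg), g)]
        w_new = [wi + eps * di for wi, di in zip(w, d)]
        g_new = [gi - bi for gi, bi in zip(mat_vec(A, w_new), b)]
        s = [eps * di for di in d]                 # step taken
        y = [gn - gi for gn, gi in zip(g_new, g)]  # gradient difference
        sy = dot(s, y)
        if sy > 1e-12:  # curvature condition keeps B positive definite
            Bs = mat_vec(B, s)
            B = mat_add(B, mat_add(outer(y, y, 1.0 / sy),
                                   outer(Bs, Bs, -1.0 / dot(s, Bs))))
        w, g = w_new, g_new
    return w
```

In the stochastic setting, g and g_new would be minibatch gradients evaluated at the same samples; the delta regularization is what keeps the curvature estimate well conditioned despite the gradient noise.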
Families of FPGA-Based Accelerators for Approximate String Matching
Van Court, Tom; Herbordt, Martin C.
2011-01-01
Dynamic programming for approximate string matching is a large family of different algorithms, which vary significantly in purpose, complexity, and hardware utilization. Many implementations have reported impressive speed-ups, but have typically been point solutions – highly specialized and addressing only one or a few of the many possible options. The problem to be solved is creating a hardware description that implements a broad range of behavioral options without losing efficiency due to feature bloat. We report a set of three component types that address different parts of the approximate string matching problem. This allows each application to choose the feature set required, then make maximum use of the FPGA fabric according to that application’s specific resource requirements. Multiple, interchangeable implementations are available for each component type. We show that these methods allow the efficient generation of a large, if not complete, family of accelerators for this application. This flexibility was obtained while retaining high performance: We have evaluated a sample against serial reference codes and found speed-ups of from 150× to 400× over a high-end PC. PMID:21603598
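The core recurrence such accelerators parallelize is the classic dynamic-programming table for approximate string matching. A reference software version with unit costs (the FPGA families support weighted and banded variants not shown here):

```python
def edit_distance(a, b):
    """Levenshtein distance via the O(len(a)*len(b)) DP recurrence.
    Each cell depends only on its left, upper, and upper-left
    neighbors, the data dependency that FPGA systolic arrays exploit
    by computing whole anti-diagonals in parallel."""
    prev = list(range(len(b) + 1))  # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitute/match
        prev = cur
    return prev[-1]
```

Keeping only the previous row gives O(len(b)) memory, mirroring the bounded on-chip state of a hardware processing element.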
NASA Astrophysics Data System (ADS)
Dragoni, Daniele; Daff, Thomas D.; Csányi, Gábor; Marzari, Nicola
2018-01-01
We show that the Gaussian Approximation Potential (GAP) machine-learning framework can describe complex magnetic potential energy surfaces, taking ferromagnetic iron as a paradigmatic challenging case. The training database includes total energies, forces, and stresses obtained from density-functional theory in the generalized-gradient approximation, and comprises approximately 150,000 local atomic environments, ranging from pristine and defected bulk configurations to surfaces and generalized stacking faults with different crystallographic orientations. We find the structural, vibrational, and thermodynamic properties of the GAP model to be in excellent agreement with those obtained directly from first-principles electronic-structure calculations. There is good transferability to quantities, such as Peierls energy barriers, which are determined to a large extent by atomic configurations that were not part of the training set. We observe the benefit and the need of using highly converged electronic-structure calculations to sample a target potential energy surface. The end result is a systematically improvable potential that can achieve the same accuracy of density-functional theory calculations, but at a fraction of the computational cost.
Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.
Fang, Hongyan; Zhang, Hong; Yang, Yaning
2016-07-01
Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS imposes inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptibility genes. Rare variants typically have low frequencies of minor alleles, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
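The core idea, that pooled minor-allele counts are approximately Poisson, can be illustrated with a simple two-sample Poisson rate comparison. This is a generic conditional score statistic, not the paper's ePAST/mPAST statistics:

```python
import math

def poisson_rate_score_test(x_case, n_case, x_ctrl, n_ctrl):
    """Two-sample Poisson rate comparison via the conditional binomial:
    under H0 (equal per-subject rates), x_case | total ~ Bin(total, p)
    with p = n_case / (n_case + n_ctrl).  Returns the standardized score Z."""
    total = x_case + x_ctrl
    p = n_case / (n_case + n_ctrl)
    mean = total * p
    var = total * p * (1 - p)
    return (x_case - mean) / math.sqrt(var)
```

Under H0, Z is approximately standard normal, so |Z| > 1.96 rejects at the 5% level.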
Chemical analyses of provided samples
NASA Technical Reports Server (NTRS)
Becker, Christopher H.
1993-01-01
Two batches of samples were received, and chemical analysis of the surface and near-surface regions of the samples was performed by the surface analysis by laser ionization (SALI) method. The samples included four one-inch optics and several paint samples. The analyses emphasized surface contamination or modification. In these studies, pulsed sputtering by 7 keV Ar+ and primarily single-photon ionization (SPI) by coherent 118 nm radiation (at approximately 5 x 10^5 W/cm^2) were used. For two of the samples, multiphoton ionization (MPI) at 266 nm (approximately 5 x 10^11 W/cm^2) was also used. Most notable among the results was the silicone contamination on Mg2 mirror 28-92, and that the Long Duration Exposure Facility (LDEF) paint sample had been enriched in K and Na and depleted in Zn, Si, B, and organic compounds relative to the control paint.
The effect of sampling rate on observed statistics in a correlated random walk
Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.
2013-01-01
Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
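The central sampling effect is easy to reproduce: simulate a fine-grained correlated random walk, then observe it at coarser intervals and recompute the apparent speed. A minimal sketch with hypothetical parameter values (not the paper's model variants):

```python
import math
import random

def crw(n_steps, dt=0.01, speed=1.0, turn_sd=0.5, seed=1):
    """Simulate a 2-D correlated random walk: constant speed, with the
    heading perturbed each step by a Gaussian turn of s.d. turn_sd*sqrt(dt)."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    path = [(0.0, 0.0)]
    for _ in range(n_steps):
        theta += rng.gauss(0.0, turn_sd * math.sqrt(dt))
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        path.append((x, y))
    return path

def apparent_speed(path, every, dt=0.01):
    """Mean apparent speed when the path is observed every `every` steps."""
    pts = path[::every]
    chords = [math.dist(p, q) for p, q in zip(pts, pts[1:])]
    return sum(chords) / (len(chords) * every * dt)
```

Because chords are shorter than arcs, the apparent speed falls as the sampling interval grows, one of the biases the study quantifies.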
CTEPP STANDARD OPERATING PROCEDURE FOR COLLECTION OF URINE SAMPLES (SOP-2.14)
This SOP describes the method for collecting urine samples from the study participants (children and their primary caregivers). Urine samples will be approximate 48-hr collections, collected as spot urine samples accumulated over the 48-hr sampling period. If the household or da...
Hu, Youxin; Shanjani, Yaser; Toyserkani, Ehsan; Grynpas, Marc; Wang, Rizhi; Pilliar, Robert
2014-02-01
Porous calcium polyphosphate (CPP) structures proposed as bone-substitute implants and made by sintering CPP powders to form bending test samples of approximately 35 vol % porosity were machined from preformed blocks made either by additive manufacturing (AM) or conventional gravity sintering (CS) methods, and the structure and mechanical characteristics of samples so made were compared. AM-made samples displayed higher bending strengths (≈1.2-1.4 times greater than CS-made samples), whereas the elastic constant (i.e., the effective elastic modulus of the porous structure, which is determined by the material elastic modulus and the structural geometry of the samples) was ≈1.9-2.3 times greater for AM-made samples. X-ray diffraction analysis showed that samples made by either method displayed the same crystal structure, forming β-CPP after sinter annealing. The material elastic modulus, E, determined using nanoindentation tests, also showed the same value for both sample types (i.e., E ≈ 64 GPa). Examination of the porous structures indicated that the AM-made samples developed significantly larger sinter necks, which presumably accounts for their higher mechanical properties. This difference was attributed to the different sinter anneal procedures required to make 35 vol % porous samples by the two methods. A primary objective of the present study, in addition to reporting on bending strength and sample stiffness (elastic constant) characteristics, was to determine why the two processes resulted in the observed mechanical property differences for samples of equivalent volume percentage of porosity. An understanding of the fundamental reason(s) for the observed effect is considered important for developing improved processes for preparation of porous CPP implants as bone substitutes for use in high load-bearing skeletal sites. Copyright © 2013 Wiley Periodicals, Inc.
Capillary microextraction: A new method for sampling methamphetamine vapour.
Nair, M V; Miskelly, G M
2016-11-01
Clandestine laboratories pose a serious health risk to first responders, investigators, decontamination companies, and the public who may be inadvertently exposed to methamphetamine and other chemicals used in its manufacture. Therefore there is an urgent need for reliable methods to detect and measure methamphetamine at such sites. The most common method for determining methamphetamine contamination at former clandestine laboratory sites is selected surface wipe sampling, followed by analysis with gas chromatography-mass spectrometry (GC-MS). We are investigating the use of sampling for methamphetamine vapour to complement such wipe sampling. In this study, we report the use of capillary microextraction (CME) devices for sampling airborne methamphetamine, and compare their sampling efficiency with a previously reported dynamic SPME method. The CME devices consisted of PDMS-coated glass filter strips inside a glass tube. The devices were used to dynamically sample methamphetamine vapour in the range of 0.42-4.2 μg m^-3, generated by a custom-built vapour dosing system, for 1-15 min, and methamphetamine was analysed using a GC-MS fitted with a ChromatoProbe thermal desorption unit. The devices showed good reproducibility (RSD < 15%), and a curvilinear pre-equilibrium relationship between sampling time and peak area, which can be utilised for calibration. Under identical sampling conditions, the CME devices were approximately 30 times more sensitive than the dynamic SPME method. The CME devices could be stored for up to 3 days after sampling prior to analysis. Consecutive sampling of methamphetamine and its isotopic substitute, d-9 methamphetamine, showed no competitive displacement. This suggests that CME devices, pre-loaded with an internal standard, could be a feasible method for sampling airborne methamphetamine at former clandestine laboratories. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
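The spike-as-sample interpretation parallels a bootstrap particle filter for a discrete hidden Markov model. A minimal sketch with hypothetical transition and emission matrices (not the paper's spiking-network implementation or its Hebbian learning rule):

```python
import random

def particle_filter(obs, trans, emit, n_states, n_particles=2000, seed=0):
    """Bootstrap particle filter for a discrete HMM.  Each particle is a
    sampled hidden state (cf. each spike representing a state sample);
    the empirical particle distribution approximates the posterior."""
    rng = random.Random(seed)
    particles = [rng.randrange(n_states) for _ in range(n_particles)]
    posteriors = []
    for y in obs:
        # propagate each particle through the transition model
        particles = [rng.choices(range(n_states), weights=trans[s])[0]
                     for s in particles]
        # weight by the observation likelihood, then resample
        weights = [emit[s][y] for s in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
        counts = [particles.count(s) / n_particles for s in range(n_states)]
        posteriors.append(counts)
    return posteriors
```

With a symmetric two-state model and repeated observations favouring state 0, the particle population concentrates on state 0, mirroring how spiking activity concentrates on likely world states.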
Orthen, E; Lange, P; Wöhrmann, K
1984-12-01
This paper analyses the fate of artificially induced mutations and their importance to the fitness of populations of the yeast, Saccharomyces cerevisiae, an increasingly important model organism in population genetics. Diploid strains, treated with UV and EMS, were cultured asexually for approximately 540 generations and under conditions where the asexual growth was interrupted by a sexual phase. Growth rates of 100 randomly sampled diploid clones were estimated at the beginning and at the end of the experiment. After the induction of sporulation the growth rates of 100 randomly sampled spores were measured. UV and EMS treatment decreases the average growth rate of the clones significantly but increases the variability in comparison to the untreated control. After selection over approximately 540 generations, variability in growth rates was reduced to that of the untreated control. No increase in mean population fitness was observed. However, the results show that after selection there still exists a large amount of hidden genetic variability in the populations which is revealed when the clones are cultivated in environments other than those in which selection took place. A sexual phase increased the reduction of the induced variability.
View generation for 3D-TV using image reconstruction from irregularly spaced samples
NASA Astrophysics Data System (ADS)
Vázquez, Carlos
2007-02-01
Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. Because no approximation is made to the positions of the samples, geometrical distortions in the final images due to approximated sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generating the views needed for viewing on SynthaGram™ auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
The Fourth SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-4)
NASA Technical Reports Server (NTRS)
Hooker, Stanford B.; Thomas, Crystal S.; van Heukelem, Laurie; Schlueter, Louise; Russ, Mary E.; Ras, Josephine; Claustre, Herve; Clementson, Lesley; Canuti, Elisabetta; Berthon, Jean-Francois;
2010-01-01
Ten international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a mixed pigment sample. Although prior Sea-viewing Wide Field-of-view Sensor (SeaWiFS) High Performance Liquid Chromatography (HPLC) Round-Robin Experiment (SeaHARRE) activities conducted in open-ocean waters covered a wide dynamic range in productivity, and some of the samples were collected in the coastal zone, none of the activities involved exclusively coastal samples. Consequently, SeaHARRE-4 was organized and executed as a strictly coastal activity and the field samples were collected from primarily eutrophic waters within the coastal zone of Denmark. The more restrictive perspective limited the dynamic range in chlorophyll concentration to approximately one and a half orders of magnitude (previous activities covered more than two orders of magnitude). The method intercomparisons were used for the following objectives: a) estimate the uncertainties in quantitating individual pigments and higher-order variables formed from sums and ratios; b) confirm if the chlorophyll a accuracy requirements for ocean color validation activities (approximately 25%, although 15% would allow for algorithm refinement) can be met in coastal waters; c) establish the reduction in uncertainties as a result of applying QA procedures; d) show the importance of establishing a properly defined referencing system in the computation of uncertainties; e) quantify the analytical benefits of performance metrics, and f) demonstrate the utility of a laboratory mix in understanding method performance. In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied.
Preservation of primary porosity in the Neogene clastic reservoirs of the Surma Basin, Bangladesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferdous, H.S.; Renaut, R.W.
1996-01-01
The Surma Basin is a Tertiary sub-basin within the greater Bengal Basin, in N.E. Bangladesh. The Neogene sequence (approximately 17 km thick) contains the producing hydrocarbon reservoirs with proven gas reserves. These sediments are alternating coarse and fine clastics, representing a complex interfingering of deltaic and marine subenvironments, with the former dominating. The principal reservoir facies are distributary channel-fill sandstones in a lower delta-plain setting. Kailashtila, Beanibazar and Rashidpur, located in anticlinal structures, are major hydrocarbon-producing fields in the E. Surma Basin. Petrographic analysis shows that primary intergranular porosity mainly controls the reservoir quality of these Neogene sands, which occur at a depth of approximately 3000 m. Most samples show primary pores with about 20% porosity and permeabilities of about 200 mD. The preservation of a higher proportion of primary pores in fine to medium grained sandstones is a result of (1) moderate compaction resulting from overpressuring caused by a higher rate of subsidence and sedimentation, (2) weak cementation, and (3) a general lack of deleterious clays and the presence of some grain-rimming chlorites. The general absence of long and sutured grain contacts also supports these observations. Some of the existing literature suggests that secondary pores are dominant in the Neogene sandy reservoirs of the Bengal Basin; however, they contribute little (approximately 2%) to the total porosity in the Surma Basin.
Reconstructing lake ice cover in subarctic lakes using a diatom-based inference model
NASA Astrophysics Data System (ADS)
Weckström, Jan; Hanhijärvi, Sami; Forsström, Laura; Kuusisto, Esko; Korhola, Atte
2014-03-01
A new quantitative diatom-based lake ice cover inference model was developed to reconstruct past ice cover histories and applied to four subarctic lakes. The ice cover model is based on a calculated melting degree day value of +130 and a freezing degree day value of -30 for each lake. The reconstructed Holocene ice cover duration histories show trends similar to the independently reconstructed regional air temperature history. The ice cover duration was around 7 days shorter than average during the warmer early Holocene (approximately 10 to 6.5 calibrated kyr B.P.) and around 3-5 days longer during the cool Little Ice Age (approximately 500 to 100 calibrated yr B.P.). Although the recent climate warming is represented by only 2-3 samples in the sediment series, these show a rising trend in the length of the ice-free period of up to 2 days. Diatom-based ice cover inference models can provide a powerful tool for reconstructing past ice cover histories in remote and sensitive areas where no measured data are available.
Monte Carlo simulations of product distributions and contained metal estimates
Gettings, Mark E.
2013-01-01
Estimation of product distributions of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform distributions of factors show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as the Gaussian (normal), produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage in the published database. Sampling by deposit from available databases of grade, tonnage, and geological details specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved, and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
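The basic experiment is easy to reproduce. A minimal sketch with hypothetical uniform factor ranges (the actual study used porphyry copper grade and tonnage data):

```python
import random
import statistics

def product_samples(n=100_000, seed=42):
    """Monte Carlo draw of Z = X*Y for independent X ~ U(1,3), Y ~ U(2,4)
    (hypothetical factor ranges chosen for illustration)."""
    rng = random.Random(seed)
    return [rng.uniform(1, 3) * rng.uniform(2, 4) for _ in range(n)]

z = product_samples()
# The product distribution has a central peak near the product of the
# factor medians: median(X) * median(Y) = 2 * 3 = 6.
print(statistics.median(z), statistics.fmean(z))
```

With these ranges the product of the factor medians is 6, and the simulated distribution indeed concentrates near that value.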
Uranium and radon in private bedrock well water in Maine: geospatial analysis at two scales
Yang, Qiang; Smitherman, Paul; Hess, C.T.; Culbertson, Charles W.; Marvinney, Robert G.; Zheng, Yan
2014-01-01
In greater Augusta of central Maine, 53 out of 1093 (4.8%) private bedrock well water samples from 1534 km2 contained [U] >30 μg/L, the U.S. Environmental Protection Agency’s (EPA) Maximum Contaminant Level (MCL) for drinking water; and 226 out of 786 (29%) samples from 1135 km2 showed [Rn] >4,000 pCi/L (148 Bq/L), the U.S. EPA’s Alternative MCL. Groundwater pH, calcite dissolution and redox condition are factors controlling the distribution of groundwater U but not Rn due to their divergent chemical and hydrological properties. Groundwater U is associated with incompatible elements (S, As, Mo, F, and Cs) in water samples within granitic intrusions. Elevated [U] and [Rn] are located within 5–10 km distance of granitic intrusions but do not show correlations with metamorphism at intermediate scales (100−101 km). This spatial association is confirmed by a high-density sampling (n = 331, 5–40 samples per km2) at local scales (≤10–1 km) and the statewide sampling (n = 5857, 1 sample per 16 km2) at regional scales (102–103 km). Wells located within 5 km of granitic intrusions are at risk of containing high levels of [U] and [Rn]. Approximately 48 800–63 900 and 324 000 people in Maine are estimated at risk of exposure to U (>30 μg/L) and Rn (>4000 pCi/L) in well water, respectively.
NASA Astrophysics Data System (ADS)
Siddique, M. Naseem; Ahmed, Ateeq; Ali, T.; Tripathi, P.
2018-05-01
Nickel oxide (NiO) nanoparticles with a crystal size of around 16.26 nm have been synthesized via the sol-gel method. The synthesized precursor was calcined at 600 °C for 4 hours to obtain the nickel oxide nanoparticles. The XRD analysis indicated that the calcined sample has a cubic structure without any impurity phases. The FTIR analysis confirmed the formation of NiO. The NiO nanoparticles exhibited an absorption band edge at 277.27 nm, and the optical band gap has been estimated at approximately 4.47 eV using diffuse reflectance spectroscopy. The photoluminescence emission spectrum of the as-synthesized sample showed a strong peak at 3.65 eV, attributed to the band edge transition.
The topology of large-scale structure. V - Two-dimensional topology of sky maps
NASA Astrophysics Data System (ADS)
Gott, J. R., III; Mao, Shude; Park, Changbom; Lahav, Ofer
1992-01-01
A 2D algorithm is applied to observed sky maps and numerical simulations. It is found that when topology is studied on smoothing scales larger than the correlation length, the topology is approximately in agreement with the random phase formula for the 2D genus-threshold density relation, G2(ν) ∝ ν exp(-ν²/2). Some samples show small 'meatball shifts' similar to those seen in corresponding 3D observational samples and similar to those produced by biasing in cold dark matter simulations. The observational results are thus consistent with the standard model in which the structure in the universe today has grown from small fluctuations caused by random quantum noise in the early universe.
Analytical model of the optical vortex microscope.
Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz
2016-04-20
This paper presents an analytical model of the optical vortex scanning microscope. In this microscope the Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution for the whole path of the beam through the system (within the paraxial approximation), from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step the expression for the complex amplitude of the light has the same form, with only four coefficients modified. We also derive a simple expression for the vortex trajectory for small vortex displacements.
NASA Astrophysics Data System (ADS)
Grattieri, Matteo; Suvira, Milomir; Hasan, Kamrul; Minteer, Shelley D.
2017-07-01
The treatment of hypersaline wastewater (approximately 5% of the wastewater worldwide) cannot be performed by classical biological techniques. Herein, halotolerant extremophile bacteria obtained from the Great Salt Lake (Utah) were explored in single-chamber microbial fuel cells with Pt-free cathodes for more than 18 days. Bacteria samples collected at two different locations in the lake (Stansbury Bay and Antelope Island) showed different electrochemical performances. The maximum power output, 36 mW m^-2 at a current density of 820 mA m^-2, was achieved by the microbial fuel cell based on the sample originating from Stansbury Bay. The performance throughout long-term operation is discussed and a bioelectrochemical mechanism is proposed.
NASA Astrophysics Data System (ADS)
Flügge, Jens; Köning, Rainer; Schötka, Eugen; Weichert, Christoph; Köchert, Paul; Bosse, Harald; Kunzmann, Horst
2014-12-01
The paper describes recent improvements of the Physikalisch-Technische Bundesanstalt's (PTB) reference measuring instrument for length graduations, the so-called nanometer comparator, intended to achieve a measurement uncertainty in the domain of 1 nm for lengths up to 300 mm. The improvements are based on the design and realization of a new sample carriage, integrated into the existing structure, and the optimization of the coupling of this new device to the vacuum interferometer, which provides a length measuring range of approximately 540 mm with sub-nm resolution. First measurement results of the enhanced nanometer comparator are presented and discussed; they show the improved measuring capabilities and verify the step toward the sub-nm accuracy level.
Mercury in Precipitation in Indiana, January 2001-December 2003
Risch, Martin R.
2007-01-01
Total mercury deposition that was more than 10 percent of the mean annual deposition (1,262 ng/m2) was recorded in 11 of 551 weekly samples from the study period. These samples contained approximately 3 inches or more of rain and most were collected in spring and summer 2003. The highest deposition (2,456 ng/m2 in a sample from Roush Lake) was 15.7 percent of the annual deposition at that station and approximately 10 times the mean weekly deposition for Indiana. High deposition recorded in three weekly samples at Clifty Falls contributed 31 percent of the annual deposition at that station in 2003. Weekly samples with high mercury deposition may help to explain the differences in annual mercury deposition among the four monitoring stations in Indiana.
Schmid, Christina; Baumstark, Annette; Pleus, Stefan; Haug, Cornelia; Tesar, Martina; Freckmann, Guido
2014-03-01
The partial pressure of oxygen (pO2) in blood samples can affect glucose measurements with oxygen-sensitive systems. In this study, we assessed the influence of different pO2 levels on blood glucose (BG) measurements with five glucose oxidase (GOD) systems and one glucose dehydrogenase (GDH) system. All selected GOD systems were indicated by the manufacturers to be sensitive to increased oxygen content of the blood sample. Venous blood samples of 16 subjects (eight women, eight men; mean age, 52 years; three with type 1 diabetes, four with type 2 diabetes, and nine without diabetes) were collected. Aliquots of each sample were adjusted to the following pO2 values: ≤45 mm Hg, approximately 70 mm Hg, and ≥150 mm Hg. For each system, five consecutive measurements on each sample were performed using the same test strip lot. Relative differences between the mean BG value at a pO2 level of approximately 70 mm Hg, which was considered to be similar to pO2 values in capillary blood samples, and the mean BG value at pO2 levels ≤45 mm Hg and ≥150 mm Hg were calculated. The GOD systems showed mean relative differences between 11.8% and 44.5% at pO2 values ≤45 mm Hg and between -14.6% and -21.2% at pO2 values ≥150 mm Hg. For the GDH system, the mean relative differences were -0.3% and -0.2% at pO2 values ≤45 mm Hg and ≥150 mm Hg, respectively. The magnitude of the pO2 impact on BG measurements seems to vary among the tested oxygen-sensitive GOD systems. The pO2 range in which oxygen-sensitive systems operate well should be provided in the product information.
SALI chemical analysis of provided samples
NASA Technical Reports Server (NTRS)
Becker, Christopher H.
1993-01-01
SRI has completed the chemical analysis of all the samples supplied by NASA. The final batch of four samples consisted of: a one inch diameter MgF2 mirror, control 1200-ID-FL3; a one inch diameter neat resin, PMR-15, AO171-IV-55, half exposed and half unexposed; a one inch diameter chromic acid anodized, EOIM-3 120-47 aluminum disc; and AO-exposed and unexposed samples of fullerene extract material in powdered form, pressed into In foil for analysis. Chemical analyses of the surfaces were performed by the surface analysis by laser ionization (SALI) method. The analyses emphasize surface contamination or general organic composition. SALI uses nonselective photoionization of sputtered or desorbed atoms and molecules above but close (approximately one mm) to the surface, followed by time-of-flight (TOF) mass spectrometry. In these studies, we used laser-induced desorption by 5-ns pulse-width 355-nm light (10-100 mJ/cm^2) and single-photon ionization (SPI) by coherent 118-nm radiation (at approximately 5 x 10^5 W/cm^2). SPI was chosen primarily for its ability to obtain molecular information, whereas multiphoton ionization (not used in the present studies) is intended primarily for elemental and small molecule information. In addition to these four samples, the Au mirror (EOIM-3 200-11, sample four) was depth profiled again. Argon ion sputtering was used together with photoionization by intense 355-nm radiation (35-ps pulsewidths). Depth profiles are similar to those reported earlier, showing reproducibility. No chromium was found in the sample above the noise level; its presence could at most be at the trace level. Somewhat more Ni appears to be present in the Au layer on the unexposed side, indicating thermal diffusion without chemical enhancement. The effect of the presence of oxygen is apparently to tie up and draw out the Ni as an oxide at the surface. The exposed region has a brownish tint to the naked eye.
Empirical Study of the Multiaxial, Thermomechanical Behavior of NiTiHf Shape Memory Alloys
NASA Technical Reports Server (NTRS)
Shukla, Dhwanil; Noebe, Ronald D.; Stebner, Aaron P.
2013-01-01
An empirical study was conducted to characterize the multiaxial, thermomechanical responses of new high temperature NiTiHf alloys. The experimentation included loading thin walled tube Ni(sub 50.3)Ti(sub 29.7)Hf(sub 20) alloy samples along both proportional and nonproportional axial-torsion paths at different temperatures while measuring surface strains using stereo digital image correlation. A Ni(sub 50.3)Ti(sub 33.7)Hf(sub 16) alloy was also studied in tension and compression to document the effect of slightly depleting the Hf content on the constitutive responses of NiTiHf alloys. Samples of both alloys were made from nearly texture-free polycrystalline material processed by hot extrusion. Analysis of the data shows that very small changes in composition significantly alter NiTiHf alloy properties, as the austenite finish (Af) temperature of the 16-at.% Hf alloy was found to be approximately 60 C less than that of the 20-at.% Hf alloy (approximately 120 C vs. 180 C). In addition, the 16-at.% Hf alloy exhibited smaller compressive transformation strains (2 vs. 2.5 percent). Multiaxial characterization of the 20-at.% Hf alloy showed that while the random polycrystal transformation strains in tension (4 percent) and compression (2.5 percent) are modest in comparison with binary NiTi (6 percent, 4 percent), the torsion performance is superior (7 vs. 4 percent shear strain width to the pseudoelastic plateau).
Soto-Alonso, G; Cruz-Medina, J A; Caballero-Pérez, J; Arvizu-Hernández, I; Ávalos-Esparza, L M; Cruz-Hernández, A; Romero-Gómez, S; Rodríguez, A L; Pastrana-Martínez, X; Fernández, F; Loske, A M; Campos-Guillén, J
2015-07-01
Genetic characterization of plasmids from bacterial strains provides insight into multidrug resistance. Ten wild-type Escherichia coli (E. coli) strains isolated from cow fecal samples were characterized by their antibiotic resistance profiles, plasmid patterns and three different identification methods. From one of the strains, a fertility factor-like plasmid was replicated using tandem shock wave-mediated transformation. Underwater shock waves with a positive pressure peak of up to approximately 40 MPa, followed by a pressure trough of approximately -19 MPa, were generated using an experimental piezoelectric shock wave source. Three different shock wave energies and a fixed delay of 750 μs were used to study the relationship between energy and transformation efficiency (TE), as well as the influence of shock wave energy on the integrity of the plasmid. Our results showed that the mean shock wave-mediated TE and the integrity of the large plasmid (~70 kb) were reduced significantly at the energy levels tested. The sequencing analysis of the plasmid revealed a high identity to the pHK17a plasmid, including the replication system, which was similar to that of plasmid incompatibility group FII. It also showed that the plasmid carried an extended-spectrum beta-lactamase gene, ctx-m-14. Furthermore, diverse genes for the conjugative mechanism were identified. Our results may be helpful in improving methodologies for conjugative plasmid transfer and directly selecting the most interesting plasmids from environmental samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
Final Reports of the Stardust ISPE: Seven Probable Interstellar Dust Particles
NASA Technical Reports Server (NTRS)
Allen, Carlton; Sans Tresseras, Juan-Angel; Westphal, Andrew J.; Stroud, Rhonda M.; Bechtel, Hans A.; Brenker, Frank E.; Butterworth, Anna L.; Flynn, George J.; Frank, David R.; Gainsforth, Zack;
2014-01-01
The Stardust spacecraft carried the first spaceborne collector specifically designed to capture and return a sample of contemporary interstellar dust to terrestrial laboratories for analysis [1]. The collector was exposed to the interstellar dust stream in two periods in 2000 and 2002 with a total exposure of approximately 1.8 x 10(exp 6) square meter seconds. Approximately 85% of the collector consisted of aerogel, and the remainder consisted of Al foils. The Stardust Interstellar Preliminary Examination (ISPE) was a consortium-based effort to characterize the collection in sufficient detail to enable future investigators to make informed sample requests. Among the questions to be answered were these: How many impacts are consistent in their characteristics with interstellar dust, with interplanetary dust, and with secondary ejecta from impacts on the spacecraft? Are the materials amorphous or crystalline? Are organics detectable? An additional goal of the ISPE was to develop or refine the techniques for preparation, analysis, and curation of these tiny samples, expected to be approximately 1 picogram or smaller, roughly three orders of magnitude smaller in mass than the samples in NASA's other small-particle collections - the cometary samples returned by Stardust and the Interplanetary Dust Particles collected in the stratosphere.
Creation of 0.10-cm-1 resolution quantitative infrared spectral libraries for gas samples
NASA Astrophysics Data System (ADS)
Sharpe, Steven W.; Sams, Robert L.; Johnson, Timothy J.; Chu, Pamela M.; Rhoderick, George C.; Guenther, Franklin R.
2002-02-01
The National Institute of Standards and Technology (NIST) and the Pacific Northwest National Laboratory (PNNL) are independently creating quantitative, approximately 0.10 cm-1 resolution, infrared spectral libraries of vapor-phase compounds. The NIST library will consist of approximately 100 vapor-phase spectra of volatile hazardous air pollutants (HAPs) and suspected greenhouse gases. The PNNL library will consist of approximately 400 vapor-phase spectra associated with DOE's remediation mission. A critical part of creating and validating any quantitative library is independent verification based on inter-laboratory comparisons. The two laboratories use significantly different sample preparation and handling techniques: NIST uses gravimetric dilution and a continuously flowing sample, while PNNL uses partial-pressure dilution and a static sample. Agreement is generally found to be within the statistical uncertainties of the Beer's law fit and less than 3 percent of the total integrated band areas for the 4 chemicals used in this comparison. There does appear to be a small systematic difference between the PNNL and NIST data, however. Possible sources of the systematic difference are discussed, as well as technical details concerning the sample preparation and the procedures for overcoming instrumental artifacts.
Lee, Se Hee; Jung, Ji Young; Jeon, Che Ok
2014-01-01
To investigate the effects of salt concentration on saeu-jeot (salted shrimp) fermentation, four sets of saeu-jeot samples with 20%, 24%, 28%, and 32% salt concentrations were prepared, and the pH, bacterial and archaeal abundances, bacterial communities, and metabolites were monitored during the entire fermentation period. Quantitative PCR showed that Bacteria were much more abundant than Archaea in all saeu-jeot samples, suggesting that bacterial populations play more important roles than archaeal populations even in highly salted samples. Community analysis indicated that Vibrio, Photobacterium, Psychrobacter, Pseudoalteromonas, and Enterovibrio were identified as the initially dominant genera, and the bacterial successions were significantly different depending on the salt concentration. During the early fermentation period, Salinivibrio predominated in the 20% salted samples, whereas Staphylococcus, Halomonas, and Salimicrobium predominated in the 24% salted samples; eventually, Halanaerobium predominated in the 20% and 24% salted samples. The initially dominant genera gradually decreased as the fermentation progressed in the 28% and 32% salted samples, and eventually Salimicrobium became predominant in the 28% salted samples. However, the initially dominant genera still remained until the end of fermentation in the 32% salted samples. Metabolite analysis showed that the amino acid profile and the initial glycerol increase were similar in all saeu-jeot samples regardless of the salt concentration. After 30–80 days of fermentation, the levels of acetate, butyrate, and methylamines in the 20% and 24% salted samples increased with the growth of Halanaerobium, even though the amino acid concentrations steadily increased until approximately 80–107 days of fermentation. This study suggests that a range of 24–28% salt concentration in saeu-jeot fermentation is appropriate for the production of safe and tasty saeu-jeot. PMID:24587230
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
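The mapping described above — replace the regression problem by two equally sized groups whose parameters differ by the slope times twice the covariate's standard deviation, keeping the overall expected number of events unchanged — can be sketched for the logistic case as below. The function name, the bisection step, and the two-proportion normal-approximation power formula are illustrative choices, not code from the paper.

```python
import math
from statistics import NormalDist

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def equivalent_two_sample_power(slope, sd_x, p_overall, n_total, alpha=0.05):
    """Approximate power for a logistic regression slope test via the
    equivalent two-sample problem: two equal groups whose log-odds differ
    by delta = slope * 2 * sd_x, positioned so the overall expected
    response probability (and hence expected number of events) is kept."""
    delta = 2.0 * slope * sd_x
    # Bisect for the logit midpoint m such that the mean of the two group
    # probabilities matches the overall response probability.
    lo, hi = -30.0, 30.0
    for _ in range(200):
        m = 0.5 * (lo + hi)
        mean_p = 0.5 * (logistic(m - delta / 2) + logistic(m + delta / 2))
        if mean_p < p_overall:
            lo = m
        else:
            hi = m
    m = 0.5 * (lo + hi)
    p1, p2 = logistic(m - delta / 2), logistic(m + delta / 2)
    # Normal-approximation power for comparing two proportions,
    # with n_total / 2 subjects per group.
    se = math.sqrt(2.0 * p1 * (1 - p1) / n_total + 2.0 * p2 * (1 - p2) / n_total)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(p2 - p1) / se - z_crit)
```

A usage check: with a null slope the computed "power" reduces to roughly half the two-sided significance level, and power grows with the total sample size, as expected.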
Survival of fecal coliforms in dry-composting toilets.
Redlinger, T; Graham, J; Corella-Barud, V; Avitia, R
2001-09-01
The dry-composting toilet, which uses neither water nor sewage infrastructure, is a practical solution in areas with inadequate sewage disposal and where water is limited. These systems are becoming increasingly popular and are promoted to sanitize human excreta and to recycle them into fertilizer for nonedible plants, yet there are few data on the safety of this technology. This study analyzed fecal coliform reduction in approximately 90 prefabricated, dry-composting toilets (Sistema Integral de Reciclamiento de Desechos Orgánicos [SIRDOs]) that were installed on the U.S.-Mexico border in Ciudad Juárez, Chihuahua, Mexico. The purpose of this study was to determine fecal coliform reduction over time and the most probable method of this reduction. Biosolid waste samples were collected and analyzed at approximately 3 and 6 months and were classified based on U.S. Environmental Protection Agency standards. Results showed that class A compost (high grade) was present in only 35.8% of SIRDOs after 6 months. The primary mechanism for fecal coliform reduction was found to be desiccation rather than biodegradation. There was a significant correlation (P = 0.008) between classification rating and percent moisture categories of the biosolid samples: drier samples had a greater proportion of class A samples. Solar exposure was critical for maximal class A biosolid end products (P = 0.001). This study only addressed fecal coliforms as an indicator organism, and further research is necessary to determine the safety of composting toilets with respect to other pathogenic microorganisms, some of which are more resistant to desiccation.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
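A minimal sketch of the path-sampling idea, on a conjugate Gaussian toy model (unknown mean, known variance) where every power posterior along the path is itself Gaussian and can be sampled exactly; in real environmental models each rung would be sampled by MCMC as the abstract describes. The model, the cubic temperature ladder, and the trapezoid rule are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def log_lik(theta, y, sigma):
    """Gaussian log-likelihood of the data y given mean theta."""
    n = len(y)
    return (-0.5 * n * math.log(2 * math.pi * sigma**2)
            - sum((yi - theta) ** 2 for yi in y) / (2 * sigma**2))

def ti_log_marginal(y, sigma, mu0, tau0, n_rungs=25, n_samples=20000, seed=1):
    """Thermodynamic integration: log Z = integral_0^1 E_t[log L] dt, where
    E_t is taken under the power posterior p_t(theta) ~ L(theta)^t * prior."""
    rng = random.Random(seed)
    n, ybar = len(y), sum(y) / len(y)
    # Cubic ladder: rungs packed near t = 0, where E_t[log L] changes fastest.
    ts = [(i / (n_rungs - 1)) ** 3 for i in range(n_rungs)]
    means = []
    for t in ts:
        # Power posterior is Gaussian for this conjugate model.
        prec = 1 / tau0**2 + t * n / sigma**2
        m = (mu0 / tau0**2 + t * n * ybar / sigma**2) / prec
        sd = math.sqrt(1 / prec)
        # Monte Carlo estimate of E_t[log L] under the power posterior.
        means.append(sum(log_lik(rng.gauss(m, sd), y, sigma)
                         for _ in range(n_samples)) / n_samples)
    # Trapezoid rule over the (nonuniform) temperature ladder.
    return sum((ts[i + 1] - ts[i]) * (means[i] + means[i + 1]) / 2
               for i in range(n_rungs - 1))
```

Because the toy model is one-dimensional, the result can be checked against a brute-force quadrature of the marginal likelihood over theta.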
Chip-LC-MS for label-free profiling of human serum.
Horvatovich, Peter; Govorukhina, Natalia I; Reijmers, Theo H; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer
2007-12-01
The discovery of biomarkers in easily accessible body fluids such as serum is one of the most challenging topics in proteomics, requiring highly efficient separation and detection methodologies. Here, we present the application of a microfluidics-based LC-MS system (chip-LC-MS) to the label-free profiling of immunodepleted, trypsin-digested serum in comparison to conventional capillary LC-MS (cap-LC-MS). Both systems proved to have a repeatability of approximately 20% RSD for peak area, all sample preparation steps included, while the repeatability of the LC-MS part by itself was less than 10% RSD for the chip-LC-MS system. Importantly, the chip-LC-MS system had twice the resolution in the LC dimension and resulted in a lower average charge state of the tryptic peptide ions generated in the ESI interface when compared to cap-LC-MS, while requiring approximately 30 times less (~5 pmol) sample. In order to characterize both systems for their capability to find discriminating peptides in trypsin-digested serum samples, five out of ten individually prepared, identical sera were spiked with horse heart cytochrome c. A comprehensive data processing methodology was applied, including 2-D smoothing, resolution reduction, peak picking, time alignment, and matching of the individual peak lists to create an aligned peak matrix amenable to statistical analysis. Statistical analysis by supervised classification and variable selection showed that both LC-MS systems could discriminate the two sample groups. However, the chip-LC-MS system allowed 55% of the overall signal to be assigned to selected peaks, against 32% for the cap-LC-MS system.
Michalke, Klaus; Schmidt, Annette; Huber, Britta; Meyer, Jörg; Sulkowski, Margareta; Hirner, Alfred V; Boertz, Jens; Mosel, Frank; Dammann, Philip; Hilken, Gero; Hedrich, Hans J; Dorsch, Martina; Rettenmeier, Albert W; Hensel, Reinhard
2008-05-01
The present study shows that feces samples of 14 human volunteers and isolated gut segments of mice (small intestine, cecum, and large intestine) are able to transform metals and metalloids into volatile derivatives ex situ during anaerobic incubation at 37 degrees C and neutral pH. Human feces and the gut of mice exhibit highly productive mechanisms for the formation of the toxic volatile derivative trimethylbismuth [(CH(3))(3)Bi] at rather low concentrations of bismuth (0.2 to 1 μmol kg(-1) [dry weight]). An increase of bismuth up to 2 to 14 mmol kg(-1) (dry weight) upon a single (human volunteers) or continuous (mouse study) administration of colloidal bismuth subcitrate resulted in an average increase of the derivatization rate from approximately 4 pmol h(-1) kg(-1) (dry weight) to 2,100 pmol h(-1) kg(-1) (dry weight) in human feces samples and from approximately 5 pmol h(-1) kg(-1) (dry weight) to 120 pmol h(-1) kg(-1) (dry weight) in mouse gut samples, respectively. The upshift of the bismuth content also led to an increase of derivatives of other elements (such as arsenic, antimony, and lead in human feces or tellurium and lead in the murine large intestine). The assumption that the gut microbiota plays a dominant role for these transformation processes, as indicated by the production of volatile derivatives of various elements in feces samples, is supported by the observation that the gut segments of germfree mice are unable to transform administered bismuth to (CH(3))(3)Bi.
Gold Sample Heating within the TEMPUS Electromagnetic Levitation Furnace
NASA Technical Reports Server (NTRS)
2003-01-01
A gold sample is heated by the TEMPUS electromagnetic levitation furnace on STS-94, 1997, MET:10/09:20 (approximate). The sequence shows the sample being positioned electromagnetically and starting to be heated to melting. TEMPUS stands for Tiegelfreies Elektromagnetisches Prozessieren unter Schwerelosigkeit (containerless electromagnetic processing under weightlessness). It was developed by the German Space Agency (DARA) for flight aboard Spacelab. The DARA project scientist was Igon Egry. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). DARA and NASA are exploring the possibility of flying an advanced version of TEMPUS on the International Space Station. (460KB, 14-second MPEG, screen 160 x 120 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300190.html.
Hardness of AISI type 410 martensitic steels after high temperature irradiation via nanoindentation
NASA Astrophysics Data System (ADS)
Waseem, Owais Ahmed; Jeong, Jong-Ryul; Park, Byong-Guk; Maeng, Cheol-Soo; Lee, Myoung-Goo; Ryu, Ho Jin
2017-11-01
The hardness of irradiated AISI type 410 martensitic steel, which is utilized in structural and magnetic components of nuclear power plants, is investigated in this study. Proton irradiation of AISI type 410 martensitic steel samples was carried out by exposing the samples to 3 MeV protons up to a fluence level of 1.0 × 10^17 p/cm^2 at a representative nuclear reactor coolant temperature of 350 °C. The assessment of deleterious effects of irradiation on the microstructure and mechanical behavior of the AISI type 410 martensitic steel samples via transmission electron microscopy-energy dispersive spectroscopy and cross-sectional nanoindentation showed no significant variation in the microscopic or mechanical characteristics. These results support the integrity of the structural and magnetic components of nuclear reactors made of AISI type 410 martensitic steel under high-temperature irradiation at damage levels up to approximately 5.2 × 10^-3 dpa.
Atomistic study of two-level systems in amorphous silica
NASA Astrophysics Data System (ADS)
Damart, T.; Rodney, D.
2018-01-01
Internal friction is analyzed in an atomic-scale model of amorphous silica. The potential energy landscape of more than 100 glasses is explored to identify a sample of about 700 two-level systems (TLSs). We discuss the properties of TLSs, particularly their energy asymmetry and barrier as well as their deformation potential, computed as longitudinal and transverse averages of the full deformation potential tensors. The discrete sampling is used to predict dissipation in the classical regime. Comparison with experimental data shows a better agreement with poorly relaxed thin films than well relaxed vitreous silica, as expected from the large quench rates used to produce numerical glasses. The TLSs are categorized in three types that are shown to affect dissipation in different temperature ranges. The sampling is also used to discuss critically the usual approximations employed in the literature to represent the statistical properties of TLSs.
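The classical-regime dissipation prediction from a discrete TLS sample can be sketched with a textbook thermally activated (Gilroy-Phillips-type) Debye-relaxation form. The overall prefactor, the attempt time, and the exact placement of the cosh factors vary between references, so the constants below are placeholders rather than values from the paper.

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def tls_internal_friction(tls_list, omega, temperature, prefactor=1.0, tau0=1e-13):
    """Classical-regime internal friction Q^-1 from a discrete TLS sample.
    Each TLS is a tuple (asymmetry_eV, barrier_eV, deformation_potential_eV).
    Each TLS contributes a thermally activated Debye peak; 'prefactor'
    stands in for the elastic constants, density, and sound velocity."""
    kt = KB * temperature
    q_inv = 0.0
    for delta, barrier, gamma in tls_list:
        x = delta / (2 * kt)
        # Relaxation time: Arrhenius hopping over the barrier, shortened
        # for asymmetric wells (one common parameterization).
        tau = tau0 * math.exp(barrier / kt) / math.cosh(x)
        sech2 = 1.0 / math.cosh(x) ** 2
        q_inv += (prefactor * gamma**2 / kt) * sech2 \
                 * (omega * tau) / (1 + (omega * tau) ** 2)
    return q_inv
```

As a sanity check, a single symmetric TLS with a 0.1 eV barrier probed at omega = 1e6 rad/s produces a Debye peak near the temperature where omega * tau = 1 (roughly 70 K for tau0 = 1e-13 s), with smaller dissipation on either side.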
Detection of Papaverine for the Possible Identification of Illicit Opium Cultivation.
Mirsafavi, Rustin Y; Lai, Kristine; Kline, Neal D; Fountain, Augustus W; Meinhart, Carl D; Moskovits, Martin
2017-02-07
Papaverine is a non-narcotic alkaloid found endemically and uniquely in the latex of the opium poppy. It is normally refined out of the opioids for which the latex is typically collected, hence its presence in a sample is strong prima facie evidence that the carrier from whom the sample was collected is implicated in the mass cultivation of poppies or the collection and handling of their latex. We describe an analysis technique combining surface-enhanced Raman spectroscopy (SERS) with microfluidics for detecting papaverine at low concentrations and show that its SERS spectrum has unique spectroscopic features that allow its detection at low concentrations among typical opioids. The analysis requires approximately 2.5 min from sample loading to results, which is compatible with field use. The weak-acid properties of papaverine hydrochloride were investigated, and Raman bands belonging to the protonated and unprotonated forms of the isoquinoline ring of papaverine were identified.
Floating plastic debris in the Central and Western Mediterranean Sea.
Ruiz-Orejón, Luis F; Sardá, Rafael; Ramis-Pujol, Juan
2016-09-01
In two sea voyages throughout the Mediterranean (2011 and 2013) that repeated the historical travels of Archduke Ludwig Salvator of Austria (1847-1915), 71 samples of floating plastic debris were obtained with a Manta trawl. Floating plastic was observed in all the sampled sites, with an average weight concentration of 579.3 g dw km(-2) (maximum value of 9298.2 g dw km(-2)) and an average particle concentration of 147,500 items km(-2) (the maximum concentration was 1,164,403 items km(-2)). The plastic size distribution showed microplastics (<5 mm) in all the samples. The most abundant particles had a surface area of approximately 1 mm(2) (the mesh size was 333 μm). The general estimate obtained was a total value of 1455 tons dw of floating plastic in the entire Mediterranean region, with various potential spatial accumulation areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
[Nematodes with zoonotic potential in parks of the city of Tunja, Colombia].
Díaz-Anaya, Adriana María; Pulido-Medellín, Martín Orlando; Giraldo-Forero, Julio César
2015-01-01
To identify the presence of parasites with zoonotic potential in major parks in the city of Tunja, Boyacá, twenty-eight parks in the city were selected, where 124 samples of dog feces and soil were collected with the help of a spatula, gathering approximately 150 g per sample. Samples were processed by the modified Ritchie concentration method, with identification of parasitic forms under an optical microscope. 60.7% of the parks were positive for nematodes in samples of canine fecal material and 100% in soil. The nematodes found were eggs and larvae of Toxocara spp., Ancylostoma spp., Trichuris vulpis and Strongyloides spp. This study demonstrated the potential risk of transmission of zoonoses caused by nematodes in canines and shows the need to strengthen public health measures to reduce the risk to the population exposed to such zoonoses.
Weiskel, Peter K.; Barbaro, Jeffrey R.; DeSimone, Leslie A.
2016-09-23
The tidal creek sampling stations established in the 1990s were resampled in 2003–4 and 2010–11 to evaluate potential effects of the treated wastewater plume on creek water quality. The annual medians of the 2011 biweekly nitrate and total dissolved nitrogen concentrations were determined for each station and compared to the annual medians of biweekly samples for the baseline years 1994, 1995, and 1996. At all stations, the 2011 median nitrate concentrations were within the range of medians for the 3 baseline years. A similar result was obtained for total dissolved nitrogen. We conclude that the 2011 creek samples, collected approximately 8 years after the shallow plume segment was first detected beneath the marsh, do not show evidence of elevated nitrate or total dissolved nitrogen concentrations attributable to discharge of either the shallow or deep segments of the treated wastewater plume.
Spectral Absorption Properties of Aerosol Particles from 350-2500 nm
NASA Technical Reports Server (NTRS)
Martins, J. Vanderlei; Artaxo, Paulo; Kaufman, Yoram J.; Castanho, Andrea D.; Remer, Lorraine A.
2009-01-01
The aerosol spectral absorption efficiency (alpha (sub a) in square meters per gram) is measured over an extended wavelength range (350-2500 nm) using an improved calibrated and validated reflectance technique and applied to urban aerosol samples from Sao Paulo, Brazil, and from a site in Virginia, Eastern US, that experiences transported urban/industrial aerosol. The average alpha (sub a) values (approximately 3 square meters per gram at 550 nm) for Sao Paulo samples are 10 times larger than alpha (sub a) values obtained for aerosols in Virginia. Sao Paulo aerosols also show evidence of enhanced UV absorption in selected samples, probably associated with organic aerosol components. This extra UV absorption can double the absorption efficiency observed from black carbon alone, therefore reducing by up to 50% the surface UV fluxes, with important implications for climate, UV photolysis rates, and remote sensing from space.
Analysis of image formation in optical coherence elastography using a multiphysics approach
Chin, Lixin; Curatolo, Andrea; Kennedy, Brendan F.; Doyle, Barry J.; Munro, Peter R. T.; McLaughlin, Robert A.; Sampson, David D.
2014-01-01
Image formation in optical coherence elastography (OCE) results from a combination of two processes: the mechanical deformation imparted to the sample and the detection of the resulting displacement using optical coherence tomography (OCT). We present a multiphysics model of these processes, validated by simulating strain elastograms acquired using phase-sensitive compression OCE, and demonstrating close correspondence with experimental results. Using the model, we present evidence that the approximation commonly used to infer sample displacement in phase-sensitive OCE is invalidated for smaller deformations than has been previously considered, significantly affecting the measurement precision, as quantified by the displacement sensitivity and the elastogram signal-to-noise ratio. We show how the precision of OCE is affected not only by OCT shot-noise, as is usually considered, but additionally by phase decorrelation due to the sample deformation. This multiphysics model provides a general framework that could be used to compare and contrast different OCE techniques. PMID:25401007
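The displacement inference discussed above commonly rests on the phase-sensitive OCT relation d = lambda0 * delta_phi / (4 * pi * n), with local strain then taken as the displacement gradient over an axial window. A minimal sketch follows; the center wavelength and refractive index are illustrative values, not parameters from this paper.

```python
import math

def displacement_from_phase(delta_phi, wavelength_m=1300e-9, n_refr=1.4):
    """Axial displacement between two OCT acquisitions inferred from the
    interferometric phase difference delta_phi (radians), using the common
    phase-sensitive relation d = lambda0 * delta_phi / (4 * pi * n)."""
    return wavelength_m * delta_phi / (4 * math.pi * n_refr)

def local_strain(d_shallow, d_deep, dz):
    """Strain over an axial window of length dz, from the displacements
    at the shallow and deep ends of the window."""
    return (d_deep - d_shallow) / dz
```

For example, a phase difference of pi radians at a 1300 nm center wavelength in tissue of index 1.4 corresponds to a displacement of about 232 nm; phase decorrelation of the kind the abstract describes degrades the precision of exactly this phase-to-displacement step.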
Multiplexed operation of a micromachined ultrasonic droplet ejector array.
Forbes, Thomas P; Degertekin, F Levent; Fedorov, Andrei G
2007-10-01
A dual-sample ultrasonic droplet ejector array is developed for use as a soft-ionization ion source for multiplexed mass spectrometry (MS). Such a multiplexed ion source aims to reduce MS analysis time for multiple analyte streams, as well as allow for the synchronized ejection of the sample(s) and an internal standard for quantitative results and mass calibration. Multiplexing is achieved at the device level by division of the fluid reservoir and separating the active electrodes of the piezoelectric transducer for isolated application of ultrasonic wave energy to each domain. The transducer is mechanically shaped to further reduce the acoustical crosstalk between the domains. Device design is performed using finite-element analysis simulations and supported by experimental characterization. Isolated ejection of approximately 5 microm diameter water droplets from individual domains in the micromachined droplet ejector array at around 1 MHz frequency is demonstrated by experiments. The proof-of-concept demonstration using a dual-sample device also shows potential for multiplexing with larger numbers of analytes.
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information: for example, consider information from two (or more) clinicians, where the first clinician is pessimistic about the drug and the second is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
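The conjugacy invoked above can be sketched directly: under a mixture-of-Betas prior, the Binomial update keeps the posterior a mixture of Betas, with each component updated to Beta(a+x, b+n-x) and its weight rescaled by the component's Beta-Binomial marginal likelihood. The function name and the trapezoid-rule tail integration (used to stay within the standard library) are our choices, not the authors' code.

```python
import math

def log_beta(a, b):
    """Log of the Beta function, via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_prob_exceeds(target, successes, n, prior_mix):
    """P(true response rate > target | data) under a mixture-of-Betas prior.
    prior_mix is a list of (weight, a, b) triples."""
    x = successes
    post = []
    for w, a, b in prior_mix:
        # Posterior weight ~ prior weight * Beta-Binomial marginal likelihood.
        log_w = math.log(w) + log_beta(a + x, b + n - x) - log_beta(a, b)
        post.append((log_w, a + x, b + n - x))
    norm = max(lw for lw, _, _ in post)
    weights = [math.exp(lw - norm) for lw, _, _ in post]
    total = sum(weights)
    prob, steps = 0.0, 20000
    for wgt, (_, a, b) in zip(weights, post):
        # Tail probability of each Beta component by trapezoid-rule
        # integration (avoids an incomplete-beta routine).
        h = (1.0 - target) / steps
        def pdf(p):
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p)
                            - log_beta(a, b))
        tail = h * (0.5 * pdf(target) + 0.5 * pdf(1.0)
                    + sum(pdf(target + i * h) for i in range(1, steps)))
        prob += (wgt / total) * tail
    return prob
```

With a pessimistic Beta(2, 8) and an optimistic Beta(8, 2) component, the mixture posterior probability of exceeding a target falls strictly between the two single-prior answers, with the balance set by how well each prior predicted the observed data.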
Interparticle interaction effects on the magnetic order in the surface of Fe3O4 nanoparticles.
Lima, E; Vargas, J M; Rechenberg, H R; Zysler, R D
2008-11-01
We report interparticle interaction effects on the magnetic structure of the surface region in Fe3O4 nanoparticles. For this, we studied a well-suited system composed of Fe3O4 nanoparticles with (d) = 9.3 nm and a narrow size distribution. These particles present an interesting morphology consisting of a crystalline core and a broad (approximately 50% by volume) disordered surface shell. Two samples were prepared with distinct particle concentrations: weakly interacting particles dispersed in a polymer and strongly dipolar-interacting particles in a powder sample. M(H, T) measurements clearly show that strong dipolar interparticle interaction modifies the magnetic structure of the structurally disordered surface shell. Consequently, we observed drastically different thermal behaviours of the magnetization and susceptibility when comparing the weakly and strongly interacting samples over the temperature range 2 K < T < 300 K. We also observed a temperature-field dependence of the hysteresis loops of the dispersed sample that is not observed in those of the powder one.
Weigel, Stefan; Kuhlmann, Jan; Hühnerfuss, Heinrich
2002-08-05
An analytical method is presented which allows the simultaneous extraction of neutral and acidic compounds from 20-L seawater samples at ambient pH (approximately 8.3). It is based on solid-phase extraction with a polystyrene-divinylbenzene sorbent and gas chromatographic-mass spectrometric detection, and provides detection limits in the lower pg/L range. The method was applied to the screening of samples from different North Sea areas for clofibric acid, diclofenac, ibuprofen, ketoprofen, propyphenazone, caffeine and N,N-diethyl-3-toluamide (DEET). Whereas clofibric acid, caffeine and DEET were shown to be present throughout the North Sea in concentrations of up to 1.3, 16 and 1.1 ng/L, respectively, propyphenazone could only be detected after further clean-up. Diclofenac and ibuprofen were found in the estuary of the river Elbe (6.2 and 0.6 ng/L, respectively) but in none of the marine samples. Ketoprofen was below the detection limit in all samples.
The effect of surface nanocrystallization on plasma nitriding behaviour of AISI 4140 steel
NASA Astrophysics Data System (ADS)
Li, Yang; Wang, Liang; Zhang, Dandan; Shen, Lie
2010-11-01
A plastic deformation surface layer with nanocrystalline grains was produced on AISI 4140 steel by means of surface mechanical attrition treatment (SMAT). Plasma nitriding of SMAT and un-SMAT AISI 4140 steel was carried out in a low-frequency pulse-excited plasma unit. A series of nitriding experiments was conducted at temperatures ranging from 380 to 500 °C for 8 h in NH3 gas. The samples were characterized using X-ray diffraction, scanning electron microscopy, optical microscopy and a Vickers microhardness tester. The results showed that a much thicker compound layer with higher hardness was obtained for the SMAT samples compared with the un-SMAT samples after nitriding at the low temperature. In particular, plasma nitriding of SMAT AISI 4140 steel at 380 °C for 8 h produced a compound layer 2.5 μm thick with very high surface hardness, similar to that of un-SMAT samples plasma nitrided at approximately 430 °C for the same time.
Scintillation and optical properties of Sn-doped Ga2O3 single crystals
NASA Astrophysics Data System (ADS)
Usui, Yuki; Nakauchi, Daisuke; Kawano, Naoki; Okada, Go; Kawaguchi, Noriaki; Yanagida, Takayuki
2018-06-01
Sn-doped Ga2O3 single crystals were synthesized by the floating zone (FZ) method. In photoluminescence (PL) under an excitation wavelength of 280 nm, we observed two types of luminescence: (1) defect luminescence due to recombination of donor/acceptor pairs, which appears at 430 nm, and (2) the nsnp-ns2 transitions of Sn2+, which appear at 530 nm. The PL and scintillation decay time curves of the Sn-doped samples were approximated by a sum of exponential decay functions. The faster two components were ascribed to the defect luminescence, and the slowest component to the nsnp-ns2 transitions. In pulse height spectrum measurements under 241Am α-ray irradiation, all the Sn-doped Ga2O3 samples, but not the undoped one, were confirmed to show a full-energy absorption peak. Among the present samples, the 1% Sn-doped sample exhibited the highest scintillation light yield (1,500 ± 150 ph/5.5 MeV-α).
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.
Li, Yuan-Xin; Yang, Guang-Hong
2018-04-01
This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using an event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are aperiodically updated only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid an accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event sampling condition. Compared with the traditional adaptive backstepping design with a fixed sample period, the event-triggered method samples the state and updates the NN weights only when necessary, so the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.
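The transmission-saving idea can be illustrated independently of the control design. The sketch below is an assumption-laden toy, not the paper's controller: it transmits a scalar state only when it deviates from the last transmitted value by more than a fixed threshold, the simplest form of an event-triggering condition (the paper's trigger is adaptive and state-dependent).

```python
def event_triggered_updates(states, threshold):
    """Transmit a sample only when the state has drifted more than
    `threshold` from the last transmitted value (a basic event trigger).

    Returns the list of transmitted values; a periodic scheme would
    instead transmit every entry of `states`.
    """
    last = states[0]
    transmitted = [last]          # the initial state is always sent
    for s in states[1:]:
        if abs(s - last) > threshold:
            last = s
            transmitted.append(s)
    return transmitted

# Toy usage: 5 periodic samples collapse to 3 event-triggered transmissions.
sent = event_triggered_updates([0.0, 0.1, 0.25, 0.3, 0.9], threshold=0.2)
# → [0.0, 0.25, 0.9]
```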
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples also demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
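The Christoffel-function weights that precondition the ℓ1 problem can be made concrete for the Legendre case. The fragment below is an illustrative sketch, not the authors' implementation: it evaluates the first n Legendre polynomials by the three-term recurrence and forms the weight n / K_n(x), where K_n(x) = Σ_{k<n} (2k+1) P_k(x)² is the diagonal of the reproducing kernel for polynomials orthonormal with respect to the uniform density on [-1, 1]; the ℓ1 solver itself is omitted.

```python
def legendre_seq(x, n):
    """First n Legendre polynomials P_0 .. P_{n-1} at x, via the
    three-term recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    vals = [1.0, x]
    for k in range(1, n - 1):
        vals.append(((2 * k + 1) * x * vals[k] - k * vals[k - 1]) / (k + 1))
    return vals[:n]

def christoffel_weight(x, n):
    """Diagonal preconditioning weight n / K_n(x).

    K_n is the inverse Christoffel function built from polynomials
    orthonormal for the uniform density on [-1, 1], i.e. sqrt(2k+1) P_k,
    so K_n(x) = sum_{k<n} (2k+1) P_k(x)^2.
    """
    k_n = sum((2 * k + 1) * p * p for k, p in enumerate(legendre_seq(x, n)))
    return n / k_n

# e.g. at x = 0 with n = 3: K_3(0) = 1 + 0 + 5*(1/2)^2 = 2.25,
# so the weight is 3 / 2.25 = 4/3.
```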
NASA Technical Reports Server (NTRS)
Smith, C. D.; Parrott, T. L.
1978-01-01
The treatment consisted of immersing samples of Kevlar in a solution of distilled water and Zepel. The samples were then drained, dried in a circulating oven, and cured. Flow resistance tests showed an approximately one percent decrease in the flow resistance of the samples. There was also a density increase of about three percent. It was found that the treatment caused a change in the texture of the samples. There were significant changes in the acoustic properties of the treated Kevlar over the frequency range 0.5 to 3.5 kHz. In general, the propagation constant and characteristic impedance increased with increasing frequency. The real and imaginary components of the propagation constant for the treated Kevlar exhibited a decrease of 8 to 12 percent relative to those for the untreated Kevlar at the higher frequencies. The magnitude of the reactance component of the characteristic impedance decreased by about 40 percent at the higher frequencies.
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected under various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). Firstly, the positive training samples are clustered into entities that represent similar viewpoints. Then principal component analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, samples that can be reconstructed by linear approximation using their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradients (HOG) features are used as the feature descriptors, and a sliding window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of the intrinsic information and the correlations among the multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method achieves higher performance than state-of-the-art methods in terms of both effectiveness and efficiency.
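The accept-by-reconstruction-error step can be sketched in isolation. The code below is a hypothetical one-component version in plain Python, not LASC itself (which uses several components per viewpoint cluster and a top-k rule over HOG features): it estimates a cluster's mean and leading principal direction by power iteration, then scores a candidate by its distance to that one-dimensional affine subspace.

```python
def top_component(samples, iters=200):
    """Mean and leading principal direction of `samples` (lists of equal
    length), via power iteration on the centred scatter matrix."""
    d = len(samples[0])
    mean = [sum(s[j] for s in samples) / len(samples) for j in range(d)]
    centred = [[s[j] - mean[j] for j in range(d)] for s in samples]
    v = [1.0] * d
    for _ in range(iters):
        # apply the scatter matrix: S v = sum_i x_i (x_i . v)
        w = [0.0] * d
        for x in centred:
            proj = sum(xj * vj for xj, vj in zip(x, v))
            for j in range(d):
                w[j] += proj * x[j]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

def reconstruction_error(x, mean, v):
    """Distance from x to the 1-D affine subspace through `mean` along `v`;
    a small value would count as a correct detection in this sketch."""
    centred = [xj - mj for xj, mj in zip(x, mean)]
    proj = sum(cj * vj for cj, vj in zip(centred, v))
    resid = [cj - proj * vj for cj, vj in zip(centred, v)]
    return sum(r * r for r in resid) ** 0.5
```

A point lying on the cluster's subspace reconstructs with near-zero error, while an off-subspace point does not, which is the acceptance criterion described above.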
Khumaeni, Ali; Lie, Zener Sukra; Niki, Hideaki; Lee, Yong Inn; Kurihara, Kazuyoshi; Wakasugi, Motoomi; Takahashi, Touru; Kagawa, Kiichiro
2012-03-01
Taking advantage of the specific characteristics of a transversely excited atmospheric (TEA) CO2 laser, a sophisticated technique for the analysis of chromated copper arsenate (CCA) in wood samples has been developed. In this study, a CCA-treated wood sample with a dimension of 20 mm × 20 mm and a thickness of 2 mm was attached in contact with a nickel plate (20 mm × 20 mm × 0.15 mm), which functions as a subtarget. When the TEA CO2 laser was successively irradiated onto the wood surface, a hole with a diameter of approximately 2.5 mm was produced in the sample and the laser beam impinged directly onto the metal subtarget. A strong and stable gas plasma with a very large diameter of approximately 10 mm was induced once the laser beam had directly struck the metal subtarget. This gas plasma then interacted with the fine particles of the sample inside the hole, and the particles were effectively dissociated and excited in the gas plasma region. Using this technique, high-precision and sensitive analysis of the CCA-treated wood sample was realized. A linear calibration curve for Cr was successfully constructed using the CCA-treated wood sample. The detection limits of Cr, Cu, and As were estimated to be approximately 1, 2, and 15 mg/kg, respectively. In the case of standard LIBS using the Nd:YAG laser, the analytical intensities fluctuated and the sensitivity was much lower, approximately one-tenth that of the TEA CO2 laser. © 2012 Optical Society of America
Nuopponen, M; Willför, S; Jääskeläinen, A-S; Sundberg, A; Vuorinen, T
2004-11-01
The wood resin in Scots pine (Pinus sylvestris) stemwood and branch wood was studied using UV resonance Raman (UVRR) spectroscopy. UVRR spectra of the sapwood and heartwood hexane extracts, solid wood samples and model compounds (six resin acids, three fatty acids, a fatty acid ester, sitosterol and sitosterol acetate) were collected using excitation wavelengths of 229, 244 and 257 nm. In addition, visible Raman spectra of the fatty and resin acids were recorded. Resin compositions of the heartwood and sapwood hexane extracts were determined using gas chromatography. Raman signals of both conjugated and isolated double bonds of all the model compounds were resonance enhanced by UV excitation. The oleophilic structures showed strong bands in the region of 1660-1630 cm⁻¹. Distinct structures were enhanced depending on the excitation wavelength. The UVRR spectra of the hexane extracts showed characteristic bands for resin and fatty acids. It was possible to identify certain resin acids from the spectra. UV Raman spectra collected from the solid wood samples containing wood resin showed a band at approximately 1650 cm⁻¹ due to unsaturated resin components. The Raman signals from extractives in the resin-rich branch wood sample were even more strongly enhanced than those from the aromatic lignin.
Harmonic-phase path-integral approximation of thermal quantum correlation functions
NASA Astrophysics Data System (ADS)
Robertson, Christopher; Habershon, Scott
2018-03-01
We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
To balance computational precision and efficiency in the robust optimization of complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate a system-level approximation model relating the overall parameters to the blade-tip clearance, and a set of samples of design parameters and objective response mean and/or standard deviation is then generated using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The results demonstrate that the proposed method can dramatically reduce the computational cost while maintaining computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
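The robust criterion itself (optimize the mean and spread of the response under input scatter) is simple to illustrate. The snippet below is a toy sketch under stated assumptions, not the paper's hierarchical collaborative method: it evaluates a cheap surrogate under Gaussian scatter on a single design variable and returns a mean-plus-k-sigma objective, which an outer grid or gradient search can then minimize. The surrogate, scatter level and k are hypothetical choices.

```python
import random
import statistics

def robust_objective(surrogate, x, sigma=0.05, k=3.0, n=2000, seed=1):
    """Mean-plus-k-sigma robust objective for design point x.

    Draws n Gaussian perturbations of the design variable (std `sigma`),
    evaluates the cheap surrogate at each, and penalizes variability by
    adding k standard deviations to the sample mean.
    """
    rng = random.Random(seed)           # fixed seed: comparable across x
    ys = [surrogate(x + rng.gauss(0.0, sigma)) for _ in range(n)]
    return statistics.fmean(ys) + k * statistics.stdev(ys)

# Toy usage: quadratic surrogate with minimum at x = 1; the robust
# optimum over a coarse grid stays at 1 for this symmetric case.
grid = [0.0, 0.5, 1.0, 1.5, 2.0]
best = min(grid, key=lambda x: robust_objective(lambda t: (t - 1.0) ** 2, x))
```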
Dlugokencky, E. J. [National Oceanic and Atmospheric Administration, Boulder, Colorado (USA); Lang, P. M. [National Oceanic and Atmospheric Administration, Boulder, Colorado (USA); Masarie, K. A. [National Oceanic and Atmospheric Administration, Boulder, Colorado (USA); Steele, L. P. [Commonwealth Scientific and Industrial Research Organisation, Aspendale, Victoria, Australia
1994-01-01
This data base presents atmospheric methane (CH4) mixing ratios from flask air samples collected over the period 1983-1993 by the National Oceanic and Atmospheric Administration, Climate Monitoring and Diagnostics Laboratory's (NOAA/CMDL's) global cooperative air sampling network. Air samples were collected approximately once per week at 44 fixed sites (37 of which were still active at the end of 1993). Samples were also collected at 5 degree latitude intervals along shipboard cruise tracks in the Pacific Ocean between North America and New Zealand (or Australia) and at 3 degree latitude intervals along cruise tracks in the South China Sea between Singapore and Hong Kong. The shipboard measurements were made approximately every 3 weeks per latitude zone by each of two ships in the Pacific Ocean and approximately once every week per latitude zone in the South China Sea. All samples were analyzed for CH4 at the NOAA/CMDL laboratory in Boulder, Colorado, by gas chromatography with flame ionization detection, and each aliquot was referenced to the NOAA/CMDL methane standard scale. In addition to providing the complete set of atmospheric CH4 measurements from flask air samples collected at the NOAA/CMDL network sites, this data base also includes files which list monthly mean mixing ratios derived from the individual flask air measurements. These monthly summary data are available for 35 of the fixed sites and 21 of the shipboard sampling sites.
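The derivation of the monthly summary files from the individual flask measurements amounts to grouping by calendar month and averaging. A minimal sketch follows, with hypothetical field names; the actual data files carry additional metadata such as site codes and quality flags.

```python
from collections import defaultdict
from statistics import fmean

def monthly_means(records):
    """Collapse ("YYYY-MM-DD", mixing_ratio) flask records into a dict of
    monthly mean mixing ratios keyed by "YYYY-MM"."""
    buckets = defaultdict(list)
    for date, value in records:
        buckets[date[:7]].append(value)   # group on the year-month prefix
    return {month: fmean(vals) for month, vals in sorted(buckets.items())}

# Toy usage with made-up CH4 values in ppb:
summary = monthly_means([("1983-05-02", 1700.0),
                         ("1983-05-09", 1710.0),
                         ("1983-06-01", 1695.0)])
# → {"1983-05": 1705.0, "1983-06": 1695.0}
```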
Uncertainties in the measurements of water-soluble organic nitrogen in the aerosol
NASA Astrophysics Data System (ADS)
Matsumoto, Kiyoshi; Yamato, Koki
2016-11-01
In order to evaluate the positive and negative artifacts in measurements of the water-soluble organic nitrogen (WSON) in aerosols by filter sampling, comparative experiments between filter sampling and denuder-filter sampling were conducted during both the warm and cold seasons. The results suggest that traditional filter sampling underestimates the concentrations of particulate WSON due to its volatilization loss, but the effect on the ratio of WSON to water-soluble total nitrogen (WSTN) was small, probably because inorganic nitrogen species were also lost during filter sampling. Approximately 32.5% of the WSON in the PM2.5 was estimated to be lost during filter sampling. The denuder-filter sampling also demonstrated the existence of WSON in the gas phase, at approximately one-quarter of the WSON concentration in the PM2.5. On the other hand, filter sampling would overestimate the gaseous WSON concentration due to the loss of WSON from the aerosol collection filter.
The -(α)(5.2) Deletion Detected in a Uruguayan Family: First Case Report in the Americas.
Soler, Ana María; Schelotto, Magdalena; de Oliveira Mota, Natalia; Dorta Ferreira, Roberta; Sonati, Maria de Fatima; da Luz, Julio Abayubá
2016-08-01
In Uruguay, α-thalassemia (α-thal) mutations were introduced predominantly by Mediterranean European immigrant populations and by slave trade of African populations. A patient with anemia with hypochromia and microcytosis, refractory to iron treatment and with normal hemoglobin (Hb) electrophoresis was analyzed for α-thal mutations by multiplex gap-polymerase chain reaction (gap-PCR), automated sequencing and multiplex ligation-dependent probe amplification (MLPA) analyses. Agarose gel electrophoresis of the multiplex gap-PCR showed a band of unexpected size (approximately 700 bp) in the samples from the proband and mother. Automated sequencing of the amplified fragment showed the presence of the -(α)(5.2) deletion (NG_000006.1: g.32867_38062del5196) [an α-thal-1 deletion of 5196 nucleotides (nts)]. The MLPA analysis of the proband's sample also showed the presence of the -(α)(5.2) deletion in heterozygous state. We report here the presence of the -(α)(5.2) deletion, for the first time in the Americas, in a Uruguayan family with Italian ancestry, detected with a previously described multiplex gap-PCR.
Mechanical Properties and Durability of "Waterless Concrete"
NASA Technical Reports Server (NTRS)
Toutanji, Houssam; Grugel, Richard N.
2008-01-01
Waterless concrete consists of molten elemental sulfur and aggregate. The aggregates in a lunar environment will be lunar rocks and soil. Sulfur is present on the Moon in troilite (FeS), and by oxidation of the soil both iron and sulfur can be produced. The iron can be used to reinforce the sulfur concrete. Sulfur concrete specimens were cycled between liquid nitrogen temperature (approximately -191 °C) and room temperature (approximately 21 °C) to simulate exposure to a lunar environment. Cycled and control specimens were subsequently tested in compression at room temperature (approximately 21 °C) and at approximately -101 °C. Test results showed that, due to the temperature cycling, the compressive strength of the cycled specimens was 20% of that of the non-cycled ones. Microscopic examination of the fracture surfaces from the cycled samples showed clear de-bonding of the sulfur from the aggregate material, whereas it was well bonded in the non-cycled samples. This reduction in strength can be attributed to the large differences in the thermal expansion coefficients of the materials constituting the concrete, which promoted cracking. Similar sulfur concrete mixtures were strengthened with short and long glass fibers. Glass from lunar regolith simulant was melted in a 25 cc Pt-Rh crucible in a Sybron Thermoline high-temperature MoSi2 furnace at melting temperatures of 1450 to 1600 °C for times of 30 min to 1 h. Glass fibers were cast from the melt into graphite crucibles and were annealed for a couple of hours at 600 °C. Glass fibers and small rods were pulled from the melt; the glass melt wets the ceramic rod, and long continuous glass fibers were easily hand drawn. The glass fibers were immediately coated with a protective polymer to maintain their mechanical strength. The glass fibers were used to reinforce sulfur concrete plates to improve the flexural strength of the sulfur concrete. Prism beams strengthened with glass fibers were tested in a four-point bending test. Beams strengthened with glass fibers exhibited an increase in flexural strength of as much as 45%.
A new static sampler for airborne total dust in workplaces.
Mark, D; Vincent, J H; Gibson, H; Lynch, G
1985-03-01
This paper describes the development and laboratory testing of a new static dust sampler for airborne total dust in workplaces. Particular attention is paid to designing the sampling head and entry consistent with the concept of inspirability, which in turn defines a biologically relevant aspiration efficiency. The sampling head has a small cylindrical body and a transverse entry slot with thin protruding lips forming an integral part of a weighable capsule containing a 37 mm filter which collects all of the sampled dust (without introducing errors due to external particle blow-off or internal wall losses). A battery-powered sampling pump provides both air suction at 3 L/min and a rigid mounting for the sampling head. The sampling head is rotated continuously through 360 degrees at approximately 1.5 rpm by a simple electric drive, connected to the stationary pump through a rotating seal. Wind tunnel testing of the instrument showed it to display an entry efficiency very close to the inspirability curve of Vincent and Armbruster (now recommended by the ACGIH Technical Committee on Air Sampling Procedures for defining inspirable particulate matter (IPM)) for particles of aerodynamic diameter up to 90 micron and for windspeeds in the range of one to three m/sec.
NASA Astrophysics Data System (ADS)
Kreisberg, N. M.; Worton, D. R.; Zhao, Y.; Isaacman, G.; Goldstein, A. H.; Hering, S. V.
2014-12-01
A reliable method of sample introduction is presented for online gas chromatography with a special application to in situ field portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a valveless sample introduction interface that offers the advantage of long-term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows customizing this pressure-switching-based device for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient-pressure detector and 15% accurate when applied to a vacuum-based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show that the new interface has approximately 3 times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface used in the in situ instrument demonstrate typically less than 2% week-1 response trending and a zero failure rate during field deployments ranging up to 4 weeks of continuous sampling. Extension of the valveless interface to dual collection cells is presented with less than 3% cell-to-cell carryover.
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed were performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
NASA Astrophysics Data System (ADS)
Muller, Wayne; Scheuermann, Alexander
2016-04-01
Measuring the electrical permittivity of civil engineering materials is important for a range of ground penetrating radar (GPR) and pavement moisture measurement applications. Compacted unbound granular (UBG) pavement materials present a number of preparation and measurement challenges using conventional characterisation techniques. As an alternative to these methods, a modified free-space (MFS) characterisation approach has previously been investigated. This paper describes recent work to optimise and validate the MFS technique. The research included finite difference time domain (FDTD) modelling to better understand the nature of wave propagation within material samples and the test apparatus. This research led to improvements in the test approach and optimisation of sample sizes. The influence of antenna spacing and sample thickness on the permittivity results was investigated by a series of experiments separating antennas and measuring samples of nylon and water. Permittivity measurements of samples of nylon and water approximately 100 mm and 170 mm thick were also compared, showing consistent results. These measurements also agreed well with surface probe measurements of the nylon sample and literature values for water. The results indicate permittivity estimates of acceptable accuracy can be obtained using the proposed approach, apparatus and sample sizes.
SGAS 143845.1 + 145407: A Big, Cool Starburst at Redshift 0.816
NASA Technical Reports Server (NTRS)
Gladders, Michael D.; Rigby, Jane R.; Sharon, Keren; Wuyts, Eva; Abramson, Louis E.; Dahle, Hakon; Persson, S. E.; Monson, Andrew J.; Kelson, Daniel D.; Benford, Dominic J.;
2012-01-01
We present the discovery and a detailed multi-wavelength study of a strongly-lensed luminous infrared galaxy at z = 0.816. Unlike most known lensed galaxies discovered at optical or near-infrared wavelengths, this lensed source is red, which the data presented here demonstrate is due to ongoing dusty star formation. The overall lensing magnification (a factor of 17) facilitates observations from the blue optical through to 500 micrometers, fully capturing both the stellar photospheric emission as well as the reprocessed thermal dust emission. We also present optical and near-IR spectroscopy. These extensive data show that this lensed galaxy is in many ways typical of IR-detected sources at z ≈ 1, with both a total luminosity and size in accordance with other (albeit much less detailed) measurements in samples of galaxies observed in deep fields with the Spitzer telescope. Its far-infrared spectral energy distribution is well-fit by local templates that are an order of magnitude less luminous than the lensed galaxy; local templates of comparable luminosity are too hot to fit. Its size (D ≈ 7 kpc) is much larger than local luminous infrared galaxies, but in line with sizes observed for such galaxies at z ≈ 1. The star formation appears uniform across this spatial scale. Thus, this lensed galaxy, which appears representative of vigorously star-forming z ≈ 1 galaxies, is forming stars in a fundamentally different mode than is seen at z ≈ 0.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L^2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least twice as fast in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
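The offline/online split described above can be illustrated with a one-dimensional toy problem. The forward model, prior, and noise level below are invented, and a plain least-squares polynomial fit over prior samples stands in for the Christoffel least squares construction; the KL distance between the two posteriors is evaluated on a grid:

```python
# Toy surrogate-accelerated Bayesian inference for a scalar parameter.
# Offline: fit a polynomial surrogate to the forward model over prior
# samples. Online: compare the surrogate posterior to the true posterior
# via the KL distance. All models and numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)

def forward(theta):
    return np.sin(2.0 * theta) + 0.5 * theta  # hypothetical forward model

# Offline phase: least-squares polynomial fit over prior (standard normal) samples.
train = rng.standard_normal(200)
coeffs = np.polynomial.polynomial.polyfit(train, forward(train), 9)

def surrogate(theta):
    return np.polynomial.polynomial.polyval(theta, coeffs)

# Online phase: in 1-D the posteriors can be tabulated on a grid (no MCMC).
data, sigma = forward(0.7) + 0.05, 0.1  # one noisy observation
grid = np.linspace(-4.0, 4.0, 2001)
dx = grid[1] - grid[0]

def posterior(model):
    logp = -0.5 * ((data - model(grid)) / sigma) ** 2 - 0.5 * grid ** 2
    p = np.exp(logp - logp.max())
    return p / (p.sum() * dx)

p_true, p_surr = posterior(forward), posterior(surrogate)
kl = np.sum(p_true * np.log((p_true + 1e-300) / (p_surr + 1e-300))) * dx
print(kl)  # small: the surrogate posterior tracks the true posterior
```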
Enhancement of tritium concentrations on uptake by marine biota: experience from UK coastal waters.
Hunt, G J; Bailey, T A; Jenkinson, S B; Leonard, K S
2010-03-01
Concentrations of tritium in sea water and marine biota as reported over the last approximately 10 years from monitoring programmes carried out by this laboratory under contract to the UK Food Standards Agency are reviewed for three areas: near Cardiff; Sellafield; and Hartlepool. Near Cardiff, enhancement of concentration factors (CFs) above an a priori value of approximately 1 has already been studied, and attributed to compounds containing organically bound tritium in local radioactive waste discharges. Further data for Cardiff up to 2006 are reported in this note. Up to 2001, CFs increased to values of more than approximately 7000 in flounders and approximately 4000 in mussels, but have subsequently reduced; this variability could be due to changes in the organic constitution of the compounds discharged. Near Sellafield and Hartlepool, enhancements of the tritium concentration factor are observed, but they are relatively small compared with those near Cardiff. Near Sellafield, plaice and mussels appear to have a CF for tritium of approximately 10; in some cases concentrations of tritium in winkles are below detection limits, and positively measured values indicate a CF of approximately 3. The variation could be due to mechanisms of uptake by the different organisms. Near Hartlepool there were only a few cases where tritium was positively measured. These data give a value of approximately 5 for the CF in plaice (on the basis of two samples); approximately 15 in winkles (eight samples); and > 45 in mussels (two samples). Any differences between the behaviours at Sellafield and Hartlepool would need to be confirmed by improved measurements. Possible causes are the organic composition of the effluent and differences in environmental behaviour and uptake by organisms near the two sites. These potential causes need further investigation.
It is emphasised that results from tritium analyses are heavily method dependent; thus comparison with results from other programmes needs to take this into account. Further, the results for enhancement of CF will also depend on the definition of CF itself.
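The concentration factor (CF) used throughout the abstract is the ratio of the activity concentration in biota to that in the surrounding sea water. A minimal sketch, with invented numbers; the detection-limit handling is an assumption, not the monitoring programme's reported procedure:

```python
# Concentration factor: activity concentration in biota divided by the
# activity concentration in sea water. Values and the below-detection
# rule are illustrative assumptions.

def concentration_factor(biota_bq_per_kg, water_bq_per_kg, detection_limit=None):
    """Return the CF, or None when the biota result is below the detection limit."""
    if detection_limit is not None and biota_bq_per_kg < detection_limit:
        return None  # report as below detection rather than a spurious CF
    return biota_bq_per_kg / water_bq_per_kg

# Invented example: flounder flesh at 35,000 Bq/kg against sea water at
# 5 Bq/kg gives a CF of 7000, the order of magnitude reported near Cardiff.
print(concentration_factor(35_000, 5))  # 7000.0
```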
Identification of Atherosclerotic Plaques in Carotid Artery by Fluorescence Spectroscopy
NASA Astrophysics Data System (ADS)
Rocha, Rick; Villaverde, Antonio Balbin; Silveira, Landulfo; Costa, Maricília Silva; Alves, Leandro Procópio; Pasqualucci, Carlos Augusto; Brugnera, Aldo
2008-04-01
The aim of this work was to identify the presence of atherosclerotic plaques in the carotid artery using fluorescence spectroscopy. Atherosclerosis, which may affect even younger individuals, is the most important pathogenic process in cardiovascular disorders. With approximately 1.2 million heart attacks and 750,000 strokes afflicting an aging American population each year, cardiovascular disease remains the number one cause of death. Carotid artery samples were obtained from cadavers through the Autopsy Service at the University of São Paulo (São Paulo, SP, Brazil). After histopathological analysis, the 60 carotid artery samples were divided into two groups: normal (26) and atherosclerotic plaques (34). Samples were irradiated at a wavelength of 488 nm from an argon laser. A 600 μm core optical fiber, coupled to the argon laser, was used for excitation of the sample, while another 600 μm core optical fiber, coupled to the spectrograph entrance slit, was used for collecting the fluorescence from the sample. Measurements were taken at different points on each sample and then averaged. Fluorescence spectra showed a single broad band centered at 549 nm. The fluorescence intensity for each sample was calculated by subtracting the intensity at the baseline (510 nm) from that at the peak (550 nm), and the data were then statistically analyzed, looking for differences between the two groups of samples. An ANOVA test showed a significant difference (p < 0.05) between the two types of tissue with regard to the fluorescence peak intensities. Our results indicate that this technique could be used to detect the presence of atherosclerotic plaques in carotid tissue.
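The peak-minus-baseline metric and the ANOVA comparison described above can be sketched as follows. The spectra are synthetic with an invented band shape and group amplitudes; the real study averaged several points per sample:

```python
# Peak (550 nm) minus baseline (510 nm) fluorescence intensity, compared
# between two groups with a one-way ANOVA. All spectra here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wavelengths = np.arange(500.0, 601.0)  # nm, 1 nm steps

def fluorescence_intensity(spectrum):
    """Intensity at the 550 nm peak minus intensity at the 510 nm baseline."""
    return spectrum[wavelengths == 550.0][0] - spectrum[wavelengths == 510.0][0]

def synthetic_spectrum(amplitude):
    # Broad band centred near 549 nm plus measurement noise (invented shape).
    band = amplitude * np.exp(-((wavelengths - 549.0) / 25.0) ** 2)
    return band + rng.normal(0.0, 0.02, wavelengths.size)

# Group sizes mirror the study: 26 normal, 34 plaque samples.
normal = [fluorescence_intensity(synthetic_spectrum(1.0)) for _ in range(26)]
plaque = [fluorescence_intensity(synthetic_spectrum(1.6)) for _ in range(34)]

f_stat, p_value = stats.f_oneway(normal, plaque)
print(p_value < 0.05)  # True: the groups separate on this metric
```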
Sipiä, Vesa O; Sjövall, Olli; Valtonen, Terhi; Barnaby, Deborah L; Codd, Geoffrey A; Metcalf, James S; Kilpi, Mikael; Mustonen, Olli; Meriluoto, Jussi A O
2006-11-01
Nodularin (NODLN) is a cyanobacterial hepatotoxin that may cause toxic effects at very low exposure levels. The NODLN-producing cyanobacterium Nodularia spumigena forms massive blooms in the northern Baltic Sea, especially during the summer. We analyzed liver and muscle (edible meat) samples from common eider (Somateria mollissima), roach (Rutilus rutilus L.), and flounder (Platichthys flesus L.) for NODLN-R by liquid chromatography/mass spectrometry (LC-MS) and enzyme-linked immunosorbent assay (ELISA). Thirty eiders, 11 roach, and 15 flounders were caught from the western Gulf of Finland between September 2002 and October 2004. Eiders from April to June 2003 were found dead. The majority of samples were analyzed by LC-MS and ELISA from the same sample extracts (water:methanol:n-butanol, 75:20:5, v:v:v). Nodularin was detected in 27 eiders, nine roach, and eight flounders. Eider liver samples contained NODLN up to approximately 200 microg/kg dry weight and muscle samples approximately 20 microg/kg dry weight, roach liver samples 20 to 900 microg NODLN/kg dry weight and muscle samples 2 to 200 microg NODLN/kg dry weight, and flounder liver samples approximately 5 to 1,100 microg NODLN/kg dry weight and muscle samples up to 100 microg NODLN/kg dry weight. The NODLN concentrations found in individual muscle samples of flounders, eiders, and roach (1-200 microg NODLN/kg dry wt) indicate that screening and risk assessment of NODLN in Baltic Sea edible fish and wildlife are required for the protection of consumers' health.
Flexible nonlinear estimates of the association between height and mental ability in early life.
Murasko, Jason E
2014-01-01
To estimate associations between early-life mental ability and height/height-growth in contemporary US children. Structured additive regression models are used to flexibly estimate the associations between height and mental ability at approximately 24 months of age. The sample is taken from the Early Childhood Longitudinal Study-Birth Cohort, a national study whose target population was children born in the US during 2001. A nonlinear association is indicated between height and mental ability at approximately 24 months of age: the association increases with height below the mean value of height but is flat thereafter. Annualized growth shows the same nonlinear association with ability when controlling for baseline length at 9 months. Restricted growth at lower values of the height distribution is associated with lower measured mental ability in contemporary US children during the first years of life. Copyright © 2013 Wiley Periodicals, Inc.
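The rising-then-flat shape reported above can be illustrated with a piecewise-linear ("hinge") least-squares fit: one slope below a knot at the mean height, another above it. The data are simulated to have that shape; this is not the ECLS-B data nor the structured additive regression model actually used in the study:

```python
# Hinge (piecewise-linear) regression: separate slopes below and above
# the mean height. Simulated data with invented scales and coefficients.
import numpy as np

rng = np.random.default_rng(2)
height = rng.normal(87.0, 3.5, 500)  # invented ~24-month heights (cm)
knot = height.mean()

# True relationship: ability rises with height below the mean, flat above.
ability = 100.0 + 2.0 * np.minimum(height - knot, 0.0) + rng.normal(0.0, 1.0, 500)

# Design matrix: intercept, slope active below the knot, slope active above it.
X = np.column_stack([
    np.ones_like(height),
    np.minimum(height - knot, 0.0),  # below-mean segment
    np.maximum(height - knot, 0.0),  # above-mean segment
])
coef, *_ = np.linalg.lstsq(X, ability, rcond=None)
print(coef)  # below-mean slope is clearly positive, above-mean slope near zero
```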
Dehydration and Denitrification in the Arctic Polar Vortex During the 1995-1996 Winter
NASA Technical Reports Server (NTRS)
Hintsa, E. J.; Newman, P. A.; Jonsson, H. H.; Webster, C. R.; May, R. D.; Herman, R. L.; Lait, L. R.; Schoeberl, M. R.; Elkins, J. W.; Wamsley, P. R.;
1998-01-01
Dehydration of more than 0.5 ppmv water was observed between 18 and 19 km (theta approximately 450-465 K) at the edge of the Arctic polar vortex on February 1, 1996. More than half the reactive nitrogen (NO(y)) had also been removed, with layers of enhanced NO(y) at lower altitudes. Back trajectory calculations show that air parcels sampled inside the vortex had experienced temperatures as low as 188 K within the previous 12 days, consistent with a small amount of dehydration. The depth of the dehydrated layer (approximately 1 km) and the fact that trajectories passed through the region of ice saturation in one day imply selective growth of a small fraction of particles to sizes large enough (>10 micrometers) to be irreversibly removed on this timescale. Over 25% of the Arctic vortex in a 20-30 K range of theta is estimated to have been dehydrated in this event.
NASA Astrophysics Data System (ADS)
Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.
2018-02-01
We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.
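The link between scattering angle and detected photon energy that an energy-resolving detector exploits is the standard Compton relation; fixing the detected energy fixes the scattering angle and hence the football-shaped locus of possible scatter points described above:

```python
# Compton relation: energy of a photon after scattering through a given
# angle. For PET, the incident photons are 511 keV annihilation photons.
import math

ELECTRON_REST_ENERGY_KEV = 511.0

def scattered_energy_kev(e_kev: float, angle_rad: float) -> float:
    """Photon energy after Compton scattering through angle_rad."""
    return e_kev / (1.0 + (e_kev / ELECTRON_REST_ENERGY_KEV) * (1.0 - math.cos(angle_rad)))

# A 511 keV photon scattered through 90 degrees leaves with half its energy.
print(scattered_energy_kev(511.0, math.pi / 2))  # 255.5
```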
Site-selective nitrogen isotopic ratio measurement of nitrous oxide using 2 microm diode lasers.
Uehara, K; Yamamoto, K; Kikugawa, T; Yoshida, N
2003-03-15
We demonstrate a high-precision measurement of the isotopomer abundance ratio (14)N(15)N(16)O/(15)N(14)N(16)O/(14)N(14)N(16)O (approximately 0.37/0.37/100) using three wavelength-modulated 2 microm diode lasers combined with a multipass cell which provides different optical pathlengths of 100 m and 1 m to compensate for the large abundance difference. A set of absorption lines whose absorbances have almost the same temperature dependence is selected so that the effect of a change in gas temperature is minimized. A test experiment using pure, nearly natural-abundance N(2)O samples showed that the site-selective (15)N/(14)N ratios can be measured relative to a reference material with a precision of +/-3 x 10^-4 (+/-0.3 per thousand) in approximately 2 h. Copyright 2002 Elsevier Science B.V.
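The role of the two pathlengths can be sketched with the Beer-Lambert law: for a weak line, absorbance A = S x N x L (line strength times number density times pathlength), so the isotopomer ratio follows from absorbances measured on the 100 m and 1 m paths. The line strengths below are invented placeholders, not values from the paper:

```python
# Isotopomer number-density ratio from absorbances measured on two
# different optical pathlengths, assuming the weak-line Beer-Lambert
# relation A = S * N * L. Line strengths here are placeholders.

def isotopomer_ratio(a_minor, a_major, s_minor, s_major,
                     l_minor=100.0, l_major=1.0):
    """Ratio N_minor / N_major from the two measured absorbances."""
    return (a_minor / (s_minor * l_minor)) / (a_major / (s_major * l_major))

# With equal line strengths, a minor-isotopomer absorbance of 0.37 on the
# 100 m path against a parent absorbance of 1.0 on the 1 m path yields the
# natural-abundance ratio of about 0.37 %.
print(isotopomer_ratio(0.37, 1.0, 1.0, 1.0))  # ~0.0037
```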
Timing of ore-related magmatism in the western Alaska Range, southwestern Alaska
Taylor, Ryan D.; Graham, Garth E.; Anderson, Eric D.; Selby, David
2014-01-01
This report presents isotopic age data from mineralized granitic plutons in an area of the Alaska Range located approximately 200 kilometers west-northwest of Anchorage in southwestern Alaska. Uranium-lead isotopic data and trace element concentrations of zircons were determined for 12 samples encompassing eight plutonic bodies ranging in age from approximately 76 to 57.4 million years ago (Ma). Additionally, a rhenium-osmium age of molybdenite from the Miss Molly molybdenum occurrence is reported (approx. 59 Ma). All of the granitic plutons in this study host gold-, copper-, and (or) molybdenum-rich prospects. These new ages modify previous interpretations regarding the age of magmatic activity and mineralization within the study area. The new ages show that the majority of the gold-quartz vein-hosting plutons examined in this study formed in the Late Cretaceous. Further work is necessary to establish the ages of ore-mineral deposition in these deposits.
NASA Astrophysics Data System (ADS)
Berbiche, A.; Sadouki, M.; Fellah, Z. E. A.; Ogam, E.; Fellah, M.; Mitri, F. G.; Depollier, C.
2016-01-01
An acoustic reflectivity method is proposed for measuring the permeability, or flow resistivity, of air-saturated porous materials. In this method, a simplified expression for the reflection coefficient is derived in the Darcy regime (low frequency range) that does not depend on frequency or porosity. Numerical simulations show that the reflection coefficient of a porous material can be approximated by this simplified expression, obtained from a first-order Taylor expansion. The approximation is especially good for resistive materials (of low permeability) and at lower frequencies. The permeability is reconstructed by solving the inverse problem using waves reflected by plastic foam samples at different frequency bandwidths in the Darcy regime. The proposed method has the advantage of being simple compared with conventional methods that use experimental reflection data, and is complementary to the transmissivity method, which is better adapted to materials of low resistivity (high permeability).
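The inverse step can be sketched as a one-parameter root-finding problem: find the flow resistivity whose modelled reflection coefficient matches the measured one. The reflection model below is a hypothetical monotone placeholder, not the paper's simplified Darcy-regime expression:

```python
# Recover flow resistivity sigma by matching a measured reflection
# coefficient to a model. r_model is an invented monotone placeholder
# standing in for the frequency-independent Darcy-regime expression.

def r_model(sigma):
    # Hypothetical: reflection grows toward 1 with flow resistivity.
    return sigma / (sigma + 1.0e4)

def invert_resistivity(r_measured, lo=1.0, hi=1.0e8, tol=1e-6):
    """Bisection solve of r_model(sigma) = r_measured for sigma."""
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if r_model(mid) < r_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_true = 2.5e4  # invented "true" flow resistivity
print(invert_resistivity(r_model(sigma_true)))  # recovers ~2.5e4
```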