Sample records for "reduce sample complexity"

  1. Accuracy assessment with complex sampling designs

    Treesearch

    Raymond L. Czaplewski

    2010-01-01

    A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...

  2. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
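
The bias-reduction role of sampling weights described above can be illustrated with a toy simulation. The population, the selection model, and the Hajek-type weighted mean below are illustrative assumptions, not the estimators studied in the article.

```python
import random

random.seed(0)

# Toy population in which the value y drives the selection probability pi,
# so an unweighted sample mean is biased upward.
N = 10_000
population = []
for _ in range(N):
    y = random.gauss(10.0, 2.0)
    pi = min(0.9, max(0.05, 0.02 * y))   # larger y -> more likely selected
    population.append((y, pi))

true_mean = sum(y for y, _ in population) / N

# One Poisson sample: each unit enters independently with probability pi.
sample = [(y, pi) for y, pi in population if random.random() < pi]

unweighted = sum(y for y, _ in sample) / len(sample)

# Hajek (normalized inverse-probability) estimator: weight each unit by 1/pi.
w_sum = sum(1.0 / pi for _, pi in sample)
weighted = sum(y / pi for y, pi in sample) / w_sum

print(f"true mean  : {true_mean:.3f}")
print(f"unweighted : {unweighted:.3f}")   # biased upward
print(f"weighted   : {weighted:.3f}")     # close to the true mean
```

Because selection probability rises with y, the unweighted mean overstates the population mean, while the 1/pi weighting removes most of the bias.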

  3. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
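
A much-simplified 1D stand-in for window-based cost aggregation with sparse sampling inside the matching window; the signals, window size, and sampling step are invented, and the paper's histogram-based reformulation is not reproduced here.

```python
# Toy 1D stereo matching: for each candidate disparity, aggregate the
# per-pixel absolute difference over a window, visiting only every `step`-th
# position inside the window to cut the aggregation cost.
def best_disparity(left, right, x, max_disp=4, half_win=6, step=2):
    costs = []
    for d in range(max_disp + 1):
        c = 0.0
        for off in range(-half_win, half_win + 1, step):  # sampled window
            i, j = x + off, x + off - d
            if 0 <= i < len(left) and 0 <= j < len(right):
                c += abs(left[i] - right[j])
        costs.append(c)
    return min(range(len(costs)), key=costs.__getitem__)

# The "right" image is the "left" image shifted by 2 pixels.
left = [0, 0, 1, 3, 7, 9, 9, 5, 2, 1, 0, 0, 0, 0, 0]
right = left[2:] + [0, 0]
print(best_disparity(left, right, x=5))   # -> 2
```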

  4. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
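
A toy 1D sketch of the "reduce then sample" idea, assuming a quadrature-based forward model as a stand-in for an expensive simulation: the same Metropolis sampler is run once with a fine ("full") model and once with a coarse ("reduced") surrogate, and the posterior means agree closely. All numbers are illustrative.

```python
import math
import random

random.seed(1)

# "Full" forward model: an integral evaluated by a fine midpoint rule
# (standing in for an expensive solve); the "reduced" model is the same
# integral on a coarse rule, used as a cheap surrogate.
def forward(m, n_quad):
    h = 1.0 / n_quad
    return sum(math.exp(-m * (i + 0.5) * h) * h for i in range(n_quad))

SIGMA = 0.02
D_OBS = forward(1.5, 2000) + 0.01   # synthetic noisy observation

def log_post(m, n_quad):
    # Standard-normal prior on m plus a Gaussian likelihood.
    return -0.5 * m * m - 0.5 * ((forward(m, n_quad) - D_OBS) / SIGMA) ** 2

def metropolis(n_quad, n_steps=4000, burn=1000):
    m, lp = 0.0, log_post(0.0, n_quad)
    out = []
    for _ in range(n_steps):
        m_new = m + random.gauss(0.0, 0.3)
        lp_new = log_post(m_new, n_quad)
        if math.log(random.random()) < lp_new - lp:
            m, lp = m_new, lp_new
        out.append(m)
    return out[burn:]

mean_full = sum(metropolis(400)) / 3000      # sample the full model (slow)
mean_reduced = sum(metropolis(12)) / 3000    # "reduce then sample" (fast)
print(f"posterior mean, full model   : {mean_full:.3f}")
print(f"posterior mean, reduced model: {mean_reduced:.3f}")
```

The surrogate's quadrature error is far below the observation noise, so, as in the abstract, sampling the reduced model has little effect on the computed posterior while each evaluation is far cheaper.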

  5. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  6. Minimized state complexity of quantum-encoded cryptic processes

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.

    2016-05-01

    The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.

  7. High-Resolution Imaged-Based 3D Reconstruction Combined with X-Ray CT Data Enables Comprehensive Non-Destructive Documentation and Targeted Research of Astromaterials

    NASA Technical Reports Server (NTRS)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2014-01-01

    Providing web-based data of complex and sensitive astromaterials (including meteorites and lunar samples) in novel formats enhances existing preliminary examination data on these samples and supports targeted sample requests and analyses. We have developed and tested a rigorous protocol for collecting highly detailed imagery of meteorites and complex lunar samples in non-contaminating environments. These data are reduced to create interactive 3D models of the samples. We intend to provide these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/.

  8. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

    Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, using multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also, at least 12-bit analog-to-digital conversion is recommended (with 16-bit storage), resulting in a total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but in the case of online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed an earlier single-channel facial EMG/EOG/EEG-based automatic sleep stage classification method. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low-complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, a reduced number of quantization steps and a reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (having a low-pass filter at 90 Hz), and the dynamic range to 244 microV, with an 8-bit resolution resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enable the use of smaller devices for sleep stage classification in home environments.
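
The decision-tree idea can be sketched as follows. The thresholds and feature values are hypothetical, invented for illustration only; the abstract does not give the paper's actual cut-offs.

```python
# Hypothetical two-feature decision tree over a 30 s facial EMG/EOG/EEG epoch:
# 18-45 Hz beta power and 0.5-6 Hz amplitude (units and thresholds invented).
def classify_epoch(beta_power, low_freq_amplitude):
    if beta_power > 4.0:            # high-frequency activity dominates
        return "wake"
    if low_freq_amplitude > 75.0:   # large slow waves
        return "SWS"
    if low_freq_amplitude > 20.0:
        return "S1/S2"
    return "SREM"

print(classify_epoch(6.0, 10.0))    # -> wake
print(classify_epoch(1.0, 90.0))    # -> SWS
print(classify_epoch(1.0, 40.0))    # -> S1/S2
print(classify_epoch(1.0, 5.0))     # -> SREM
```

A tree this shallow needs only two features and a handful of comparisons per epoch, which is what makes the low-complexity, low-bit-rate operation described above plausible.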

  9. Event-triggered synchronization for reaction-diffusion complex networks via random sampling

    NASA Astrophysics Data System (ADS)

    Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng

    2018-04-01

    In this paper, the synchronization problem for reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data are sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of controller updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability problem of a class of reaction-diffusion complex dynamical systems with time delay. By using the matrix inequality technique and the Lyapunov method, synchronization conditions for the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally excludes Zeno behavior. Finally, a numerical example is given to verify the obtained results.
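
A minimal scalar analogue of the event-triggered idea (not the paper's reaction-diffusion setting): the control input is refreshed only when the state has drifted from its last sampled value, which cuts the number of controller updates. Plant, gains, and threshold are illustrative.

```python
# Compare periodic vs event-triggered updating of a state-feedback controller
# stabilizing the scalar unstable plant x' = 0.5 x + u.
def simulate(event_triggered, threshold=0.05, dt=0.01, steps=2000):
    x, u, x_held, updates = 1.0, 0.0, None, 0
    for _ in range(steps):
        if (x_held is None or not event_triggered
                or abs(x - x_held) > threshold):
            x_held = x              # sample the state
            u = -2.0 * x_held       # feedback from the sampled state
            updates += 1            # one communication/controller event
        x += dt * (0.5 * x + u)     # Euler step of the plant
    return abs(x), updates

err_p, upd_p = simulate(event_triggered=False)
err_e, upd_e = simulate(event_triggered=True)
print(f"periodic       : |x| = {err_p:.4f}, updates = {upd_p}")
print(f"event-triggered: |x| = {err_e:.4f}, updates = {upd_e}")
```

The event-triggered run keeps the state in a small neighbourhood of zero with a small fraction of the updates, mirroring the communication-load reduction claimed above.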

  10. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms are associated with disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, where the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
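
For context, the conventional time-domain FXLMS loop that the frequency-domain partitioned variants above accelerate can be sketched as follows. The paths, filter length, and step size are illustrative, and the secondary path estimate is taken as exact for simplicity.

```python
import math
import random

random.seed(2)

S = [0.5, 0.3]          # true secondary path (short FIR)
S_hat = S[:]            # its estimate, assumed perfect in this toy example
L = 8                   # ANC filter length
w = [0.0] * L
mu = 0.01               # LMS step size

def fir(h, buf):
    return sum(h[k] * buf[k] for k in range(len(h)))

x_buf = [0.0] * (L + len(S))    # reference history
fx_buf = [0.0] * L              # filtered-reference history
y_buf = [0.0] * len(S)          # anti-noise history
errors = []
for n in range(4000):
    x = math.sin(2 * math.pi * 0.05 * n) + 0.1 * random.gauss(0, 1)
    x_buf.insert(0, x); x_buf.pop()
    d = 0.9 * x_buf[2]                    # primary path: delayed, attenuated
    y = fir(w, x_buf[:L])                 # anti-noise from the adaptive filter
    y_buf.insert(0, y); y_buf.pop()
    e = d - fir(S, y_buf)                 # residual at the error microphone
    fx = fir(S_hat, x_buf[:len(S_hat)])   # filtered reference x' = S_hat * x
    fx_buf.insert(0, fx); fx_buf.pop()
    for k in range(L):                    # LMS update with filtered reference
        w[k] += mu * e * fx_buf[k]
    errors.append(e * e)

early = sum(errors[:500]) / 500
late = sum(errors[-500:]) / 500
print(f"mean squared error, first 500 samples: {early:.4f}")
print(f"mean squared error, last 500 samples : {late:.4f}")
```

Every sample costs O(L) multiplications here; at high sampling rates L grows large, which is exactly the cost the partitioned frequency-domain formulation attacks.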

  11. Diagnostic value of succinate ubiquinone reductase activity in the identification of patients with mitochondrial DNA depletion.

    PubMed

    Hargreaves, P; Rahman, S; Guthrie, P; Taanman, J W; Leonard, J V; Land, J M; Heales, S J R

    2002-02-01

    Mitochondrial DNA (mtDNA) depletion syndrome (McKusick 251880) is characterized by a progressive quantitative loss of mtDNA resulting in severe mitochondrial dysfunction. A diagnosis of mtDNA depletion can only be confirmed after Southern blot analysis of affected tissue. Only a limited number of centres have the facilities to offer this service, and frequently only on an irregular basis. There is therefore a need for a test that can refine sample selection as well as complement the molecular analysis. In this study we compared the activity of the nuclear-encoded succinate ubiquinone reductase (complex II) to the activities of the combined mitochondrial- and nuclear-encoded mitochondrial electron transport chain (ETC) complexes, NADH:ubiquinone reductase (complex I), ubiquinol-cytochrome-c reductase (complex III), and cytochrome-c oxidase (complex IV), in skeletal muscle biopsies from 7 patients with confirmed mtDNA depletion. In one patient there was no evidence of an ETC defect. However, the remaining 6 patients exhibited reduced complex I and IV activities. Five of these patients also displayed reduced complex II-III (succinate:cytochrome-c reductase) activity. Individual measurement of complex II and complex III activities demonstrated normal levels of complex II activity compared to complex III, which was reduced in the 5 biopsies assayed. These findings suggest a possible diagnostic value for the detection of normal complex II activity in conjunction with reduced complex I, III and IV activity in the identification of likely candidates for mtDNA depletion syndrome.

  12. Surface active complexes formed between keratin polypeptides and ionic surfactants.

    PubMed

    Pan, Fang; Lu, Zhiming; Tucker, Ian; Hosking, Sarah; Petkov, Jordan; Lu, Jian R

    2016-12-15

    Keratins are a group of important proteins in skin and hair, and as biomaterials they can provide desirable properties such as strength, biocompatibility, and moisture regain and retention. The aim of this work is to develop water-soluble keratin polypeptides from sheep wool and then explore how their surface adsorption behaves with and without surfactants. Successful preparation of keratin samples was demonstrated by identification of the key components from gel electrophoresis and the reproducible production of gram-scale samples with and without SDS (sodium dodecylsulphate) during wool fibre dissolution. SDS micelles could reduce the formation of disulphide bonds between keratins during extraction, reducing inter-molecular crosslinking and improving keratin polypeptide solubility. However, zeta potential measurements of the two polypeptide batches demonstrated almost identical pH-dependent surface charge distributions with isoelectric points around pH 3.5, showing complete removal of SDS during purification by dialysis. In spite of the different solubility of the two batches of keratin samples prepared, very similar adsorption and aggregation behavior was revealed by surface tension measurements and dynamic light scattering. Mixing keratin polypeptides with SDS and C12TAB (dodecyltrimethylammonium bromide) led to the formation of keratin-surfactant complexes that were substantially more effective at reducing surface tension than the polypeptides alone, showing great promise for the delivery of keratin polypeptides via the surface active complexes. Neutron reflection measurements revealed the coexistence of surfactant and keratin polypeptides at the interface, thus providing structural support for the observed surface tension changes associated with the formation of the surface active complexes. Copyright © 2016. Published by Elsevier Inc.

  13. Fatigue reduces the complexity of knee extensor torque fluctuations during maximal and submaximal intermittent isometric contractions in man

    PubMed Central

    Pethick, Jamie; Winter, Samantha L; Burnley, Mark

    2015-01-01

    Neuromuscular fatigue increases the amplitude of fluctuations in torque output during isometric contractions, but the effect of fatigue on the temporal structure, or complexity, of these fluctuations is not known. We hypothesised that fatigue would result in a loss of temporal complexity and a change in fractal scaling of the torque signal during isometric knee extensor exercise. Eleven healthy participants performed a maximal test (5 min of intermittent maximal voluntary contractions, MVCs) and a submaximal test (contractions at a target of 40% MVC performed until task failure), each with a 60% duty factor (6 s contraction, 4 s rest). Torque and surface EMG signals were sampled continuously. Complexity and fractal scaling of torque were quantified by calculating approximate entropy (ApEn), sample entropy (SampEn) and the detrended fluctuation analysis (DFA) scaling exponent α. Fresh submaximal contractions were more complex than maximal contractions (mean ± SEM, submaximal vs. maximal: ApEn 0.65 ± 0.09 vs. 0.15 ± 0.02; SampEn 0.62 ± 0.09 vs. 0.14 ± 0.02; DFA α 1.35 ± 0.04 vs. 1.55 ± 0.03; all P < 0.005). Fatigue reduced the complexity of submaximal contractions (ApEn to 0.24 ± 0.05; SampEn to 0.22 ± 0.04; DFA α to 1.55 ± 0.03; all P < 0.005) and maximal contractions (ApEn to 0.10 ± 0.02; SampEn to 0.10 ± 0.02; DFA α to 1.63 ± 0.02; all P < 0.01). This loss of complexity and shift towards Brownian-like noise suggests that, as well as reducing the capacity to produce torque, fatigue reduces the neuromuscular system's adaptability to external perturbations. PMID:25664928
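
Sample entropy, one of the complexity measures used above, can be computed directly from its definition: the negative log of the conditional probability that sequences matching for m points also match for m+1. This is a generic sketch, not the authors' implementation, and the test signals are illustrative.

```python
import math
import random

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) of a series x, with tolerance r = r_factor * std(x)."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    r = r_factor * sd

    def count_matches(mm):
        # Count template pairs of length mm whose pointwise distance stays <= r.
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if all(abs(x[i + k] - x[j + k]) <= r for k in range(mm)):
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(3)
regular = [math.sin(0.2 * i) for i in range(300)]    # predictable signal
noisy = [random.gauss(0, 1) for _ in range(300)]     # white noise

s_sine = sample_entropy(regular)
s_noise = sample_entropy(noisy)
print(f"SampEn, sine : {s_sine:.3f}")
print(f"SampEn, noise: {s_noise:.3f}")
```

A regular signal has low SampEn and white noise has high SampEn, which is the direction of the fatigue-induced complexity loss reported above (SampEn falling from ~0.62 to ~0.22).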

  14. Beam-induced redox transformation of arsenic during As K-edge XAS measurements: availability of reducing or oxidizing agents and As speciation.

    PubMed

    Han, Young Soo; Jeong, Hoon Young; Hyun, Sung Pil; Hayes, Kim F; Chon, Chul Min

    2018-05-01

    During X-ray absorption spectroscopy (XAS) measurements of arsenic (As), beam-induced redox transformation is often observed. In this study, the As species immobilized by poorly crystallized mackinawite (FeS) were assessed for their susceptibility to beam-induced redox reactions as a function of sample properties, including the redox state of FeS and the solid-phase As speciation. The beam-induced oxidation of reduced As species was found to be mediated by atmospheric O2 and the oxidation products of FeS [e.g. Fe(III) (oxyhydr)oxides and intermediate sulfurs]. Regardless of the redox state of FeS, both arsenic sulfide and surface-complexed As(III) readily underwent photo-oxidation upon exposure to atmospheric O2 during XAS measurements. With strict O2 exclusion, however, both As(0) and arsenic sulfide were less prone to photo-oxidation by Fe(III) (oxyhydr)oxides than NaAsO2 and/or surface-complexed As(III). In the case of unaerated As(V)-reacted FeS samples, surface-complexed As(V) was photocatalytically reduced during XAS measurements, but arsenic sulfide did not undergo photo-reduction.

  15. Understanding scaling through history-dependent processes with collapsing sample space.

    PubMed

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

    History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ∼ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or aging processes such as fragmentation processes. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
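
The SSR process itself is simple to simulate: start at state N, jump to a uniformly chosen lower state, and restart at N once state 1 is reached. Interior states i are then visited with frequency proportional to 1/i, i.e. Zipf's law. The state count and step count below are arbitrary.

```python
import random

random.seed(4)

N = 100
visits = [0] * (N + 1)
state = N
for _ in range(200_000):
    visits[state] += 1
    if state == 1:
        state = N                            # sample space fully collapsed: restart
    else:
        state = random.randint(1, state - 1) # sample space shrinks at every step

total = sum(visits[1:])
p1 = visits[1] / total
p10 = visits[10] / total
print(f"p(1)/p(10) = {p1 / p10:.2f}  (Zipf, ~1/i, predicts ~10)")
```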

  16. Feasibility of diffuse reflectance infrared Fourier spectroscopy (DRIFTS) to quantify iron-cyanide (Fe-CN) complexes in soil

    NASA Astrophysics Data System (ADS)

    Sut-Lohmann, Magdalena; Raab, Thomas

    2017-04-01

    Contaminated sites create a significant risk to human health by poisoning drinking water, soil, air and, as a consequence, food. The continuous release of persistent iron-cyanide (Fe-CN) complexes from various industrial sources poses a high hazard to the environment and creates the need to analyze a considerable number of samples. At present, quantitative determination of the Fe-CN concentration in soil usually requires a time-consuming two-step process: digestion of the sample (e.g., in a micro-distillation system) and its analytical detection, performed, e.g., by automated spectrophotometric flow injection analysis (FIA). In order to determine the feasibility of diffuse reflectance infrared Fourier spectroscopy (DRIFTS) for quantifying Fe-CN complexes in the soil matrix, 42 soil samples were collected (8 to 12,520 mg kg-1 CN), showing a single symmetrical CN band in the range 2092-2084 cm-1. A partial least squares (PLS) calibration-validation model revealed an IR response to CNtot exceeding 1268 mg kg-1 (limit of detection, LOD). Subsequently, leave-one-out cross-validation (LOO-CV) was performed on soil samples containing low CNtot (<900 mg kg-1), which improved the sensitivity of the model by reducing the LOD to 154 mg kg-1. Finally, LOO-CV conducted on the samples with CNtot >900 mg kg-1 resulted in an LOD equal to 3494 mg kg-1. Our results indicate that spectroscopic data in combination with PLS statistics can efficiently be used to predict Fe-CN concentrations in soil. We conclude that the protocol applied in this study can strongly reduce the time and costs required for the spatial and vertical screening of sites affected by complexed Fe-CN.

  17. Vertical blind phase search for low-complexity carrier phase recovery of offset-QAM Nyquist WDM transmission

    NASA Astrophysics Data System (ADS)

    Lu, Jianing; Fu, Songnian; Tang, Haoyuan; Xiang, Meng; Tang, Ming; Liu, Deming

    2017-01-01

    A low-complexity carrier phase recovery (CPR) scheme based on vertical blind phase search (V-BPS) for M-ary offset quadrature amplitude modulation (OQAM) is proposed and numerically verified. After investigating the constellations of both even and odd samples with respect to the phase noise, we identify that CPR can be realized by measuring the verticality of the constellation under different test phase angles. The verticality measure can then be computed without multiplications in the complex plane, keeping the complexity low. Furthermore, a two-stage configuration is put forward to further reduce the computational complexity (CC). Compared with our recently proposed modified blind phase search (M-BPS) algorithm, the proposed algorithm shows comparable tolerance of phase noise, but reduces the CC by a factor of 3.81 (or 3.05) in terms of multipliers (or adders), taking the CPR of 16-OQAM into account.
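
For context, the conventional blind phase search (BPS) that V-BPS improves on can be sketched for plain QPSK: rotate the received block by each test phase and keep the phase minimizing the distance to the nearest constellation point. Test-phase count, symbol count, and noise level are illustrative, and the paper's verticality metric is not reproduced.

```python
import cmath
import math
import random

random.seed(5)

# QPSK constellation on the unit circle.
CONSTELLATION = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

def bps_estimate(symbols, n_test=64):
    """Classical BPS: exhaustive search over the pi/2 QPSK ambiguity range."""
    best_phi, best_cost = 0.0, float("inf")
    for t in range(n_test):
        phi = -math.pi / 4 + (t / n_test) * (math.pi / 2)
        cost = 0.0
        for s in symbols:
            z = s * cmath.exp(-1j * phi)   # de-rotate by the test phase
            cost += min(abs(z - c) ** 2 for c in CONSTELLATION)
        if cost < best_cost:
            best_phi, best_cost = phi, cost
    return best_phi

true_phase = 0.3
tx = [random.choice(CONSTELLATION) for _ in range(200)]
rx = [s * cmath.exp(1j * true_phase) * (1 + random.gauss(0, 0.02)) for s in tx]
est = bps_estimate(rx)
print(f"true phase: {true_phase:.3f}, estimated: {est:.3f}")
```

Each test phase costs a complex multiplication per symbol; avoiding exactly that per-symbol complex arithmetic is where the verticality-based scheme claims its savings.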

  18. A Genome-Wide Association Study of Autism Incorporating Autism Diagnostic Interview-Revised, Autism Diagnostic Observation Schedule, and Social Responsiveness Scale

    ERIC Educational Resources Information Center

    Connolly, John J.; Glessner, Joseph T.; Hakonarson, Hakon

    2013-01-01

    Efforts to understand the causes of autism spectrum disorders (ASDs) have been hampered by genetic complexity and heterogeneity among individuals. One strategy for reducing complexity is to target endophenotypes, simpler biologically based measures that may involve fewer genes and constitute a more homogenous sample. A genome-wide association…

  19. Evaluation of the Efficiency of the Sample Inactivation Reagent in the Abbott RealTime MTB Assay for Inactivation of Mycobacterium tuberculosis

    PubMed Central

    Wallis, Carole; Pahalawatta, Vihanga; Frank, Andrea; Ramdin, Neeshan; Viana, Raquel; Abravaya, Klara; Leckie, Gregor; Tang, Ning

    2015-01-01

    The Abbott RealTime MTB assay is a nucleic acid amplification test (NAAT) for the detection of Mycobacterium tuberculosis complex DNA. The sample inactivation procedure used in the assay, consisting of one part sample treated with 3 parts inactivation reagent for 60 min, effectively reduced viscosity and inactivated M. tuberculosis in clinical specimens. PMID:26085611

  20. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Various weighting methods have been proposed (Korn & Graubard, 2003; Pfeffermann, Skinner,…

  1. Grouping methods for estimating the prevalences of rare traits from complex survey data that preserve confidentiality of respondents.

    PubMed

    Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I

    2018-03-26

    Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating the prevalences of a single trait or multiple traits from testing groups of size q, so that individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample-weighted complex multistage-cluster designs. Sample-weighted generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) how does one weight the test result for each group, given that the sample weights differ among observations in the same group? Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation. (2) How does one form groups that allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling, allowing for intracluster correlation of the test results? We study 5 different grouping methods to address the weighting and cluster-sampling aspects of complex designed samples. Finite-sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
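
In the equal-weight, error-free special case, 1-stage group testing reduces to inverting the group-level positivity rate: P(group positive) = 1 - (1 - p)^q. The prevalence, group size, and group count below are illustrative, and none of the paper's weighting or clustering machinery is included.

```python
import random

random.seed(6)

p_true = 0.05     # individual trait prevalence
q = 10            # group size
n_groups = 5000

# Only the pooled group result is observed; individuals are never tested.
positives = 0
for _ in range(n_groups):
    group = [random.random() < p_true for _ in range(q)]
    if any(group):
        positives += 1

phat_group = positives / n_groups
# Invert P(group positive) = 1 - (1 - p)^q for the individual prevalence.
p_est = 1 - (1 - phat_group) ** (1 / q)
print(f"estimated prevalence: {p_est:.4f} (true {p_true})")
```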

  2. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    PubMed Central

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  3. Noise Estimation in Electroencephalogram Signal by Using Volterra Series Coefficients

    PubMed Central

    Hassani, Malihe; Karami, Mohammad Reza

    2015-01-01

    The Volterra model is widely used for nonlinearity identification in practical applications. In this paper, we employ the Volterra model to find the nonlinear relation between the electroencephalogram (EEG) signal and the noise, which is a novel approach to noise estimation in EEG signals. We show that by employing this method we can considerably improve the signal-to-noise ratio, by a ratio of at least 1.54. An important issue in implementing the Volterra model is its computational complexity, especially when the degree of nonlinearity is increased. Hence, in many applications it is important to reduce the complexity of the computation. In this paper, we use the properties of the EEG signal and propose a new, good approximation of the delayed input signal by its adjacent samples in order to reduce the computation of finding the Volterra series coefficients. The computational complexity is reduced by a ratio of at least 1/3 when the filter memory is 3. PMID:26284176
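
A generic second-order discrete Volterra filter, the model class referred to above; the kernels and input are invented for illustration and are not the paper's EEG noise model. The output combines a linear FIR term with a quadratic term over all pairs of delayed inputs, which is why the cost grows quickly with the memory length.

```python
# Second-order Volterra filter with memory M = len(h1):
# y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i][j] x[n-i] x[n-j]
def volterra2(x, h1, h2):
    M = len(h1)
    y = []
    for n in range(len(x)):
        window = [x[n - i] if n - i >= 0 else 0.0 for i in range(M)]
        lin = sum(h1[i] * window[i] for i in range(M))
        quad = sum(h2[i][j] * window[i] * window[j]
                   for i in range(M) for j in range(M))
        y.append(lin + quad)
    return y

h1 = [1.0, 0.5, 0.25]
h2 = [[0.1, 0.0, 0.0],
      [0.0, 0.05, 0.0],
      [0.0, 0.0, 0.02]]
out = volterra2([1.0, 2.0, 3.0], h1, h2)
print(out)   # [1.1, 2.95, 5.37] up to floating-point rounding
```

With memory M, the quadratic kernel alone needs O(M^2) multiplications per output sample; approximating delayed samples by their neighbours, as proposed above, shrinks the number of distinct products to evaluate.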

  4. Complex-formation between reduced xanthine oxidase and purine substrates demonstrated by electron paramagnetic resonance

    PubMed Central

    Pick, Frances M.; Bray, R. C.

    1969-01-01

    The origin of the Rapid molybdenum electron-paramagnetic-resonance signals, which are obtained on reducing xanthine oxidase with purine or with xanthine, and whose parameters were measured by Bray & Vänngård (1969), was studied. It is concluded that these signals represent complexes of reduced enzyme with substrate molecules. Xanthine forms one complex at high concentrations and a different one at low concentrations. Purine forms a complex indistinguishable from the low-concentration xanthine complex. There are indications that some other substrates also form complexes, but uric acid, a reaction product, does not appear to do so. The possible significance of the complexes in the catalytic cycle of the enzyme is discussed and it is suggested that they represent substrate molecules bound at the reduced active site, waiting their turn to react there, when the enzyme has been reoxidized. Support for this role for the complexes was deduced from experiments in which frozen samples of enzyme–xanthine mixtures, prepared by the rapid-freezing method, were warmed until the signals began to change. Under these conditions an increase in amplitude of the Very Rapid signal took place. Data bearing on the origin of the Slow molybdenum signal are also discussed. This signal disappears only slowly in the presence of oxygen, and its appearance rate is unaffected by change in the concentration of dithionite. It is concluded that, like other signals from the enzyme, it is due to Mo(V) but that a slow change of ligand takes place before it is seen. The Slow species, like the Rapid, seems capable of forming complexes with purines. PMID:4310056

  5. Conventional and Advanced Separations in Mass Spectrometry-Based Metabolomics: Methodologies and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heyman, Heino M.; Zhang, Xing; Tang, Keqi

    2016-02-16

    Metabolomics is the quantitative analysis of all metabolites in a given sample. Due to the chemical complexity of the metabolome, optimal separations are required for comprehensive identification and quantification of sample constituents. This chapter provides an overview of both conventional and advanced separation methods in practice for reducing the complexity of metabolite extracts delivered to the mass spectrometer, covering gas chromatography (GC), liquid chromatography (LC), capillary electrophoresis (CE), supercritical fluid chromatography (SFC), and ion mobility spectrometry (IMS) coupled with mass spectrometry (MS), in both one-dimensional and multidimensional configurations.

  6. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity, and leads to a step-by-step procedure. © 1988.

  7. Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.

    PubMed

    Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas

    2010-06-25

    Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity-based methods have been shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of such immunoaffinity-based approaches is the availability of appropriate peptide-specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
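    After the filtering pipeline, choosing a minimal set of terminal epitopes is a classic set cover instance, and the standard heuristic for large instances is the greedy algorithm: repeatedly pick the epitope that captures the most still-uncovered proteins. A sketch with hypothetical epitope and protein names (the paper uses exact solvers where feasible):

```python
def greedy_epitope_cover(proteins, epitope_hits):
    """Greedy set-cover heuristic. epitope_hits maps each candidate terminal
    epitope to the set of target proteins it would capture. Returns the chosen
    epitopes and any proteins left uncoverable."""
    uncovered = set(proteins)
    chosen = []
    while uncovered:
        # pick the epitope with maximal marginal coverage of uncovered proteins
        best = max(epitope_hits, key=lambda e: len(epitope_hits[e] & uncovered))
        gain = epitope_hits[best] & uncovered
        if not gain:
            break  # remaining proteins have no covering epitope
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered
```

    The greedy choice carries the well-known logarithmic approximation guarantee for set cover, which is why it scales to proteome-sized target lists where exact integer programming becomes impractical.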

  8. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a novel approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further employ the Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform PPCA for dimension reduction and supervised classification of cancer groups in high dimensions. PMID:25838836
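    The failure mode motivating this record, a singular sample covariance whenever the number of features p exceeds the number of samples n, and a shrinkage-style fix can be illustrated with a simple smoothed estimator toward a scaled identity. This is a generic stand-in, not the paper's maximum-entropy/hybridized estimators:

```python
import numpy as np

def smoothed_covariance(X, lam=0.1):
    """Convex shrinkage of the sample covariance toward a scaled identity.
    X: n x p data matrix with n < p, so the raw sample covariance is singular;
    the shrunk estimate is full rank for any lam > 0."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)  # p x p, rank <= n - 1 < p
    target = (np.trace(S) / p) * np.eye(p)  # scaled-identity target
    return (1.0 - lam) * S + lam * target

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))           # n = 10 samples, p = 50 features
S_hat = smoothed_covariance(X, lam=0.2)     # invertible despite n << p
```

    Because the target is a positive multiple of the identity, every eigenvalue of the shrunk estimate is at least lam times the average variance, so downstream steps such as PPCA that require an invertible covariance become well posed.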

  9. Influence of Protein-Phenolic Complex on the Antioxidant Capacity of Flaxseed (Linum usitatissimum L.) Products.

    PubMed

    Guimarães Drummond E Silva, Fernanda; Miralles, Beatriz; Hernández-Ledesma, Blanca; Amigo, Lourdes; Iglesias, Amadeu Hoshi; Reyes Reyes, Felix Guillermo; Netto, Flavia Maria

    2017-02-01

    The impact of the naturally present phenolic compounds and/or proteins on the antioxidant capacity of flaxseed products (phenolic fraction, protein concentrates, and hydrolysates) before and after simulated gastrointestinal digestion was studied. For that purpose, whole and phenolic-reduced products were assessed. Four glycosylated phenolic compounds (secoisolariciresinol and ferulic, p-coumaric, and caffeic acids) were identified in the flaxseed products. The phenolic fraction exerted the highest antioxidant capacity, which increased upon alkaline hydrolysis and simulated gastrointestinal digestion. The action of Alcalase and digestive enzymes increased the antioxidant capacity of both whole and phenolic-reduced products. Principal component analysis showed that proteinaceous samples act as antioxidants by H+ transfer, while samples containing phenolic compounds exert their effects by both electron-donation and H+-transfer mechanisms. Protein/peptide-phenolic complexation, confirmed by fluorescence spectra, had a positive effect on the antioxidant capacity, mainly in the protein concentrates.

  10. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
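    A hybrid score of this kind combines an exploration term (distance to existing samples) with an exploitation term (disagreement between the surrogate and a local first-order Taylor prediction). The one-dimensional sketch below is an illustrative reading of that idea, not TEAD's exact score function or stopping rule:

```python
import numpy as np

def adaptive_next_sample(xs, ys, candidates):
    """Pick the next training point by a hybrid exploration/exploitation score.
    xs, ys: 1-D training data; candidates: 1-D pool of candidate locations.
    Illustrative sketch: weights and surrogate choice are simplified."""
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    grads = np.gradient(ys, xs)                   # crude derivative estimates
    scores = []
    for c in candidates:
        i = int(np.argmin(np.abs(xs - c)))        # nearest training sample
        d = abs(xs[i] - c)                        # exploration: distance term
        surrogate = np.interp(c, xs, ys)          # piecewise-linear surrogate
        taylor = ys[i] + grads[i] * (c - xs[i])   # first-order Taylor prediction
        scores.append(d + abs(surrogate - taylor))  # hybrid score
    return candidates[int(np.argmax(scores))]
```

    On y = x^2 with training points at x = 0, 1, 2, 3, a candidate in the high-curvature region (x = 2.5) beats one in the flatter region (x = 0.5), because the surrogate and the Taylor prediction disagree more where the model curves.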

  11. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
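    The energy argument rests on the sampling matrix being Boolean: with {0,1} entries, the matrix-vector projection needs only additions of selected signal samples, no multiplications. A minimal sketch of the acquisition step with a random Boolean matrix and a synthetic sparse signal (illustrative; the paper's matrix is optimized from data rather than random):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 16                                        # signal length, measurements (m << n)
x = np.zeros(n)
x[[3, 20, 41]] = [1.5, -2.0, 0.7]                    # synthetic sparse signal
Phi = rng.integers(0, 2, size=(m, n)).astype(float)  # random Boolean sampling matrix
y = Phi @ x                                          # each y[k] sums selected entries of x
```

    Because each row of Phi only selects and sums entries of x, the acquisition front end can implement the projection with switches and an accumulator, which is where the reported energy savings over real-valued embeddings come from.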

  12. The hydroxyl-functionalized magnetic particles for purification of glycan-binding proteins.

    PubMed

    Sun, Xiuxuan; Yang, Ganglong; Sun, Shisheng; Quan, Rui; Dai, Weiwei; Li, Bin; Chen, Chao; Li, Zheng

    2009-12-01

    Glycan-protein interactions play important roles in biological processes. Although methods such as glycan arrays can elucidate recognition events between carbohydrates and proteins and screen for important glycan-binding proteins (GBPs), a simple and effective separation method to purify them from complex samples has been lacking. In proteomics studies, fractionation of samples helps to reduce their complexity and to enrich specific classes of proteins for subsequent downstream analyses. Herein, a rapid and simple method for purification of glycan-binding proteins from proteomic samples was developed using hydroxyl-coated magnetic particles coupled with underivatized carbohydrates. First, the epoxy-coated magnetic particles were hydroxyl-functionalized with 4-hydroxybenzhydrazide; the carbohydrates were then efficiently immobilized on the hydroxyl-functionalized surface of the magnetic particles via condensation, forming a glycosidic bond with the hemiacetal group at the reducing end of the carbohydrate. All conditions of the method were optimized. The magnetic particle-carbohydrate conjugates were used to purify glycan-binding proteins from human serum, and the fractionated glycan-binding protein population was displayed by SDS-PAGE. The results showed that 1 mg of magnetic particles coupled 10 micromol of mannose in acetate buffer (pH 5.4), and that the fractionated glycan-binding protein population in human serum could be eluted from the magnetic particle-mannose conjugates with 0.1% SDS. The methodology can work together with glycan microarrays for screening and purification of important GBPs from complex protein samples.

  13. Predicting protein interactions by Brownian dynamics simulations.

    PubMed

    Meng, Xuan-Yu; Xu, Yu; Zhang, Hong-Xing; Mezei, Mihaly; Cui, Meng

    2012-01-01

    We present a newly adapted Brownian-Dynamics (BD)-based protein docking method for predicting native protein complexes. The approach includes global BD conformational sampling, compact complex selection, and local energy minimization. In order to reduce the computational cost of energy evaluations, a shell-based grid force field was developed to represent the receptor protein and solvation effects. The performance of this BD protein docking approach has been evaluated on a test set of 24 crystal protein complexes. Reproduction of the experimental structures in the test set indicates adequate conformational sampling and accurate scoring by this BD protein docking approach. Furthermore, we have developed an approach to account for the flexibility of proteins, which has been successfully applied to reproduce an experimental complex structure from the structures of two unbound proteins. These results indicate that this adapted BD protein docking approach can be useful for the prediction of protein-protein interactions.

  14. Evaluation of the Efficiency of the Sample Inactivation Reagent in the Abbott RealTime MTB Assay for Inactivation of Mycobacterium tuberculosis.

    PubMed

    Qi, Chao; Wallis, Carole; Pahalawatta, Vihanga; Frank, Andrea; Ramdin, Neeshan; Viana, Raquel; Abravaya, Klara; Leckie, Gregor; Tang, Ning

    2015-09-01

    The Abbott RealTime MTB assay is a nucleic acid amplification test (NAAT) for the detection of Mycobacterium tuberculosis complex DNA. The sample inactivation procedure used in the assay, consisting of one part sample treated with 3 parts inactivation reagent for 60 min, effectively reduced viscosity and inactivated M. tuberculosis in clinical specimens. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  15. Characterizing the Structure and Oligomerization of Major Royal Jelly Protein 1 (MRJP1) by Mass Spectrometry and Complementary Biophysical Tools.

    PubMed

    Mandacaru, Samuel C; do Vale, Luis H F; Vahidi, Siavash; Xiao, Yiming; Skinner, Owen S; Ricart, Carlos A O; Kelleher, Neil L; de Sousa, Marcelo Valle; Konermann, Lars

    2017-03-21

    Royal jelly (RJ) triggers the development of female honeybee larvae into queens. This effect has been attributed to the presence of major royal jelly protein 1 (MRJP1) in RJ. MRJP1 isolated from royal jelly is tightly associated with apisimin, a 54-residue α-helical peptide that promotes the noncovalent assembly of MRJP1 into multimers. No high-resolution structural data are available for these complexes, and their binding stoichiometry remains uncertain. We examined MRJP1/apisimin using a range of biophysical techniques. We also investigated the behavior of deglycosylated samples, as well as samples with reduced apisimin content. Our mass spectrometry (MS) data demonstrate that the native complexes predominantly exist in a (MRJP1)₄(apisimin)₄ stoichiometry. Hydrogen/deuterium exchange MS reveals that MRJP1 within these complexes is extensively disordered in the range of residues 20–265. Marginally stable secondary structure (likely antiparallel β-sheet) exists around residues 266–432. These weakly structured regions interchange with conformers that are extensively unfolded, giving rise to bimodal (EX1) isotope distributions. We propose that the native complexes have a "dimer of dimers" quaternary structure in which MRJP1 chains are bridged by apisimin. Specifically, our data suggest that apisimin acts as a linker that forms hydrophobic contacts involving the MRJP1 segment VLFFGLV (residues 316–322). Deglycosylation produces large soluble aggregates, highlighting the role of glycans as aggregation inhibitors. Samples with reduced apisimin content form dimeric complexes with a (MRJP1)₂(apisimin)₁ stoichiometry. The information uncovered in this work will help pave the way toward a better understanding of the unique physiological role played by MRJP1 during queen differentiation.

  16. Recombinant Reflectin-Based Optical Materials

    DTIC Science & Technology

    2012-01-01

    silicon substrates were placed in a sealed plastic box. The RH was controlled using a Dydra electronic cigar humidifier and monitored using a Fisher...diffraction gratings to generate diffraction patterns. Nano-spheres and lamellar microstructures of refCBA samples were observed by scanning electron microscopy and atomic force microscopy. Despite the reduced complexity of the refCBA protein compared to natural

  17. Longitudinal On-Column Thermal Modulation for Comprehensive Two-Dimensional Liquid Chromatography.

    PubMed

    Creese, Mari E; Creese, Mathew J; Foley, Joe P; Cortes, Hernan J; Hilder, Emily F; Shellie, Robert A; Breadmore, Michael C

    2017-01-17

    Longitudinal on-column thermal modulation for comprehensive two-dimensional liquid chromatography is introduced. Modulation optimization involved a systematic investigation of heat transfer, analyte retention, and migration velocity at a range of temperatures. Longitudinal on-column thermal modulation was realized using a set of alkylphenones and compared to a conventional valve modulator employing sample loops. The thermal modulator showed a smaller modulation-induced pressure impact than valve modulation, reducing baseline perturbation by a factor of 6 and yielding a 6-14-fold improvement in signal-to-noise ratio. A red wine sample was analyzed to demonstrate the potential of the longitudinal on-column thermal modulator for separation of a complex sample. Discrete peaks in the second dimension using the thermal modulator were 30-55% narrower than with the valve modulator. The results shown herein demonstrate the benefits of an active focusing modulator, such as reduced detection limits and increased total peak capacity.

  18. Fast and Robust STEM Reconstruction in Complex Environments Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.

    2016-06-01

    Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. The algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. The algorithm is further accelerated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% retains the stem-fitting quality while reducing the computation time by ~88%.

  19. A novel differential pulse voltammetric (DPV) method for measuring the antioxidant capacity of polyphenols-reducing cupric neocuproine complex.

    PubMed

    Tufan, Ayşe Nur; Baki, Sefa; Güçlü, Kubilay; Özyürek, Mustafa; Apak, Reşat

    2014-07-23

    A novel differential pulse voltammetric (DPV) method is presented, using a chromogenic oxidizing reagent, cupric neocuproine complex (Cu(Nc)2(2+)), for the assessment of antioxidant capacity of polyphenolic compounds (i.e., flavonoids, simple phenolic acids, and hydroxycinnamic acids), ascorbic acid, and real samples for the first time. The electrochemical behavior of the Cu(Nc)2(2+) complex was studied by cyclic voltammetry at a glassy carbon (GC) electrode. The electroanalytical method was based on the reduction of Cu(Nc)2(2+) to Cu(Nc)2(+) by antioxidants and electrochemical detection of the remaining Cu(II)-Nc (unreacted complex), the difference being correlated to antioxidant capacity of the analytes. The calibration curves of individual compounds comprising polyphenolics and vitamin C were constructed, and their response sensitivities and linear concentration ranges were determined. The reagent on the GC electrode retained its reactivity toward antioxidants, and the measured trolox equivalent antioxidant capacity (TEAC) values of various antioxidants suggested that the reactivity of the Cu(II)-Nc reagent is comparable to that of the solution-based spectrophotometric cupric ion reducing antioxidant capacity (CUPRAC) assay. This electroanalytical method better tolerated sample turbidity and provided higher sensitivity (i.e., lower detection limits) in antioxidant determination than the spectrophotometric assay. The proposed method was successfully applied to the measurement of total antioxidant capacity (TAC) in some herbal tea samples such as green tea, sage, marjoram, and alchemilla. Results demonstrated that the proposed voltammetric method has precision and accuracy comparable to those of the spectrophotometric CUPRAC assay.

  20. Enhanced conformational sampling using replica exchange with concurrent solute scaling and Hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space, or to use multiple biasing potentials to improve sampling efficiency. However, the increased computational cost due to the multidimensionality of exchanges becomes challenging for complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm that concurrently combines the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem and its interactions with the environment to enhance overall conformational transitions, and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently, allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D and thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for conformational sampling in a wide range of macromolecular systems.
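    Whatever the replica layout, the exchange step in REX-type methods is a Metropolis test on the energy change induced by swapping two configurations between neighbouring Hamiltonians. A generic sketch of that acceptance rule (HREST-BP's specific scaled Hamiltonians and biasing potentials are not modeled here):

```python
import math
import random

def rex_swap_accepted(E_ii, E_jj, E_ij, E_ji, beta=1.0):
    """Metropolis criterion for swapping configurations i and j between
    neighbouring replicas. E_ab = energy of configuration a evaluated under
    replica b's Hamiltonian; beta = 1/kT (a single temperature, as in
    Hamiltonian replica exchange)."""
    delta = beta * ((E_ij + E_ji) - (E_ii + E_jj))
    if delta <= 0.0:
        return True  # swapped state lowers total energy: always accept
    return random.random() < math.exp(-delta)
```

    When the Hamiltonians of adjacent replicas overlap well, delta is small and swaps are frequent, which is why a short ladder of 6 to 8 replicas can suffice in a well-designed 1D scheme.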

  1. Searching for microbial protein over-expression in a complex matrix using automated high throughput MS-based proteomics tools.

    PubMed

    Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob

    2013-03-10

    In the discovery of new enzymes, genomic and cDNA expression libraries containing thousands of differential clones are generated to obtain biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow based on mass spectrometry techniques for high-throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS(2) approach including MS(2) protein identification was developed for microorganisms with a complex protein secretome, with a throughput of ∼2000 samples per week. TCA-induced protein precipitation, enhanced by addition of bovine serum albumin, is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detecting successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE PAGES

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...

    2017-12-27

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  3. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  4. Reduced Pms2 expression in non-neoplastic flat mucosa from patients with colon cancer correlates with reduced apoptosis competence.

    PubMed

    Bernstein, Harris; Prasad, Anil; Holubec, Hana; Bernstein, Carol; Payne, Claire M; Ramsey, Lois; Dvorakova, Katerina; Wilson, Megan; Warneke, James A; Garewal, Harinder

    2006-06-01

    Pms2 protein is a component of the DNA mismatch repair complex responsible both for post-replication correction of DNA nucleotide mispairs and for early steps in apoptosis. Germline mutations in DNA mismatch repair genes give rise to hereditary non-polyposis colon cancer, which accounts for about 4% of colon cancers. However, little is known about the expression of mismatch repair proteins in relation to sporadic colon cancer, which accounts for the great majority of colon cancers. Multiple samples were taken from the non-neoplastic flat mucosa of colon resections from patients with no colonic neoplasia, a tubulovillous adenoma, or an adenocarcinoma. Expression of Pms2 was assessed using semiquantitative immunohistochemistry. Apoptosis was assessed in polychrome-stained epoxy sections using morphologic criteria. Samples from patients without colonic neoplasia had moderate to strong staining for Pms2 in cell nuclei at the base of crypts, while samples from 2 of the 3 colons with a tubulovillous adenoma, and from 6 of the 10 colons with adenocarcinomas, showed reduced Pms2 expression. Samples from patients with an adenocarcinoma that had reduced Pms2 expression also exhibited reduced apoptosis capability in nearby tissue samples, evidenced when this paired tissue was stressed ex vivo with bile acid. Reduced Pms2 expression in the colonic mucosa may be an early step in progression to colon cancer. This reduction may cause decreased mismatch repair, increased genetic instability, and/or reduced apoptotic capability. Immunohistochemical determination of reduced Pms2 expression, upon further testing, may prove to be a promising early biomarker of risk of progression to malignancy.

  5. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
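    The claimed scaling, noise reduced by one over the square root of the number of samplings, follows from averaging independent read-noise realizations and is easy to verify numerically. A Monte Carlo sketch with hypothetical noise figures (not the sensor's measured values):

```python
import numpy as np

rng = np.random.default_rng(7)
true_level = 0.35    # hypothetical pixel output (arbitrary units)
sigma = 0.01         # hypothetical per-conversion read noise
n_samples = 16       # number of repeated conversions per pixel

# 100,000 pixels, each converted n_samples times with independent noise
reads = true_level + sigma * rng.standard_normal((100_000, n_samples))
single = reads[:, 0].std()           # noise of a single conversion
averaged = reads.mean(axis=1).std()  # noise after averaging n_samples conversions
```

    With 16 samplings the noise drops by a factor of about sqrt(16) = 4. (For comparison, the record's measured improvement, 848.3 μV to 270.4 μV, corresponds to a factor of about 3.1.)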

  6. Digital adaptive optics line-scanning confocal imaging system.

    PubMed

    Liu, Changgeng; Kim, Myung K

    2015-01-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging. In DAOLCI, each line scan is recorded as a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with hardware components such as the Shack–Hartmann wavefront sensor and deformable mirror, and with the closed-loop feedback adopted in conventional adaptive optics confocal imaging systems, thus reducing optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.

  7. Quantitation of proteins using a dye-metal-based colorimetric protein assay.

    PubMed

    Antharavally, Babu S; Mallia, Krishna A; Rangaraj, Priya; Haney, Paul; Bell, Peter A

    2009-02-15

    We describe a dye-metal (polyhydroxybenzenesulfonephthalein-type dye and a transition metal) complex-based total protein determination method. The binding of the complex to protein causes a shift in the absorption maximum of the dye-metal complex from 450 to 660 nm. The dye-metal complex has a reddish brown color that changes to green on binding to protein. The color produced from this reaction is stable and increases in a proportional manner over a broad range of protein concentrations. The new Pierce 660 nm Protein Assay is very reproducible, rapid, and more linear compared with the Coomassie dye-based Bradford assay. The assay reagent is room temperature stable, and the assay is a simple and convenient mix-and-read format. The assay has a moderate protein-to-protein variation and is compatible with most detergents, reducing agents, and other commonly used reagents. This is an added advantage for researchers needing to determine protein concentrations in samples containing both detergents and reducing agents.

  8. Measurement of complex permittivities of biological materials and human skin in vivo in the frequency band

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghodgaonkar, D.K.

    1987-01-01

    A new method, namely, the modified infinite sample method, has been developed that is particularly suitable for millimeter-wave dielectric measurements of biological materials. In this method, an impedance transformer is used to reduce the reflectivity of the biological sample. Because of the impedance transformer, the measured reflection coefficients are more sensitive to the complex permittivities of biological samples. For accurate measurement of reflection coefficients, two automated measurement systems were developed covering the frequency range of 26.5-60 GHz. An uncertainty analysis was performed to estimate the errors in the measured complex permittivities. The dielectric properties were measured for 10% saline solution, whole human blood, 200 mg/ml bovine serum albumin (BSA) solution, and a suspension of Saccharomyces cerevisiae cells. The Maxwell-Fricke equation, derived from dielectric mixture theory, was used to determine bound water in the BSA solution. The results for all biological samples were interpreted by fitting Debye relaxation and Cole-Cole models. It is observed that the dielectric data for the biological materials can be explained on the basis of Debye relaxation of water molecules.

  9. A practical approach to language complexity: a Wikipedia case study.

    PubMed

    Yasseri, Taha; Kornai, András; Kertész, János

    2012-01-01

    In this paper we present a statistical analysis of English texts from Wikipedia. We address the issue of language complexity empirically by comparing the Simple English Wikipedia (Simple) to comparable samples of the main English Wikipedia (Main). Simple is supposed to use a more simplified language with a limited vocabulary, and editors are explicitly requested to follow this guideline, yet in practice the vocabulary richness of both samples is at the same level. Detailed analysis of longer units (n-grams of words and part-of-speech tags) shows that the language of Simple is less complex than that of Main primarily due to the use of shorter sentences, as opposed to drastically simplified syntax or vocabulary. Comparing the two language varieties by the Gunning readability index supports this conclusion. We also report on the topical dependence of language complexity: the language is more advanced in conceptual articles than in person-based (biographical) and object-based articles. Finally, we investigate the relation between conflict and language complexity by analyzing the content of the talk pages associated with controversial and peacefully developing articles, concluding that controversy has the effect of reducing language complexity.
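The Gunning readability index mentioned above is straightforward to compute. The sketch below uses a crude vowel-group syllable counter, a common approximation; production implementations handle suffixes and exception lists.

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups (enough for an illustration)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    """Gunning fog index: 0.4 * (words/sentence + 100 * complex_words/words),
    where 'complex' words have three or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))
```

Shorter sentences shrink the first term of the index, which matches the paper's finding that Simple's lower complexity comes mainly from sentence length rather than vocabulary.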

  10. Preparation of protein samples for mass spectrometry and N-terminal sequencing.

    PubMed

    Glenn, Gary

    2014-01-01

    The preparation of protein samples for mass spectrometry and N-terminal sequencing is a key step in successfully identifying proteins. Mass spectrometry is a very sensitive technique, and as such, samples must be prepared carefully since they can be subject to contamination of the sample (e.g., due to incomplete subcellular fractionation or purification of a multiprotein complex), overwhelming of the sample by highly abundant proteins, and contamination from skin or hair (keratin can be a very common hit). One goal of sample preparation for mass spec is to reduce the complexity of the sample - in the example presented here, mitochondria are purified, solubilized, and fractionated by sucrose density gradient sedimentation prior to preparative 1D SDS-PAGE. It is important to verify the purity and integrity of the sample so that you can have confidence in the hits obtained. More protein is needed for N-terminal sequencing and ideally it should be purified to a single band when run on an SDS-polyacrylamide gel. The example presented here involves stably expressing a tagged protein in HEK293 cells and then isolating the protein by affinity purification and SDS-PAGE. © 2014 Elsevier Inc. All rights reserved.

  11. A reduced transcriptome approach to assess environmental toxicants using zebrafish embryo tests

    EPA Science Inventory

    This paper reports on the pilot testing of a new bioassay platform that monitors expression of 1600 genes in zebrafish embryos exposed to either single chemicals or complex water samples. The method provides a more cost effective, high throughput means to broadly evaluate the pot...

  12. Sample preparation techniques for the determination of trace residues and contaminants in foods.

    PubMed

    Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M

    2007-06-15

    The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.

  13. Complex fine-scale diffusion coating formed at low temperature on high-speed steel substrate

    NASA Astrophysics Data System (ADS)

    Chaus, A. S.; Pokorný, P.; Čaplovič, Ľ.; Sitkevich, M. V.; Peterka, J.

    2018-04-01

    A complex B-C-N diffusion coating was produced at 580 °C for 1 h on an AISI M35 steel substrate and compared with a reference coating formed at 880 °C for 2.5 h. The surface and cross-sections of the samples were subjected to detailed characterisation. The surface roughness, hardness, residual stresses and adhesion of the coatings were also evaluated, together with cutting tests using drills on coated and uncoated samples while monitoring cutting force and torque. The surface of the steel treated at 580 °C revealed Fe2B, boron nitride and boron iron carbide, but FeB was absent. The 580 °C coating had a fine-scale microstructure, which resulted in excellent adhesion and enhanced wear resistance relative to reference samples that contained coarse borides. The results established that the complex fine-scale diffusion coating enhanced wear resistance and reduced the cutting force and torque during drilling, thereby increasing drill life by a factor of 2.2.

  14. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, but a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.
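As a loose, one-dimensional illustration of the idea (the actual method clusters in the ROM's design space and differs in detail), a minimal fuzzy c-means can flag the region between clusters, where cluster membership is most ambiguous, as a candidate location for an extra sampling point:

```python
def fuzzy_cmeans_1d(points, centers, m=2.0, iters=100):
    """Minimal 1-D fuzzy c-means: returns refined centers and the membership
    matrix u, where u[j][i] is point j's degree of belonging to cluster i."""
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - c) + 1e-12 for c in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(len(centers)))
                      for i in range(len(centers))])
        # each center becomes the membership-weighted mean of all points
        centers = [sum((u[j][i] ** m) * points[j] for j in range(len(points)))
                   / sum(u[j][i] ** m for j in range(len(points)))
                   for i in range(len(centers))]
    return centers, u

def most_ambiguous(candidates, centers, m=2.0):
    """Adding-point heuristic (two clusters): the candidate whose membership is
    closest to an even split lies between clusters, where the surrogate is
    least trusted in this toy picture."""
    def ambiguity(x):
        d = [abs(x - c) + 1e-12 for c in centers]
        u0 = 1.0 / sum((d[0] / d[l]) ** (2.0 / (m - 1.0))
                       for l in range(len(centers)))
        return -abs(u0 - 0.5)
    return max(candidates, key=ambiguity)

snapshots = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]      # toy 1-D sampling locations
centers, memberships = fuzzy_cmeans_1d(snapshots, centers=[0.0, 5.0])
new_point = most_ambiguous([0.5, 1.5, 2.6, 4.0], centers)
```

Here the chosen candidate falls midway between the two snapshot clusters, mimicking how the paper's strategy targets regions of low ROM precision.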

  15. Optimizing liquid effluent monitoring at a large nuclear complex.

    PubMed

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive isotope-specific analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US$223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability, coupled with process knowledge, can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
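The exceedance-probability calculation in objective (3) can be sketched with the standard library, assuming approximately normal baseline data; the analyte values below are invented for illustration.

```python
from statistics import NormalDist, mean, stdev

def exceedance_probability(baseline, limit):
    """P(next observation exceeds the permit limit), assuming the baseline
    data are approximately normal."""
    dist = NormalDist(mean(baseline), stdev(baseline))
    return 1.0 - dist.cdf(limit)

# Hypothetical baseline concentrations for one analyte (invented numbers).
baseline = [10.0, 11.0, 9.0, 10.0, 10.5, 9.5]
p_limit = exceedance_probability(baseline, limit=20.0)
```

Analytes whose exceedance probability is negligible across the baseline period are natural candidates for a reduced sampling frequency or removal from the analyte list.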

  16. [A novel protein equalizer based on single chain variable fragment display M13 phage library for nephropathy patient urine study].

    PubMed

    Zhao, Peng; Tao, Dingyin; Liang, Zhen; Zhang, Lihua; Zhang, Yukui

    2009-05-01

    A novel protein equalizer was developed with a single chain variable fragment (scFv) library-displaying M13 phage covalently bonded to monolithic cryogel. Owing to the great number and variety of displayed scFv fragments, as well as the strong and specific binding between scFv fragments and proteins, the new protein equalizer technology is well suited to the pretreatment of complex protein samples. After the sample was dissolved in phosphate buffer solution (PBS), it was loaded onto the equalizer five times; the bound proteins were eluted in sequence with 2 mol/L NaCl and 50 mmol/L Gly-HCl (pH 2.5) solution, followed by digestion with thrombin. All proteins or peptides collected from each fraction were further analyzed by reversed-phase liquid chromatography-electrospray ionization tandem mass spectrometry (RPLC-ESI-MS/MS) with a serially coupled long microcolumn. Compared with the untreated sample, the number of identified proteins increased from 142 to 396. Furthermore, sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) showed that the protein concentration difference was obviously reduced in the eluent from direct sample loading, and most high-abundance proteins were identified in the NaCl eluent. These results demonstrate that the novel protein equalizer, with the scFv-display M13 phage library immobilized on cryogel, can effectively reduce the dynamic range of proteins in complex samples, enabling the identification of more low-abundance proteins.

  17. Mars Sample Return Using Commercial Capabilities: Propulsive Entry, Descent and Landing

    NASA Technical Reports Server (NTRS)

    Lemke, Lawrence G.; Gonzales, Andrew A.; Huynh, Loc C.

    2014-01-01

    Mars Sample Return (MSR) is the highest priority science mission for the next decade as recommended by the recent Decadal Survey of Planetary Science. The objective of the study was to determine whether emerging commercial capabilities can be integrated into such a mission. The premise of the study is that commercial capabilities can be more efficient than previously described systems, and that by using fewer systems and fewer or less extensive launches, overall mission cost can be reduced. This presentation describes an EDL technique using planned upgrades to the Dragon capsule to perform a Supersonic Retropulsion Entry - Red Dragon concept. Landed payload capability meets mission requirements for an MSR architecture that reduces complexity.

  18. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    PubMed

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
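The essence of the approach, estimating a signal level from a sparse random subset of samples, can be sketched as follows. This assumes simple uniform random sampling; the paper's actual Monte Carlo scheme may differ, and the signals below are invented.

```python
import math
import random

def monte_carlo_mean(signal, n_samples, rng):
    """Estimate a signal's mean level from a sparse random subset of samples."""
    picks = rng.sample(range(len(signal)), n_samples)
    return sum(signal[i] for i in picks) / n_samples

rng = random.Random(7)
flat = [4.2] * 10_000                                  # constant baseline level
wave = [4.2 + 0.5 * math.sin(i / 25.0) for i in range(10_000)]
flat_est = monte_carlo_mean(flat, 50, rng)             # exact mean is 4.2
wave_est = monte_carlo_mean(wave, 400, rng)            # true mean is ~4.2
```

Sampling 400 of 10,000 points gives a mean estimate whose error shrinks as one over the square root of the sample count, which is what makes sparse sampling attractive when every conversion costs power.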

  19. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    PubMed Central

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-01-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
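The claimed one-over-square-root-of-N noise reduction from averaging multiple conversions is easy to verify in simulation. The Gaussian read noise and sample counts below are illustrative, not the sensor's actual parameters.

```python
import random
import statistics

def read_pixel(true_value, noise_sd, rng):
    """One A/D conversion of a pixel with additive Gaussian read noise."""
    return true_value + rng.gauss(0.0, noise_sd)

def multi_sample(true_value, noise_sd, n, rng):
    """Average n conversions of the same pixel; noise drops as 1/sqrt(n)."""
    return statistics.fmean(read_pixel(true_value, noise_sd, rng) for _ in range(n))

rng = random.Random(42)
singles = [read_pixel(1.0, 0.8, rng) for _ in range(2000)]
averaged = [multi_sample(1.0, 0.8, 8, rng) for _ in range(2000)]
```

With eight samplings the standard deviation of the averaged reads falls to roughly 1/sqrt(8) of the single-read noise, mirroring the 848.3 μV to 270.4 μV reduction reported above.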

  20. Optimization of the intravenous glucose tolerance test in T2DM patients using optimal experimental design.

    PubMed

    Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O

    2009-06-01

    Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities for a more informative and/or less laborious study design of the insulin-modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin-modified IVGTT were evaluated: (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia, and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement of an optimal design over the basic design. The results showed that the design of the insulin-modified IVGTT could be substantially improved by an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement, followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples per subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvements can be made to the design of the insulin-modified IVGTT and that the most important design factor was the placement of sample times, followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
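Why sample-time placement matters can be seen in a toy one-parameter example: for a model y = exp(-k t), the most informative single sample time maximizes the squared parameter sensitivity. This is a drastically simplified stand-in for the PopED optimization of the full glucose-insulin model.

```python
import math

def sensitivity(t, k):
    """d/dk of y(t) = exp(-k t): how strongly a sample at time t informs k."""
    return -t * math.exp(-k * t)

def best_sample_time(k, grid):
    """Single-sample D-optimal design: maximize squared sensitivity over a grid."""
    return max(grid, key=lambda t: sensitivity(t, k) ** 2)

grid = [i * 0.1 for i in range(1, 101)]   # candidate sample times 0.1..10
t_star = best_sample_time(0.5, grid)      # analytic optimum is t = 1/k = 2
```

Sampling too early (y barely changed) or too late (y decayed away) yields little information about k; the grid search recovers the analytic optimum t = 1/k, the same principle that drives PopED's placement of IVGTT sample times.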

  1. Ex-vivo holographic microscopy and spectroscopic analysis of head and neck cancer

    NASA Astrophysics Data System (ADS)

    Holler, Stephen; Wurtz, Robert; Auyeung, Kelsey; Auyeung, Kris; Paspaley-Grbavac, Milan; Mulroe, Brigid; Sobrero, Maximiliano; Miles, Brett

    2015-03-01

    Optical probes to identify tumor margins in vivo would greatly reduce the time, effort and complexity involved in the surgical removal of malignant tissue in head and neck cancers. Current approaches involve visual microscopy of stained tissue samples to determine cancer margins, which results in the excision of excess tissue to ensure complete removal of the cancer. Such surgical procedures and follow-on chemotherapy can adversely affect the patient's recovery and subsequent quality of life. In order to reduce the complexity of the process and minimize adverse effects on the patient, we investigate ex vivo tissue samples (stained and unstained) using digital holographic microscopy in conjunction with spectroscopic analyses (reflectance and transmission spectroscopy) to determine label-free, optically identifiable characteristic features that may ultimately be used for in vivo processing of cancerous tissues. The tissue samples studied were squamous cell carcinomas and associated controls from patients of varying age, gender and race. Holographic microscopic imaging scans across both cancerous and non-cancerous tissue samples yielded amplitude and phase reconstructions that were correlated with spectral signatures. Though the holographic reconstructions and measured spectra vary even within the same class of tissue, preliminary results indicate the existence of some discriminating features. Further analyses are underway to extend this work and extract additional information from the imaging and spectral data that may prove useful for in vivo surgical identification.

  2. Effect-directed analysis supporting monitoring of aquatic ...

    EPA Pesticide Factsheets

    Aquatic environments are often contaminated with complex mixtures of chemicals that may pose a risk to ecosystems and human health. This contamination cannot be addressed with target analysis alone but tools are required to reduce this complexity and identify those chemicals that might cause adverse effects. Effect-directed analysis (EDA) is designed to meet this challenge and faces increasing interest in water and sediment quality monitoring. Thus, the present paper summarizes current experience with the EDA approach and the tools required, and provides practical advice on their application. The paper highlights the need for proper problem formulation and gives general advice for study design. As the EDA approach is directed by toxicity, basic principles for the selection of bioassays are given as well as a comprehensive compilation of appropriate assays, including their strengths and weaknesses. A specific focus is given to strategies for sampling, extraction and bioassay dosing since they strongly impact prioritization of toxicants in EDA. Reduction of sample complexity mainly relies on fractionation procedures, which are discussed in this paper, including quality assurance and quality control. Automated combinations of fractionation, biotesting and chemical analysis using so-called hyphenated tools can enhance the throughput and might reduce the risk of artifacts in laboratory work. The key to determining the chemical structures causing effects is analytical toxi

  3. Optofluidic analysis system for amplification-free, direct detection of Ebola infection

    NASA Astrophysics Data System (ADS)

    Cai, H.; Parks, J. W.; Wall, T. A.; Stott, M. A.; Stambaugh, A.; Alfson, K.; Griffiths, A.; Mathies, R. A.; Carrion, R.; Patterson, J. L.; Hawkins, A. R.; Schmidt, H.

    2015-09-01

    The massive outbreak of highly lethal Ebola hemorrhagic fever in West Africa illustrates the urgent need for diagnostic instruments that can identify and quantify infections rapidly, accurately, and with low complexity. Here, we report on-chip sample preparation, amplification-free detection and quantification of Ebola virus on clinical samples using hybrid optofluidic integration. Sample preparation and target preconcentration are implemented on a PDMS-based microfluidic chip (automaton), followed by single nucleic acid fluorescence detection in liquid-core optical waveguides on a silicon chip in under ten minutes. We demonstrate excellent specificity, a limit of detection of 0.2 pfu/mL and a dynamic range of thirteen orders of magnitude, far outperforming other amplification-free methods. This chip-scale approach and reduced complexity compared to gold standard RT-PCR methods is ideal for portable instruments that can provide immediate diagnosis and continued monitoring of infectious diseases at the point-of-care.

  4. A microcontroller-based microwave free-space measurement system for permittivity determination of lossy liquid materials.

    PubMed

    Hasar, U C

    2009-05-01

    A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials has been proposed. The system comprises two main sections--microwave and electronic. While the microwave section measures only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and requires only microwave components to perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of sample thickness. To reduce systematic errors in the system, we propose a simple calibration technique that employs simple and readily available standards. The measurement system can be a good candidate for industrial applications.

  5. Discriminant WSRC for Large-Scale Plant Species Recognition.

    PubMed

    Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong

    2017-01-01

    In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, comprising two stages. First, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Second, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and the leaf category is assigned through the minimum reconstruction error. Unlike the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using a generic overcomplete dictionary over the entire training set. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC adapts to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity, achieves a high recognition rate, and can be clearly interpreted.
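A drastically simplified sketch of the classification rule, picking the class whose subdictionary gives the smallest reconstruction error, using one normalized atom per class. The real DWSRC solves a weighted sparse coding problem over many atoms; the two-feature "leaf" vectors here are invented.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def classify(x, class_atoms):
    """Assign x to the class whose one-atom subdictionary reconstructs it best:
    project x onto the normalized atom and compare residual norms."""
    best_label, best_err = None, float("inf")
    for label, atom in class_atoms.items():
        a = [v / norm(atom) for v in atom]
        c = dot(x, a)                                  # least-squares coefficient
        residual = [xi - c * ai for xi, ai in zip(x, a)]
        if norm(residual) < best_err:
            best_label, best_err = label, norm(residual)
    return best_label

# Invented two-feature "leaf" atoms, one per class.
atoms = {"rose": [1.0, 0.1], "fern": [0.1, 1.0]}
```

Restricting the representation to a pre-selected subdictionary is what cuts the cost relative to coding against the full training set, the same trade-off the paper exploits.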

  6. Analysis of food polyphenols by ultra high-performance liquid chromatography coupled to mass spectrometry: an overview.

    PubMed

    Motilva, Maria-José; Serra, Aida; Macià, Alba

    2013-05-31

    Phenolic compounds, which are widely distributed in plant-derived foods, have recently attracted much attention due to their health benefits, so their determination in food samples is a topic of increasing interest. In the last few years, the development of chromatographic columns packed with sub-2 μm particles and modern high-resolution mass spectrometry (MS) have opened up new possibilities for improving analytical methods for complex sample matrices, such as ingredients, foods and biological samples. In addition, they have emerged as an ideal tool for profiling complex samples due to their speed, efficiency, sensitivity and selectivity. The present review addresses the use of improved liquid chromatography (LC), ultra-high performance LC (UHPLC), coupled to MS or tandem MS (MS/MS) as the detection system for the determination of phenolic compounds in food samples. Additionally, the different strategies to extract and quantify the phenolic compounds and to reduce the matrix effect (%ME) are also reviewed. Finally, future trends in UHPLC-MS methods are briefly outlined. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Acceleration of natural-abundance solid-state MAS NMR measurements on bone by paramagnetic relaxation from gadolinium-DTPA

    NASA Astrophysics Data System (ADS)

    Mroue, Kamal H.; Zhang, Rongchun; Zhu, Peizhi; McNerny, Erin; Kohn, David H.; Morris, Michael D.; Ramamoorthy, Ayyalusamy

    2014-07-01

    Reducing the data collection time without affecting the signal intensity and spectral resolution is one of the major challenges for the widespread application of multidimensional nuclear magnetic resonance (NMR) spectroscopy, especially in experiments conducted on complex heterogeneous biological systems such as bone. In most of these experiments, the NMR data collection time is ultimately governed by the proton spin-lattice relaxation times (T1). For over two decades, gadolinium(III)-DTPA (Gd-DTPA, DTPA = Diethylene triamine pentaacetic acid) has been one of the most widely used contrast-enhancement agents in magnetic resonance imaging (MRI). In this study, we demonstrate that Gd-DTPA can also be effectively used to enhance the longitudinal relaxation rates of protons in solid-state NMR experiments conducted on bone without significant line-broadening and chemical-shift-perturbation side effects. Using bovine cortical bone samples incubated in different concentrations of Gd-DTPA complex, the 1H T1 values were calculated from data collected by 1H spin-inversion recovery method detected in natural-abundance 13C cross-polarization magic angle spinning (CPMAS) NMR experiments. Our results reveal that the 1H T1 values can be successfully reduced by a factor of 3.5 using as low as 10 mM Gd-DTPA without reducing the spectral resolution and thus enabling faster data acquisition of the 13C CPMAS spectra. These results obtained from 13C-detected CPMAS experiments were further confirmed using 1H-detected ultrafast MAS experiments on Gd-DTPA doped bone samples. This approach considerably improves the signal-to-noise ratio per unit time of NMR experiments applied to bone samples by reducing the experimental time required to acquire the same number of scans.

  8. Acceleration of natural-abundance solid-state MAS NMR measurements on bone by paramagnetic relaxation from gadolinium-DTPA.

    PubMed

    Mroue, Kamal H; Zhang, Rongchun; Zhu, Peizhi; McNerny, Erin; Kohn, David H; Morris, Michael D; Ramamoorthy, Ayyalusamy

    2014-07-01

    Reducing the data collection time without affecting the signal intensity and spectral resolution is one of the major challenges for the widespread application of multidimensional nuclear magnetic resonance (NMR) spectroscopy, especially in experiments conducted on complex heterogeneous biological systems such as bone. In most of these experiments, the NMR data collection time is ultimately governed by the proton spin-lattice relaxation times (T1). For over two decades, gadolinium(III)-DTPA (Gd-DTPA, DTPA = diethylenetriaminepentaacetic acid) has been one of the most widely used contrast-enhancement agents in magnetic resonance imaging (MRI). In this study, we demonstrate that Gd-DTPA can also be effectively used to enhance the longitudinal relaxation rates of protons in solid-state NMR experiments conducted on bone without significant line-broadening and chemical-shift-perturbation side effects. Using bovine cortical bone samples incubated in different concentrations of Gd-DTPA complex, the (1)H T1 values were calculated from data collected by the (1)H spin-inversion recovery method detected in natural-abundance (13)C cross-polarization magic angle spinning (CPMAS) NMR experiments. Our results reveal that the (1)H T1 values can be successfully reduced by a factor of 3.5 using as low as 10 mM Gd-DTPA without reducing the spectral resolution, thus enabling faster data acquisition of the (13)C CPMAS spectra. These results obtained from (13)C-detected CPMAS experiments were further confirmed using (1)H-detected ultrafast MAS experiments on Gd-DTPA doped bone samples. This approach considerably improves the signal-to-noise ratio per unit time of NMR experiments applied to bone samples by reducing the experimental time required to acquire the same number of scans. Copyright © 2014 Elsevier Inc. All rights reserved.
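The practical payoff of shortening T1 is a shorter recycle delay and hence more scans per unit time. A back-of-the-envelope sketch follows; the undoped T1 value and the 5 x T1 recycle-delay rule of thumb are assumptions for illustration, not values from the paper.

```python
def scans_per_hour(t1_seconds, recycle_factor=5.0):
    """Scans per hour when the recycle delay is ~recycle_factor x T1
    (the common 5 x T1 rule of thumb for near-full longitudinal recovery)."""
    return 3600.0 / (recycle_factor * t1_seconds)

undoped_t1 = 2.8                           # hypothetical 1H T1 in seconds
baseline = scans_per_hour(undoped_t1)
doped = scans_per_hour(undoped_t1 / 3.5)   # T1 shortened 3.5x by Gd-DTPA
```

Because acquisition time scales linearly with the recycle delay, a 3.5-fold T1 reduction translates directly into 3.5 times as many scans per hour, or equivalently the same signal-to-noise in a fraction of the time.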

  9. Capitalizing Resolving Power of Density Gradient Ultracentrifugation by Freezing and Precisely Slicing Centrifuged Solution: Enabling Identification of Complex Proteins from Mitochondria by Matrix Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry

    PubMed Central

    Yu, Haiqing; Lu, Joann J.; Rao, Wei

    2016-01-01

    Density gradient centrifugation is widely utilized for various high-purity sample preparations, and density gradient ultracentrifugation (DGU) is often used for more resolution-demanding purification of organelles and protein complexes. Accurately locating different isopycnic layers and precisely extracting solutions from these layers play a critical role in achieving high-resolution DGU separations. In this technical note, we develop a DGU procedure that freezes the solution rapidly (but gently) after centrifugation to fix the resolved layers and slices the frozen solution to fractionate the sample. Because the thickness of each slice can be controlled to be as thin as 10 micrometers, we retain virtually all the resolution produced by DGU. To demonstrate the effectiveness of this method, we fractionate complex V from HeLa mitochondria using a conventional technique and this freezing-slicing (F-S) method. The comparison indicates that our F-S method can reduce complex V layer thicknesses by ~40%. After fractionation, we analyze complex V proteins directly on a matrix-assisted laser desorption/ionization time-of-flight mass spectrometer. Twelve out of fifteen subunits of complex V are positively identified. Our method provides a practical protocol to identify proteins from complexes, which is useful for investigating biomolecular complexes and pathways in various conditions and cell types. PMID:27668122

  10. Classifier-Guided Sampling for Complex Energy System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
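
    The filtering idea behind CGS can be sketched with a simple Bernoulli naive Bayes classifier standing in for the Bayesian network described in the report; the function names, toy designs, and objective below are illustrative assumptions, not the LDRD implementation:

```python
import math

def nb_train(designs, labels):
    """Bernoulli naive Bayes with Laplace smoothing over binary design vectors.
    Labels: 1 = promising design, 0 = non-promising."""
    n = len(designs[0])
    model = {}
    for c in (0, 1):
        rows = [d for d, y in zip(designs, labels) if y == c]
        prior = (len(rows) + 1) / (len(designs) + 2)
        probs = [(sum(r[i] for r in rows) + 1) / (len(rows) + 2) for i in range(n)]
        model[c] = (prior, probs)
    return model

def nb_posterior_good(model, d):
    """Posterior probability that design d is promising (class 1)."""
    logp = {}
    for c, (prior, probs) in model.items():
        lp = math.log(prior)
        for x, p in zip(d, probs):
            lp += math.log(p if x else 1.0 - p)
        logp[c] = lp
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return math.exp(logp[1] - m) / z

def cgs_step(population, model, objective, threshold=0.5):
    """Run the (expensive) objective only on candidates the classifier keeps;
    this filtering is the evaluation-count reduction CGS relies on."""
    survivors = [d for d in population if nb_posterior_good(model, d) >= threshold]
    return [(d, objective(d)) for d in survivors]
```

    In the actual algorithm the training data would come from previously evaluated generations, and the objective would be a costly simulation such as the microgrid model.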

  11. Improved graphite furnace atomizer

    DOEpatents

    Siemer, D.D.

    1983-05-18

    A graphite furnace atomizer for use in graphite furnace atomic absorption spectroscopy is described wherein the heating elements are affixed near the optical path and away from the point of sample deposition, so that when the sample is volatilized the spectroscopic temperature at the optical path is at least that of the volatilization temperature, whereby analyte-concomitant complex formation is advantageously reduced. The atomizer may be elongated along its axis to increase the distance between the optical path and the sample deposition point. Also, the atomizer may be elongated along the axis of the optical path, whereby its analytical sensitivity is greatly increased.

  12. Broad supernatural punishment but not moralizing high gods precede the evolution of political complexity in Austronesia

    PubMed Central

    Watts, Joseph; Greenhill, Simon J.; Atkinson, Quentin D.; Currie, Thomas E.; Bulbulia, Joseph; Gray, Russell D.

    2015-01-01

    Supernatural belief presents an explanatory challenge to evolutionary theorists—it is both costly and prevalent. One influential functional explanation claims that the imagined threat of supernatural punishment can suppress selfishness and enhance cooperation. Specifically, morally concerned supreme deities or ‘moralizing high gods' have been argued to reduce free-riding in large social groups, enabling believers to build the kind of complex societies that define modern humanity. Previous cross-cultural studies claiming to support the MHG hypothesis rely on correlational analyses only and do not correct for the statistical non-independence of sampled cultures. Here we use a Bayesian phylogenetic approach with a sample of 96 Austronesian cultures to test the MHG hypothesis as well as an alternative supernatural punishment hypothesis that allows punishment by a broad range of moralizing agents. We find evidence that broad supernatural punishment drives political complexity, whereas MHGs follow political complexity. We suggest that the concept of MHGs diffused as part of a suite of traits arising from cultural exchange between complex societies. Our results show the power of phylogenetic methods to address long-standing debates about the origins and functions of religion in human society. PMID:25740888

  13. Annual Storm Water Report for the Y-12 National Security Complex Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This is the second annual storm water report prepared in accordance with the National Pollutant Discharge Elimination System (NPDES) permit issued to the Y-12 National Security Complex (Y-12 Complex) on December 1, 2011, and the corresponding Y-12 Storm Water Pollution Prevention Plan (SWP3), which became effective on September 7, 2012. However, Appendix A does contain some analytical data gathered under the previous NPDES permit and SWP3 for comparison purposes. The quality of storm water exiting the Y-12 Complex via East Fork Poplar Creek remained relatively stable from 2012 to 2013. However, there was one largely unexpected high concentration of mercury noted in an area that is not known to have previously been a mercury use area. This was noted in Sector AA, Outfall 014. This outfall is normally sampled on a rotating basis but, due to this elevated concentration, will be sampled again in 2014. The Y-12 Complex will continue to implement appropriate best management practices (BMPs) and reduce outside material storage areas where possible. Emphasis will continue to be placed on site inspections and timely implementation of proper storm water control measures.

  14. Chirality sensing with stereodynamic biphenolate zinc complexes.

    PubMed

    Bentley, Keith W; de Los Santos, Zeus A; Weiss, Mary J; Wolf, Christian

    2015-10-01

    Two bidentate ligands consisting of a fluxional polyarylacetylene framework with terminal phenol groups were synthesized. Reaction with diethylzinc gives stereodynamic complexes that undergo distinct asymmetric transformation of the first kind upon binding of chiral amines and amino alcohols. The substrate-to-ligand chirality imprinting at the zinc coordination sphere results in characteristic circular dichroism signals that can be used for direct enantiomeric excess (ee) analysis. This chemosensing approach bears potential for high-throughput ee screening with small sample amounts and reduced solvent waste compared to traditional high-performance liquid chromatography methods. © 2015 Wiley Periodicals, Inc.

  15. Anti-aliasing filter design on spaceborne digital receiver

    NASA Astrophysics Data System (ADS)

    Yu, Danru; Zhao, Chonghui

    2009-12-01

    In recent years, with the development of satellite observation technologies, more and more active remote sensing technologies have been adopted in spaceborne systems. The spaceborne precipitation radar will depend heavily on high-performance digital processing to collect meaningful rain echo data. This increases the complexity of the spaceborne system and requires a high-performance, reliable digital receiver. This paper analyzes the frequency aliasing in the intermediate-frequency signal sampling of digital down conversion (DDC) in spaceborne radar and presents an effective digital filter. By analysis and calculation, we choose reasonable parameters for the half-band filters to suppress the frequency aliasing in the DDC. Compared with a traditional filter, the FPGA resource cost in our system is reduced by over 50%. This effectively reduces the complexity of the spaceborne digital receiver and improves the reliability of the system.
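
    The resource saving from half-band filters comes from their structure: all even-offset coefficients except the center tap are exactly zero, so roughly half the FPGA multipliers can be dropped. A minimal windowed-sinc sketch (filter length and window choice are illustrative, not the paper's design):

```python
import math

def halfband_taps(m):
    """Hamming-windowed sinc low-pass with cutoff fs/4 (a half-band filter),
    taps at offsets -m..m. Every even-offset tap except the center is exactly
    zero, which is what halves the multiplier count in hardware."""
    taps = []
    for k in range(-m, m + 1):
        h = 0.5 if k == 0 else math.sin(math.pi * k / 2.0) / (math.pi * k)
        w = 0.54 + 0.46 * math.cos(math.pi * k / m)  # centered Hamming window
        taps.append(h * w)
    return taps
```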

  16. Global loss of a nuclear lamina component, lamin A/C, and LINC complex components SUN1, SUN2, and nesprin-2 in breast cancer.

    PubMed

    Matsumoto, Ayaka; Hieda, Miki; Yokoyama, Yuhki; Nishioka, Yu; Yoshidome, Katsuhide; Tsujimoto, Masahiko; Matsuura, Nariaki

    2015-10-01

    Cancer cells exhibit a variety of features indicative of atypical nuclei. However, the molecular mechanisms underlying these phenomena remain to be elucidated. The linker of nucleoskeleton and cytoskeleton (LINC) complex, a nuclear envelope protein complex consisting mainly of the SUN and nesprin proteins, connects the nuclear lamina and cytoskeletal filaments and helps to regulate the size and shape of the nucleus. Using immunohistology, we found that a nuclear lamina component, lamin A/C, and all of the investigated LINC complex components, SUN1, SUN2, and nesprin-2, were downregulated in human breast cancer tissues. In the majority of cases, we observed lower expression levels of these analytes in samples' cancerous regions as compared to their cancer-associated noncancerous regions (in cancerous regions, percentage of tissue samples exhibiting low protein expression: lamin A/C, 85% [n = 73]; SUN1, 88% [n = 43]; SUN2, 74% [n = 43]; and nesprin-2, 79% [n = 53]). Statistical analysis showed that the frequencies of recurrence and HER2 expression were negatively correlated with lamin A/C expression (P < 0.05), and intrinsic subtype and Ki-67 level were associated with nesprin-2 expression (P < 0.05). In addition, combinatorial analysis using the above four parameters showed that all patients exhibited reduced expression of at least one of the four components, regardless of the tumor's pathological classification. Furthermore, several cultured breast cancer cell lines expressed less SUN1, SUN2, and nesprin-2 mRNA, and less lamin A/C, than noncancerous mammary gland cells. Together, these results suggest that the strongly reduced expression of LINC complex and nuclear lamina components may play a fundamental pathological role in breast cancer progression. © 2015 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  17. Feedback Augmented Sub-Ranging (FASR) Quantizer

    NASA Technical Reports Server (NTRS)

    Guilligan, Gerard

    2012-01-01

    This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratios of feedback and input capacitors caused by manufacturing tolerances, called mismatches. The mismatches cause non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. 
    The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage, giving a higher closed-loop bandwidth compared to the prior art, and (3) a reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized in relation to the sampling capacitors.

  18. Automated image analysis for quantitative fluorescence in situ hybridization with environmental samples.

    PubMed

    Zhou, Zhi; Pons, Marie Noëlle; Raskin, Lutgarde; Zilles, Julie L

    2007-05-01

    When fluorescence in situ hybridization (FISH) analyses are performed with complex environmental samples, difficulties related to the presence of microbial cell aggregates and nonuniform background fluorescence are often encountered. The objective of this study was to develop a robust and automated quantitative FISH method for complex environmental samples, such as manure and soil. The method and duration of sample dispersion were optimized to reduce the interference of cell aggregates. An automated image analysis program that detects cells from 4',6-diamidino-2-phenylindole (DAPI) micrographs and extracts the maximum and mean fluorescence intensities for each cell from corresponding FISH images was developed with the software Visilog. Intensity thresholds were not consistent even for duplicate analyses, so alternative ways of classifying signals were investigated. In the resulting method, the intensity data were divided into clusters using fuzzy c-means clustering, and the resulting clusters were classified as target (positive) or nontarget (negative). A manual quality control confirmed this classification. With this method, 50.4, 72.1, and 64.9% of the cells in two swine manure samples and one soil sample, respectively, were positive as determined with a 16S rRNA-targeted bacterial probe (S-D-Bact-0338-a-A-18); manual counting gave corresponding values of 52.3, 70.6, and 61.5%. With an archaeal probe (S-D-Arch-0915-a-A-20), 21.6, 12.3, and 2.5% of the cells in the two swine manure samples and one soil sample, respectively, were positive; manual counting gave corresponding values of 22.4, 14.0, and 2.9%. This automated method should facilitate quantitative analysis of FISH images for a variety of complex environmental samples.
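
    The cluster-then-classify step can be sketched as a one-dimensional fuzzy c-means over per-cell intensities, with the brighter cluster called FISH-positive. This is an illustrative sketch with made-up intensity values; the study used the Visilog implementation:

```python
def fuzzy_cmeans_1d(xs, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means: returns cluster centers and one membership row per point."""
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]  # spread initial centers
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - cj) or 1e-12 for cj in centers]  # avoid division by zero
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
                      for j in range(c)])
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs))) for j in range(c)]
    return centers, u

# Hypothetical per-cell intensities; cells in the brighter cluster are positive.
intensities = [0.10, 0.15, 0.20, 0.90, 0.95, 1.00]
centers, u = fuzzy_cmeans_1d(intensities)
bright = max(range(len(centers)), key=lambda j: centers[j])
positives = sum(1 for row in u
                if max(range(len(row)), key=lambda j: row[j]) == bright)
```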

  19. Plasma adiponectin complexes have distinct biochemical characteristics.

    PubMed

    Schraw, Todd; Wang, Zhao V; Halberg, Nils; Hawkins, Meredith; Scherer, Philipp E

    2008-05-01

    Adipocytes release the secretory protein adiponectin in a number of different higher-order complexes. Once synthesized and assembled in the secretory pathway of the adipocyte, these complexes circulate as biochemically distinct and stable entities with little evidence of interchange between the different forms that include a high-molecular-weight (HMW) species, a hexamer (low-molecular-weight form), and a trimeric form of the complexes. Here, we validate a high-resolution gel filtration method that reproducibly separates the three complexes in recombinant adiponectin and adiponectin from human and murine samples. We demonstrate that the HMW form is prominently reduced in male vs. female subjects and in obese, insulin-resistant vs. lean, insulin-sensitive individuals. A direct comparison of human and mouse adiponectin demonstrates that the trimer is generally more abundant in human serum. Furthermore, when the production of adiponectin is reduced, either by obesity or in mice carrying only a single functional allele of the adiponectin locus, then the amount of the HMW form is selectively reduced in circulation. The complex distribution of adiponectin can be regulated in several ways. Both mouse and human HMW adiponectin are very stable under basic conditions but are exquisitely labile under acidic conditions below pH 7. Murine and human adiponectin HMW forms also display differential susceptibility to the presence of calcium in the buffer. A mutant form of adiponectin unable to bind calcium is less susceptible to changes in calcium concentrations. However, the lack of calcium binding results in a destabilization of the structure. Disulfide bond formation (at position C39) is also important for complex formation. A mutant form of adiponectin lacking C39 prominently forms HMW and trimer but not the low-molecular-weight form. Injection of adiponectin with a fluorescent label reveals that over time, the various complexes do not interconvert in vivo. 
The stability of adiponectin complexes highlights that the production and secretion of these forms from fat cells has a major influence on the circulating levels of each complex.

  20. Consumer perception of salt-reduced breads: Comparison of single and two-bites evaluation.

    PubMed

    Antúnez, Lucía; Giménez, Ana; Alcaire, Florencia; Vidal, Leticia; Ares, Gastón

    2017-10-01

    Salt reduction in processed products has been proposed as a high-impact intervention for reducing sodium intake at the population level. A major limitation of this approach is its potential negative impact on the sensory characteristics of products. The current practice in sensory and consumer science involves single sip/bite evaluations, which may not properly reflect the sensory experience that occurs during product consumption. In this context, the aim of the present work was to compare single- and two-bite evaluations of consumer sensory and hedonic perception of salt-reduced breads. Five studies with a total of 499 consumers were carried out, in which overall-liking scores of five salt-reduced bread samples were collected after the first and the second bite. In one of the studies consumers also answered a CATA (check-all-that-apply) question after the first and the second bite. Neither bite nor the interaction between samples and bite had a significant effect on hedonic scores. However, when hedonic scores were analysed separately for each bite, the overall-liking scores from the second bite better reflected differences among samples according to their salt content in two of the five studies. The sensory characterization of the samples did not vary largely between the first and the second bite. Results suggest that consumers' perception of salt-reduced bread samples does not differ largely between single- and two-bite evaluations. Further research is warranted in this regard, in particular considering more complex products. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Computational modeling of carbohydrate recognition in protein complex

    NASA Astrophysics Data System (ADS)

    Ishida, Toyokazu

    2017-11-01

    To understand the mechanistic principles of carbohydrate recognition by proteins, we propose a systematic computational modeling strategy that identifies the conformations of a complex carbohydrate chain on a reduced 2D free-energy surface (2D-FES), determined by MD sampling combined with QM/MM energy corrections. In this article, we first report a detailed atomistic simulation study of the norovirus capsid proteins with carbohydrate antigens based on ab initio QM/MM combined with MD-FEP simulations. The present results clearly show that the binding geometry of a complex carbohydrate antigen is determined not by one single, rigid carbohydrate structure, but rather by the sum of averaged conformations mapped onto the minimum free-energy region of the QM/MM 2D-FES.

  2. Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method

    NASA Astrophysics Data System (ADS)

    Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong

    2017-09-01

    This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.

  3. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are twofold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
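
    In its simplest form, regular sampling of unstructured data amounts to binning scattered samples into a regular grid and averaging per cell. The sketch below is a toy 2-D reduction of that idea, with illustrative names, not the paper's pipeline:

```python
def regular_sample(points, values, nx, ny):
    """Average scattered 2-D samples into an nx-by-ny regular grid.
    Cells that receive no samples are left as None."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    bins = [[[] for _ in range(nx)] for _ in range(ny)]
    for (x, y), v in zip(points, values):
        # Clamp the top edge into the last cell.
        i = min(int((x - x0) / (x1 - x0) * nx), nx - 1)
        j = min(int((y - y0) / (y1 - y0) * ny), ny - 1)
        bins[j][i].append(v)
    return [[sum(c) / len(c) if c else None for c in row] for row in bins]
```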

  4. Chemical Applications for Enhanced World Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leibman, Christopher Patrick

    The purpose of this project is to reduce the complexity of chemical analysis by combining chemical and physical processing steps into one package; to develop instrumentation that costs less and is easy to use in a field laboratory by non-experts; and to develop this "chemical application" so that uranium enrichment can be measured onsite, eliminating the need for radioactive sample transport.

  5. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine-extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.
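
    Reduced to a single machine-extracted feature, the fitting step described above is ordinary least squares: fit judgment ≈ a·feature + b on the expert-rated sample, then apply the fit to the rest of the population. A minimal sketch (the study itself used multivariate fits):

```python
def fit_line(features, judgments):
    """Ordinary least squares for judgment ≈ a * feature + b."""
    n = len(features)
    mx = sum(features) / n
    my = sum(judgments) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(features, judgments))
         / sum((x - mx) ** 2 for x in features))
    return a, my - a * mx

def predict(feature, a, b):
    """Impose the fitted expert scale on an unseen image's feature value."""
    return a * feature + b
```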

  6. Sorption and speciation of iodine in groundwater system: The roles of organic matter and organic-mineral complexes.

    PubMed

    Li, Junxia; Zhou, Hailing; Wang, Yanxin; Xie, Xianjun; Qian, Kun

    2017-06-01

    Characterizing the properties of the main host of iodine in soil/sediment and the geochemical behaviors of iodine species is critical to understanding the mechanisms of iodine mobilization in groundwater systems. Four surface soil and six subsurface sediment samples were collected from the iodine-affected area of Datong basin in northern China to conduct batch experiments and to evaluate the effects of NOM and/or organic-mineral complexes on iodide/iodate geochemical behaviors. The results showed that both iodine contents and kf-iodate values had positive correlations with solid TOC contents, implying that NOM is a potential host for iodine in soil/sediment samples. The results of chemical removal of easily extracted NOM indicated that the NOM of surface soils is mainly composed of surface-embedded organic matter, while sediment NOM mainly occurs in the form of organic-mineral complexes. After the removal of surface-sorbed NOM, the decrease in the kf-iodate value of treated surface soils indicates that surface-sorbed NOM enhances iodate adsorption onto surface soil. By contrast, the kf-iodate value increases in several H2O2-treated sediment samples, which was considered to result from exposed rod-like minerals rich in Fe/Al oxyhydroxides/oxides. After chemical removal of organic-mineral complexes, the lowest kf-iodate values for both treated surface soils and sediments suggest the dominant role of organic-mineral complexes in controlling iodate geochemical behavior. In comparison with iodate, iodide exhibited lower affinities for all (un)treated soil/sediment samples. The understanding of the different geochemical behaviors of iodine species helps to explain the occurrence of high-iodine groundwater with iodate and iodide as the main species in shallow (oxidizing conditions) and deep (reducing conditions) groundwater. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Sorption and speciation of iodine in groundwater system: The roles of organic matter and organic-mineral complexes

    NASA Astrophysics Data System (ADS)

    Li, Junxia; Zhou, Hailing; Wang, Yanxin; Xie, Xianjun; Qian, Kun

    2017-06-01

    Characterizing the properties of the main host of iodine in soil/sediment and the geochemical behaviors of iodine species is critical to understanding the mechanisms of iodine mobilization in groundwater systems. Four surface soil and six subsurface sediment samples were collected from the iodine-affected area of Datong basin in northern China to conduct batch experiments and to evaluate the effects of NOM and/or organic-mineral complexes on iodide/iodate geochemical behaviors. The results showed that both iodine contents and kf-iodate values had positive correlations with solid TOC contents, implying that NOM is a potential host for iodine in soil/sediment samples. The results of chemical removal of easily extracted NOM indicated that the NOM of surface soils is mainly composed of surface-embedded organic matter, while sediment NOM mainly occurs in the form of organic-mineral complexes. After the removal of surface-sorbed NOM, the decrease in the kf-iodate value of treated surface soils indicates that surface-sorbed NOM enhances iodate adsorption onto surface soil. By contrast, the kf-iodate value increases in several H2O2-treated sediment samples, which was considered to result from exposed rod-like minerals rich in Fe/Al oxyhydroxides/oxides. After chemical removal of organic-mineral complexes, the lowest kf-iodate values for both treated surface soils and sediments suggest the dominant role of organic-mineral complexes in controlling iodate geochemical behavior. In comparison with iodate, iodide exhibited lower affinities for all (un)treated soil/sediment samples. The understanding of the different geochemical behaviors of iodine species helps to explain the occurrence of high-iodine groundwater with iodate and iodide as the main species in shallow (oxidizing conditions) and deep (reducing conditions) groundwater.

  8. Quantification of land cover and land use within the rural complex of the Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    Molinario, G.; Hansen, M. C.; Potapov, P. V.; Tyukavina, A.; Stehman, S.; Barker, B.; Humber, M.

    2017-10-01

    The rural complex is the inhabited agricultural land cover mosaic found along the network of rivers and roads in the forest of the Democratic Republic of Congo. It is a product of traditional small-holder shifting cultivation. To date, thanks to its distinction from primary forest, this area has been mapped as relatively homogeneous, leaving the proportions of land cover heterogeneity within it unknown. However, the success of strategies for sustainable development, including land use planning and payment for ecosystem services, such as Reduced Emissions from Deforestation and Degradation, depends on the accurate characterization of the impacts of land use on natural resources, including within the rural complex. We photo-interpreted a simple random sample of 1000 points in the established rural complex, using 3106 high resolution satellite images obtained from the National Geospatial-Intelligence Agency, together with 406 images from Google Earth, spanning the period 2008-2016. Results indicate that nationally the established rural complex includes 5% clearings, 10% active fields, 26% fallows, 34% secondary forest, 2% wetland forest, 11% primary forest, 6% grasslands, 3% roads and settlements and 2% commercial plantations. Only a small proportion of sample points were plantations, while other commercial dynamics, such as logging and mining, were not detected in the sample. The area of current shifting cultivation accounts for 76% of the established rural complex. Added to primary forest (11%), this means that 87% of the rural complex is available for shifting cultivation. At the current clearing rate, it would take ~18 years for a complete rotation of the rural complex to occur. Additional pressure on land results in either the cultivation of non-preferred land types within the rural complex (such as wetland forest), or expansion of agriculture into nearby primary forests, with attendant impacts on emissions, habitat loss and other ecosystem services.

  9. A quantitative approach for pesticide analysis in grape juice by direct interfacing of a matrix compatible SPME phase to dielectric barrier discharge ionization-mass spectrometry.

    PubMed

    Mirabelli, Mario F; Gionfriddo, Emanuela; Pawliszyn, Janusz; Zenobi, Renato

    2018-02-12

    We evaluated the performance of a dielectric barrier discharge ionization (DBDI) source for pesticide analysis in grape juice, a fairly complex matrix due to its high content of sugars (≈20% w/w) and pigments. A fast sample preparation method based on direct immersion solid-phase microextraction (SPME) was developed, and novel matrix-compatible SPME fibers were used to reduce in-source matrix suppression effects. A high-resolution LTQ Orbitrap mass spectrometer allowed for rapid quantification in full-scan mode. This direct SPME-DBDI-MS approach was proven to be effective for the rapid and direct analysis of complex sample matrices, with limits of detection in the parts-per-trillion (ppt) range and inter- and intra-day precision below 30% relative standard deviation (RSD) for samples spiked at 1, 10 and 10 ng ml-1, with overall performance comparable or even superior to existing chromatographic approaches.

  10. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    PubMed

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested in a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
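    The near-lossless mode described above trades a bounded per-sample error for smaller residuals. A minimal sketch of that idea, assuming a simple first-order (previous-sample) predictor and uniform residual quantization; the paper's actual predictors and entropy coder are not reproduced here, and all names are illustrative:

```python
def nl_dpcm_encode(samples, delta):
    """Closed-loop DPCM with residual quantization step q = 2*delta + 1.

    Guarantees |reconstructed - original| <= delta per sample;
    delta = 0 degenerates to exact (lossless) coding."""
    q = 2 * delta + 1
    prev = 0                  # predictor state: last reconstructed value
    codes = []
    for s in samples:
        r = s - prev          # prediction residual
        k = (r + delta) // q  # nearest-integer quantization index
        codes.append(k)       # entropy coding of k is omitted in this sketch
        prev += k * q         # mirror the decoder's reconstruction
    return codes


def nl_dpcm_decode(codes, delta):
    q = 2 * delta + 1
    prev, out = 0, []
    for k in codes:
        prev += k * q
        out.append(prev)
    return out
```

    With delta = 0 the round trip is exact; with delta > 0 residual magnitudes shrink, which is the source of the energy/throughput savings the abstract describes.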

  11. The value of support for aid workers in complex emergencies: a phenomenological study.

    PubMed

    Hearns, Annette; Deeny, Pat

    2007-01-01

    More disasters worldwide are now classified as complex emergencies, thereby increasing the threat to life and limb and potentially increasing the psychosocial impact of the experience for aid workers. This study examines the concept of support as perceived by aid workers who had recent experience in complex emergencies. Using a phenomenological approach, 6 professional aid workers were interviewed about their experience. Aid workers who work in complex emergencies do not feel supported in the pre-deployment, deployment, and post-deployment phases. Failure to provide this support may cause disappointment, reduced self-worth, anger with the organization, and a feeling of lack of achievement regarding self and the mission. While the study may be limited by the volunteer sample and potential bias in data collection, the findings reiterate a proverbial but important issue in relation to aid relief staff in complex emergencies.

  12. Water Quality Research Program: Recent Developments in the Analysis of Metals in Water, Wastewater, and Other Matrices.

    DTIC Science & Technology

    1987-11-01

    for various types of samples and acids. The system features PFA-closed vessels that will tolerate up to 100 psi, a Teflon-lined cavity that reduces...complexes. Selectivity is accomplished by masking agents and pH adjustment and sorption on a small XAD-4 resin column. Evaporation of solvent...same pH, Cr(VI) is selectively reduced to Cr(III) and accumulated by adsorption at -0.3 V versus SCE (Batley and Matousek 1980). Sorption, ion

  13. Phase from defocus

    NASA Astrophysics Data System (ADS)

    Mandula, Ondrej; Allier, Cédric; Hervé, Lionel; Denarier, Eric; Fourest-Lieuvin, Anne; Gory-Fauré, Sylvie; Vinit, Angélique; Morales, Sophie

    2018-02-01

    We present a simple and compact phase imaging microscope for long-term observation of non-absorbing biological samples such as unstained cells in nutritive media. The phase image is obtained from a single defocused image taken with a standard wide-field microscope. Using a semi-coherent light source allows us to computationally re-focus the image after acquisition and recover both the phase and the transmission of the complex specimen. The simplicity of the system reduces both its cost and its physical size, and allows long-term observation of samples directly in a standard biological incubator. The low cost of the system can contribute to the democratization of science by allowing laboratories with constrained budgets to perform complex long-term biological experiments. In this proceeding we present several results taken with our prototype and discuss the possibilities and limitations of our system.

  14. Bacterial community changes in an industrial algae production system.

    PubMed

    Fulbright, Scott P; Robbins-Pianka, Adam; Berg-Lyons, Donna; Knight, Rob; Reardon, Kenneth F; Chisholm, Stephen T

    2018-04-01

    While microalgae are a promising feedstock for production of fuels and other chemicals, a challenge for the algal bioproducts industry is obtaining consistent, robust algae growth. Algal cultures include complex bacterial communities and can be difficult to manage because specific bacteria can promote or reduce algae growth. To overcome bacterial contamination, algae growers may use closed photobioreactors designed to reduce the number of contaminant organisms. Even with closed systems, bacteria are known to enter and cohabitate, but little is known about these communities. Therefore, the richness, structure, and composition of bacterial communities were characterized in closed photobioreactor cultivations of Nannochloropsis salina in F/2 medium at different scales, across nine months spanning late summer-early spring, and during a sequence of serially inoculated cultivations. Using 16S rRNA sequence data from 275 samples, bacterial communities in small, medium, and large cultures were shown to be significantly different. Larger systems contained richer bacterial communities compared to smaller systems. Relationships between bacterial communities and algae growth were complex. On one hand, blooms of a specific bacterial type were observed in three abnormal, poorly performing replicate cultivations, while on the other, notable changes in the bacterial community structures were observed in a series of serial large-scale batch cultivations that had similar growth rates. Bacteria common to the majority of samples were identified, including a single OTU within the class Saprospirae that was found in all samples. This study contributes important information for crop protection in algae systems, and demonstrates the complex ecosystems that need to be understood for consistent, successful industrial algae cultivation. This is the first study to profile bacterial communities during the scale-up process of industrial algae systems.

  15. Broad supernatural punishment but not moralizing high gods precede the evolution of political complexity in Austronesia.

    PubMed

    Watts, Joseph; Greenhill, Simon J; Atkinson, Quentin D; Currie, Thomas E; Bulbulia, Joseph; Gray, Russell D

    2015-04-07

    Supernatural belief presents an explanatory challenge to evolutionary theorists: it is both costly and prevalent. One influential functional explanation claims that the imagined threat of supernatural punishment can suppress selfishness and enhance cooperation. Specifically, morally concerned supreme deities or 'moralizing high gods' (MHGs) have been argued to reduce free-riding in large social groups, enabling believers to build the kind of complex societies that define modern humanity. Previous cross-cultural studies claiming to support the MHG hypothesis rely on correlational analyses only and do not correct for the statistical non-independence of sampled cultures. Here we use a Bayesian phylogenetic approach with a sample of 96 Austronesian cultures to test the MHG hypothesis as well as an alternative supernatural punishment hypothesis that allows punishment by a broad range of moralizing agents. We find evidence that broad supernatural punishment drives political complexity, whereas MHGs follow political complexity. We suggest that the concept of MHGs diffused as part of a suite of traits arising from cultural exchange between complex societies. Our results show the power of phylogenetic methods to address long-standing debates about the origins and functions of religion in human society. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. High-density PhyloChip profiling of stimulated aquifer microbial communities reveals a complex response to acetate amendment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handley, Kim M.; Wrighton, Kelly C.; Piceno, Yvette M.

    2012-04-13

    There is increasing interest in harnessing the functional capacities of indigenous microbial communities to transform and remediate a wide range of environmental contaminants. Information about which community members respond to stimulation can guide the interpretation and development of remediation approaches. To comprehensively determine community membership and abundance patterns among a suite of samples associated with uranium bioremediation experiments we employed a high-density microarray (PhyloChip). Samples were unstimulated, naturally reducing, or collected during Fe(III) reduction (early biostimulation) and sulfate reduction (late biostimulation) from an acetate re-amended/amended aquifer in Rifle, Colorado, and from laboratory experiments using field-collected materials. Deep community sampling with PhyloChip identified hundreds to thousands of operational taxonomic units (OTUs) present during amendment, and revealed close similarity among highly enriched taxa from drill-core and groundwater well-deployed column sediment. Overall, phylogenetic data suggested stimulated community membership was most affected by a carryover effect between annual stimulation events. Nevertheless, OTUs within the Fe(III)- and sulfate-reducing lineages, Desulfuromonadales and Desulfobacterales, were repeatedly stimulated. Less consistent, co-enriched taxa represented additional lineages associated with Fe(III) and sulfate reduction (for example, Desulfovibrionales; Syntrophobacterales; Peptococcaceae) and autotrophic sulfur oxidation (Sulfurovum; Campylobacterales). These data imply complex membership among highly stimulated taxa, and by inference biogeochemical responses to acetate, a non-fermentable substrate.

  17. The effect of complexing phosphoproteins to decalcified collagen on in vitro calcification.

    PubMed

    Endo, A; Glimcher, M J

    1989-01-01

    Decalcified samples of chicken bone containing phosphoproteins of varying concentrations were used to assess the effect of phosphoproteins and of protein-bound Ser(P) and Thr(P) in the in vitro nucleation of a Ca-P solid phase from metastable solutions of Ca and P. Phosphoproteins of bone as well as the phosphoproteins from egg yolk (phosvitin) were used. Increasing concentrations of phosphoprotein [as measured by the amount of protein bound Ser(P) and Thr(P)] in the decalcified bone particles significantly reduced the time required for nucleation to occur after exposure to metastable solutions of Ca and P (decreased operational lag times). Treatment with wheat germ acid phosphatase markedly reduced the concentration of Ser(P) and Thr(P) in the decalcified bone samples and in the decalcified bone collagen samples complexed with phosphoproteins (almost to zero). The loss of the organic phosphate groups significantly increased the operational lag time, but did not abolish nucleation of apatite crystals by the bone collagen fibrils essentially devoid of Ser(P) and Thr(P). Bone phosphoproteins were not specific; substitution of phosvitin for bone phosphoproteins as complexes with bone collagen also proved to be effective facilitators of nucleation, which was interesting since both types of phosphoproteins have certain common chemical and structural characteristics. Noncollagenous components other than phosphoproteins were present in the decalcified bone samples. However, the marked dependence of the lag time on the Ser(P) and Thr(P) concentrations and the very marked diminution in the efficacy of the nucleation phenomenon as a result of treatment with wheat germ acid phosphatase clearly suggest that the organic phosphate residues of the phosphoproteins play a direct and significant role in the process of in vitro nucleation of a solid phase of Ca and P (apatite) by bone collagen, and by implication, possibly in in vivo mineralization as well.

  18. Anti-inflammatory and anti-periductal fibrosis effects of an anthocyanin complex in Opisthorchis viverrini-infected hamsters.

    PubMed

    Intuyod, Kitti; Priprem, Aroonsri; Limphirat, Wanwisa; Charoensuk, Lakhanawan; Pinlaor, Porntip; Pairojkul, Chawalit; Lertrat, Kamol; Pinlaor, Somchai

    2014-12-01

    The pharmacological activities of herbal extracts can be enhanced by complex formation. In this study, we manipulated cyanidin and delphinidin-rich extracts to form an anthocyanin complex (AC) with turmeric and evaluated activity against inflammation and periductal fibrosis in Opisthorchis viverrini-infected hamsters. The AC was prepared from anthocyanins extracted from cobs of purple waxy corn (70%), petals of blue butterfly pea (20%) and turmeric extract (10%), resulting in an enhanced free-radical scavenging capacity. Oral administration of AC (175 and 700 mg/kg body weight) every day for 1 month to O. viverrini-infected hamsters resulted in reduced inflammatory cells and periductal fibrosis. Fourier transform infrared spectroscopy and partial least square discriminant analysis suggested nucleic acid changes in the O. viverrini-infected liver samples, which were partially prevented by the AC treatment. AC reduced 8-oxodG formation, an oxidative DNA damage marker, significantly decreased levels of nitrite in the plasma and alanine aminotransferase activity and increased the ferric reducing ability of plasma. AC also decreased the expression of oxidant-related genes (NF-κB and iNOS) and increased the expression of antioxidant-related genes (CAT, SOD, and GPx). Thus, AC increases free-radical scavenging capacity, decreases inflammation, suppresses oxidative/nitrative stress, and reduces liver injury and periductal fibrosis in O. viverrini-infected hamsters.

  19. Multiscale sample entropy and cross-sample entropy based on symbolic representation and similarity of stock markets

    NASA Astrophysics Data System (ADS)

    Wu, Yue; Shang, Pengjian; Li, Yilong

    2018-03-01

    A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets, and results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. And for stock markets having relatively high synchrony, the entropy values will decrease with increasing scale factor. While for stock markets having high asynchrony, the entropy values will not decrease with increasing scale factor; sometimes they tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
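    The baseline procedure that MSEBSS modifies is standard multiscale sample entropy: coarse-grain the series at each scale factor τ, then compute sample entropy of the coarse-grained series. A minimal sketch of that baseline (the symbolic-representation step that distinguishes MSEBSS is not reproduced; parameter defaults m = 2, r = 0.2·SD are conventional choices, not taken from this paper):

```python
import math


def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), where B and A count template matches of
    length m and m + 1 under the Chebyshev distance, tolerance r * SD."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")


def coarse_grain(x, tau):
    # Average non-overlapping windows of length tau.
    usable = len(x) - len(x) % tau
    return [sum(x[i:i + tau]) / tau for i in range(0, usable, tau)]


def multiscale_entropy(x, m=2, r=0.2, max_tau=5):
    return [sample_entropy(coarse_grain(x, t), m, r)
            for t in range(1, max_tau + 1)]
```

    A strictly periodic series yields entropy near zero, while an irregular series yields a higher value, which is the ordering the complexity comparisons above rely on.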

  20. Case Management for Patients with Complex Multimorbidity: Development and Validation of a Coordinated Intervention between Primary and Hospital Care

    PubMed Central

    Giménez-Campos, María Soledad; Villar-López, Julia; Faubel-Cava, Raquel; Donat-Castelló, Lucas; Valdivieso-Martínez, Bernardo; Soriano-Melchor, Elisa; Bahamontes-Mulió, Amparo; García-Gómez, Juan M.

    2017-01-01

    In the past few years, healthcare systems have been facing a growing demand related to the high prevalence of chronic diseases. Case management programs have emerged as an integrated care approach for the management of chronic disease. Nevertheless, there is little scientific evidence on the impact of using a case management program for patients with complex multimorbidity regarding hospital resource utilisation. We evaluated an integrated case management intervention set up by community-based care at outpatient clinics with nurse case managers from a telemedicine unit. The hypothesis to be tested was whether improved continuity of care resulting from the integration of community-based and hospital services reduced the use of hospital resources amongst patients with complex multimorbidity. A retrospective cohort study was performed using a sample of 714 adult patients admitted to the program between January 2012 and January 2015. We found a significant decrease in emergency room visits, unplanned hospitalizations, and length of stay, and an expected increase in home care hospital-based episodes. These results support the hypothesis that case management interventions can reduce the use of unplanned hospital admissions when applied to patients with complex multimorbidity. PMID:28970745

  1. Application of simplified Complexity Theory concepts for healthcare social systems to explain the implementation of evidence into practice.

    PubMed

    Chandler, Jacqueline; Rycroft-Malone, Jo; Hawkes, Claire; Noyes, Jane

    2016-02-01

    To examine the application of core concepts from Complexity Theory to explain the findings from a process evaluation undertaken in a trial evaluating implementation strategies for recommendations about reducing surgical fasting times. The proliferation of evidence-based guidance requires a greater focus on its implementation. Theory is required to explain the complex processes across the multiple healthcare organizational levels. This social healthcare context involves the interaction between professionals, patients and the organizational systems in care delivery. Complexity Theory may provide an explanatory framework to explain the complexities inherent in implementation in social healthcare contexts. A secondary thematic analysis of qualitative process evaluation data informed by Complexity Theory. Seminal texts applying Complexity Theory to the social context were annotated, key concepts extracted and core Complexity Theory concepts identified. These core concepts were applied as a theoretical lens to provide an explanation of themes from a process evaluation of a trial evaluating the implementation of strategies to reduce surgical fasting times. Sampled substantive texts provided a representative spread of theoretical development and application of Complexity Theory from the late 1990s to 2013 in social science, healthcare, management and philosophy. Five Complexity Theory core concepts extracted were 'self-organization', 'interaction', 'emergence', 'system history' and 'temporality'. Application of these concepts suggests routine surgical fasting practice is habituated in the social healthcare system and therefore it cannot easily be reversed. A reduction in fasting times requires an incentivised new approach to emerge in the surgical system's priority of completing the operating list. The application of Complexity Theory provides a useful explanation for resistance to change fasting practice. 
Its utility in implementation research warrants further attention and evaluation. © 2015 John Wiley & Sons Ltd.

  2. Long-term effects of peatland cultivation on soil physical and hydraulic properties: Case study in Canada

    Treesearch

    Dennis W. Hallema; Jonathan A. Lafond; Yann Périard; Silvio J. Gumiere; Ge Sun; Jean Caron

    2015-01-01

    Organic soils are an excellent substrate for commercial lettuce (Lactuca sativa L.) farming; however, drainage accelerates oxidation of the surface layer and reduces the water holding capacity, which is often lethal for crops that are sensitive to water stress. In this case study, we analyzed 942 peat samples from a large cultivated peatland complex...

  3. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest-path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description feasible for large macromolecules. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
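    The surface-restricted idea can be illustrated with a multi-source Dijkstra over the surface mesh graph: vertices assumed to touch the convex hull are seeded at depth 0, and every other vertex receives the length of the shortest on-surface path to any seed. This is a toy sketch under those assumptions, not the authors' algorithm; the function name, mesh representation, and edge weighting are illustrative:

```python
import heapq
import math


def travel_depth(vertices, edges, seed_ids):
    """vertices: list of (x, y, z); edges: list of (i, j) index pairs;
    seed_ids: vertices assumed to lie on the convex hull (depth 0).
    Returns shortest on-surface distance from each vertex to a seed."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = math.dist(vertices[i], vertices[j])  # Euclidean edge length
        adj[i].append((j, w))
        adj[j].append((i, w))

    depth = {i: math.inf for i in range(len(vertices))}
    heap = [(0.0, s) for s in seed_ids]
    for s in seed_ids:
        depth[s] = 0.0
    heapq.heapify(heap)

    while heap:
        d, u = heapq.heappop(heap)
        if d > depth[u]:
            continue  # stale entry
        for v, w in adj[u]:
            if d + w < depth[v]:
                depth[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return depth
```

    Restricting the graph to surface samples is what avoids the wasted work on interior/exterior volume samples that the abstract criticizes in volume-based approaches.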

  4. Complex Feeding Tracks of the Sessile Herbivorous Insect Ophiomyia maura as a Function of the Defense against Insect Parasitoids

    PubMed Central

    Ayabe, Yoshiko; Ueno, Takatoshi

    2012-01-01

    Because insect herbivores generally suffer from high mortality due to their natural enemies, reducing the risk of being located by natural enemies is of critical importance for them, forcing them to develop a variety of defensive measures. Larvae of leaf-mining insects lead a sedentary life inside a leaf and make conspicuous feeding tracks called mines, exposing themselves to the potential risk of parasitism. We investigated the defense strategy of the linear leafminer Ophiomyia maura Meigen (Diptera: Agromyzidae), by focusing on its mining patterns. We examined whether the leafminer could reduce the risk of being parasitized (1) by making cross structures in the inner area of a leaf to deter parasitoids from tracking the mines due to complex pathways, and (2) by mining along the edge of a leaf to hinder visually searching parasitoids from finding mined leaves due to effective background matching of the mined leaves among intact leaves. We quantified fractal dimension as mine complexity and area of mine in the inner area of the leaf as interior mine density for each sample mine, and analyzed whether these mine traits affected the susceptibility of O. maura to parasitism. Our results have shown that an increase in mine complexity with the development of occupying larvae decreases the probability of being parasitized, while interior mine density has no influence on parasitism. These results suggest that the larval development increases the host defense ability through increasing mine complexity. Thus the feeding pattern of these sessile insects has a defensive function by reducing the risk of parasitism. PMID:22393419

  5. Theoretical and Experimental Study of Inclusion Complexes of β-Cyclodextrins with Chalcone and 2',4'-Dihydroxychalcone.

    PubMed

    Sancho, Matias I; Andujar, Sebastian; Porasso, Rodolfo D; Enriz, Ricardo D

    2016-03-31

    The inclusion complexes formed by chalcone and 2',4'-dihydroxychalcone with β-cyclodextrin have been studied combining experimental (phase solubility diagrams, Fourier transform infrared spectroscopy) and molecular modeling (molecular dynamics, quantum mechanics/molecular mechanics calculations) techniques. The formation constants of the complexes were determined at different temperatures, and the thermodynamic parameters of the process were obtained. The inclusion of chalcone in β-cyclodextrin is an exothermic process, while the inclusion of 2',4'-dihydroxychalcone is endothermic. Free energy profiles, derived from umbrella sampling using molecular dynamics simulations, were constructed to analyze the binding affinity and the complexation reaction at a molecular level. Hybrid QM/MM calculations were also employed to obtain a better description of the energetic and structural aspects of the complexes. The intermolecular interactions that stabilize both inclusion complexes were characterized by means of the quantum theory of atoms in molecules (QTAIM) and the reduced density gradient (RDG) method. The calculated interactions were experimentally observed using FTIR.

  6. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    PubMed

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness, indeed the necessity, of sampling bias correction within MaxEnt.

  7. Trends in hard X-ray fluorescence mapping: environmental applications in the age of fast detectors.

    PubMed

    Lombi, E; de Jonge, M D; Donner, E; Ryan, C G; Paterson, D

    2011-06-01

    Environmental samples are extremely diverse but share a tendency for heterogeneity and complexity. This heterogeneity poses methodological challenges when investigating biogeochemical processes. In recent years, the development of analytical tools capable of probing element distribution and speciation at the microscale have allowed this challenge to be addressed. Of these available tools, laterally resolved synchrotron techniques such as X-ray fluorescence mapping are key methods for the in situ investigation of micronutrients and inorganic contaminants in environmental samples. This article demonstrates how recent advances in X-ray fluorescence detector technology are bringing new possibilities to environmental research. Fast detectors are helping to circumvent major issues such as X-ray beam damage of hydrated samples, as dwell times during scanning are reduced. They are also helping to reduce temporal beamtime requirements, making particularly time-consuming techniques such as micro X-ray fluorescence (μXRF) tomography increasingly feasible. This article focuses on μXRF mapping of nutrients and metalloids in environmental samples, and suggests that the current divide between mapping and speciation techniques will be increasingly blurred by the development of combined approaches.

  8. Improved lossless intra coding for H.264/MPEG-4 AVC.

    PubMed

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
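    The sample-wise DPCM idea can be illustrated outside of any H.264 syntax: each sample is predicted from its immediately preceding reconstructed sample rather than from a block-level prediction, and only the residuals are passed on to entropy coding. A toy sketch for horizontal prediction, where the function names and the default predictor value 128 are illustrative assumptions, not the standard's definitions:

```python
def dpcm_encode(block):
    """Horizontal sample-wise DPCM on a 2D block of pixel values.

    Each sample is predicted by its immediate left neighbour; the first
    sample in each row is predicted by a fixed default of 128."""
    out = []
    for row in block:
        prev = 128
        residuals = []
        for s in row:
            residuals.append(s - prev)  # residual to be entropy coded
            prev = s                    # lossless: predictor is the sample
        out.append(residuals)
    return out


def dpcm_decode(residual_block):
    out = []
    for row in residual_block:
        prev = 128
        samples = []
        for d in row:
            prev += d
            samples.append(prev)
        out.append(samples)
    return out
```

    Because prediction is lossless, the round trip reconstructs the block exactly; the coding gain comes from residuals being smaller and more compressible than raw samples.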

  9. Impaired Muscle Mitochondrial Biogenesis and Myogenesis in Spinal Muscular Atrophy

    PubMed Central

    Ripolone, Michela; Ronchi, Dario; Violano, Raffaella; Vallejo, Dionis; Fagiolari, Gigliola; Barca, Emanuele; Lucchini, Valeria; Colombo, Irene; Villa, Luisa; Berardinelli, Angela; Balottin, Umberto; Morandi, Lucia; Mora, Marina; Bordoni, Andreina; Fortunato, Francesco; Corti, Stefania; Parisi, Daniela; Toscano, Antonio; Sciacco, Monica; DiMauro, Salvatore; Comi, Giacomo P.; Moggio, Maurizio

    2016-01-01

    IMPORTANCE The important depletion of mitochondrial DNA (mtDNA) and the general depression of mitochondrial respiratory chain complex levels (including complex II) have been confirmed, implying an increasing paucity of mitochondria in the muscle from patients with types I, II, and III spinal muscular atrophy (SMA-I, -II, and -III, respectively). OBJECTIVE To investigate mitochondrial dysfunction in a large series of muscle biopsy samples from patients with SMA. DESIGN, SETTING, AND PARTICIPANTS We studied quadriceps muscle samples from 24 patients with genetically documented SMA and paraspinal muscle samples from 3 patients with SMA-II undergoing surgery for scoliosis correction. Postmortem muscle samples were obtained from 1 additional patient. Age-matched controls consisted of muscle biopsy specimens from healthy children aged 1 to 3 years who had undergone analysis for suspected myopathy. Analyses were performed at the Neuromuscular Unit, Istituto di Ricovero e Cura a Carattere Scientifico Foundation Ca’ Granda Ospedale Maggiore Policlinico-Milano, from April 2011 through January 2015. EXPOSURES We used histochemical, biochemical, and molecular techniques to examine the muscle samples. MAIN OUTCOMES AND MEASURES Respiratory chain activity and mitochondrial content. RESULTS Results of histochemical analysis revealed that cytochrome-c oxidase (COX) deficiency was more evident in muscle samples from patients with SMA-I and SMA-II. Residual activities for complexes I, II, and IV in muscles from patients with SMA-I were 41%, 27%, and 30%, respectively, compared with control samples (P < .005). Muscle mtDNA content and citrate synthase activity were also reduced in all 3 SMA types (P < .05). 
We linked these alterations to downregulation of peroxisome proliferator-activated receptor coactivator 1α, the transcriptional activators nuclear respiratory factor 1 and nuclear respiratory factor 2, mitochondrial transcription factor A, and their downstream targets, implying depression of the entire mitochondrial biogenesis program. Results of Western blot analysis confirmed the reduced levels of the respiratory chain subunits that included mitochondrially encoded COX1 (47.5%; P = .004), COX2 (32.4%; P < .001), COX4 (26.6%; P < .001), and succinate dehydrogenase complex subunit A (65.8%; P = .03) as well as the structural outer membrane mitochondrial porin (33.1%; P < .001). Conversely, the levels of expression of 3 myogenic regulatory factors (muscle-specific myogenic factor 5, myoblast determination 1, and myogenin) were higher in muscles from patients with SMA compared with muscles from age-matched controls (P < .05). CONCLUSIONS AND RELEVANCE Our results strongly support the conclusion that an altered regulation of myogenesis and a downregulated mitochondrial biogenesis contribute to pathologic change in the muscle of patients with SMA. Therapeutic strategies should aim at counteracting these changes. PMID:25844556

  10. Sample-Based Surface Coloring

    PubMed Central

    Bürger, Kai; Krüger, Jens; Westermann, Rüdiger

    2011-01-01

    In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)—an extension of the Layered Depth Cube—as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392

  11. Online-LASIL: Laser Ablation of Solid Samples in Liquid with online-coupled ICP-OES detection for direct determination of the stoichiometry of complex metal oxide thin layers.

    PubMed

    Bonta, Maximilian; Frank, Johannes; Taibl, Stefanie; Fleig, Jürgen; Limbeck, Andreas

    2018-02-13

    Advanced materials such as complex metal oxides are used in a wide range of applications and have further promising perspectives in the form of thin films. The exact chemical composition strongly influences the electronic properties of these materials, which makes correct assessment of their composition necessary. However, owing to high chemical resistance and, in the case of thin films, low absolute analyte amounts, this assessment is in most cases neither straightforward nor fast. Commonly applied techniques lack either ease of use (i.e., solution-based analysis with preceding sample dissolution) or adequately accurate quantification (i.e., solid sampling techniques). An analysis approach which combines the beneficial aspects of solution-based analysis as well as direct solid sampling is Laser Ablation of a Sample in Liquid (LASIL). In this work, it is shown that the analysis of major as well as minor sample constituents is possible using a novel online-LASIL setup, allowing sample analysis without manual sample handling after the sample is placed in an ablation chamber. Strontium titanate (STO) thin layers with different compositions were analyzed in the course of this study. Precision of the newly developed online-LASIL method is comparable to conventional wet chemical approaches. With only about 15-20 min required for the analysis per sample, time demand is significantly reduced compared to the often necessary fusion procedures lasting multiple hours. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Spectrophotometric determination of vanadium in rutile and in mafic igneous rocks

    USGS Publications Warehouse

    Marinenko, John; Mei, Leung

    1974-01-01

    Minor and major levels of vanadium in rutile are separated from titanium and iron by sample fusion with sodium carbonate followed by water leach and filtration. The filtrate is then acidified with hydrochloric acid. Silicates are decomposed with a mixture of hydrofluoric and hydrochloric acids, and iron is separated by extraction of its chloride with diethyl ether. Sample vanadium in hydrochloric acid is then quantitatively reduced to vanadium(IV) with sulfurous acid. The remaining sulfur dioxide is expelled by heating. Vanadium(IV) is then reacted with an excess of iron(III) at reduced acidity (pH 5) in the presence of 1,10-phenanthroline to yield the orange-red iron(II) 1,10-phenanthroline complex. Iron(II) generated by vanadium(IV) is a measure of total vanadium in the sample. The proposed method is free from elemental interferences because the color development cannot take place without the two redox reactions described above, and these are, under the outlined experimental conditions, quantitative only for vanadium.
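
    Since each vanadium(IV) ion generates one iron(II) 1,10-phenanthroline (ferroin) complex, the measured absorbance converts directly to vanadium content via the Beer-Lambert law. A minimal sketch, assuming a 1-cm cell and a typical literature molar absorptivity for ferroin of about 1.11 × 10⁴ M⁻¹ cm⁻¹ at 510 nm; all numeric values are illustrative, not from the article:

```python
def ferroin_concentration(absorbance, epsilon=1.11e4, path_cm=1.0):
    """Beer-Lambert law: c = A / (epsilon * l), concentration in mol/L."""
    return absorbance / (epsilon * path_cm)

def vanadium_mass_ug(absorbance, volume_ml, epsilon=1.11e4, path_cm=1.0):
    """Each V(IV) reduces one Fe(III) to Fe(II), so moles of ferroin formed
    equal moles of vanadium.  Convert to micrograms of V (M ~ 50.94 g/mol)."""
    moles = ferroin_concentration(absorbance, epsilon, path_cm) * volume_ml / 1000.0
    return moles * 50.94 * 1e6

# Hypothetical reading: A = 0.555 in a 50 mL final volume
mass_ug = vanadium_mass_ug(0.555, 50.0)  # ~127 micrograms of vanadium
```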

  13. Complex mixture analysis by photoionization mass spectrometry with a VUV hydrogen laser source

    NASA Astrophysics Data System (ADS)

    Huth, T. C.; Denton, M. B.

    1985-12-01

    Trace organic analysis in complex matrices presents one of the most challenging problems in analytical mass spectrometry. When ionization is accomplished non-selectively using electron impact, extensive sample clean-up is often necessary in order to isolate the analyte from the matrix. Sample preparation can be greatly reduced when the VUV H2 laser is used to selectively photoionize only a small fraction of the compounds introduced into the ion source. This device produces only parent ions, for all compounds whose ionization potentials lie below a threshold value determined by the photon energy of 7.8 eV. The only observed interference arises from electron impact ionization when scattered laser radiation interacts with metal surfaces, producing electrons which are then accelerated by potential fields inside the source. These can be suppressed to levels acceptable for practical analysis through proper instrumental design. Results are presented which indicate the ability of this ion source to discriminate against interfering matrix components in simple extracts from a variety of complex real-world matrices, such as brewed coffee, beer, and urine.

  14. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost-prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. 
Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
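
    The conventional distance sampling (CDS) estimator favored here can be sketched with a half-normal detection function g(x) = exp(-x²/2σ²): its effective strip half-width is μ = σ√(π/2), and the density estimate is D = n/(2μL). A minimal illustration with hypothetical numbers, not values from the study:

```python
import math

def half_normal_g(x, sigma):
    """Probability of detecting an object at perpendicular distance x."""
    return math.exp(-x ** 2 / (2 * sigma ** 2))

def effective_strip_half_width(sigma):
    """mu = integral of g(x) from 0 to infinity = sigma * sqrt(pi/2)."""
    return sigma * math.sqrt(math.pi / 2)

def density_estimate(n, sigma, L):
    """CDS point estimate: D = n / (2 * mu * L), objects per unit area."""
    return n / (2 * effective_strip_half_width(sigma) * L)

# Hypothetical survey: 40 bear groups seen along 500 km of transect, sigma = 0.3 km
D = density_estimate(40, 0.3, 500.0)  # groups per square km
```

    Informative priors on σ, as the abstract describes, shrink the uncertainty of μ and hence of D.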

  15. Taking a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wild, M.; Rouhani, S.

    1995-02-01

    A typical site investigation entails extensive sampling and monitoring. In the past, sampling plans have been designed on purely ad hoc bases, leading to significant expenditures and, in some cases, collection of redundant information. In many instances, sampling costs exceed the true worth of the collected data. The US Environmental Protection Agency (EPA) therefore has advocated the use of geostatistics to provide a logical framework for sampling and analysis of environmental data. Geostatistical methodology uses statistical techniques for the spatial analysis of a variety of earth-related data. The use of geostatistics was developed by the mining industry to estimate ore concentrations. The same procedure is effective in quantifying environmental contaminants in soils for risk assessments. Unlike classical statistical techniques, geostatistics offers procedures to incorporate the underlying spatial structure of the investigated field. Sample points spaced close together tend to be more similar than samples spaced further apart. This can guide sampling strategies and determine complex contaminant distributions. Geostatistic techniques can be used to evaluate site conditions on the basis of regular, irregular, random and even spatially biased samples. In most environmental investigations, it is desirable to concentrate sampling in areas of known or suspected contamination. The rigorous mathematical procedures of geostatistics allow for accurate estimates at unsampled locations, potentially reducing sampling requirements. The use of geostatistics serves as a decision-aiding and planning tool and can significantly reduce short-term site assessment costs, long-term sampling and monitoring needs, as well as lead to more accurate and realistic remedial design criteria.
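
    The spatial structure described above (nearby samples more similar than distant ones) is quantified by the empirical semivariogram, γ(h) = Σ(z_i - z_j)² / (2|N(h)|) over sample pairs separated by approximately h. A minimal sketch, illustrative only and not from the article:

```python
import math

def empirical_semivariogram(points, lags, tol):
    """Estimate gamma(h) for each lag h, using all pairs of samples whose
    separation distance is within tol of h.  points: list of (x, y, z)."""
    gammas = []
    for h in lags:
        total, count = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                xi, yi, zi = points[i]
                xj, yj, zj = points[j]
                if abs(math.hypot(xi - xj, yi - yj) - h) <= tol:
                    total += (zi - zj) ** 2
                    count += 1
        gammas.append(total / (2 * count) if count else float("nan"))
    return gammas
```

    Small γ at short lags reflects spatial correlation; kriging fits a model to this curve to estimate values at unsampled locations.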

  16. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN, and the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN, and the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
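
    The PCA stage of such a hybrid classifier can be sketched with plain power iteration on the sample covariance. This is a toy illustration, not the authors' implementation; a real pipeline would retain enough components to capture ~95% of the variance before feeding the scores to the ANN:

```python
import math
import random

def top_principal_component(data, iters=200):
    """Leading principal component via power iteration on the sample
    covariance of `data` (a list of equal-length feature vectors)."""
    d, n = len(data[0]), len(data)
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]

    def cov_mul(v):
        # Covariance-times-vector without forming the full covariance matrix
        proj = [sum(c[j] * v[j] for j in range(d)) for c in centered]
        return [sum(proj[i] * centered[i][j] for i in range(n)) / n
                for j in range(d)]

    random.seed(1)
    v = [random.random() for _ in range(d)]
    for _ in range(iters):
        w = cov_mul(v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

    Projecting each centered spectrum onto the leading components yields the low-dimensional PC scores used as classifier input.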

  17. Single input state, single-mode fiber-based polarization sensitive optical frequency domain imaging by eigenpolarization referencing

    PubMed Central

    Lippok, Norman; Villiger, Martin; Jun, Chang-Su; Bouma, Brett E.

    2015-01-01

    Fiber-based polarization sensitive OFDI is more challenging than free-space implementations. Using multiple input states, fiber-based systems provide sample birefringence information with the benefit of a flexible sample arm but come at the cost of increased system and acquisition complexity, and either reduce acquisition speed or require increased acquisition bandwidth. Here we show that with the calibration of a single polarization state, fiber-based configurations can approach the conceptual simplicity of traditional free-space configurations. We remotely control the polarization state of the light incident at the sample using the eigenpolarization states of a wave plate as a reference, and determine the Jones matrix of the output fiber. We demonstrate this method for polarization sensitive imaging of biological samples. PMID:25927775
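
    The wave-plate reference idea rests on a textbook property: a linear retarder with fast axis at angle θ and retardance δ maps its eigenpolarizations (linear states along the fast and slow axes) onto themselves, up to a phase. A generic sketch of the Jones-matrix construction, not the authors' calibration procedure:

```python
import cmath
import math

def waveplate_jones(theta, delta):
    """Jones matrix of a linear retarder with fast axis at angle theta and
    retardance delta: R(-theta) @ diag(1, exp(i*delta)) @ R(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    e = cmath.exp(1j * delta)
    return ((c * c + e * s * s, (1 - e) * c * s),
            ((1 - e) * c * s, s * s + e * c * c))

def apply_jones(J, v):
    """Apply a 2x2 Jones matrix to a Jones vector."""
    return (J[0][0] * v[0] + J[0][1] * v[1],
            J[1][0] * v[0] + J[1][1] * v[1])

# The fast-axis eigenpolarization is left unchanged (eigenvalue 1):
theta, delta = 0.3, math.pi / 2
fast = (math.cos(theta), math.sin(theta))
out = apply_jones(waveplate_jones(theta, delta), fast)
```

    The slow-axis state picks up the phase exp(iδ) instead, which is what makes the pair usable as a polarization reference.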

  18. Graphene oxide-dependent growth and self-aggregation into a hydrogel complex of exoelectrogenic bacteria

    PubMed Central

    Yoshida, Naoko; Miyata, Yasushi; Doi, Kasumi; Goto, Yuko; Nagao, Yuji; Tero, Ryugo; Hiraishi, Akira

    2016-01-01

    Graphene oxide (GO) is reduced by certain exoelectrogenic bacteria, but its effects on bacterial growth and metabolism are a controversial issue. This study aimed to determine whether GO functions as the terminal electron acceptor to allow specific growth of and electricity production by exoelectrogenic bacteria. Cultivation of environmental samples with GO and acetate as the sole substrate could specifically enrich exoelectrogenic bacteria, with Geobacter species predominating (51–68% of the total populations). Interestingly, bacteria in these cultures self-aggregated into a conductive hydrogel complex together with biologically reduced GO (rGO). A novel GO-respiring bacterium designated Geobacter sp. strain R4 was isolated from this hydrogel complex. This organism exhibited stable electricity production at >1000 μA/cm3 (at 200 mV vs Ag/AgCl) for more than 60 d via rGO, whereas electricity production via graphite felt was only temporary. The superior electricity production depends upon characteristics of rGO such as a large surface area for biofilm growth, greater capacitance, and smaller internal resistance. This is the first report to demonstrate GO-dependent growth of exoelectrogenic bacteria while forming a conductive hydrogel complex with rGO. The simple put-and-wait process leading to the formation of hydrogel complexes of rGO and exoelectrogens will enable wider applications of GO to bioelectrochemical systems. PMID:26899353

  19. Meta-Transcriptomic Analysis of a Chromate-Reducing Aquifer Microbial Community

    NASA Astrophysics Data System (ADS)

    Beller, H. R.; Brodie, E. L.; Han, R.; Karaoz, U.

    2010-12-01

    A major challenge for microbial ecology that has become more tractable with the advent of new molecular techniques is characterizing gene expression in complex microbial communities. We are using meta-transcriptomic analysis to characterize functional changes in an aquifer-derived, chromate-reducing microbial community as it transitions through various electron-accepting conditions. We inoculated anaerobic microcosms with groundwater from the Cr-contaminated Hanford 100H site and supplemented them with lactate and electron acceptors present at the site, namely, nitrate, sulfate, and Fe(III). The microcosms progressed successively through various electron-accepting conditions (e.g., denitrifying, sulfate-reducing, and ferric iron-reducing conditions, as well as nitrate-dependent, chemolithotrophic Fe(II)-oxidizing conditions). Cr(VI) was rapidly reduced initially and again upon further Cr(VI) amendments. Extensive geochemical sampling and analysis (e.g., lactate, acetate, chloride, nitrate, nitrite, sulfate, dissolved Cr(VI), total Fe(II)), RNA/DNA harvesting, and PhyloChip analyses were conducted. Methods were developed for removal of rRNA from total RNA in preparation for meta-transcriptome sequencing. To date, samples representing denitrifying and fermentative/sulfate-reducing conditions have been sequenced using 454 Titanium technology. Of the non-rRNA related reads for the denitrifying sample (which was also actively reducing chromate), ca. 8% were associated with denitrification and ca. 0.9% were associated with chromate resistance/transport, in contrast to the fermentative/sulfate-reducing sample (in which chromate had already been reduced), which had zero reads associated with either of these categories but many predicted proteins associated with sulfate-reducing bacteria. 
We observed sequences for key functional transcripts that were unique at the nucleotide level compared to the GenBank non-redundant database [such as L-lactate dehydrogenase (iron-sulfur-cluster-binding subunit), cytochrome cd1 nitrite reductase (nirS) (from the denitrifying phase), and dissimilatory sulfite reductase (dsrA, dsrB) (from the sulfate-reducing phase)]. One potential advantage of this approach is that such important genes may not have been detected using more traditional techniques, including PCR-based methods and a priori functional microarrays.

  20. Dose Response of Endotoxin on Hepatocyte and Muscle Mitochondrial Respiration In Vitro

    PubMed Central

    Brandt, Sebastian; Porta, Francesca; Jakob, Stephan M.; Takala, Jukka; Djafarzadeh, Siamak

    2015-01-01

    Introduction. Results on mitochondrial dysfunction in sepsis are controversial. We aimed to assess effects of LPS at wide dose and time ranges on hepatocytes and isolated skeletal muscle mitochondria. Methods. Human hepatocellular carcinoma cells (HepG2) were exposed to placebo or LPS (0.1, 1, and 10 μg/mL) for 4, 8, 16, and 24 hours and primary human hepatocytes to 1 μg/mL LPS or placebo (4, 8, and 16 hours). Mitochondria from porcine skeletal muscle samples were exposed to increasing doses of LPS (0.1–100 μg/mg) for 2 and 4 hours. Respiration rates of intact and permeabilized cells and isolated mitochondria were measured by high-resolution respirometry. Results. In HepG2 cells, LPS reduced mitochondrial membrane potential and cellular ATP content but did not modify basal respiration. Stimulated complex II respiration was reduced time-dependently using 1 μg/mL LPS. In primary human hepatocytes, stimulated mitochondrial complex II respiration was reduced time-dependently using 1 μg/mL LPS. In isolated porcine skeletal muscle mitochondria, stimulated respiration decreased at high doses (50 and 100 μg/mL LPS). Conclusion. LPS reduced cellular ATP content of HepG2 cells, most likely as a result of the induced decrease in membrane potential. LPS decreased cellular and isolated mitochondrial respiration in a time-dependent, dose-dependent and complex-dependent manner. PMID:25649304

  1. Prototype of an Interface for Hyphenating Distillation with Gas Chromatography and Mass Spectrometry

    PubMed Central

    Tang, Ya-Ru; Yang, Hui-Hsien; Urban, Pawel L.

    2017-01-01

    Chemical analysis of complex matrices containing hundreds of compounds is challenging. Two-dimensional separation techniques provide an efficient way to reduce complexity of mixtures analyzed by mass spectrometry (MS). For example, gasoline is a mixture of numerous compounds, which can be fractionated by distillation techniques. However, coupling conventional distillation with other separations as well as MS is not straightforward. We have established an automatic system for online coupling of simple microscale distillation with gas chromatography (GC) and electron ionization MS. The developed system incorporates an interface between the distillation condenser and the injector of a fused silica capillary GC column. Development of this multidimensional separation (distillation-GC-MS) was preceded by a series of preliminary off-line experiments. In the developed technique, the components with different boiling points are fractionated and instantly analyzed by GC-MS. The obtained data sets illustrate dynamics of the distillation process. An important advantage of the distillation-GC-MS technique is that raw samples can directly be analyzed without removal of the non-volatile matrix residues that could contaminate the GC injection port and the column. Distilling the samples immediately before the injection to the GC column may reduce possible matrix effects, especially in the early phase of separation, when molecules with different volatilities co-migrate. It can also reduce losses of highly volatile components (during fraction collection and transfer). The two separation steps are partly orthogonal, which can slightly increase selectivity of the entire analysis. PMID:28337400

  2. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. These strategies, however, often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established D-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. In addition, a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
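
    The greedy design idea can be illustrated on a toy case: for a straight-line model y = a + bx with regressor f(x) = (1, x), D-optimality maximizes the determinant of the information matrix M = Σ f(x)f(x)ᵀ, and the greedy heuristic repeatedly adds the candidate point that most increases det(M). A sketch of the concept only, not the paper's groundwater workflow:

```python
def info_matrix_det(xs):
    """det of the 2x2 information matrix for y = a + b*x,
    M = sum of f(x) f(x)^T over design points with f(x) = (1, x)."""
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(x * x for x in xs)
    return n * s2 - s1 * s1

def greedy_d_optimal(candidates, k):
    """Greedily pick k design points maximizing det(M) (D-optimality)."""
    design = []
    remaining = list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda c: info_matrix_det(design + [c]))
        design.append(best)
        remaining.remove(best)
    return design

# On candidate locations 0..10, the greedy design heads for the extremes
design = greedy_d_optimal(list(range(11)), 3)
```

    The endpoints-first behavior matches the classical result that D-optimal designs for straight-line regression concentrate observations at the edges of the design region.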

  3. Movie denoising by average of warped lines.

    PubMed

    Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro

    2007-09-01

    Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for similar samples along the level surface passing through it. With these similar samples, we estimate the denoised pixel. Similar samples are found by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoise image sequences with additive white noise while respecting the visual details on the movie frames. We have also experimented with other types of noise with satisfactory results.
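
    The variance-reduction premise is easy to check numerically: the standard deviation of the mean of n i.i.d. samples falls by a factor of √n. A standalone illustration (not the paper's algorithm):

```python
import random
import statistics

random.seed(0)

def noisy_readings(true_value, sigma, n):
    """n independent noisy realizations of the same underlying pixel value."""
    return [true_value + random.gauss(0, sigma) for _ in range(n)]

true_value, sigma, n, trials = 100.0, 10.0, 25, 2000
single_errors = [noisy_readings(true_value, sigma, 1)[0] - true_value
                 for _ in range(trials)]
avg_errors = [statistics.fmean(noisy_readings(true_value, sigma, n)) - true_value
              for _ in range(trials)]

# Averaging 25 similar samples shrinks the error spread by roughly sqrt(25) = 5
ratio = statistics.stdev(single_errors) / statistics.stdev(avg_errors)
```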

  4. Brassica rapa plants adapted to microgravity with reduced photosystem I and its photochemical activity

    NASA Technical Reports Server (NTRS)

    Jiao, Shunxing; Hilaire, Emmanuel; Paulsen, Avelina Q.; Guikema, James A.

    2004-01-01

    The photosynthetic apparatus contains several protein complexes, many of which are regulated by environmental conditions. In this study, the influences of microgravity on PSI and PSII in Brassica rapa plants grown aboard the space shuttle were examined. We found that Brassica plants grown in space had a normal level of growth relative to controls under similar conditions on Earth. Upon return to Earth, cotyledons were harvested and thylakoid membranes were isolated. Analysis of chlorophyll contents showed that the Chl a/b ratio (3.5) in flight cotyledons was much higher than a ratio of 2.42 in the ground controls. The flight samples also had a reduction of PSI complexes and a corresponding 30% decrease of PSI photochemical activity. Immunoblotting showed that the reaction centre polypeptides of PSI were more apparently decreased (e.g. by 24-33% for PsaA and PsaB, and 57% for PsaC) than the light-harvesting complexes. In comparison, the accumulation of PSII complex was less affected in microgravity, thus only a slight reduction in D1, D2 and LHCII was observed in protein blots. However, there was a 32% decrease of OEC1 in the flight samples, indicating a defective OEC subcomplex. In addition, an average 54% increase of the 54 kDa CF1-beta isoform was found in the flight samples, suggesting that space-grown plants suffered from certain stresses, consistent with implications of the increased Chl a/b ratio. Taken together, the results demonstrated that Brassica plants can adapt to spaceflight microgravity, but with significant alterations in chloroplast structures and photosynthetic complexes, and especially reduction of PSI and its activity.

  5. Refined generalized multiscale entropy analysis for physiological signals

    NASA Astrophysics Data System (ADS)

    Liu, Yunxiao; Lin, Youfang; Wang, Jing; Shang, Pengjian

    2018-01-01

    Multiscale entropy analysis has become a prevalent complexity measurement and has been successfully applied in various fields. However, it only takes into account the information of mean values (first moment) in the coarse-graining procedure. Generalized multiscale entropy (MSEn), which considers higher moments when coarse-graining a time series, was therefore proposed, and MSEσ2 has been implemented. However, MSEσ2 sometimes yields an imprecise or undefined estimation of entropy, and the statistical reliability of the sample entropy estimate decreases as the scale factor increases. For this purpose, we developed the refined model, RMSEσ2, to improve MSEσ2. Simulations on both white noise and 1/f noise show that RMSEσ2 provides higher entropy reliability and reduces the occurrence of undefined entropy, and is especially suitable for short time series. In addition, we discuss the effects on RMSEσ2 analysis of outliers, data loss and other concepts in signal processing. We apply the proposed model to evaluate the complexity of heartbeat interval time series derived from healthy young and elderly subjects, patients with congestive heart failure and patients with atrial fibrillation respectively, compared to several popular complexity metrics. The results demonstrate that RMSEσ2 measured complexity (a) decreases with aging and diseases, and (b) gives significant discrimination between different physiological/pathological states, which may facilitate clinical application.
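
    The coarse-graining distinction at the heart of MSEσ2 versus standard MSE can be sketched directly: standard multiscale entropy replaces each non-overlapping window of length τ by its mean (first moment), while the generalized variant takes each window's variance (second moment) before computing sample entropy. Illustrative only; the RMSEσ2 refinement itself is not reproduced here:

```python
import statistics

def coarse_grain_mean(x, tau):
    """Standard MSE coarse-graining: mean of non-overlapping windows."""
    return [statistics.fmean(x[i:i + tau])
            for i in range(0, len(x) - tau + 1, tau)]

def coarse_grain_variance(x, tau):
    """MSE-sigma^2 style coarse-graining: variance of each window."""
    return [statistics.pvariance(x[i:i + tau])
            for i in range(0, len(x) - tau + 1, tau)]
```

    Either coarse-grained series is then fed to sample entropy; note that large τ leaves few windows, which is exactly what degrades the reliability the refinement targets.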

  6. Quantitative validation of a nonlinear histology-MRI coregistration method using Generalized Q-sampling Imaging in complex human cortical white matter

    PubMed Central

    Gangolli, Mihika; Holleran, Laurena; Kim, Joong Hee; Stein, Thor D.; Alvarez, Victor; McKee, Ann C.; Brody, David L.

    2017-01-01

    Advanced diffusion MRI methods have recently been proposed for detection of pathologies such as traumatic axonal injury and chronic traumatic encephalopathy which commonly affect complex cortical brain regions. However, radiological-pathological correlations in human brain tissue that detail the relationship between the multi-component diffusion signal and underlying pathology are lacking. We present a nonlinear voxel based two dimensional coregistration method that is useful for matching diffusion signals to quantitative metrics of high resolution histological images. When validated in ex vivo human cortical tissue at a 250 × 250 × 500 micron spatial resolution, the method proved robust in correlations between generalized q-sampling imaging and histologically based white matter fiber orientations, with r = 0.94 for the primary fiber direction and r = 0.88 for the secondary fiber direction in each voxel. Importantly, however, the correlation was substantially worse with reduced spatial resolution or with fiber orientations derived using a diffusion tensor model. Furthermore, we have detailed a quantitative histological metric of white matter fiber integrity, termed power coherence, capable of distinguishing architecturally complex but intact white matter from disrupted white matter regions. These methods may allow for more sensitive and specific radiological-pathological correlations of neurodegenerative diseases affecting complex gray and white matter. PMID:28365421

  7. Massively parallel digital high resolution melt for rapid and absolutely quantitative sequence profiling

    NASA Astrophysics Data System (ADS)

    Velez, Daniel Ortiz; Mack, Hannah; Jupe, Julietta; Hawker, Sinead; Kulkarni, Ninad; Hedayatnia, Behnam; Zhang, Yang; Lawrence, Shelley; Fraley, Stephanie I.

    2017-02-01

    In clinical diagnostics and pathogen detection, profiling of complex samples for low-level genotypes represents a significant challenge. Advances in speed, sensitivity, and extent of multiplexing of molecular pathogen detection assays are needed to improve patient care. We report the development of an integrated platform enabling the identification of bacterial pathogen DNA sequences in complex samples in less than four hours. The system incorporates a microfluidic chip and instrumentation to accomplish universal PCR amplification, High Resolution Melting (HRM), and machine learning within 20,000 picoliter-scale reactions, simultaneously. Clinically relevant concentrations of bacterial DNA molecules are separated by digitization across 20,000 reactions and amplified with universal primers targeting the bacterial 16S gene. Amplification is followed by HRM sequence fingerprinting in all reactions, simultaneously. The resulting bacteria-specific melt curves are identified by Support Vector Machine learning, and individual pathogen loads are quantified. The platform reduces reaction volumes by 99.995% and achieves a greater than 200-fold increase in dynamic range of detection compared to traditional PCR HRM approaches. Type I and II error rates are reduced by 99% and 100%, respectively, compared to intercalating dye-based digital PCR (dPCR) methods. This technology could impact a number of quantitative profiling applications, especially infectious disease diagnostics.
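
    Absolute quantification from digitized reactions follows standard digital PCR Poisson statistics: if k of n partitions turn positive, the most likely mean template load per partition is λ = -ln(1 - k/n), and the total copy count is λn. A generic sketch of that step (the platform's machine-learning quantification is more involved):

```python
import math

def poisson_load(positive, total):
    """Mean copies per partition given the positive fraction: -ln(1 - k/n)."""
    return -math.log(1.0 - positive / total)

def total_copies(positive, total):
    """Absolute quantification: lambda * number of partitions."""
    return poisson_load(positive, total) * total

# Hypothetical run: 2,000 of 20,000 partitions positive -> ~2,107 copies
copies = total_copies(2000, 20000)
```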

  8. Study of Maxwell–Wagner (M–W) relaxation behavior and hysteresis observed in bismuth titanate layered structure obtained by solution combustion synthesis using dextrose as fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subohi, Oroosa, E-mail: oroosa@gmail.com; Shastri, Lokesh; Kumar, G.S.

    2014-01-01

    Graphical abstract: X-ray diffraction studies show that phase formation and crystallinity were reached only after calcination at 800 °C. The dielectric constant versus temperature curve shows the ferroelectric-to-paraelectric transition temperature (T{sub c}) to be 650 °C. Complex impedance curves show deviation from Debye behavior. The material shows a thin P-E loop with low remnant polarization due to high conductivity in the as-prepared sample. - Highlights: • Bi{sub 4}Ti{sub 3}O{sub 12} is synthesized using the solution combustion technique with dextrose as fuel. • Dextrose has a high reducing capacity (+24) and generates a larger number of moles of gas. • Impedance studies show that the sample follows Maxwell–Wagner relaxation behavior. • The sample shows lower remnant polarization due to a higher c-axis ratio. - Abstract: Structural, dielectric and ferroelectric properties of bismuth titanate (Bi{sub 4}Ti{sub 3}O{sub 12}) obtained by the solution combustion technique using dextrose as fuel are studied extensively in this paper. Dextrose is used as fuel because it has a high reducing valency and generates a larger number of moles of gas during the reaction. X-ray diffraction studies show that phase formation and crystallinity were reached only after calcination at 800 °C. The dielectric constant versus temperature curve shows the ferroelectric-to-paraelectric transition temperature (T{sub c}) to be 650 °C. The dielectric loss is very low (tan δ < 1) at lower temperatures but increases around T{sub c} due to structural changes in the sample. Complex impedance curves show deviation from Debye behavior. The material shows a thin P-E loop with low remnant polarization due to high conductivity in the as-prepared sample.

  9. Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.

    PubMed

    Kraft, James M; Maloney, Shannon I; Brainard, David H

    2002-01-01

    Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.

  10. Adaptive frequency-domain equalization in digital coherent optical receivers.

    PubMed

    Faruk, Md Saifuddin; Kikuchi, Kazuro

    2011-06-20

    We propose a novel frequency-domain adaptive equalizer in digital coherent optical receivers, which can reduce the computational complexity of the conventional time-domain adaptive equalizer based on finite-impulse-response (FIR) filters. The proposed equalizer can operate on the input sequence sampled by free-running analog-to-digital converters (ADCs) at the rate of two samples per symbol; therefore, the arbitrary initial sampling phase of the ADCs can be adjusted so that the best symbol-spaced sequence is produced. The equalizer can also be configured in the butterfly structure, which enables demultiplexing of polarization tributaries apart from equalization of linear transmission impairments. The performance of the proposed equalization scheme is verified by 40-Gbit/s dual-polarization quadrature phase-shift keying (QPSK) transmission experiments.
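    As a rough sketch of the underlying idea (not the authors' algorithm): a frequency-domain adaptive equalizer updates one complex tap per FFT bin, replacing long time-domain FIR convolutions with FFTs. The toy model below assumes a circular channel, one sample per symbol, known training symbols, and a normalized LMS update; a real receiver would use overlap-save blocks, fractional spacing, and the butterfly structure for polarization demultiplexing described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # FFT/block size
h = np.array([1.0, 0.35 + 0.2j, 0.1])    # toy channel impulse response

W = np.ones(N, dtype=complex)            # one adaptive tap per frequency bin
mu = 0.1                                 # normalized LMS step size

for _ in range(300):
    s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)  # QPSK
    R = np.fft.fft(s) * np.fft.fft(h, N)          # received block (circular channel)
    E = np.fft.fft(s) - W * R                     # per-bin training error
    W += mu * E * np.conj(R) / (np.abs(R) ** 2 + 1e-12)  # NLMS tap update

# After convergence, W approximates the inverse channel response in each bin
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)
y = np.fft.ifft(W * np.fft.fft(s) * np.fft.fft(h, N))
residual = np.max(np.abs(y - s))          # tiny after training
```

The complexity gain comes from the FFTs: O(N log N) per block versus O(N·L) for an L-tap time-domain FIR filter.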

  11. Sensitive detection of C-reactive protein in serum by immunoprecipitation-microchip capillary gel electrophoresis.

    PubMed

    Herwig, Ela; Marchetti-Deschmann, Martina; Wenz, Christian; Rüfer, Andreas; Redl, Heinz; Bahrami, Soheyl; Allmaier, Günter

    2015-06-01

    Sepsis represents a significant cause of mortality in intensive care units. Early diagnosis of sepsis is essential to increase the survival rate of patients. Among others, C-reactive protein (CRP) is commonly used as a sepsis marker. In this work we introduce immunoprecipitation combined with microchip capillary gel electrophoresis (IP-MCGE) for the detection and quantification of CRP in serum samples. First, high-abundance proteins (HSA, IgG) are removed from serum samples using affinity spin cartridges; the remaining proteins are then labeled with a fluorescent dye and incubated with an anti-CRP antibody, and the antigen/antibody complex is precipitated with protein G-coated magnetic beads. After precipitation the complex is eluted from the beads and loaded onto the MCGE system. CRP could be reliably detected and quantified, with a detection limit of 25 ng/μl in serum samples and 126 pg/μl in matrix-free samples. The overall sensitivity (LOQ = 75 ng/μl, R(2) = 0.9668) of the method is lower than that of some specially developed methods (e.g., immunoradiometric assay) but is comparable to those of clinically accepted ELISA methods. The straightforward sample preparation (not prone to mistakes), reduced sample and reagent volumes (including the antibodies), and high throughput (10 samples/3 h) are advantages, and IP-MCGE therefore bears potential for point-of-care diagnosis. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Detecting Kerogen as a Biosignature Using Colocated UV Time-Gated Raman and Fluorescence Spectroscopy.

    PubMed

    Shkolyar, Svetlana; Eshelman, Evan J; Farmer, Jack D; Hamilton, David; Daly, Michael G; Youngbull, Cody

    2018-04-01

    The Mars 2020 mission will analyze samples in situ and identify any that could have preserved biosignatures in ancient habitable environments for later return to Earth. Highest priority targeted samples include aqueously formed sedimentary lithologies. On Earth, such lithologies can contain fossil biosignatures as aromatic carbon (kerogen). In this study, we analyzed nonextracted kerogen in a diverse suite of natural, complex samples using colocated UV excitation (266 nm) time-gated (UV-TG) Raman and laser-induced fluorescence spectroscopies. We interrogated kerogen and its host matrix in samples to (1) explore the capabilities of UV-TG Raman and fluorescence spectroscopies for detecting kerogen in high-priority targets in the search for possible biosignatures on Mars; (2) assess the effectiveness of time gating and UV laser wavelength in reducing fluorescence in Raman spectra; and (3) identify sample-specific issues that could challenge rover-based identifications of kerogen using UV-TG Raman spectroscopy. We found that ungated UV Raman spectroscopy is suited to identify diagnostic kerogen Raman bands without interfering fluorescence and that UV fluorescence spectroscopy is suited to identify kerogen. These results highlight the value of combining colocated Raman and fluorescence spectroscopies, similar to those obtainable by SHERLOC on Mars 2020, to strengthen the confidence of kerogen detection as a potential biosignature in complex natural samples. Key Words: Raman spectroscopy-Laser-induced fluorescence spectroscopy-Mars Sample Return-Mars 2020 mission-Kerogen-Biosignatures. Astrobiology 18, 431-453.

  13. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on), and optimization methods have been widely applied in sheet metal forming. Therefore, proper design methods have to be developed, mostly based on computer-aided procedures, to reduce time and costs. At the same time, variations arising during manufacturing may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping processes. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of the proposed method for multi-response robust design.
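    The surrogate-plus-adaptive-sampling loop can be sketched as follows. A generic Gaussian process (the statistical model behind Kriging) stands in for the paper's metamodel, and a simple maximum-variance rule stands in for the paper's improved adaptive criterion; the quality function is invented, replacing the expensive finite element simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def quality(x):                 # invented stand-in for an FE stamping simulation
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, 8).reshape(-1, 1)   # small initial design of experiments
y = quality(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X, y)

# Adaptive sampling: evaluate the expensive model where the surrogate
# is least certain, then refit
cand = np.linspace(-2, 2, 200).reshape(-1, 1)
mean, std = gp.predict(cand, return_std=True)
x_new = cand[np.argmax(std)]
X = np.vstack([X, [x_new]])
y = np.append(y, quality(x_new))
gp.fit(X, y)
```

Once the surrogate is accurate, robustness measures (e.g., the variance of part quality under noise-factor scatter) can be estimated from thousands of cheap surrogate calls instead of FE runs.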

  14. MALDI Q-TOF CID MS for Diagnostic Ion Screening of Human Milk Oligosaccharide Samples

    PubMed Central

    Jovanović, Marko; Tyldesley-Worster, Richard; Pohlentz, Gottfried; Peter-Katalinić, Jasna

    2014-01-01

    Human milk oligosaccharides (HMO) represent the bioactive components of human milk, influencing the infant’s gastrointestinal microflora and immune system. Structurally, they represent a highly complex class of analyte, where the main core oligosaccharide structures are built from galactose and N-acetylglucosamine, linked by 1–3 or 1–4 glycosidic linkages and potentially modified with fucose and sialic acid residues. The core structures can be linear or branched. Additional structural complexity in samples can be induced by endogenous exoglycosidase activity or chemical procedures during the sample preparation. Here, we show that using matrix-assisted laser desorption/ionization (MALDI) quadrupole-time-of-flight (Q-TOF) collision-induced dissociation (CID) as a fast screening method, diagnostic structural information about single oligosaccharide components present in a complex mixture can be obtained. According to sequencing data on 14 out of 22 parent ions detected in a single high molecular weight oligosaccharide chromatographic fraction, 20 different oligosaccharide structure types, corresponding to over 30 isomeric oligosaccharide structures and over 100 possible HMO isomers when biosynthetic linkage variations were taken into account, were postulated. For MS/MS data analysis, we used the de novo sequencing approach using diagnostic ion analysis on reduced oligosaccharides by following known biosynthetic rules. Using this approach, de novo characterization has been achieved also for the structures, which could not have been predicted. PMID:24743894

  15. Microextraction techniques combined with capillary electrophoresis in bioanalysis.

    PubMed

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-01-01

    Over the past two decades, many environmentally sustainable sample-preparation techniques have been proposed, with the objective of reducing the use of toxic organic solvents or substituting these with environmentally friendly alternatives. Microextraction techniques (MEs), in which only a small amount of organic solvent is used, have several advantages, including reduced sample volume, analysis time, and operating costs. Thus, MEs are well adapted in bioanalysis, in which sample preparation is mandatory because of the complexity of a sample that is available in small quantities (mL or even μL only). Capillary electrophoresis (CE) is a powerful and efficient separation technique in which no organic solvents are required for analysis. Combination of CE with MEs is regarded as a very attractive environmentally sustainable analytical tool, and numerous applications have been reported over the last few decades for bioanalysis of low-molecular-weight compounds or for peptide analysis. In this paper we review the use of MEs combined with CE in bioanalysis. The review is divided into two sections: liquid and solid-based MEs. A brief practical and theoretical description of each ME is given, and the techniques are illustrated by relevant applications.

  16. Microstructure anisotropy and its effect on mechanical properties of reduced activation ferritic/martensitic steel fabricated by selective laser melting

    NASA Astrophysics Data System (ADS)

    Huang, Bo; Zhai, Yutao; Liu, Shaojun; Mao, Xiaodong

    2018-03-01

    Selective laser melting (SLM) is a promising route for the fabrication of complex reduced activation ferritic/martensitic steel components. The microstructure of SLM-built China low activation martensitic (CLAM) steel plates was observed and analyzed. Hardness, Charpy impact and tensile testing of specimens in different orientations were performed at room temperature. The results showed that the differences in mechanical properties were related to the anisotropy of the microstructure. The planar unmelted porosity at the interfaces of adjacent layers induced an opening/tensile failure mode when tensile samples parallel to the build direction were tested, whereas samples perpendicular to the build direction fractured in shear mode, with the grains being sheared at a slant angle. Moreover, the impact absorbed energy (IAE) of all impact specimens was significantly lower than that of wrought CLAM steel, and the IAE of samples perpendicular to the build direction was higher than that of samples parallel to the build direction. The impact fracture surfaces revealed that loading parallel to the build layers caused laminated tearing among the layers, whereas loading perpendicular to the layers induced intergranular fracture across the layers.

  17. Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments.

    PubMed

    Germine, Laura; Nakayama, Ken; Duchaine, Bradley C; Chabris, Christopher F; Chatterjee, Garga; Wilmer, Jeremy B

    2012-10-01

    With the increasing sophistication and ubiquity of the Internet, behavioral research is on the cusp of a revolution that will do for population sampling what the computer did for stimulus control and measurement. It remains a common assumption, however, that data from self-selected Web samples must involve a trade-off between participant numbers and data quality. Concerns about data quality are heightened for performance-based cognitive and perceptual measures, particularly those that are timed or that involve complex stimuli. In experiments run with uncompensated, anonymous participants whose motivation for participation is unknown, reduced conscientiousness or lack of focus could produce results that would be difficult to interpret due to decreased overall performance, increased variability of performance, or increased measurement noise. Here, we addressed the question of data quality across a range of cognitive and perceptual tests. For three key performance metrics-mean performance, performance variance, and internal reliability-the results from self-selected Web samples did not differ systematically from those obtained from traditionally recruited and/or lab-tested samples. These findings demonstrate that collecting data from uncompensated, anonymous, unsupervised, self-selected participants need not reduce data quality, even for demanding cognitive and perceptual experiments.
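    The three metrics compared in the study can be made concrete with a small simulation: synthetic item-response data generated under a one-parameter logistic model, with a split-half estimate of internal reliability. All distributions and sample sizes here are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def split_half_reliability(scores):
    """Split-half internal reliability with the Spearman-Brown correction.

    scores: participants x items matrix of 0/1 responses.
    """
    odd, even = scores[:, 0::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

def simulate(n_participants, n_items=40):
    """Synthetic test data: 1-PL (Rasch-like) response model."""
    ability = rng.normal(0, 1, (n_participants, 1))
    difficulty = rng.normal(0, 1, (1, n_items))
    p = 1 / (1 + np.exp(-(ability - difficulty)))
    return (rng.random((n_participants, n_items)) < p).astype(float)

# Mock "web" and "lab" samples drawn from the same population, different n
web, lab = simulate(2000), simulate(100)
for name, s in [("web", web), ("lab", lab)]:
    total = s.sum(axis=1)
    print(name, total.mean(), total.var(), split_half_reliability(s))
```

Comparing the printed mean, variance, and reliability across the two samples mirrors the paper's three key performance metrics.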

  18. GeLC-MRM quantitation of mutant KRAS oncoprotein in complex biological samples.

    PubMed

    Halvey, Patrick J; Ferrone, Cristina R; Liebler, Daniel C

    2012-07-06

    Tumor-derived mutant KRAS (v-Ki-ras-2 Kirsten rat sarcoma viral oncogene) oncoprotein is a critical driver of cancer phenotypes and a potential biomarker for many epithelial cancers. Targeted mass spectrometry analysis by multiple reaction monitoring (MRM) enables selective detection and quantitation of wild-type and mutant KRAS proteins in complex biological samples. A recently described immunoprecipitation approach (Proc. Natl. Acad. Sci. 2011, 108, 2444-2449) can be used to enrich KRAS for MRM analysis, but requires large protein inputs (2-4 mg). Here, we describe sodium dodecyl sulfate-polyacrylamide gel electrophoresis-based enrichment of KRAS in a low molecular weight (20-25 kDa) protein fraction prior to MRM analysis (GeLC-MRM). This approach reduces background proteome complexity, thus allowing mutant KRAS to be reliably quantified from low protein inputs (5-50 μg). GeLC-MRM detected KRAS mutant variants (G12D, G13D, G12V, G12S) in a panel of cancer cell lines. GeLC-MRM analysis of wild-type and mutant KRAS was linear with respect to protein input and showed low variability across process replicates (CV = 14%). Concomitant analysis of a peptide from the highly similar HRAS and NRAS proteins enabled correction of KRAS-targeted measurements for contributions from these other proteins. KRAS peptides were also quantified in fluid from benign pancreatic cysts and pancreatic cancers at concentrations from 0.08 to 1.1 fmol/μg protein. GeLC-MRM provides a robust, sensitive approach to the quantitation of mutant proteins in complex biological samples.

  19. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
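    The core computation that the GPU parallelizes, superposing a spherical wavefront from each object point at every hologram plane sample, can be sketched in NumPy. Occlusion testing and the Gaussian surface interpolation from the paper are omitted, and all parameters (wavelength, pitch, point positions) are illustrative.

```python
import numpy as np

wavelength = 633e-9             # illustrative HeNe wavelength (m)
k = 2 * np.pi / wavelength      # wavenumber
pitch = 8e-6                    # hologram sample pitch (m)
n = 256                         # hologram is n x n samples

ys, xs = np.mgrid[0:n, 0:n] * pitch
points = np.array([[1.0e-3, 1.0e-3, 0.10],   # (x, y, z) object points; z = depth
                   [0.8e-3, 1.2e-3, 0.12]])
amps = np.array([1.0, 0.7])

field = np.zeros((n, n), dtype=complex)
for (px, py, pz), a in zip(points, amps):
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)  # point-to-sample distance
    field += a * np.exp(1j * k * r) / r                     # spherical wavefront
hologram = np.angle(field)       # phase-only hologram
```

Each hologram sample is independent of the others, which is exactly what makes the problem a good fit for data-parallel GPU execution.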

  20. Absolute quantification of Dehalococcoides proteins: enzyme bioindicators of chlorinated ethene dehalorespiration.

    PubMed

    Werner, Jeffrey J; Ptak, A Celeste; Rahm, Brian G; Zhang, Sheng; Richardson, Ruth E

    2009-10-01

    The quantification of trace proteins in complex environmental samples and mixed microbial communities would be a valuable monitoring tool in countless applications, including the bioremediation of groundwater contaminated with chlorinated solvents. Measuring the concentrations of specific proteins provides unique information about the activity and physiological state of organisms in a sample. We developed sensitive (< 5 fmol), selective bioindicator assays for the absolute quantification of select proteins used by Dehalococcoides spp. when reducing carbon atoms in the common pollutants trichloroethene (TCE) and tetrachloroethene (PCE). From complex whole-sample digests of two different dechlorinating mixed communities, we monitored the chromatographic peaks of selected tryptic peptides chosen to represent 19 specific Dehalococcoides proteins. This was accomplished using multiple-reaction monitoring (MRM) assays with nano-liquid chromatography-tandem mass spectrometry (nLC-MS/MS), which provided the selectivity, sensitivity and reproducibility required to quantify Dehalococcoides proteins in complex samples. We observed reproducible peak areas (average CV = 0.14 over 4 days, n = 3) and linear responses in standard curves (n = 5, R(2) > 0.98) using synthetic peptide standards spiked into a background matrix of sediment peptides. We detected and quantified TCE reductive dehalogenase (TceA) at 7.6 ± 1.7 × 10(3) proteins cell(-1) in the KB1 bioaugmentation culture, previously thought to be lacking TceA. Fragmentation data from MS/MS shotgun proteomics experiments were helpful in developing the MRM targets. Similar shotgun proteomics data are emerging in labs around the world for many environmentally relevant microbial proteins, and these data are a valuable resource for the future development of MRM assays. We expect targeted peptide quantification in environmental samples to be a useful tool in environmental monitoring.

  1. An on-line high-performance liquid chromatography-diode-array detector-electrospray ionization-ion-trap-time-of-flight-mass spectrometry-total antioxidant capacity detection system applying two antioxidant methods for activity evaluation of the edible flowers from Prunus mume.

    PubMed

    Zhang, Xiaoxia; Lin, Zongtao; Fang, Jinggui; Liu, Meixian; Niu, Yanyan; Chen, Shizhong; Wang, Hong

    2015-10-02

    An on-line high-performance liquid chromatography-diode-array detector-electrospray ionization-ion-trap-time-of-flight-mass spectrometry-total antioxidant capacity detection (HPLC-DAD-ESI-IT-TOF-MS-TACD) system was created for the identification and evaluation of antioxidants in Prunus (P.) mume flowers. Applying this system, the HPLC fingerprint, ultraviolet (UV) spectra, mass fragmentations, and activity profiles against 1,1-diphenylpicryl-2-hydrazyl radical (DPPH•) scavenging and ferric reducing antioxidant power (FRAP) of each complex sample were obtained simultaneously after one injection. Synchronous structure identification and activity screening of complex samples were thus accomplished. In this study, 78 compounds were identified from P. mume flowers by their chromatographic behaviors, UV spectra and MS data with the assistance of standard compounds and literature reports. The DPPH and FRAP activities of 24 samples (23 different P. mume varieties and 1 related herbal medicine) were then quantified from their detailed activity profiles in the on-line system, and from the total activity of each sample extract in the off-line 96-well plate method. As a result, 21 and 32 compounds in the on-line system showed anti-oxidative effects in the DPPH and FRAP assays, respectively. The established on-line system is efficient, sensitive and reliable for determining the DPPH and FRAP antioxidant activities of individual compounds in complex samples, and therefore would be a useful and promising technique for antioxidant screening in different food and medicinal matrices. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Phylogenetic species identification in Rattus highlights rapid radiation and morphological similarity of New Guinean species.

    PubMed

    Robins, Judith H; Tintinger, Vernon; Aplin, Ken P; Hingston, Melanie; Matisoo-Smith, Elizabeth; Penny, David; Lavery, Shane D

    2014-01-01

    The genus Rattus is highly speciose, the taxonomy is complex, and individuals are often difficult to identify to the species level. Previous studies have demonstrated the usefulness of phylogenetic approaches to identification in Rattus but some species, especially among the endemics of the New Guinean region, showed poor resolution. Possible reasons for this are simple misidentification, incomplete gene lineage sorting, hybridization, and phylogenetically distinct lineages that are unrecognised taxonomically. To assess these explanations we analysed 217 samples, representing nominally 25 Rattus species, collected in New Guinea, Asia, Australia and the Pacific. To reduce misidentification problems we sequenced museum specimens from earlier morphological studies and recently collected tissues from samples with associated voucher specimens. We also reassessed vouchers from previously sequenced specimens. We inferred combined and separate phylogenies from two mitochondrial DNA regions comprising 550 base pair D-loop sequences and both long (655 base pair) and short (150 base pair) cytochrome oxidase I sequences. Our phylogenetic species identification for 17 species was consistent with morphological designations and current taxonomy thus reinforcing the usefulness of this approach. We reduced misidentifications and consequently the number of polyphyletic species in our phylogenies but the New Guinean Rattus clades still exhibited considerable complexity. Only three of our eight New Guinean species were monophyletic. We found good evidence for either incomplete mitochondrial lineage sorting or hybridization between species within two pairs, R. leucopus/R. cf. verecundus and R. steini/R. praetor. Additionally, our results showed that R. praetor, R. niobe and R. verecundus each likely encompass more than one species. 
Our study clearly points to the need for a revised taxonomy of the rats of New Guinea, based on broader sampling and informed by both morphology and phylogenetics. The remaining taxonomic complexity highlights the recent and rapid radiation of Rattus in the Australo-Papuan region. PMID:24865350


  4. Metrological activity determination of 133Ba by sum-peak absolute method

    NASA Astrophysics Data System (ADS)

    da Silva, R. L.; de Almeida, M. C. M.; Delgado, J. U.; Poledna, R.; Santos, A.; de Veras, E. V.; Rangel, J.; Trindade, O. L.

    2016-07-01

    The National Laboratory for Metrology of Ionizing Radiation provides gamma-emitting radionuclide sources standardized in activity with reduced uncertainties. Relative methods require standards to determine the sample activity, whereas absolute methods, such as the sum-peak method, do not: the activity is obtained directly with good accuracy and low uncertainties. 133Ba is used in research laboratories and in the calibration of detectors for analysis in different fields. Classical absolute methods cannot standardize 133Ba because of its complex decay scheme. Here, the sum-peak method, using gamma-ray spectrometry with a germanium detector, is applied to standardize 133Ba samples. Uncertainties lower than 1% in the activity results were obtained.
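    The principle can be sketched for the idealized case of a two-photon cascade, where the detection efficiencies cancel out of the activity estimate. This is a textbook simplification with invented numbers; 133Ba's multi-gamma decay scheme requires the fuller treatment the paper describes.

```python
# Idealized two-photon cascade: counts depend on activity A and efficiencies
# e1, e2 (simplified: each taken as the total detection efficiency of its photon)
A_true, e1, e2, t = 5.0e4, 0.05, 0.04, 1.0    # Bq, efficiencies, live time (s)

N1 = A_true * t * e1 * (1 - e2)               # photopeak 1 (photon 2 escapes)
N2 = A_true * t * e2 * (1 - e1)               # photopeak 2 (photon 1 escapes)
N12 = A_true * t * e1 * e2                    # sum peak (both detected)
T = A_true * t * (1 - (1 - e1) * (1 - e2))    # total counts in the spectrum

# Sum-peak relation: A = (T + N1*N2/N12) / t -- the efficiencies cancel,
# so no reference standard is needed
A_est = (T + N1 * N2 / N12) / t
print(A_est)   # recovers A_true in this noise-free sketch
```

Since N1·N2/N12 = A·t·(1−e1)(1−e2), adding it to T reconstructs A·t exactly, which is what makes the method absolute.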

  5. Apollo Experiment Report: Lunar-Sample Processing in the Lunar Receiving Laboratory High-Vacuum Complex

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1976-01-01

    A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample processing experiences indicate that only a few operating personnel are required to process the sample efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine and scientific, associated with handling an unknown entity such as the lunar sample. Lunar sample handling necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample were better defined. Although the complex successfully handled the processing of Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission just as the vacuum system was reaching its full potential.

  6. Rapid micro-scale proteolysis of proteins for MALDI-MS peptide mapping using immobilized trypsin

    NASA Astrophysics Data System (ADS)

    Gobom, Johan; Nordhoff, Eckhard; Ekman, Rolf; Roepstorff, Peter

    1997-12-01

    In this study we present a rapid method for tryptic digestion of proteins using micro-columns with enzyme immobilized on perfusion chromatography media. The performance of the method is exemplified with acyl-CoA-binding protein and reduced carbamidomethylated bovine serum albumin. The method proved to be significantly faster and yielded a better sequence coverage and an improved signal-to-noise ratio for the MALDI-MS peptide maps, compared to in-solution- and on-target digestion. Only a single sample transfer step is required, and therefore sample loss due to adsorption to surfaces is reduced, which is a critical issue when handling low picomole to femtomole amounts of proteins. An example is shown with on-column proteolytic digestion and subsequent elution of the digest into a reversed-phase micro-column. This is useful if the sample contains large amounts of salt or is too diluted for MALDI-MS analysis. Furthermore, by step-wise elution from the reversed-phase column, a complex digest can be fractionated, which reduces signal suppression and facilitates data interpretation in the subsequent MS analysis. The method also proved useful for consecutive digestions with enzymes of different cleavage specificity. This is exemplified with on-column tryptic digestion, followed by reversed-phase step-wise elution, and subsequent on-target V8 protease digestion.

  7. Structural mapping in statistical word problems: A relational reasoning approach to Bayesian inference.

    PubMed

    Johnson, Eric D; Tubau, Elisabet

    2017-06-01

    Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.

  8. Spectroscopic analysis of the powdery complex chitosan-iodine

    NASA Astrophysics Data System (ADS)

    Gegel, Natalia O.; Babicheva, Tatyana S.; Belyakova, Olga A.; Lugovitskaya, Tatyana N.; Shipovskaya, Anna B.

    2018-04-01

    A chitosan-iodine complex was obtained by modifying polymer powder in the vapor of an iodine-containing sorbate and was studied by electron and IR spectroscopy and optical rotatory dispersion. The electronic spectra of aqueous solutions of the modified chitosan (both freshly prepared and after a year of storage) showed intense absorption bands of triiodide and iodate ions, as well as polyiodide ions bound to the macromolecule by exciton bonding with charge transfer. Analysis of the IR spectra shows destruction of the network of intramolecular and intermolecular hydrogen bonds in the iodinated chitosan powder in comparison with the source polymer, and the formation of a new chemical substance. For example, the absorption band of deformation vibrations of the hydroxyl group disappears in the modified sample, that of the protonated amino group shifts toward shorter wavelengths, and the intensity of the stretching vibration band of the glucopyranose ring atoms is significantly reduced. Heating the modified sample at a temperature below the thermal degradation point of the polymer stabilizes the chitosan-iodine complex. Based on our studies, the hydroxyl and amino groups of the aminopolysaccharide have been identified as the centers of retention of polyiodide chains in the chitosan matrix.

  9. PEGylated Polyaniline Nanofibers: Antifouling and Conducting Biomaterial for Electrochemical DNA Sensing.

    PubMed

    Hui, Ni; Sun, Xiaotian; Niu, Shuyan; Luo, Xiliang

    2017-01-25

    Biofouling arising from nonspecific adsorption is a substantial outstanding challenge in diagnostics and disease monitoring, and antifouling sensing interfaces capable of reducing the nonspecific adsorption of proteins from biological complex samples are highly desirable. We present herein the preparation of novel composite nanofibers through the grafting of polyethylene glycol (PEG) polymer onto polyaniline (PANI) nanofibers and their application in the development of antifouling electrochemical biosensors. The PEGylated PANI (PANI/PEG) nanofibers possessed large surface area and remained conductive and at the same time demonstrated excellent antifouling performances in single protein solutions as well as complex human serum samples. Sensitive and low fouling electrochemical biosensors for the breast cancer susceptibility gene (BRCA1) can be easily fabricated through the attachment of DNA probes to the PANI/PEG nanofibers. The biosensor showed a very high sensitivity to target BRCA1 with a linear range from 0.01 pM to 1 nM and was also efficient enough to detect DNA mismatches with satisfactory selectivity. Moreover, the DNA biosensor based on the PEGylated PANI nanofibers supported the quantification of BRCA1 in complex human serum, indicating great potential of this novel biomaterial for application in biosensors and bioelectronics.

  10. General method of pattern classification using the two-domain theory

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E. (Inventor)

    1993-01-01

    Human beings judge patterns (such as images) by complex mental processes, some of which may not be known, while computing machines extract features. By representing the human judgements with simple measurements and reducing them and the machine extracted features to a common metric space and fitting them by regression, the judgements of human experts rendered on a sample of patterns may be imposed on a pattern population to provide automatic classification.

  11. General method of pattern classification using the two-domain theory

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E. (Inventor)

    1990-01-01

    Human beings judge patterns (such as images) by complex mental processes, some of which may not be known, while computing machines extract features. By representing the human judgements with simple measurements and reducing them and the machine extracted features to a common metric space and fitting them by regression, the judgements of human experts rendered on a sample of patterns may be imposed on a pattern population to provide automatic classification.

  12. The effect of stoichiometry on Cu-Zn ordering kinetics in Cu2ZnSnS4 thin films

    NASA Astrophysics Data System (ADS)

    Rudisch, Katharina; Davydova, Alexandra; Platzer-Björkman, Charlotte; Scragg, Jonathan

    2018-04-01

    Cu-Zn disorder in Cu2ZnSnS4 (CZTS) may be responsible for the large open-circuit voltage deficit in CZTS-based solar cells. In this study, we investigated how composition-dependent defect complexes influence the order-disorder transition. A combinatorial CZTS thin film sample was produced with a cation composition gradient across the sample area. The graded sample was exposed to various temperature treatments, and the degree of order was analyzed with resonant Raman spectroscopy for compositions ranging from E- and A-type to B-, F-, and C-type CZTS. We observe that the composition has no influence on the critical temperature of the order-disorder transition but strongly affects the activation energy. Reduced activation energy is achieved for compositions with Cu/Sn > 2 or Cu/Sn < 1.8, suggesting an acceleration of cation ordering in the presence of vacancies or interstitials. This is rationalized with reference to the effect of point defects on exchange mechanisms. The implications for reducing disorder in CZTS thin films are discussed in light of the new findings.

  13. Design of a Single Channel Modulated Wideband Converter for Wideband Spectrum Sensing: Theory, Architecture and Hardware Implementation

    PubMed Central

    Liu, Weisong; Huang, Zhitao; Wang, Xiang; Sun, Weichao

    2017-01-01

    In a cognitive radio sensor network (CRSN), wideband spectrum sensing devices, which aim to exploit temporarily vacant spectrum intervals as soon as possible, are of great importance. However, increasingly high signal frequencies and wide bandwidths require an extremely high sampling rate, which may exceed the front-end bandwidth of today’s best analog-to-digital converters (ADCs). The recently proposed modulated wideband converter (MWC) is an attractive analog compressed sensing technique that can greatly reduce the sampling rate. However, the MWC has high hardware complexity owing to its parallel channel structure, especially when the number of signals increases. In this paper, we propose a single-channel modulated wideband converter (SCMWC) scheme for spectrum sensing of band-limited wide-sense stationary (WSS) signals. With one antenna or sensor, this scheme saves not only sampling rate but also hardware complexity. We then present a new SCMWC-based single-node CR prototype system, on which the spectrum sensing algorithm was tested. Experiments on our hardware prototype show that the proposed architecture leads to successful spectrum sensing, with a total sampling rate, and hardware size, equal to only one channel of the MWC. PMID:28471410

  14. Sequential sampling and biorational chemistries for management of lepidopteran pests of vegetable amaranth in the Caribbean.

    PubMed

    Clarke-Harris, Dionne; Fleischer, Shelby J

    2003-06-01

    Although the production and economic importance of vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory remains common even under weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.), Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, so a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a common k (k(c)) of 0.645. Compared with a fixed plan of 25 plants, sequential sampling recommended the same management decision on 87.5%, additional samples on 9.4%, and inaccurate recommendations on 3.1% of 32 farms, while reducing sample size by 46%. Insecticide application frequency was reduced 33-60% when management decisions were based on sampled data rather than grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) under pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
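
    The negative binomial fit mentioned above is commonly checked with a method-of-moments estimate of the aggregation parameter k from the sample mean m and variance s², i.e. k = m² / (s² − m). A minimal sketch (the larval counts below are hypothetical, not data from this study):

```python
import statistics

def estimate_k(counts):
    """Method-of-moments estimate of the negative binomial
    aggregation parameter k: k = mean^2 / (variance - mean).
    Requires overdispersion (variance > mean), the hallmark of
    aggregated insect counts."""
    m = statistics.mean(counts)
    v = statistics.variance(counts)  # sample variance (n - 1 denominator)
    if v <= m:
        raise ValueError("counts not overdispersed; negative binomial is a poor fit")
    return m * m / (v - m)

# Hypothetical larval counts per six-leaf sample unit
counts = [0, 0, 1, 3, 0, 7, 2, 0, 0, 5, 1, 0, 4, 0, 2]
k = estimate_k(counts)  # aggregation parameter; smaller k = more clumped
```

    In a sequential plan, a common k such as the reported k(c) = 0.645 then sets the stop lines for the cumulative counts.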

  15. Selective and sensitive speciation analysis of Cr(VI) and Cr(III), at sub-μgL-1 levels in water samples by electrothermal atomic absorption spectrometry after electromembrane extraction.

    PubMed

    Tahmasebi, Zeinab; Davarani, Saied Saeed Hosseiny

    2016-12-01

    In this work, electromembrane extraction in combination with electrothermal atomic absorption spectrometry (ET-AAS) was investigated for speciation, preconcentration and quantification of Cr(VI) and Cr(III) in water samples through the selective complexation of Cr(VI) with 1,5-diphenylcarbazide (DPC) as a complexing agent. DPC reduces Cr(VI) to Cr(III) ions, and the Cr(III) species are then extracted by electrokinetic migration of their cationic complex (Cr(III)-DPC) toward the negative electrode placed in the hollow fiber. Once oxidized to Cr(VI), the Cr(III) ions in the initial sample were also determined by this procedure. The influence of extraction parameters such as pH, type of organic solvent, chelating agent concentration, stirring rate, extraction time and applied voltage was evaluated following a one-at-a-time optimization approach. Under optimized conditions, the extracted analyte was quantified by ET-AAS, with acceptable linearity in the range of 0.05-5 ng mL(-1) (R(2) = 0.996) and a repeatability (%RSD) between 3.7% and 12.2% (n = 4) for 5.0 and 1.0 ng mL(-1) of Cr(VI), respectively. We obtained an enrichment factor of 110, corresponding to a recovery of 66%. The detection limit (S/N ratio of 3:1) was 0.02 ng mL(-1). Finally, this new method was successfully employed to determine Cr(III) and Cr(VI) species in real water samples. Copyright © 2016. Published by Elsevier B.V.
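
    The reported enrichment factor and recovery are linked through the acceptor-to-donor volume ratio: R(%) = EF × (V_acceptor / V_donor) × 100. A small sketch of that relation (the volumes below are hypothetical, chosen only to reproduce EF = 110 and R = 66%):

```python
def recovery_percent(enrichment_factor, v_acceptor, v_donor):
    """Recovery (%) implied by an enrichment factor (EF) and the
    acceptor/donor phase volume ratio: R = EF * (Va / Vd) * 100."""
    return enrichment_factor * (v_acceptor / v_donor) * 100.0

# EF = 110 with a 24 uL acceptor phase in a 4 mL donor sample
# (hypothetical volumes) gives R = 110 * 0.006 * 100 = 66%.
r = recovery_percent(110, 24e-6, 4e-3)
```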

  16. Tile-Based Fisher-Ratio Software for Improved Feature Selection Analysis of Comprehensive Two-Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marney, Luke C.; Siegler, William C.; Parsons, Brendon A.

    Two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC – TOFMS) is a highly capable instrumental platform that produces complex and information-rich multi-dimensional chemical data. The complex data can be overwhelming, especially when many samples (of various sample classes) are analyzed with multiple injections for each sample. Thus, the data must be analyzed in such a way as to extract the most meaningful information. The pixel-based and peak table-based algorithmic use of Fisher ratios has been used successfully in the past to reduce the multi-dimensional data down to those chemical compounds that are changing between classes relative to those that are not (i.e., chemical feature selection). We report on the initial development of a computationally fast novel tile-based Fisher-ratio software that addresses challenges due to 2D retention time misalignment without explicitly aligning the data, which is a problem for both pixel-based and peak table-based methods. Concurrently, the tile-based Fisher-ratio software maximizes the sensitivity contrast of true positives against a background of potential false positives and noise. To study this software, eight compounds, plus one internal standard, were spiked into diesel at various concentrations. The tile-based F-ratio software was able to discover all spiked analytes, within the complex diesel sample matrix with thousands of potential false positives, in each possible concentration comparison, even at the lowest absolute spiked analyte concentration ratio of 1.06.

  17. Effects of structural complexity on within-canopy light environments and leaf traits in a northern mixed deciduous forest

    NASA Astrophysics Data System (ADS)

    Fotis, A. T.; Curtis, P.

    2016-12-01

    Canopy structure influences forest productivity through its effects on the distribution of radiation and the light-induced changes in leaf physiological traits. Due to the difficulty of accessing and measuring forest canopies, few field-based studies have quantitatively linked these divergent scales of canopy functioning. The objective of our study was to investigate how canopy structure affects light profiles within a forest canopy and whether leaves of mature trees adjust morphologically and biochemically to the light environments characteristic of canopies with different structural complexity. We used a combination of light detection and ranging (LiDAR) data and hemispherical photographs to quantify canopy structure and light environments, respectively, and a telescoping pole to sample leaves. Leaf mass per area (LMA), nitrogen on an area basis (Narea) and chlorophyll on a mass basis (Chlmass) were measured in four co-dominant species (Acer rubrum, Fagus grandifolia, Pinus strobus and Quercus rubra) at different heights in plots with similar leaf area index (LAI) but contrasting canopy complexity (rugosity). We found that more complex canopies had greater porosity and reduced light variability in the midcanopy while total light interception was unchanged relative to less complex canopies. Leaves of F. grandifolia, Q. rubra, and P. strobus shifted towards sun-acclimation phenotypes with increasing canopy complexity while leaves of A. rubrum became more shade-acclimated (lower LMA) in the upper canopy of more complex stands, despite no differences in total light interception. Broadleaf species showed further acclimation by increasing Narea and reducing Chlmass as LMA increased, while P. strobus showed no change in Narea and Chlmass with increasing LMA. Our results provide new insight on how light distribution and leaf acclimation in mature trees might be altered when natural and anthropogenic disturbances cause structural changes in the canopy.

  18. Loss of knee extensor torque complexity during fatiguing isometric muscle contractions occurs exclusively above the critical torque.

    PubMed

    Pethick, Jamie; Winter, Samantha L; Burnley, Mark

    2016-06-01

    The complexity of knee extensor torque time series decreases during fatiguing isometric muscle contractions. We hypothesized that because of peripheral fatigue, this loss of torque complexity would occur exclusively during contractions above the critical torque (CT). Nine healthy participants performed isometric knee extension exercise (6 s of contraction, 4 s of rest) on six occasions for 30 min or to task failure, whichever occurred sooner. Four trials were performed above CT (trials S1-S4, S1 being the lowest intensity), and two were performed below CT (at 50% and 90% of CT). Global, central, and peripheral fatigue were quantified using maximal voluntary contractions (MVCs) with femoral nerve stimulation. The complexity of torque output was determined using approximate entropy (ApEn) and the detrended fluctuation analysis-α scaling exponent (DFA-α). The MVC torque was reduced in trials below CT [by 19 ± 4% (means ± SE) in 90%CT], but complexity did not decrease [ApEn for 90%CT: from 0.82 ± 0.03 to 0.75 ± 0.06, 95% paired-samples confidence intervals (CIs), 95% CI = -0.23, 0.10; DFA-α from 1.36 ± 0.01 to 1.32 ± 0.03, 95% CI -0.12, 0.04]. Above CT, substantial reductions in MVC torque occurred (of 49 ± 8% in S1), and torque complexity was reduced (ApEn for S1: from 0.67 ± 0.06 to 0.14 ± 0.01, 95% CI = -0.72, -0.33; DFA-α from 1.38 ± 0.03 to 1.58 ± 0.01, 95% CI 0.12, 0.29). Thus, in these experiments, the fatigue-induced loss of torque complexity occurred exclusively during contractions performed above the CT. Copyright © 2016 the American Physiological Society.

  19. Direct Imaging of Protein Organization in an Intact Bacterial Organelle Using High-Resolution Atomic Force Microscopy

    PubMed Central

    2016-01-01

    The function of bioenergetic membranes is strongly influenced by the spatial arrangement of their constituent membrane proteins. Atomic force microscopy (AFM) can be used to probe protein organization at high resolution, allowing individual proteins to be identified. However, previous AFM studies of biological membranes have typically required that curved membranes are ruptured and flattened during sample preparation, with the possibility of disruption of the native protein arrangement or loss of proteins. Imaging native, curved membranes requires minimal tip–sample interaction in both lateral and vertical directions. Here, long-range tip–sample interactions are reduced by optimizing the imaging buffer. Tapping mode AFM with high-resonance-frequency small and soft cantilevers, in combination with a high-speed AFM, reduces the forces due to feedback error and enables application of an average imaging force of tens of piconewtons. Using this approach, we have imaged the membrane organization of intact vesicular bacterial photosynthetic “organelles”, chromatophores. Despite the highly curved nature of the chromatophore membrane and lack of direct support, the resolution was sufficient to identify the photosystem complexes and quantify their arrangement in the native state. Successive imaging showed the proteins remain surprisingly static, with minimal rotation or translation over several-minute time scales. High-order assemblies of RC-LH1-PufX complexes are observed, and intact ATPases are successfully imaged. The methods developed here are likely to be applicable to a broad range of protein-rich vesicles or curved membrane systems, which are an almost ubiquitous feature of native organelles. PMID:28114766

  20. Used tire recycling to produce granulates: evaluation of occupational exposure to chemical agents.

    PubMed

    Savary, Barbara; Vincent, Raymond

    2011-10-01

    Exposure was assessed in four facilities where used tires are turned into rubber granulates. Particulate exposure levels were measured using filter samples and gravimetric analysis. In parallel, volatile organic compounds (VOCs) screening was carried out using samples taken on activated carbon supports, followed by an analysis using a gas chromatograph coupled to a spectrometric detector. The exposure level medians are between 0.58 and 3.95 mg m(-3). Clogging of the textile fiber separation systems can lead to worker exposure; in this case, the measured concentrations can reach 41 mg m(-3). However, in contrast to the data in the literature, VOC levels >1 p.p.m. were not detected. The particulate mixtures deposited on the installation surfaces are complex; some of the chemical agents are toxic to humans. The results of this study indicate significant exposure to complex mixtures of rubber dust. Optimizing exhaust ventilation systems inside the shredders, with a cyclone for example, is essential for reducing the exposure of workers in this rapidly developing sector.

  1. Ectopic beats in approximate entropy and sample entropy-based HRV assessment

    NASA Astrophysics Data System (ADS)

    Singh, Butta; Singh, Dilbag; Jaryal, A. K.; Deepak, K. K.

    2012-05-01

    Approximate entropy (ApEn) and sample entropy (SampEn) are promising techniques for extracting complex characteristics of cardiovascular variability. Ectopic beats, which originate from sites other than the normal pacemaker site, are artefacts that seriously limit heart rate variability (HRV) analysis. Approaches such as deletion and interpolation are currently used to eliminate the bias produced by ectopic beats. In this study, normal R-R interval time series of 10 healthy subjects and 10 acute myocardial infarction (AMI) patients were analysed after inserting artificial ectopic beats. The effects of ectopic beat editing by deletion, degree-zero interpolation and degree-one interpolation on ApEn and SampEn were then assessed. Adding ectopic beats (even 2%) reduced complexity, decreasing the ApEn and SampEn of both healthy and AMI patient data; the reduction depended on the level of ectopic beats. Editing ectopic beats by degree-one interpolation was found to be superior to the other methods.
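
    SampEn(m, r) is the negative natural logarithm of the conditional probability that two subsequences matching for m points (within tolerance r, self-matches excluded) also match for m + 1 points. A minimal, self-contained sketch of the statistic as it might be applied to an R-R-like series:

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series.
    B counts template pairs of length m within Chebyshev distance r
    (self-matches excluded); A counts the same pairs extended to
    length m + 1. Returns -ln(A / B); r defaults to 0.2 * SD."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count_pairs(length):
        nt = n - m  # same number of templates for both lengths
        total = 0
        for i in range(nt):
            for j in range(i + 1, nt):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) < r:
                    total += 1
        return total

    b = count_pairs(m)
    a = count_pairs(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined for short or highly irregular series
    return -math.log(a / b)

# A perfectly regular alternating series: every m-match extends to
# an (m + 1)-match, so SampEn is 0 (minimal complexity).
regular = [0, 1] * 50
```

    Lower SampEn indicates more regular, less complex dynamics, which is the direction of bias the abstract reports after ectopic beat insertion.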

  2. Improvement of Quantitative Measurements in Multiplex Proteomics Using High-Field Asymmetric Waveform Spectrometry.

    PubMed

    Pfammatter, Sibylle; Bonneil, Eric; Thibault, Pierre

    2016-12-02

    Quantitative proteomics using isobaric reagent tandem mass tags (TMT) or isobaric tags for relative and absolute quantitation (iTRAQ) provides a convenient approach to compare changes in protein abundance across multiple samples. However, the analysis of complex protein digests by isobaric labeling can be undermined by the relatively large proportion of co-selected peptide ions, which leads to distorted reporter ion ratios and affects the accuracy and precision of quantitative measurements. Here, we investigated the use of high-field asymmetric waveform ion mobility spectrometry (FAIMS) in proteomic experiments to reduce sample complexity and improve protein quantification using TMT isobaric labeling. LC-FAIMS-MS/MS analyses of human and yeast protein digests led to significant reductions in interfering ions, which increased the number of quantifiable peptides by up to 68% while significantly improving the accuracy of abundance measurements compared to conventional LC-MS/MS. The improvement in quantitative measurements using FAIMS is further demonstrated by temporal profiling of protein abundance in HEK293 cells following heat shock treatment.

  3. Insights on Antioxidant Assays for Biological Samples Based on the Reduction of Copper Complexes—The Importance of Analytical Conditions

    PubMed Central

    Marques, Sara S.; Magalhães, Luís M.; Tóth, Ildikó V.; Segundo, Marcela A.

    2014-01-01

    Total antioxidant capacity (TAC) assays are recognized as instrumental to establish the antioxidant status of biological samples; however, varying experimental conditions result in conclusions that may not be transposable to other settings. After selection of the complexing agent, reagent addition order, and buffer type and concentration, copper reducing assays were adapted to a high-throughput scheme and validated using model biological antioxidants: ascorbic acid, Trolox (a soluble analogue of vitamin E), uric acid and glutathione. A critical comparison was made based on real samples, including the NIST-909c human serum certified sample and five study samples. The validated method provided a linear range up to 100 µM Trolox (limit of detection 2.3 µM; limit of quantification 7.7 µM), with recoveries above 85% and precision <5%. With its increased sensitivity, the validated method is a sound choice for the assessment of TAC in serum samples. PMID:24968275
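
    Detection and quantification limits of the kind quoted above commonly follow the calibration-based definitions LOD = 3.3·s/slope and LOQ = 10·s/slope, where s is the residual standard deviation of the regression. A sketch with hypothetical calibration data (not the paper's measurements):

```python
import math

def calibration_limits(conc, signal):
    """Least-squares calibration line plus detection limits:
    LOD = 3.3 * s / slope, LOQ = 10 * s / slope, where s is the
    residual standard deviation of the fit (ICH-style definitions)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(conc, signal)]
    s = math.sqrt(sum(e * e for e in resid) / (n - 2))
    return slope, intercept, 3.3 * s / slope, 10 * s / slope

# Hypothetical Trolox calibration: concentration (uM) vs. response
conc = [0, 10, 25, 50, 75, 100]
signal = [0.002, 0.101, 0.248, 0.502, 0.748, 1.003]
slope, intercept, lod, loq = calibration_limits(conc, signal)
```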

  4. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine the two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to obtain a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring; however, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique achieved the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065

  5. Automated Protist Analysis of Complex Samples: Recent Investigations Using Motion and Thresholding

    DTIC Science & Technology

    2012-01-01

    Report No: CG-D-15-13. Automated Protist Analysis of Complex Samples: Recent Investigations Using Motion and Thresholding. B. Nelson, et al. U.S. Coast Guard Research & Development Center, Chelsea Street, New London, CT 06320. Distribution Statement A: Approved for public release; distribution is unlimited. January 2012.

  6. Tile-based Fisher ratio analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) data using a null distribution approach.

    PubMed

    Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E

    2015-04-07

    Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class-distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike-level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix, and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high-resolution pixel-level data. To objectively limit the tile-based F-ratio results to only those features that are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place this work among other methods in the field, results are compared to those for pixel- and peak table-based approaches.
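
    At its core, the F-ratio for a single chemical feature is the between-class variance of the class means divided by the pooled within-class variance; large values flag class-distinguishing features. A two-class sketch (the peak areas below are hypothetical):

```python
import statistics

def fisher_ratio(class_a, class_b):
    """Two-class Fisher ratio for one feature: between-class
    variance (df = 1 for two classes) over the pooled within-class
    variance. Large values indicate class-distinguishing features."""
    ma, mb = statistics.mean(class_a), statistics.mean(class_b)
    grand = statistics.mean(list(class_a) + list(class_b))
    na, nb = len(class_a), len(class_b)
    between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2  # df = k - 1 = 1
    within = (sum((x - ma) ** 2 for x in class_a)
              + sum((x - mb) ** 2 for x in class_b)) / (na + nb - 2)
    return between / within

# Hypothetical peak areas: spiked diesel vs. unspiked matrix
spiked = [105.0, 98.0, 110.0, 102.0]
matrix = [11.0, 9.0, 12.0, 10.0]
f = fisher_ratio(spiked, matrix)  # large: the feature separates the classes
```

    In the tile-based approach this statistic is computed per 2D retention-time tile and mass channel; null class comparisons then set a defensible cutoff on the resulting hit list.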

  7. Experimental demonstration of non-iterative interpolation-based partial ICI compensation in 100G RGI-DP-CO-OFDM transport systems.

    PubMed

    Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V

    2012-07-02

    We experimentally investigate the performance of a low-complexity, non-iterative, phase-noise-induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by linear interpolation between the common phase error (CPE) estimates of consecutive OFDM symbols. We experimentally study the performance of this scheme for 28 Gbaud QPSK RGI-DP-CO-OFDM employing a low-cost distributed feedback (DFB) laser. Experimental results using a DFB laser with a linewidth of 2.6 MHz demonstrate 24% and 13% improvements in transmission reach with respect to the conventional equalizer (CE) in the presence of weak and strong dispersion-enhanced phase noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme, in terms of the number of required complex multiplications, is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
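
    The interpolation step itself is simple: given the common phase error (CPE) of two consecutive OFDM symbols, per-sample phase estimates are drawn along the straight line between them and each time-domain sample is de-rotated accordingly. A sketch under those assumptions (function names hypothetical):

```python
import cmath

def interpolate_phase(cpe_prev, cpe_next, n_samples):
    """Linear interpolation of per-sample phase noise between the
    CPE estimates of two consecutive OFDM symbols. Returns
    n_samples values spanning [cpe_prev, cpe_next)."""
    step = (cpe_next - cpe_prev) / n_samples
    return [cpe_prev + step * k for k in range(n_samples)]

def compensate(samples, phase_est):
    """De-rotate each time-domain sample by its estimated phase."""
    return [s * cmath.exp(-1j * p) for s, p in zip(samples, phase_est)]

# A symbol whose samples were rotated by a linear phase ramp is
# restored (to within rounding) after compensation.
phases = interpolate_phase(0.0, 0.1, 4)
rx = [cmath.exp(1j * p) for p in phases]   # idealized received samples
clean = compensate(rx, phases)
```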

  8. Evaluation of partial coherence correction in X-ray ptychography

    DOE PAGES

    Burdet, Nicolas; Shi, Xiaowen; Parks, Daniel; ...

    2015-02-23

    Coherent X-ray Diffraction Imaging (CDI) and X-ray ptychography both rely heavily on a high degree of spatial coherence of the X-ray illumination for sufficient experimental data quality for reconstruction convergence. Nevertheless, the majority of available synchrotron undulator sources have a limited degree of partial coherence, leading to reduced data quality and lower speckle contrast in the coherent diffraction patterns. It is still an open question whether experimentalists should compromise the coherence properties of an X-ray source in exchange for a higher flux density at a sample, especially when some materials of scientific interest are relatively weak scatterers. A previous study has suggested that in CDI, the best strategy for the study of strong phase objects is to maintain a high degree of coherence of the illuminating X-rays because of the broadening of solution space resulting from the strong phase structures. In this article, we demonstrate the first systematic analysis of the effectiveness of partial coherence correction in ptychography as a function of the coherence properties, the degree of complexity of the illumination (degree of phase diversity of the probe) and the sample phase complexity. We have also analyzed how well ptychographic algorithms refine the X-ray probe and complex coherence functions when those variables are unknown at the start of reconstruction, for noise-free simulated data, in the case of both real-valued and highly complex objects.

  9. Activation of the edema factor of Bacillus anthracis by calmodulin: evidence of an interplay between the EF-calmodulin interaction and calcium binding.

    PubMed

    Laine, Elodie; Martínez, Leandro; Blondel, Arnaud; Malliavin, Thérèse E

    2010-10-06

    Calmodulin (CaM) is a remarkably flexible protein which can bind multiple targets in response to changes in intracellular calcium concentration. It contains four calcium-binding sites, arranged in two globular domains. The calcium affinity of CaM N-terminal domain (N-CaM) is dramatically reduced when the complex with the edema factor (EF) of Bacillus anthracis is formed. Here, an atomic explanation for this reduced affinity is proposed through molecular dynamics simulations and free energy perturbation calculations of the EF-CaM complex starting from different crystallographic models. The simulations show that electrostatic interactions between CaM and EF disfavor the opening of N-CaM domains usually induced by calcium binding. Relative calcium affinities of the N-CaM binding sites are probed by free energy perturbation, and dissociation probabilities are evaluated with locally enhanced sampling simulations. We show that EF impairs calcium binding on N-CaM through a direct conformational restraint on Site 1, by an indirect destabilization of Site 2, and by reducing the cooperativity between the two sites. Copyright © 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  10. Automated Processing of Plasma Samples for Lipoprotein Separation by Rate-Zonal Ultracentrifugation.

    PubMed

    Peters, Carl N; Evans, Iain E J

    2016-12-01

    Plasma lipoproteins are the primary means of lipid transport among tissues. Defining alterations in lipid metabolism is critical to our understanding of disease processes. However, lipoprotein measurement is limited to specialized centers. Preparation for ultracentrifugation involves the formation of complex density gradients, a process that is both laborious and subject to handling errors. We created a fully automated device capable of forming the required gradient. The design has been made freely available for download by the authors. It is inexpensive relative to commercial density gradient formers, which generally create linear gradients unsuitable for rate-zonal ultracentrifugation. The design can easily be modified to suit user requirements and any potential future improvements. Evaluation of the device showed reliable peristaltic pump accuracy and precision for fluid delivery. We also demonstrate accurate fluid layering with reduced mixing at the gradient layers when compared to usual practice by experienced laboratory personnel. Reduction in layer mixing is of critical importance, as it is crucial for reliable lipoprotein separation. The automated device significantly reduces laboratory staff input and reduces the likelihood of error. Overall, this device creates a simple and effective solution to the formation of complex density gradients. © 2015 Society for Laboratory Automation and Screening.

  11. Disentangling taste and toxicity in aposematic prey

    PubMed Central

    Holen, Øistein Haugsten

    2013-01-01

    Many predators quickly learn to avoid attacking aposematic prey. If the prey vary in toxicity, the predators may alternatively learn to capture and taste-sample prey carefully before ingesting or rejecting them (go-slow behaviour). An increase in prey toxicity is generally thought to decrease predation on prey populations. However, while prey with a higher toxin load are more harmful to ingest, they may also be easier to recognize and reject owing to greater distastefulness, which can facilitate a taste-sampling foraging strategy. Here, the classic diet model is used to study the separate effects of taste and toxicity on predator preferences. The taste-sampling process is modelled using signal detection theory. The model is applicable to automimicry and Batesian mimicry. It shows that when the defensive toxin is sufficiently distasteful, a mimicry complex may be less profitable to the predator and better protected against predation if the models are moderately toxic than if they are highly toxic. Moreover, taste mimicry can reduce the profitability of the mimicry complex and increase protection against predation. The results are discussed in relation to the selection pressures acting on prey defences and the evolution of mimicry. PMID:23256198

  12. Disentangling taste and toxicity in aposematic prey.

    PubMed

    Holen, Øistein Haugsten

    2013-02-22

    Many predators quickly learn to avoid attacking aposematic prey. If the prey vary in toxicity, the predators may alternatively learn to capture and taste-sample prey carefully before ingesting or rejecting them (go-slow behaviour). An increase in prey toxicity is generally thought to decrease predation on prey populations. However, while prey with a higher toxin load are more harmful to ingest, they may also be easier to recognize and reject owing to greater distastefulness, which can facilitate a taste-sampling foraging strategy. Here, the classic diet model is used to study the separate effects of taste and toxicity on predator preferences. The taste-sampling process is modelled using signal detection theory. The model is applicable to automimicry and Batesian mimicry. It shows that when the defensive toxin is sufficiently distasteful, a mimicry complex may be less profitable to the predator and better protected against predation if the models are moderately toxic than if they are highly toxic. Moreover, taste mimicry can reduce the profitability of the mimicry complex and increase protection against predation. The results are discussed in relation to the selection pressures acting on prey defences and the evolution of mimicry.

  13. Applications of reversible covalent chemistry in analytical sample preparation.

    PubMed

    Siegel, David

    2012-12-07

    Reversible covalent chemistry (RCC) adds another dimension to commonly used sample preparation techniques like solid-phase extraction (SPE), solid-phase microextraction (SPME), molecular imprinted polymers (MIPs) or immuno-affinity cleanup (IAC): chemical selectivity. By selecting analytes according to their covalent reactivity, sample complexity can be reduced significantly, resulting in enhanced analytical performance for low-abundance target analytes. This review gives a comprehensive overview of the applications of RCC in analytical sample preparation. The major reactions covered include reversible boronic ester formation, thiol-disulfide exchange and reversible hydrazone formation, targeting analyte groups like diols (sugars, glycoproteins and glycopeptides, catechols), thiols (cysteinyl-proteins and cysteinyl-peptides) and carbonyls (carbonylated proteins, mycotoxins). Their applications range from low abundance proteomics to reversible protein/peptide labelling to antibody chromatography to quantitative and qualitative food analysis. In discussing the potential of RCC, a special focus is on the conditions and restrictions of the utilized reaction chemistry.

  14. Automating data analysis for two-dimensional gas chromatography/time-of-flight mass spectrometry non-targeted analysis of comparative samples.

    PubMed

    Titaley, Ivan A; Ogba, O Maduka; Chibwe, Leah; Hoh, Eunha; Cheong, Paul H-Y; Simonich, Staci L Massey

    2018-03-16

    Non-targeted analysis of environmental samples, using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC/ToF-MS), poses significant data analysis challenges due to the large number of possible analytes. Non-targeted data analysis of complex mixtures is prone to human bias and is laborious, particularly for comparative environmental samples such as contaminated soil pre- and post-bioremediation. To address this research bottleneck, we developed OCTpy, a Python™ script that acts as a data reduction filter to automate GC × GC/ToF-MS data analysis from LECO ® ChromaTOF ® software and facilitates selection of analytes of interest based on peak area comparison between comparative samples. We used data from polycyclic aromatic hydrocarbon (PAH) contaminated soil, pre- and post-bioremediation, to assess the effectiveness of OCTpy in facilitating the selection of analytes that have formed or degraded following treatment. Using datasets from the soil extracts pre- and post-bioremediation, OCTpy selected, on average, 18% of the initial suggested analytes generated by the LECO ® ChromaTOF ® software Statistical Compare feature. Based on this list, 63-100% of the candidate analytes identified by a highly trained individual were also selected by OCTpy. This process was accomplished in several minutes per sample, whereas manual data analysis took several hours per sample. OCTpy automates the analysis of complex mixtures of comparative samples, reduces the potential for human error during heavy data handling and decreases data analysis time by at least tenfold. Copyright © 2018 Elsevier B.V. All rights reserved.
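
    The peak-area comparison that drives analyte selection can be illustrated with a simplified sketch (the function name, dict-based layout, and fold-change threshold are hypothetical; OCTpy itself operates on LECO ChromaTOF Statistical Compare exports):

```python
def select_changed_analytes(pre, post, fold_change=5.0):
    """Data-reduction filter for comparative samples.

    pre, post: dicts mapping analyte name -> peak area in the pre- and
    post-treatment sample (analytes absent from a sample are treated as 0).
    Returns analytes whose peak area grew ("formed") or shrank ("degraded")
    by at least `fold_change` between the two samples.
    """
    formed, degraded = [], []
    for name in set(pre) | set(post):
        a = pre.get(name, 0.0)
        b = post.get(name, 0.0)
        # guard against division by zero for analytes absent in one sample
        if b >= fold_change * max(a, 1e-12):
            formed.append(name)
        elif a >= fold_change * max(b, 1e-12):
            degraded.append(name)
    return formed, degraded
```

    A filter of this kind shortlists only strongly changed analytes, which is the step that replaces hours of manual peak-by-peak comparison.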

  15. Kinetics of CO2 Reduction over Nonstoichiometric Ceria

    PubMed Central

    2015-01-01

    The kinetics of CO2 reduction over nonstoichiometric ceria, CeO2−δ, a material of high potential for thermochemical conversion of sunlight to fuel, has been investigated for a wide range of nonstoichiometries (0.02 ≤ δ ≤ 0.25), temperatures (693 ≤ T ≤ 1273 K), and CO2 concentrations (0.005 ≤ pCO2 ≤ 0.4 atm). Samples were reduced thermally at 1773 K to probe low nonstoichiometries (δ < 0.05) and chemically at lower temperatures in a H2 atmosphere to prevent particle sintering and probe the effect of higher nonstoichiometries (δ < 0.25). For extents greater than δ = 0.2, oxidation rates at a given nonstoichiometry are hindered for the duration of the reaction, presumably because of near-order changes, such as lattice compression, as confirmed via Raman spectroscopy. Importantly, this behavior is reversible and oxidation rates are not affected at lower δ. Following thermal reduction at very low δ, however, oxidation rates are an order of magnitude slower than those of chemically reduced samples, and rates monotonically increase with the initial nonstoichiometry (up to δ = 0.05). This dependence may be attributed to the formation of stable defect complexes formed between oxygen vacancies and polarons. When the same experiments are performed with 10 mol % Gd3+ doped ceria, in which defect complexes are less prevalent than in pure ceria, this dependence is not observed. PMID:26693270

  16. Metabolic and Physiological Characteristics of Novel Cultivars from Serpentinite Seep Fluids

    NASA Astrophysics Data System (ADS)

    Nelson, B.; Chowdhury, S.; Brazelton, W. J.; Schrenk, M. O.

    2011-12-01

    Subsurface waters associated with the alteration of ultramafic rocks become highly reducing and alkaline through a process known as serpentinization. As a habitat, these fluids are in many ways metabolically constraining but can provide sufficient energy for chemolithotrophy. As part of an ongoing effort to characterize these communities, heterotrophic enrichment cultures and anaerobic microcosms were initiated with alkaline waters found at three geographically and geochemically distinct sites of active serpentinization. These include the Northern Apennine ophiolite in the Ligurian region of Italy, the Tablelands ophiolite at Gros Morne National Park, Canada, and the Coast Range ophiolite at McLaughlin Natural Reserve, California. Enrichment cultures at pH 11 yielded numerous isolates related to Proteobacteria and Firmicutes, some of which are closely related to other cultivars from high-pH and subsurface environments. Anaerobic water samples were amended with combinations of electron donors (hydrogen, complex organics, acetate) and acceptors (ferric iron, sulfate) in a block design. After several weeks of incubation, DNA was extracted from cell concentrates and community differences were compared by TRFLP. Of particular interest is the isolation of a putative iron-reducing Firmicute from samples enriched with complex organic compounds and ferric citrate. Ongoing studies are aimed at characterizing the physiology of these isolates. These data provide important insights into the metabolic potential of serpentinite subsurface ecosystems, and are a complement to culture-independent genomic analyses.

  17. MetaSort untangles metagenome assembly by reducing microbial community complexity

    PubMed Central

    Ji, Peifeng; Zhang, Yanming; Wang, Jinfeng; Zhao, Fangqing

    2017-01-01

    Most current approaches to analyse metagenomic data rely on reference genomes. Novel microbial communities extend far beyond the coverage of reference databases and de novo metagenome assembly from complex microbial communities remains a great challenge. Here we present a novel experimental and bioinformatic framework, metaSort, for effective construction of bacterial genomes from metagenomic samples. MetaSort provides a sorted mini-metagenome approach based on flow cytometry and single-cell sequencing methodologies, and employs new computational algorithms to efficiently recover high-quality genomes from the sorted mini-metagenome by the complementary of the original metagenome. Through extensive evaluations, we demonstrated that metaSort has an excellent and unbiased performance on genome recovery and assembly. Furthermore, we applied metaSort to an unexplored microflora colonized on the surface of marine kelp and successfully recovered 75 high-quality genomes at one time. This approach will greatly improve access to microbial genomes from complex or novel communities. PMID:28112173

  18. Effects of carbon-based nanoparticles (CNPs) on the fate of endocrine disrupting chemicals (EDCs) in different agricultural soils.

    NASA Astrophysics Data System (ADS)

    Stumpe, Britta; Wolski, Sabrina; Marschner, Bernd

    2013-04-01

    Nanotechnology is a major innovative scientific and economic growth area. To date, there is a lack of knowledge about possible adverse effects associated with manufactured nanomaterials in terrestrial environments. Since carbon-based nanoparticles (CNPs) and endocrine disrupting chemicals (EDCs) are known to interact strongly in wastewater, and both are released together to agricultural soils via wastewater irrigation, knowledge of CNP effects on the fate of EDCs in the soil environment is needed for further risk assessments. The overall goal of this project is to gain a better understanding of the interaction of CNPs with EDCs within the soil system. Three different soil samples were treated with different CNPs, EDCs and CNP-EDC complexes and incubated over a period of 6 weeks. EDC mineralization as well as EDC uptake by soil microorganisms was monitored to describe impacts of the nanomaterial on the fate of the EDCs. As quality controls for biological soil activity, soil respiration, enzyme activities and soil microbial biomass were monitored in all incubated soil samples. Clearly, EDCs bound in CNP complexes showed decreased mineralization: while the free EDCs showed a total mineralization of 34 to 45%, the nano-complexed EDCs were only mineralized to 12 to 15%. Since no effects of the nanomaterial on biological soil activity were observed, we conclude that the reduced EDC mineralization is directly linked to their interaction with the CNPs. Since the adsorption of EDCs to CNPs additionally reduced EDC uptake by soil microorganisms, we assume that CNPs generally form more or less recalcitrant aggregates which likely protect the associated EDCs from degradation.

  19. Effect-directed analysis supporting monitoring of aquatic environments--An in-depth overview.

    PubMed

    Brack, Werner; Ait-Aissa, Selim; Burgess, Robert M; Busch, Wibke; Creusot, Nicolas; Di Paolo, Carolina; Escher, Beate I; Mark Hewitt, L; Hilscherova, Klara; Hollender, Juliane; Hollert, Henner; Jonker, Willem; Kool, Jeroen; Lamoree, Marja; Muschket, Matthias; Neumann, Steffen; Rostkowski, Pawel; Ruttkies, Christoph; Schollee, Jennifer; Schymanski, Emma L; Schulze, Tobias; Seiler, Thomas-Benjamin; Tindall, Andrew J; De Aragão Umbuzeiro, Gisela; Vrana, Branislav; Krauss, Martin

    2016-02-15

    Aquatic environments are often contaminated with complex mixtures of chemicals that may pose a risk to ecosystems and human health. This contamination cannot be addressed with target analysis alone; tools are required to reduce this complexity and identify those chemicals that might cause adverse effects. Effect-directed analysis (EDA) is designed to meet this challenge and is attracting increasing interest in water and sediment quality monitoring. Thus, the present paper summarizes current experience with the EDA approach and the tools required, and provides practical advice on their application. The paper highlights the need for proper problem formulation and gives general advice for study design. As the EDA approach is directed by toxicity, basic principles for the selection of bioassays are given as well as a comprehensive compilation of appropriate assays, including their strengths and weaknesses. A specific focus is given to strategies for sampling, extraction and bioassay dosing since they strongly impact prioritization of toxicants in EDA. Reduction of sample complexity mainly relies on fractionation procedures, which are discussed in this paper, including quality assurance and quality control. Automated combinations of fractionation, biotesting and chemical analysis using so-called hyphenated tools can enhance the throughput and might reduce the risk of artifacts in laboratory work. The key to determining the chemical structures causing effects is analytical toxicant identification. The latest approaches, tools, software and databases for target, suspect and non-target screening as well as unknown identification are discussed together with analytical and toxicological confirmation approaches. A better understanding of optimal use and combination of EDA tools will help to design efficient and successful toxicant identification studies in the context of quality monitoring in multiply stressed environments. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Orthogonal projection approach and continuous wavelet transform-feed forward neural networks for simultaneous spectrophotometric determination of some heavy metals in diet samples.

    PubMed

    Abbasi Tarighat, Maryam

    2016-02-01

    Simultaneous spectrophotometric determination of a mixture of overlapped complexes of Fe(3+), Mn(2+), Cu(2+), and Zn(2+) ions with 2-(3-hydroxy-1-phenyl-but-2-enylideneamino)pyridine-3-ol (HPEP) by orthogonal projection approach-feed forward neural network (OPA-FFNN) and continuous wavelet transform-feed forward neural network (CWT-FFNN) is discussed. Complexes of the ions with HPEP were formed with varying reagent concentration, pH and color-formation time to ensure completion of the complexation reactions. It was found that at 5.0 × 10(-4) mol L(-1) HPEP and pH 9.5, the complexation reactions were complete 10 min after mixing. The spectral data were analyzed using partial response plots, and the identified non-linearity was modeled using FFNN. The number of OPA-FFNN and CWT-FFNN inputs was reduced using the dissimilarity pure spectra from OPA and selected wavelet coefficients. Once the pure dissimilarity plots and optimal wavelet coefficients were selected, different ANN models were employed for the calculation of the final calibration models. The performance of the two approaches was tested in terms of root mean square error of prediction (RMSE%) values using synthetic solutions. Under the working conditions, the proposed methods were successfully applied to the simultaneous determination of metal ions in different vegetable and foodstuff samples. The results show that OPA-FFNN and CWT-FFNN were effective in simultaneously determining Fe(3+), Mn(2+), Cu(2+), and Zn(2+) concentrations. Concentrations of metal ions in the samples were also determined by flame atomic absorption spectrometry (FAAS); the amounts obtained by the proposed methods were in good agreement with those obtained by FAAS. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Dietary arginine silicate inositol complex inhibits periodontal tissue loss in rats with ligature-induced periodontitis.

    PubMed

    Dundar, Serkan; Eltas, Abubekir; Hakki, Sema S; Malkoc, Sıddık; Uslu, M Ozay; Tuzcu, Mehmet; Komorowski, James; Ozercan, I Hanifi; Akdemir, Fatih; Sahin, Kazim

    2016-01-01

    The purpose of this study was to induce experimental periodontitis in rats previously fed diets containing arginine silicate inositol (ASI) complex and examine the biochemical, immunological, and radiological effects. Fifty-two 8-week-old female Sprague Dawley rats were equally divided into four groups. The control group included those fed a standard rat diet with no operation performed during the experiment. The periodontitis, ASI I, and ASI II groups were subjected to experimental periodontitis induction for 11 days after being fed a standard rat diet alone, a diet containing 1.81 g/kg ASI complex, or a diet containing 3.62 g/kg ASI complex, respectively, for 8 weeks. Throughout the 11-day duration of periodontitis induction, all rats were fed standard feed. The rats were euthanized on the eleventh day, and their tissue and blood samples were collected. In the periodontitis group, elevated tissue destruction parameters and reduced tissue formation parameters were found, as compared to the ASI groups. Levels of enzymes, cytokines, and mediators associated with periodontal tissue destruction were lower in rats fed a diet containing ASI complex after experimental periodontitis. These results indicate that ASI complex could be an alternative agent for host modulation.

  2. The Construction and Validation of All-Atom Bulk-Phase Models of Amorphous Polymers Using the TIGER2/TIGER3 Empirical Sampling Method

    PubMed Central

    Li, Xianfeng; Murthy, Sanjeeva; Latour, Robert A.

    2011-01-01

    A new empirical sampling method termed “temperature intervals with global exchange of replicas and reduced radii” (TIGER3) is presented and demonstrated to efficiently equilibrate entangled long-chain molecular systems such as amorphous polymers. The TIGER3 algorithm is a replica exchange method in which simulations are run in parallel over a range of temperature levels at and above a designated baseline temperature. The replicas sampled at temperature levels above the baseline are run through a series of cycles, each containing four stages: heating, sampling, quenching, and temperature level reassignment. The method allows chain segments to pass through one another at elevated temperature levels during the sampling stage by reducing the van der Waals radii of the atoms, thus eliminating chain entanglement problems. Atomic radii are then returned to their regular values and re-equilibrated at elevated temperature prior to quenching to the baseline temperature. Following quenching, replicas are compared using a Metropolis Monte Carlo exchange process for the construction of an approximate Boltzmann-weighted ensemble of states and then reassigned to the elevated temperature levels for additional sampling. Further system equilibration is performed by periodic implementation of the previously developed TIGER2 algorithm between cycles of TIGER3, which applies thermal cycling without radii reduction. When coupled with a coarse-grained modeling approach, the combined TIGER2/TIGER3 algorithm yields fast equilibration of bulk-phase models of amorphous polymers, even for polymers with complex, highly branched structures. The developed method was tested by modeling a polyethylene melt. The calculated properties of chain conformation and chain segment packing agreed well with published data. The method was also applied to generate equilibrated structural models of three increasingly complex amorphous polymer systems: poly(methyl methacrylate), poly(butyl methacrylate), and DTB-succinate copolymer. The calculated glass transition temperature (Tg) and structural parameter profile (S(q)) for each resulting polymer model were found to be in close agreement with experimental Tg values and structural measurements obtained by X-ray diffraction, thus validating that the developed methods provide realistic models of amorphous polymer structure. PMID:21769156
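
    The Metropolis Monte Carlo exchange step mentioned above can be sketched in its standard replica-exchange form (a simplification: the TIGER methods compare quenched replicas at the baseline temperature, which this generic textbook criterion does not capture):

```python
import math
import random

def accept_exchange(E_i, E_j, T_i, T_j, k_B=1.0):
    """Standard Metropolis acceptance test for swapping two replicas.

    E_i, E_j: potential energies of the replicas currently at
    temperatures T_i and T_j (in units where k_B defaults to 1).
    Returns True if the swap is accepted.
    """
    beta_i = 1.0 / (k_B * T_i)
    beta_j = 1.0 / (k_B * T_j)
    delta = (beta_i - beta_j) * (E_j - E_i)
    # accept downhill swaps outright, uphill swaps with probability exp(-delta)
    return delta <= 0 or random.random() < math.exp(-delta)
```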

  3. Physical Samples and Persistent Identifiers: The Implementation of the International Geo Sample Number (IGSN) Registration Service in CSIRO, Australia

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya; Klump, Jens; Tey, Victor; Fraser, Ryan

    2016-04-01

    Physical samples such as minerals, soil, rocks, water, air and plants are important observational units for understanding the complexity of our environment and its resources. They are usually collected and curated by different entities, e.g., individual researchers, laboratories, state agencies, or museums. Persistent identifiers may facilitate access to physical samples that are scattered across various repositories. They are essential to locate samples unambiguously and to share their associated metadata and data systematically across the Web. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying physical samples. The IGSNs of physical samples are registered by end-users (e.g., individual researchers, data centers and projects) through allocating agents. Allocating agents are the institutions acting on behalf of the implementing organization (IGSN e.V.). The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is one of the allocating agents in Australia. To implement IGSN in our organisation, we developed a RESTful service and a metadata model. The web service enables a client to register sub-namespaces and multiple samples, and retrieve samples' metadata programmatically. The metadata model provides a framework in which different types of samples may be represented. It is generic and extensible, and may therefore be applied in the context of multi-disciplinary projects. The metadata model has been implemented as an XML schema and a PostgreSQL database. The schema is used to handle sample registration requests and to disseminate their metadata, whereas the relational database is used to preserve the metadata records. The metadata schema leverages existing controlled vocabularies to minimize the scope for error and incorporates some simplifications to reduce the complexity of the schema implementation. The solutions developed have been applied and tested in the context of two sample repositories in CSIRO, the Capricorn Distal Footprints project and the Rock Store.
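
    A minimal sketch of assembling an XML metadata record for a sample registration request (the element names and the IGSN value are purely illustrative; the actual CSIRO schema defines its own elements and controlled vocabularies):

```python
import xml.etree.ElementTree as ET

def build_sample_record(igsn, name, sample_type, collector):
    """Assemble an illustrative XML metadata record for a physical sample.

    All element names below are hypothetical placeholders standing in for
    a real registration schema's elements.
    """
    root = ET.Element("sample")
    ET.SubElement(root, "igsn").text = igsn
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "sampleType").text = sample_type
    ET.SubElement(root, "collector").text = collector
    return ET.tostring(root, encoding="unicode")

# such a record would be POSTed to the allocating agent's registration
# endpoint, which validates it against the schema before minting the IGSN
```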

  4. Electrical Chips for Biological Point-of-Care Detection.

    PubMed

    Reddy, Bobby; Salm, Eric; Bashir, Rashid

    2016-07-11

    As the future of health care diagnostics moves toward more portable and personalized techniques, there is immense potential to harness the power of electrical signals for biological sensing and diagnostic applications at the point of care. Electrical biochips can be used to both manipulate and sense biological entities, as they can have several inherent advantages, including on-chip sample preparation, label-free detection, reduced cost and complexity, decreased sample volumes, increased portability, and large-scale multiplexing. The advantages of fully integrated electrical biochip platforms are particularly attractive for point-of-care systems. This review summarizes these electrical lab-on-a-chip technologies and highlights opportunities to accelerate the transition from academic publications to commercial success.

  5. Reduced Mitochondrial Activity is Early and Steady in the Entorhinal Cortex but it is Mainly Unmodified in the Frontal Cortex in Alzheimer's Disease.

    PubMed

    Armand-Ugon, Mercedes; Ansoleaga, Belen; Berjaoui, Sara; Ferrer, Isidro

    2017-01-01

    It is well established that mitochondrial damage plays a role in the pathophysiology of Alzheimer's disease (AD). However, studies carried out in humans barely contemplate regional differences with disease progression. Our objective was to study the expression of selected nuclear genes encoding subunits of the mitochondrial complexes, and the activity of those complexes, in AD in two regions: the entorhinal cortex (EC) and frontal cortex area 8 (FC). Frozen samples from 148 cases were processed for gene expression by qRT-PCR and for determination of the individual activities of mitochondrial complexes I, II, IV and V using commercial kits and home-made assays. Decreased expression of NDUFA2, NDUFB3, UQCR11, COX7C, ATPD, ATP5L and ATP5O, covering subunits of complexes I, II, IV and V, occurs in total homogenates of the EC in AD stages V-VI when compared with stages I-II. However, reduced activity of complexes I, II and V in isolated mitochondria occurs as early as stages I-II when compared with middle-aged individuals in the EC. In contrast, no alterations in the expression of the same genes and no alterations in the activity of mitochondrial complexes are found in the FC in the same series. Different mechanisms of impaired energy metabolism may occur in AD: one of them, represented by the EC, is the result of primary and early alteration of mitochondria; the other is probably the result, at least in part, of decreased functional input and is represented by hypometabolism in the FC in AD patients aged 86 or younger. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  6. Practical experiences with an extended screening strategy for genetically modified organisms (GMOs) in real-life samples.

    PubMed

    Scholtens, Ingrid; Laurensse, Emile; Molenaar, Bonnie; Zaaijer, Stephanie; Gaballo, Heidi; Boleij, Peter; Bak, Arno; Kok, Esther

    2013-09-25

    Nowadays most animal feed products imported into Europe have a GMO (genetically modified organism) label. This means that they contain European Union (EU)-authorized GMOs. For enforcement of these labeling requirements, it is necessary, with the rising number of EU-authorized GMOs, to perform an increasing number of analyses. In addition, it is necessary to test products for the potential presence of EU-unauthorized GMOs. Analysis for EU-authorized and -unauthorized GMOs in animal feed has thus become laborious and expensive. Initial screening steps may reduce the number of GMO identification methods that need to be applied, but with the increasing diversity of GMOs, element-based screening has also become more complex. For the present study, an informative, detailed 24-element screening and subsequent identification strategy was applied to 50 animal feed samples. Almost all feed samples were labeled as containing GMO-derived materials. The main goal of the study was therefore to investigate whether a detailed screening strategy would reduce the number of subsequent identification analyses. An additional goal was to test the samples in this way for the potential presence of EU-unauthorized GMOs. Finally, to test the robustness of the approach, eight of the samples were tested in a concise interlaboratory study. No significant differences were found between the results of the two laboratories.

  7. LIGHT-SABRE enables efficient in-magnet catalytic hyperpolarization

    NASA Astrophysics Data System (ADS)

    Theis, Thomas; Truong, Milton; Coffey, Aaron M.; Chekmenev, Eduard Y.; Warren, Warren S.

    2014-11-01

    Nuclear spin hyperpolarization overcomes the sensitivity limitations of traditional NMR and MRI, but the most general method demonstrated to date (dynamic nuclear polarization) has significant limitations in scalability, cost, and complex apparatus design. As an alternative, signal amplification by reversible exchange (SABRE) of parahydrogen on transition metal catalysts can hyperpolarize a variety of substrates, but to date this scheme has required transfer of the sample to low magnetic field or very strong RF irradiation. Here we demonstrate "Low-Irradiation Generation of High Tesla-SABRE" (LIGHT-SABRE) which works with simple pulse sequences and low power deposition; it should be usable at any magnetic field and for hyperpolarization of many different nuclei. This approach could drastically reduce the cost and complexity of producing hyperpolarized molecules.

  8. Damage Detection in Rotorcraft Composite Structures Using Thermography and Laser-Based Ultrasound

    NASA Technical Reports Server (NTRS)

    Anastasi, Robert F.; Zalameda, Joseph N.; Madaras, Eric I.

    2004-01-01

    New rotorcraft structural composite designs incorporate lower structural weight, reduced manufacturing complexity, and improved threat protection. These new structural concepts require nondestructive evaluation inspection technologies that can potentially be field-portable and able to inspect complex geometries for damage or structural defects. Two candidate technologies were considered: Thermography and Laser-Based Ultrasound (Laser UT). Thermography and Laser UT have the advantage of being non-contact inspection methods, with Thermography being a full-field imaging method and Laser UT a point scanning technique. These techniques were used to inspect composite samples that contained both embedded flaws and impact damage of various size and shape. Results showed that the inspection techniques were able to detect both embedded and impact damage with varying degrees of success.

  9. Characterization of uranium redox state in organic-rich Eocene sediments.

    PubMed

    Cumberland, Susan A; Etschmann, Barbara; Brugger, Joël; Douglas, Grant; Evans, Katy; Fisher, Louise; Kappen, Peter; Moreau, John W

    2018-03-01

    The presence of organic matter (OM) has a profound impact on uranium (U) redox cycling, either limiting or promoting the mobility of U via binding, reduction, or complexation. To understand the interactions between OM and U, we characterised U oxidation state and speciation in nine OM-rich sediment cores (18 samples), plus a lignite sample from the Mulga Rock polymetallic deposit in Western Australia. Uranium was unevenly dispersed within the analysed samples, with 84% of the total U occurring in samples containing >21 wt % OM. Analyses of U speciation, including X-ray absorption spectroscopy and bicarbonate extractions, revealed that U existed predominantly (∼71%) as U(VI), despite the low pH (4.5) and nominally reducing conditions within the sediments. Furthermore, low extractability by water, but high extractability by a bicarbonate solution, indicated a strong association of U with particulate OM. The unexpectedly high proportion of U(VI) relative to U(IV) within the OM-rich sediments implies that OM itself does not readily reduce U, and that the reduction of U is not a requirement for immobilizing uranium in OM-rich deposits. The fact that OM can play a significant role in limiting the mobility and reduction of U(VI) in sediments is important for both U-mining and remediation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Combustion of organic matter in Mars analogs using SAM-like techniques

    NASA Astrophysics Data System (ADS)

    Stern, J. C.; McAdam, A.; Mahaffy, P. R.; Steele, A.

    2012-12-01

    The combustion experiment on the Sample Analysis at Mars (SAM) suite on Curiosity will heat a sample of Mars regolith in the presence of oxygen and measure the carbon isotopic composition (δ13C) of the evolved CO2 using the Tunable Laser Spectrometer (TLS). The degree to which the δ13C of the sample is representative of any organic carbon present depends on a) whether complete combustion has been achieved, and b) the simultaneous presence of inorganic, or mineralogical, carbon in the sample, and our ability to quantify its contribution to the bulk δ13C. To optimize and characterize combustion of a variety of organic molecules in a range of rock matrices, combustion experiments simulating those to be performed on SAM were conducted at NASA Goddard. CO2 gas generated by heating Mars analogs in a SAM-like oven in the presence of oxygen on a laboratory breadboard was captured and analyzed via IRMS for δ13C. These values were compared to bulk and total organic carbon (TOC) abundance and δ13C values using commercial flash combustion EA-IRMS techniques to determine whether quantitative conversion of reduced carbon to CO2 was achieved. Factors contributing to incomplete combustion and isotopic fractionation include structural complexity of reduced organics, their thermal decomposition temperatures, and mineral-organic associations. An additional consideration must be made for unintentional combustion by oxidizing salts (perchlorates), which may partially or totally oxidize reduced organic compounds to CO2, depending on soil perchlorate concentration, sample matrix, and how refractory the organics are. Thus, to investigate the oxidizing potential of a salt known to exist on the Martian surface, laboratory breadboard experiments heating simple and complex organics in the presence of Mg perchlorate were performed using a SAM-like oven coupled to a Hiden Mass Spectrometer and gas collection manifold. Samples were heated in the absence and presence of Mg perchlorate to ~900 °C, and mass spectral data were monitored for O2, CO2, CO, and chlorinated hydrocarbons. If CO2 was produced by perchlorate-induced oxidation of organics, a second experiment was conducted and CO2 was captured for δ13C analysis. These results could help determine whether δ13C of CO2 evolved during decomposition of organics could provide useful information in lieu of the organics themselves, in the case of the coexistence of organics with highly oxidizing materials in the regolith.

  11. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
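The reduced-PCE idea above can be sketched in a few lines: fit a low-order polynomial chaos expansion to model outputs by least squares, then keep only the terms whose coefficients are significant. The toy two-parameter "model", the Hermite basis, and the significance cutoff below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # toy response surface standing in for a watershed model output
    return 1.0 + 2.0 * x1 + 0.5 * x1 * x2

def hermite(n, x):
    # probabilists' Hermite polynomials, orthogonal for standard-normal inputs
    if n == 0:
        return np.ones_like(x)
    if n == 1:
        return x
    return x * hermite(n - 1, x) - (n - 1) * hermite(n - 2, x)

# sample the two uncertain parameters and evaluate the model
n = 200
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = model(x1, x2)

# full second-order PCE basis in two variables: products H_i(x1) * H_j(x2)
terms = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
A = np.column_stack([hermite(i, x1) * hermite(j, x2) for i, j in terms])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# "reduce" the expansion: keep only terms with non-negligible coefficients
kept = [(t, c) for t, c in zip(terms, coef) if abs(c) > 1e-6]
```

Because the toy model is exactly representable in this basis, the reduced expansion recovers the three contributing terms and discards the rest.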

  12. A method for feature selection of APT samples based on entropy

    NASA Astrophysics Data System (ADS)

    Du, Zhenyu; Li, Yihong; Hu, Jinsong

    2018-05-01

    By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs, whose logical relationships are fixed, whose number of logical items cannot change, and which are large in scale and cannot be generated automatically from samples. At the same time, it reduces the time consumed processing redundant and useless APT samples, improves the sharing rate of analysis information, and supports an active response to a complex and volatile APT attack landscape. The samples were divided into an experimental set and a training set, and the algorithm was used to generate logical expressions for the training set with the IOC_Aware plug-in; the generated expressions were then compared against detection results. The experimental results show that the algorithm is effective and can improve detection.
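As a hedged illustration of the general entropy-based feature selection the abstract names (not the authors' IOCG algorithm), the following ranks candidate indicator features of malware samples by information gain; the toy features and labels are invented.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list, in bits
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature):
    # split samples on one feature's value and measure the drop in entropy
    base = entropy(labels)
    by_value = {}
    for s, y in zip(samples, labels):
        by_value.setdefault(s[feature], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return base - remainder

# toy APT-sample features: each dict is one sample's binary indicators
samples = [
    {"c2_domain": 1, "packer": 0},
    {"c2_domain": 1, "packer": 1},
    {"c2_domain": 0, "packer": 0},
    {"c2_domain": 0, "packer": 1},
]
labels = ["apt", "apt", "benign", "benign"]

# rank features by information gain; c2_domain perfectly separates the classes
ranked = sorted(samples[0],
                key=lambda f: information_gain(samples, labels, f),
                reverse=True)
```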

  13. Navigating complex sample analysis using national survey data.

    PubMed

    Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo

    2012-01-01

    The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. The goal of national surveys is to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and to provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides necessary estimates of variability within small groups. Use of weights without complex samples accurately estimates population means and frequencies from the sample after accounting for over- or undersampling of specific groups. Weighting alone, however, leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than a sample. The SPSS Complex Samples procedure allows inclusion of all sampling design elements: stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. With sophisticated data analysis techniques, the use of these data can be optimized.
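The weighting argument can be made concrete with a small simulation: when one subgroup is oversampled, the unweighted sample mean is biased, while weighting each case by the inverse of its selection probability recovers the population mean (weighting alone, as the article notes, still misstates variability). The population and sampling rates below are invented for illustration; this is not the SPSS Complex Samples procedure.

```python
import random

random.seed(1)

# population: 90% group A (mean 10), 10% group B (mean 20)
population = [("A", random.gauss(10, 1)) for _ in range(9000)] + \
             [("B", random.gauss(20, 1)) for _ in range(1000)]
pop_mean = sum(v for _, v in population) / len(population)

# complex design: oversample group B at ten times the rate of group A;
# each selected case carries weight = 1 / P(selection)
sample = [(g, v, 1 / 0.01 if g == "A" else 1 / 0.10)
          for g, v in population
          if random.random() < (0.01 if g == "A" else 0.10)]

# unweighted mean is pulled toward the oversampled group B
unweighted = sum(v for _, v, _ in sample) / len(sample)
# weighted mean undoes the unequal selection probabilities
weighted = (sum(w * v for _, v, w in sample) /
            sum(w for _, v, w in sample))
```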

  14. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    PubMed

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

    The expected value of sample information (EVSI) can help prioritize research, but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model-fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave estimates of EVPPI similar to those of the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
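A minimal sketch of the regression step: in the approach of Strong and colleagues, EVPPI is estimated by regressing PSA net-benefit samples on the parameter(s) of interest and comparing the expected maximum of the fitted values with the maximum of the expected values. For self-containment this sketch uses an ordinary polynomial least-squares fit as a stand-in for a GAM, and a toy two-strategy net-benefit model rather than CEPAC.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
theta = rng.normal(0.0, 1.0, n)          # parameter of interest
noise = rng.normal(0.0, 2.0, (n, 2))     # all remaining uncertainty

# toy PSA output: net monetary benefit of two strategies per simulation
nb = np.column_stack([1.0 + 0.0 * theta,
                      0.5 + 1.0 * theta]) + noise

# value under current information: pick the strategy that is best on average
baseline = nb.mean(axis=0).max()

# regression step: estimate E[NB_d | theta] with a cubic least-squares fit
# (a stand-in for the GAM used in the paper)
X = np.column_stack([theta**k for k in range(4)])
fitted = X @ np.linalg.lstsq(X, nb, rcond=None)[0]

# EVPPI: expected gain from choosing the best strategy after learning theta
evppi = fitted.max(axis=1).mean() - baseline
```

For this toy model the analytic EVPPI is about 0.2, so the estimate should be positive and modest.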

  15. Low light adaptation: energy transfer processes in different types of light harvesting complexes from Rhodopseudomonas palustris.

    PubMed

    Moulisová, Vladimíra; Luer, Larry; Hoseinkhani, Sajjad; Brotosudarmo, Tatas H P; Collins, Aaron M; Lanzani, Guglielmo; Blankenship, Robert E; Cogdell, Richard J

    2009-12-02

    Energy transfer processes in photosynthetic light harvesting 2 (LH2) complexes isolated from the purple bacterium Rhodopseudomonas palustris grown at different light intensities were studied by ground state and transient absorption spectroscopy. The decomposition of ground state absorption spectra shows contributions from the B800 and B850 bacteriochlorophyll (BChl) a rings, the latter component splitting into a low-energy and a high-energy band in samples grown under low light (LL) conditions. A spectral analysis reveals strong inhomogeneity of the B850 excitons in the LL samples that is well reproduced by an exponential-type distribution. Transient spectra show a bleach of both the low-energy and high-energy bands, together with the respective blue-shifted exciton-to-biexciton transitions. The different spectral evolutions were analyzed by a global fitting procedure. Energy transfer from B800 to B850 occurs in a mono-exponential process, and the rate of this process is only slightly reduced in LL compared to high light samples. In LL samples, spectral relaxation of the B850 exciton follows strongly nonexponential kinetics that can be described by a reduction of the bleach of the high-energy excitonic component and a red-shift of the low-energy one. We explain these spectral changes by picosecond exciton relaxation caused by a small coupling parameter of the excitonic splitting of the BChl a molecules to the surrounding bath. The splitting of exciton energy into two excitonic bands in the LL complex is most probably caused by the heterogeneous composition of LH2 apoproteins, which gives some of the BChls in the B850 ring B820-like site energies and causes disorder in the LH2 structure.

  16. The effect of muscle fatigue and low back pain on lumbar movement variability and complexity.

    PubMed

    Bauer, C M; Rast, F M; Ernst, M J; Meichtry, A; Kool, J; Rissanen, S M; Suni, J H; Kankaanpää, M

    2017-04-01

    Changes in movement variability and complexity may reflect an adaptation strategy to fatigue. One unresolved question is whether this adaptation is hampered by the presence of low back pain (LBP). This study investigated whether changes in movement variability and complexity after fatigue are influenced by the presence of LBP. It is hypothesised that pain-free people and people suffering from LBP differ in their response to fatigue. The effect of an isometric endurance test on lumbar movement was tested in 27 pain-free participants and 59 participants suffering from LBP. Movement variability and complexity were quantified with %determinism and sample entropy of lumbar angular displacement and velocity. Generalized linear models were fitted for each outcome. Bayesian estimation of the group-fatigue effect with 95% highest posterior density intervals (95%HPDI) was performed. After fatiguing, %determinism decreased and sample entropy increased in the pain-free group compared to the LBP group. The corresponding group-fatigue effects were 3.7 (95%HPDI: 2.3-7.1) and -1.4 (95%HPDI: -2.7 to -0.1). These effects manifested in angular velocity, but not in angular displacement. The effects indicate that pain-free participants showed more complex and less predictable lumbar movement with a lower degree of structure in its variability following fatigue, while participants suffering from LBP did not. These may be physiological responses to avoid overload of fatigued tissue and increase endurance, or a consequence of reduced movement control caused by fatigue. Copyright © 2017 Elsevier Ltd. All rights reserved.
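Sample entropy, one of the two complexity measures used above, can be computed directly from its definition: count pairs of templates of length m and m + 1 that match within a tolerance r, and take the negative log of their ratio. This plain-Python sketch uses common defaults (m = 2, r = 0.2 SD); the study's exact parameters are not restated here.

```python
import math

def _std(x):
    # population standard deviation
    mu = sum(x) / len(x)
    return (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5

def sample_entropy(series, m=2, r=0.2):
    # SampEn(m, r): lower values mean a more regular, predictable signal
    n = len(series)
    tol = r * _std(series)  # tolerance as a fraction of the signal SD

    def count_matches(length):
        # number of template pairs whose Chebyshev distance is within tol
        templates = [series[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= tol:
                    hits += 1
        return hits

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b)
```

A slow sine wave, for instance, yields a small SampEn, reflecting its high regularity.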

  17. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
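The SISR algorithm itself is compact: propagate particles through the process model, weight them by the observation likelihood, and resample. The scalar population model, noise levels, and priors below are invented for illustration and are much simpler than the structured state-space models the study fits.

```python
import math
import random

random.seed(3)

def simulate(T=25, r=1.05, n0=100.0, sd=5.0):
    # simulate a growing population with process noise and noisy counts
    states, counts = [], []
    n = n0
    for _ in range(T):
        n = max(1.0, r * n + random.gauss(0, sd))
        states.append(n)
        counts.append(random.gauss(n, 10.0))  # imperfect survey count
    return states, counts

def sisr(counts, n_particles=2000, r=1.05, n0=100.0, sd=5.0, obs_sd=10.0):
    particles = [n0] * n_particles
    estimates = []
    for y in counts:
        # propagate: importance-sample from the process model
        particles = [max(1.0, r * p + random.gauss(0, sd)) for p in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights) or 1.0
        probs = [w / total for w in weights]
        # resample to combat weight degeneracy (the "R" in SISR)
        particles = random.choices(particles, weights=probs, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

Run on the 25-year simulated counts, the filtered population estimates track the true (unobserved) states closely.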

  18. Experimental study of the complex resistivity and dielectric constant of chrome-contaminated soil

    NASA Astrophysics Data System (ADS)

    Liu, Haorui; Yang, Heli; Yi, Fengyan

    2016-08-01

    Heavy metals such as arsenic and chromium often contaminate soils near industrialized areas. Soil samples prepared with different water contents and chromate pollutant concentrations are often needed to test soil quality. Because the complex resistivity and complex dielectric characteristics of these samples need to be measured, the relationship between these measurements and chromium concentration as well as water content was studied. Based on the soil sample observations, the amplitude of the sample complex resistivity decreased with increasing contamination concentration and water content. The phase of the complex resistivity initially decreases and then increases with increasing contamination concentration and water content. For soil samples with the same resistivity, a higher amplitude of complex resistivity corresponds to lower water content and higher contamination concentration. The real and imaginary parts of the complex dielectric constant increase with increasing contamination concentration and water content. Note that both resistivity and complex resistivity methods are necessary to adequately evaluate pollution at various sites.

  19. Synthesis and growth mechanism of sponge-like nickel using a hydrothermal method

    NASA Astrophysics Data System (ADS)

    Shao, Bin; Yin, Xueguo; Hua, Weidong; Ma, Yilong; Sun, Jianchun; Li, Chunhong; Chen, Dengming; Guo, Donglin; Li, Kejian

    2018-05-01

    Sponge-like nickel composed of micro-chains with a diameter of 1-5 μm was selectively synthesized by the hydrothermal method, using sodium hydroxide (NaOH) as the alkaline reagent, aqueous hydrazine as the reducing agent, and citric acid as a coordination agent. The time-dependent samples prepared at different NaOH concentrations were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and Fourier transform infrared spectroscopy (FTIR). The results showed that agglomerates of nickel citrate hydrazine complex nanoplates were first precipitated and then reduced to prickly nickel micro-chains at a lower NaOH concentration, which played a role in the further formation of sponge-like nickel. A probable growth mechanism of the sponge-like nickel is also proposed. The magnetic properties of sponge-like nickel were studied using a vibrating sample magnetometer. The sponge-like nickel exhibited ferromagnetic behavior with a saturation magnetization of 43.8 emu g-1 and a coercivity of 120.7 Oe.

  20. In-vivo third-harmonic generation microscopy at 1550 nm: three-dimensional long-term time-lapse studies in living C. elegans embryos

    NASA Astrophysics Data System (ADS)

    Aviles-Espinosa, Rodrigo; Santos, Susana I. C. O.; Brodschelm, Andreas; Kaenders, Wilhelm G.; Alonso-Ortega, Cesar; Artigas, David; Loza-Alvarez, Pablo

    2011-03-01

    In-vivo microscopic long-term time-lapse studies require controlled imaging conditions to preserve sample viability. It is therefore crucial to meet specific exposure conditions, as these may limit the applicability of established techniques. In this work we demonstrate the use of third harmonic generation (THG) microscopy for long-term time-lapse three-dimensional (4D) studies in living Caenorhabditis elegans embryos employing a 1550 nm femtosecond fiber laser. We take advantage of the fact that THG requires only the existence of interfaces, a change in the refractive index, or a change in the χ3 nonlinear coefficient to generate signal; therefore no markers are required. In addition, at this wavelength the emitted THG signal is generated at visible wavelengths (516 nm), enabling the use of standard collection optics and detectors operating near their maximum efficiency. This enables a reduction of the incident light intensity at the sample plane, allowing the sample to be imaged for several hours. THG signal is obtained through all embryo development stages, providing different tissue/structure information. By means of control samples, we demonstrate that the expected water absorption at this wavelength does not severely compromise sample viability. This technique also reduces the complexity of sample preparation (i.e., genetic modification) required by established linear and nonlinear fluorescence-based techniques. We demonstrate the non-invasiveness, reduced specimen interference, and strong potential of this particular wavelength for long-term 4D recordings.

  1. Electronic Tongue—A Tool for All Tastes?

    PubMed Central

    Podrażka, Marta; Bączyńska, Ewa; Kundys, Magdalena; Jeleń, Paulina S.; Witkowska Nery, Emilia

    2017-01-01

    Electronic tongue systems are traditionally used to analyse food products, water samples, and taste-masking technologies for pharmaceuticals. In principle, their applications are almost limitless, as they can nearly eliminate the impact of interferents and can distinguish samples of extreme complexity, for example broths from different stages of fermentation. Nevertheless, their applications outside the three principal sample types are, in comparison, rather scarce. In this review, we take a closer look at the real capabilities of electronic tongue systems, what can be achieved using mixed sensor arrays, and what is gained by introducing biosensors or molecularly imprinted polymers into the matrix. We also discuss future directions, both in terms of applications and of system development, in the ever-growing trend of low-cost analysis. PMID:29301230

  2. A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.

    PubMed

    Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman

    2018-03-01

    This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P, and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime across a wide range of modes and sampling rates, from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode in the case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm exceeds the agreed tolerances by only a fraction of the sample duration. The computational load for the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
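As a hedged sketch of the kind of low-complexity detection stage such a delineator builds on (not the authors' algorithm), an R-peak detector can threshold the squared first difference of the signal and enforce a refractory period:

```python
def detect_r_peaks(signal, fs, refractory_s=0.25):
    # squared first difference emphasises the steep slopes of the QRS complex
    energy = [(signal[i] - signal[i - 1]) ** 2 for i in range(1, len(signal))]
    threshold = 0.5 * max(energy)       # crude adaptive threshold
    refractory = int(refractory_s * fs) # ignore re-triggers within 250 ms
    peaks, last = [], -refractory
    for i, e in enumerate(energy):
        if e > threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks
```

On a synthetic 4-second trace at 250 Hz with three sharp spikes, the detector returns the three spike locations (to within one sample, since it indexes the difference signal).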

  3. New approaches to the analysis of complex samples using fluorescence lifetime techniques and organized media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertz, P.R.

    Fluorescence spectroscopy is a highly sensitive and selective tool for the analysis of complex systems. In order to investigate the efficacy of several steady state and dynamic techniques for the analysis of complex systems, this work focuses on two types of complex, multicomponent samples: petrolatums and coal liquids. It is shown in these studies that dynamic, fluorescence lifetime-based measurements provide enhanced discrimination between complex petrolatum samples. Additionally, improved quantitative analysis of multicomponent systems is demonstrated via incorporation of organized media in coal liquid samples. This research provides the first systematic studies of (1) multifrequency phase-resolved fluorescence spectroscopy for dynamic fluorescence spectral fingerprinting of complex samples, and (2) the incorporation of bile salt micellar media to improve accuracy and sensitivity for characterization of complex systems. In the petroleum studies, phase-resolved fluorescence spectroscopy is used to combine spectral and lifetime information through the measurement of phase-resolved fluorescence intensity. The intensity is collected as a function of excitation and emission wavelengths, angular modulation frequency, and detector phase angle. This multidimensional information enhances the ability to distinguish between complex samples with similar spectral characteristics. Examination of the eigenvalues and eigenvectors from factor analysis of phase-resolved and steady state excitation-emission matrices, using chemometric methods of data analysis, confirms that phase-resolved fluorescence techniques offer improved discrimination between complex samples as compared with conventional steady state methods.

  4. Health technology assessment review: Computerized glucose regulation in the intensive care unit - how to create artificial control

    PubMed Central

    2009-01-01

    Current care guidelines recommend glucose control (GC) in critically ill patients. To achieve GC, many ICUs have implemented a (nurse-based) protocol on paper. However, such protocols are often complex and time-consuming, and can cause iatrogenic hypoglycemia. Computerized glucose regulation protocols may improve patient safety, efficiency, and nurse compliance. Such computerized clinical decision support systems (CDSSs) use more complex logic to provide an insulin infusion rate based on previous blood glucose levels and other parameters. A computerized CDSS for glucose control has the potential to reduce overall workload, reduce the chance of human cognitive failure, and improve glucose control. Several computer-assisted glucose regulation programs have been published recently. In order of increasing complexity, the three main types of algorithms used are computerized flowcharts, Proportional-Integral-Derivative (PID) control, and Model Predictive Control (MPC). PID is essentially a closed-loop feedback system, whereas MPC models the behavior of glucose and insulin in ICU patients. Although the best approach has not yet been determined, it should be noted that PID controllers are generally thought to be more robust than MPC systems. The computerized CDSSs that are most likely to emerge are those that are fully part of the routine workflow, use patient-specific characteristics, and apply variable sampling intervals. PMID:19849827
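A PID glucose controller of the kind the review describes can be sketched in a few lines; the gains, the one-line glucose "plant", and the units below are invented for illustration and are nothing like a clinically validated CDSS.

```python
class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, measured):
        # error > 0 means glucose is above target, so infuse more insulin
        error = measured - self.setpoint
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # insulin infusion rate cannot be negative
        return max(0.0, self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)

# crude glucose plant: drifts upward each step, insulin pulls it down
pid = PID(kp=0.05, ki=0.01, kd=0.0, setpoint=110.0, dt=1.0)
glucose, trace = 180.0, []
for _ in range(200):
    insulin = pid.update(glucose)
    glucose += 1.0 - 2.0 * insulin   # endogenous rise minus insulin effect
    trace.append(glucose)
```

With these gains the loop is stable: the integral term learns the constant upward drift, and glucose settles near the 110 mg/dL setpoint.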

  5. Medicare home health payment reform may jeopardize access for clinically complex and socially vulnerable patients.

    PubMed

    Rosati, Robert J; Russell, David; Peng, Timothy; Brickner, Carlin; Kurowski, Daniel; Christopher, Mary Ann; Sheehan, Kathleen M

    2014-06-01

    The Affordable Care Act directed Medicare to update its home health prospective payment system to reflect more recent data on costs and use of services, an exercise known as rebasing. As a result, the Centers for Medicare and Medicaid Services will reduce home health payments 3.5 percent per year in the period 2014-17. To determine the impact that these reductions could have on beneficiaries using home health care, we examined the Medicare reimbursement margins and the use of services in a national sample of 96,621 episodes of care provided by twenty-six not-for-profit home health agencies in 2011. We found that patients with clinically complex conditions and social vulnerability factors, such as living alone, had substantially higher service delivery costs than other home health patients. Thus, socially vulnerable patients with complex conditions represent less profit (lower-to-negative Medicare margins) for home health agencies. This financial disincentive could reduce such patients' access to care as Medicare payments decline. Policy makers should consider the unique characteristics of these patients and ensure their continued access to Medicare's home health services when planning rebasing and future adjustments to the prospective payment system. Project HOPE—The People-to-People Health Foundation, Inc.

  6. Single Piezo-Actuator Rotary-Hammering (SPaRH) Drill

    NASA Technical Reports Server (NTRS)

    Sherrit, Stewart; Domm, Lukas; Bao, Xiaoqi; Bar-Cohen, Yoseph; Chang, Zensheu; Badescu, Mircea

    2012-01-01

    The search for present or past life in the Universe is one of the most important objectives of NASA's exploration missions. Drills for subsurface sampling of rocks, ice, and permafrost are an essential tool for astrobiology studies on other planets. Increasingly, it is recognized that drilling via a combination of rotation and hammering offers an efficient and effective rapid penetration mechanism. The rotation provides an intrinsic method for removal of cuttings from the borehole, while the impact and shear forces aid in fracturing the penetrated medium. Conventional drills that use a single actuator are based on a complex mechanism with many parts; their use in future missions involves greater risk of failure and/or may require lubrication that can introduce contamination. In this paper, a compact drill is reported that uses a single piezoelectric actuator to produce hammering and rotation of the bit. A horn with asymmetric grooves was designed to impart a longitudinal (hammering) force and a transverse (rotation) force to a keyed free mass. The drill requires low axial pre-load, since the hammering impacts fracture the rock under the bit kerf and rotate the bit to remove the powdered cuttings while augmenting the rock fracture via shear forces. The vibrations 'fluidize' the powdered cuttings inside the flutes, reducing friction with the auger surface. This action reduces the consumed power and the heating of the drilled medium, helping to preserve the pristine content of the acquired samples. The drill consists of an actuator that simultaneously impacts and rotates the bit by applying force and torque via a single piezoelectric stack actuator without the need for a gearbox or lever mechanism, which can reduce development/fabrication cost and complexity. In this paper, the drill mechanism is described and the test results are reported and discussed.

  7. Highly efficient peptide separations in proteomics. Part 2: bi- and multidimensional liquid-based separation techniques.

    PubMed

    Sandra, Koen; Moshir, Mahan; D'hondt, Filip; Tuytten, Robin; Verleysen, Katleen; Kas, Koen; François, Isabelle; Sandra, Pat

    2009-04-15

    Multidimensional liquid-based separation techniques are described for maximizing the resolution of the enormous number of peptides generated upon tryptic digestion of proteomes, and hence, reduce the spatial and temporal complexity of the sample to a level that allows successful mass spectrometric analysis. This review complements the previous contribution on unidimensional high performance liquid chromatography (HPLC). Both chromatography and electrophoresis will be discussed albeit with reversed-phase HPLC (RPLC) as the final separation dimension prior to MS analysis.

  8. Closing the data gap: Creating an open data environment

    NASA Astrophysics Data System (ADS)

    Hester, J. R.

    2014-02-01

    Poor data management brought on by increasing volumes of complex data undermines both the integrity of the scientific process and the usefulness of datasets. Researchers should endeavour both to make their data citeable and to cite data whenever possible. The reusability of datasets is improved by community adoption of comprehensive metadata standards and public availability of reversibly reduced data. Where standards are not yet defined, as much information as possible about the experiment and samples should be preserved in datafiles written in a standard format.

  9. Participation in Decision Making as a Property of Complex Adaptive Systems: Developing and Testing a Measure

    PubMed Central

    Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R.; McDaniel, Reuben R.

    2013-01-01

Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of the reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated the validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organizations, has multiple dimensions and is more complex than what is traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them. PMID:24349771

  10. Resting state fMRI entropy probes complexity of brain activity in adults with ADHD.

    PubMed

    Sokunbi, Moses O; Fung, Wilson; Sawlani, Vijay; Choppin, Sabine; Linden, David E J; Thome, Johannes

    2013-12-30

In patients with attention deficit hyperactivity disorder (ADHD), quantitative neuroimaging techniques have revealed abnormalities in various brain regions, including the frontal cortex, striatum, cerebellum, and occipital cortex. Nonlinear signal processing techniques such as sample entropy have been used to probe the regularity of brain magnetoencephalography signals in patients with ADHD. In the present study, we extend this technique to analyse the complex output patterns of the four-dimensional resting state functional magnetic resonance imaging signals in adult patients with ADHD. After adjusting for the effect of age, we found whole brain entropy differences (P=0.002) between groups and a negative correlation (r=-0.45) between symptom scores and mean whole brain entropy values, indicating lower complexity in patients. In the regional analysis, patients showed reduced entropy in frontal and occipital regions bilaterally and a significant negative correlation between the symptom scores and the entropy maps at a family-wise error corrected cluster level of P<0.05 (P=0.001, initial threshold). Our findings support the hypothesis of abnormal frontal-striatal-cerebellar circuits in ADHD and the suggestion that sample entropy is a useful tool in revealing abnormalities in the brain dynamics of patients with psychiatric disorders. © 2013 Elsevier Ireland Ltd. All rights reserved.
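Sample entropy, the regularity measure used above, can be computed directly from its definition: count pairs of length-m template vectors whose Chebyshev distance stays below a tolerance r, repeat for length m+1, and take the negative log of the ratio. A minimal pure-Python sketch (the defaults m=2 and r=0.2·SD are common conventions, not parameter values taken from this study):

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates within Chebyshev distance r, and A counts the
    same for length m+1. Lower values mean a more regular signal."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / n) ** 0.5  # 0.2 * SD
    def matches(length):
        count = 0
        # Same template index range for both lengths, self-matches excluded.
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) < r:
                    count += 1
        return count
    b = matches(m)      # template matches of length m
    a = matches(m + 1)  # template matches of length m + 1
    return -math.log(a / b)

# A regular (sinusoidal) signal should score lower than random noise.
regular = [math.sin(0.5 * i) for i in range(200)]
```

The quadratic-time double loop is fine for short series; fMRI applications typically run this voxel-wise over comparably short time courses.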

  11. Ectomycorrhizal Cortinarius species participate in enzymatic oxidation of humus in northern forest ecosystems.

    PubMed

    Bödeker, Inga T M; Clemmensen, Karina E; de Boer, Wietse; Martin, Francis; Olson, Åke; Lindahl, Björn D

    2014-07-01

In northern forests, belowground sequestration of nitrogen (N) in complex organic pools restricts nutrient availability to plants. Oxidative extracellular enzymes produced by ectomycorrhizal fungi may aid plant N acquisition by providing access to N in macromolecular complexes. We test the hypotheses that ectomycorrhizal Cortinarius species produce Mn-dependent peroxidases, and that the activity of these enzymes declines at elevated concentrations of inorganic N. In a boreal pine forest and a sub-arctic birch forest, Cortinarius DNA was assessed by 454-sequencing of ITS amplicons and related to Mn-peroxidase activity in humus samples with and without previous N amendment. Transcription of Cortinarius Mn-peroxidase genes was investigated in field samples. Phylogenetic analyses of Cortinarius peroxidase amplicons and genome sequences were performed. We found a significant co-localization of high peroxidase activity and DNA from Cortinarius species. Peroxidase activity was reduced by high ammonium concentrations. Amplification of mRNA sequences indicated transcription of Cortinarius Mn-peroxidase genes under field conditions. The Cortinarius glaucopus genome encodes 11 peroxidases, a number comparable to many white-rot wood decomposers. These results support the hypothesis that some ectomycorrhizal fungi, Cortinarius species in particular, may play an important role in the decomposition of complex organic matter, linked to their mobilization of organically bound N. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  12. Participation in decision making as a property of complex adaptive systems: developing and testing a measure.

    PubMed

    Anderson, Ruth A; Plowman, Donde; Corazzini, Kirsten; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R; McDaniel, Reuben R

    2013-01-01

Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of the reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated the validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organizations, has multiple dimensions and is more complex than what is traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them.

  13. Quantitative assessment of the effectiveness of phase 1 orthodontic treatment using the American Board of Orthodontics Discrepancy Index.

    PubMed

    Vasilakou, Nefeli; Araujo, Eustaquio A; Kim, Ki Beom; Oliver, Donald R

    2016-12-01

This retrospective study included a sample of 300 randomly selected patients from the archived records of Saint Louis University's graduate orthodontic clinic, St. Louis, Mo, from 1990 to 2012. The objective of this study was to quantify the changes obtained in phase 1 of orthodontic treatment and determine how much improvement, if any, had occurred before the initiation of the second phase. For the purpose of this study, prephase 1 and prephase 2 records of 300 subjects were gathered. All were measured using the American Board of Orthodontics Discrepancy Index (DI), and a score was given for each phase. The difference between the 2 scores indicated the quantitative change in the complexity of the treatment. Paired t tests were used to compare the scores. Additionally, the sample was categorized into 3 groups according to the Angle classifications, and the same statistics were used to identify significant changes between the 2 scores. Analysis of variance was applied to compare the 3 groups and determine which had the most change. Percentages of change were calculated for the significant scores. The total DI score overall and the scores of all 3 groups were significantly reduced from before to after phase 1. Overall, 42% improvement was observed. The Class I group showed 49.3% improvement, the Class II group 34.5%, and the Class III group 58.5%. Most components of the DI improved significantly, but a few showed negative changes. Significant reductions of DI scores were observed in the total sample and in all Angle classification groups. This indicates that early treatment reduces the complexity of the malocclusions. Only 2 components of the DI showed statistically significant negative changes. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  14. Miniaturizing and automation of free acidity measurements for uranium (VI)-HNO3 solutions: Development of a new sequential injection analysis for a sustainable radio-analytical chemistry.

    PubMed

    Néri-Quiroz, José; Canto, Fabrice; Guillerme, Laurent; Couston, Laurent; Magnaldo, Alastair; Dugas, Vincent

    2016-10-01

A miniaturized and automated approach for the determination of free acidity in solutions containing uranium (VI) is presented. The measurement technique is based on the concept of sequential injection analysis with on-line spectroscopic detection. The proposed methodology relies on the complexation and alkalimetric titration of nitric acid using a pH 5.6 sodium oxalate solution. The titration process is followed by UV/VIS detection at 650 nm, thanks to the addition of Congo red as a universal pH indicator. The mixing sequence as well as method validity was investigated by numerical simulation. This new analytical design allows fast (2.3 min), reliable and accurate free acidity determination of low-volume samples (10 µL) containing a uranium/[H(+)] mole ratio of 1:3 with a relative standard deviation of <7.0% (n=11). The linearity range of the free nitric acid measurement is excellent up to 2.77 mol L(-1) with a correlation coefficient (R(2)) of 0.995. The method is specific: the presence of actinide ions up to 0.54 mol L(-1) does not interfere with the determination of free nitric acid. In addition to automation, the developed sequential injection analysis method greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis fortyfold, and the analysis time eightfold. These analytical parameters are important, especially in nuclear-related applications, to improve laboratory safety, reduce personnel exposure to radioactive samples, and drastically reduce environmental impacts and analytical radioactive waste. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Molecular subtypes of osteosarcoma identified by reducing tumor heterogeneity through an interspecies comparative approach

    PubMed Central

    Scott, Milcah C.; Sarver, Aaron L.; Gavin, Katherine J.; Thayanithy, Venugopal; Getzy, David M.; Newman, Robert A.; Cutter, Gary R.; Lindblad-Toh, Kerstin; Kisseberth, William C.; Hunter, Lawrence E.; Subramanian, Subbaya; Breen, Matthew; Modiano, Jaime F.

    2011-01-01

    The heterogeneous and chaotic nature of osteosarcoma has confounded accurate molecular classification, prognosis, and prediction for this tumor. The occurrence of spontaneous osteosarcoma is largely confined to humans and dogs. While the clinical features are remarkably similar in both species, the organization of dogs into defined breeds provides a more homogeneous genetic background that may increase the likelihood to uncover molecular subtypes for this complex disease. We thus hypothesized that molecular profiles derived from canine osteosarcoma would aid in molecular subclassification of this disease when applied to humans. To test the hypothesis, we performed genome wide gene expression profiling in a cohort of dogs with osteosarcoma, primarily from high-risk breeds. To further reduce inter-sample heterogeneity, we assessed tumor-intrinsic properties through use of an extensive panel of osteosarcoma-derived cell lines. We observed strong differential gene expression that segregated samples into two groups with differential survival probabilities. Groupings were characterized by the inversely correlated expression of genes associated with G2/M transition and DNA damage checkpoint and microenvironment-interaction categories. This signature was preserved in data from whole tumor samples of three independent dog osteosarcoma cohorts, with stratification into the two expected groups. Significantly, this restricted signature partially overlapped a previously defined, predictive signature for soft tissue sarcomas, and it unmasked orthologous molecular subtypes and their corresponding natural histories in five independent data sets from human patients with osteosarcoma. Our results indicate that the narrower genetic diversity of dogs can be utilized to group complex human osteosarcoma into biologically and clinically relevant molecular subtypes. This in turn may enhance prognosis and prediction, and identify relevant therapeutic targets. PMID:21621658

  16. Offline pentafluorophenyl (PFP)-RP prefractionation as an alternative to high-pH RP for comprehensive LC-MS/MS proteomics and phosphoproteomics.

    PubMed

    Grassetti, Andrew V; Hards, Rufus; Gerber, Scott A

    2017-07-01

    Technological advances in liquid chromatography and tandem mass spectrometry (LC-MS/MS) have enabled comprehensive analyses of proteins and their post-translational modifications from cell culture and tissue samples. However, sample complexity necessitates offline prefractionation via a chromatographic method that is orthogonal to online reversed-phase high-performance liquid chromatography (RP-HPLC). This additional fractionation step improves target identification rates by reducing the complexity of the sample as it is introduced to the instrument. A commonly employed offline prefractionation method is high pH reversed-phase (Hi-pH RP) chromatography. Though highly orthogonal to online RP-HPLC, Hi-pH RP relies on buffers that interfere with electrospray ionization. Thus, samples that are prefractionated using Hi-pH RP are typically desalted prior to LC-MS/MS. In the present work, we evaluate an alternative offline prefractionation method, pentafluorophenyl (PFP)-based reversed-phase chromatography. Importantly, PFP prefractionation results in samples that are dried prior to analysis by LC-MS/MS. This reduction in sample handling relative to Hi-pH RP results in time savings and could facilitate higher target identification rates. Here, we have compared the performances of PFP and Hi-pH RP in offline prefractionation of peptides and phosphopeptides that have been isolated from human cervical carcinoma (HeLa) cells. Given the prevalence of isobaric mass tags for peptide quantification, we evaluated PFP chromatography of peptides labeled with tandem mass tags. Our results suggest that PFP is a viable alternative to Hi-pH RP for both peptide and phosphopeptide offline prefractionation.

  17. NIPTmer: rapid k-mer-based software package for detection of fetal aneuploidies.

    PubMed

    Sauk, Martin; Žilina, Olga; Kurg, Ants; Ustav, Eva-Liina; Peters, Maire; Paluoja, Priit; Roost, Anne Mari; Teder, Hindrek; Palta, Priit; Brison, Nathalie; Vermeesch, Joris R; Krjutškov, Kaarel; Salumets, Andres; Kaplinski, Lauris

    2018-04-04

Non-invasive prenatal testing (NIPT) is a recent and rapidly evolving method for detecting genetic lesions, such as aneuploidies, of a fetus. However, there is a need for faster and cheaper laboratory and analysis methods to make NIPT more widely accessible. We have developed a novel software package for detection of fetal aneuploidies from next-generation low-coverage whole genome sequencing data. Our tool, NIPTmer, is based on counting pre-defined per-chromosome sets of unique k-mers from raw sequencing data and applying a linear regression model on the counts. Additionally, the filtering process used for k-mer list creation allows one to take into account the genetic variance in a specific sample, thus reducing a source of uncertainty. The processing time of one sample is less than 10 CPU-minutes on a high-end workstation. NIPTmer was validated on a cohort of 583 NIPT samples and it correctly predicted 37 non-mosaic fetal aneuploidies. NIPTmer has the potential to significantly reduce the time and complexity of NIPT post-sequencing analysis compared to mapping-based methods. For non-commercial users the software package is freely available at http://bioinfo.ut.ee/NIPTMer/ .
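The core counting step described above (scanning raw reads against pre-defined per-chromosome k-mer sets, with no mapping) can be sketched as follows. The read strings and the k-mer set here are hypothetical toy inputs; the real package adds k-mer list filtering and fits a linear regression model on the normalized per-chromosome counts:

```python
from collections import Counter

def count_kmers(reads, kmer_set, k):
    """Count occurrences of k-mers from a pre-defined set across reads.
    Per-chromosome sets would each be counted this way."""
    counts = Counter()
    for read in reads:
        # Slide a window of width k over the read.
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if kmer in kmer_set:
                counts[kmer] += 1
    return counts

# Hypothetical toy inputs: two short reads and a two-element k-mer set.
reads = ["ACGTACGTAC", "TTACGTTT"]
chr_set = {"ACGT", "CGTA"}
counts = count_kmers(reads, chr_set, k=4)
```

Because membership tests against a hash set are O(1), the cost is linear in the total read length, which is what makes this approach faster than alignment-based pipelines.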

  18. One-dimensional ion-beam figuring for grazing-incidence reflective optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Lin; Idir, Mourad; Bouet, Nathalie

    2016-01-01

One-dimensional ion-beam figuring (1D-IBF) can improve grazing-incidence reflective optics, such as Kirkpatrick–Baez mirrors. 1D-IBF requires only one motion degree of freedom, which reduces equipment complexity, resulting in compact and low-cost IBF instrumentation. Furthermore, 1D-IBF is easy to integrate into a single vacuum system with other fabrication processes, such as a thin-film deposition. The NSLS-II Optical Metrology and Fabrication Group has recently integrated the 1D-IBF function into an existing thin-film deposition system by adding an RF ion source to the system. Using a rectangular grid, a 1D removal function needed to perform 1D-IBF has been produced. In this paper, demonstration experiments of the 1D-IBF process are presented on one spherical and two plane samples. The final residual errors on both plane samples are less than 1 nm r.m.s. In conclusion, the surface error on the spherical sample has been successfully reduced by a factor of 12. The results show that the 1D-IBF method is an effective method to process high-precision 1D synchrotron optics.

  19. Microbiological corrosion of ASTM SA105 carbon steel pipe for industrial fire water usage

    NASA Astrophysics Data System (ADS)

    Chidambaram, S.; Ashok, K.; Karthik, V.; Venkatakrishnan, P. G.

    2018-02-01

Over the last few decades, a large number of metallic systems have been developed to resist both general uniform corrosion and localized corrosion. Among the corrosion mechanisms, microbiologically induced corrosion (MIC) is multidisciplinary and complex in nature. Many chemical processing industries use fresh water in fire service lines to suppress major and minor fires. One such fire water service line pipe was attacked by micro-organisms, leading to leakage, which is industrially important from a safety point of view. Numerous similar leaks have also been reported in the fire water services of nearby food processing, paper and pulp, and steel plants, and of an electricity board. In the present investigation, a failure analysis of one such industrial carbon steel fire water line pipe was carried out to determine the cause of failure. The water sample was subjected to various chemical and bacterial analyses: turbidity, pH, calcium hardness, free chlorine, oxidation-reduction potential, fungi, yeasts, sulphate-reducing bacteria (SRB) and total bacteria (TB) were measured. The corrosion rate was measured on steel samples, and corrosion coupons were installed in the fire water line to validate non-flow-assisted localized corrosion. The sulphate-reducing bacteria present in the fire water caused a localized microbiological corrosion attack on the line pipe.

  20. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    PubMed

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated; the correlation matrix is then calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ for the .NET 2 environment on Windows XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
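The first, prealignment step (finding the temporal offset that maximizes the overlap and dot product between two chromatographic profiles) can be sketched with a brute-force search over candidate shifts; ChromAlign itself computes the same correlation via fast Fourier transforms for speed, which this sketch omits:

```python
def best_offset(reference, sample, max_shift):
    """Scan candidate temporal offsets and return the one that maximizes
    the dot product (overlap) between reference and sample profiles.
    Brute-force stand-in for the FFT-based correlation in the paper."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        score = 0.0
        for i, v in enumerate(reference):
            j = i + shift
            if 0 <= j < len(sample):   # only overlapping points contribute
                score += v * sample[j]
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A profile peak delayed by two scans is recovered as offset +2.
ref = [0, 0, 1, 5, 1, 0, 0, 0]
delayed = [0, 0, 0, 0, 1, 5, 1, 0]
```

The returned offset then restricts which mass-scan pairs need correlation matrix elements in the second step, which is where the main computational saving comes from.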

  1. A spectroscopic method for observing the domain movement of the Rieske iron–sulfur protein

    PubMed Central

    Brugna, Myriam; Rodgers, Simon; Schricker, Anna; Montoya, Guillermo; Kazmeier, Michael; Nitschke, Wolfgang; Sinning, Irmgard

    2000-01-01

The g-tensor orientation of the chemically reduced Rieske cluster in cytochrome bc1 complex from Rhodovulum sulfidophilum with respect to the membrane was determined in the presence and absence of inhibitors and in the presence of oxidized and reduced quinone in the quinol-oxidizing-site (Qo-site) by EPR on two-dimensionally ordered samples. Almost identical orientations were observed when oxidized or reduced quinone, stigmatellin, or 5-(n-undecyl)-6-hydroxy-4,7-dioxobenzothiazole was present. Occupancy of the Qo-site by myxothiazole induced appearance of a minority population with a substantially differing conformation and presence of E-β-methoxyacrylate-stilbene significantly reduced the contribution of the major conformation observed in the other cases. Furthermore, when the oxidized iron–sulfur cluster was reduced at cryogenic temperatures by the products of radiolysis, the orientation of its magnetic axes was found to differ significantly from that of the chemically reduced center. The "irradiation-induced" conformation converts to that of the chemically reduced center after thawing of the sample. These results confirm the effects of Qo-site inhibitors on the equilibrium conformation of the Rieske iron–sulfur protein and provide evidence for a reversible redox-influenced interconversion between conformational states. Moreover, the data obtained with the iron–sulfur protein demonstrate that the conformation of "EPR-inaccessible" reduction states of redox centers can be studied by inducing changes of redox state at cryogenic temperatures. This technique appears applicable to a wide range of comparable electron transfer systems performing redox-induced conformational changes. PMID:10681446

  2. Pillar cuvettes: capillary-filled, microliter quartz cuvettes with microscale path lengths for optical spectroscopy.

    PubMed

    Holzner, Gregor; Kriel, Frederik Hermanus; Priest, Craig

    2015-05-05

    The goal of most analytical techniques is to reduce the lower limit of detection; however, it is sometimes necessary to do the opposite. High sample concentrations or samples with high molar absorptivity (e.g., dyes and metal complexes) often require multiple dilution steps or laborious sample preparation prior to spectroscopic analysis. Here, we demonstrate dilution-free, one-step UV-vis spectroscopic analysis of high concentrations of platinum(IV) hexachloride in a micropillar array, that is, "pillar cuvette". The cuvette is spontaneously filled by wicking of the liquid sample into the micropillar array. The pillar height (thus, the film thickness) defines the optical path length, which was reduced to between 10 and 20 μm in this study (3 orders of magnitude smaller than in a typical cuvette). Only one small droplet (∼2 μL) of sample is required, and the dispensed volume need not be precise or even known to the analyst for accurate spectroscopy measurements. For opaque pillars, we show that absorbance is linearly related to platinum concentration (the Beer-Lambert Law). For fully transparent or semitransparent pillars, the measured absorbance was successfully corrected for the fractional surface coverage of the pillars and the transmittance of the pillars and reference. Thus, both opaque and transparent pillars can be applied to absorbance spectroscopy of high absorptivity, microliter samples. It is also shown here that the pillar array has a useful secondary function as an integrated (in-cuvette) filter for particulates. For pillar cuvette measurements of platinum solutions spiked with 6 μm diameter polystyrene spheres, filtered and unfiltered samples gave identical spectra.
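The path-length trade-off above is a direct consequence of the Beer-Lambert law, A = ε·c·l: shrinking l from 1 cm to 10 μm lowers the absorbance of a concentrated sample by three orders of magnitude, back into the measurable range. A quick illustration, using a hypothetical molar absorptivity and concentration chosen for round numbers (not the measured values for platinum(IV) hexachloride):

```python
def absorbance(molar_absorptivity, concentration, path_cm):
    """Beer-Lambert law: A = epsilon * c * l (path length in cm)."""
    return molar_absorptivity * concentration * path_cm

# Hypothetical values for illustration only:
eps = 2.0e4   # L mol^-1 cm^-1, a strongly absorbing species
conc = 0.05   # mol L^-1, a concentrated sample
a_standard = absorbance(eps, conc, 1.0)     # conventional 1 cm cuvette
a_pillar = absorbance(eps, conc, 1.0e-3)    # 10 um pillar height, in cm
```

With these numbers the 1 cm cuvette gives an absorbance of 1000, hopelessly off-scale, while the 10 μm pillar cuvette gives an absorbance of 1, squarely in the usable range, which is why no dilution step is needed.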

  3. Bounds on the sample complexity for private learning and private data release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Kasiviswanathan, Shiva; Beimel, Amos; Nissim, Kobbi

    2009-01-01

Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.
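For context on the sample complexities being compared, the classical realizable-case PAC bound for a finite hypothesis class H states that m ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice to find, with probability at least 1 − δ, a hypothesis with error below ε; the abstract's point is that the crisp VC-dimension characterization of such non-private bounds is lost for proper private learners. A sketch of the finite-class bound (illustrative background only, not the authors' private-learning bounds):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classical realizable-case PAC bound for a finite hypothesis class:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) samples suffice to return,
    with probability 1 - delta, a consistent hypothesis with error < epsilon."""
    return math.ceil(
        (math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon
    )

# e.g. |H| = 1024, epsilon = 0.1, delta = 0.05
m = pac_sample_bound(1024, 0.1, 0.05)
```

Note the merely logarithmic dependence on |H|; the lower bounds in the paper show that for private data release an analogous logarithmic dependence on the instance-space size cannot be removed.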

  4. Effect of dissolved organic matter (DOM) of contrasting origins on Cu and Pb speciation and toxicity to Paracentrotus lividus larvae.

    PubMed

    Sánchez-Marín, Paula; Santos-Echeandía, Juan; Nieto-Cid, Mar; Alvarez-Salgado, Xosé Antón; Beiras, Ricardo

    2010-01-31

Water samples of contrasting origin, including natural seawater, two sediment elutriates and sewage-influenced seawater, were collected to examine the effect of the dissolved organic matter (DOM) present on metal bioavailability. The carbon content (DOC) and the optical properties (absorbance and fluorescence) of the coloured DOM fraction (CDOM) of these materials were determined. Cu and Pb complexation properties were measured by anodic stripping voltammetry (ASV), and the effect of DOM on Cu and Pb bioavailability was studied by means of the Paracentrotus lividus embryo-larval bioassay. Sediment elutriates and sewage-influenced water (1) were enriched 1.4-1.7 times in DOC; (2) absorbed and reemitted more light; and (3) presented higher Cu complexation capacities (L(Cu)) than the natural seawater used for their preparation. L(Cu) varied from 0.08 microM in natural seawater to 0.3 and 0.5 microM in sediment elutriates and sewage-influenced water, respectively. Differences in DOC, CDOM and Cu complexation capacities were reflected in Cu toxicity. DOM-enriched samples presented a Cu EC(50) of 0.64 microM, significantly higher than the Cu EC(50) of natural and artificial seawater, which was 0.38 microM. The protective effect of DOM against Cu toxicity largely disappeared when the samples were irradiated with high-intensity UV light. Cu toxicity could be successfully predicted considering ASV-labile Cu concentrations in the samples. Pb complexation by DOM was only detected in the DOM-enriched samples and had little effect on Pb EC(50). This effect was opposite for the two elutriates: one elutriate reduced Pb toxicity in comparison with the control artificial seawater, while the other increased it. UV irradiation of the samples caused a marked increase in Pb toxicity, which correlated with the remaining DOC concentration. DOM parameters were related to Cu speciation and toxicity: good correlations were found between DOC and Cu EC(50), while L(Cu) correlated better with the fluorescence of marine humic substances. The present results stress the importance of characterizing not only the amount but also the quality of seawater DOM to better predict ecological effects from total metal concentration data. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  5. Spectroelectrochemistry as a Strategy for Improving Selectivity of Sensors for Security and Defense Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heineman, William R.; Seliskar, Carl J.; Morris, Laura K.

    2012-12-19

    Spectroelectrochemistry provides improved selectivity for sensors by electrochemically modulating the optical signal associated with the analyte. The sensor consists of an optically transparent electrode (OTE) coated with a film that preconcentrates the target analyte. The OTE functions as an optical waveguide for attenuated total reflectance (ATR) spectroscopy, which detects the analyte by absorption. Alternatively, the OTE can serve as the waveguide for excitation light in fluorescence detection, which is generally more sensitive than absorption. The analyte partitions into the film, undergoes an electrochemical redox reaction at the OTE surface, and absorbs or emits light in its oxidized or reduced state. The change in the optical response associated with electrochemical oxidation or reduction at the OTE is used to quantify the analyte. Absorption sensors for metal ion complexes such as [Fe(CN)6]4- and [Ru(bpy)3]2+ and fluorescence sensors for [Ru(bpy)3]2+ and the polycyclic aromatic hydrocarbon 1-hydroxypyrene have been developed. The sensor concept has been extended to binding assays for a protein using avidin–biotin and 17β-estradiol–anti-estradiol antibodies. The sensor has been demonstrated to measure metal complexes in complex samples such as nuclear waste and natural water. This sensor has the qualities needed for security and defense applications that require a high level of selectivity and good detection limits for target analytes in complex samples. Rapidly monitoring and characterizing the intent of a nuclear program by measuring the Ru/Tc fission product ratio is one such application.

  6. Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing

    DOEpatents

    Voelkl, Edgar

    2006-06-27

    Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
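    The two procedures in this abstract reduce to simple operations on complex-valued arrays. A minimal NumPy sketch of both steps; the array sizes, noise levels, and the 0.3 rad object phase are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (64, 64)

    # Ideal reference wave (a unit-amplitude tilted plane wave).
    true_ref = np.exp(1j * 0.1 * np.arange(shape[1]))[None, :] * np.ones(shape)

    # Step 1: average several noisy reference complex image waves into a
    # reduced-noise reference image wave.
    n_refs = 16
    noisy_refs = [
        true_ref + 0.05 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
        for _ in range(n_refs)
    ]
    reduced_noise_ref = np.mean(noisy_refs, axis=0)

    # Step 2: divide the object complex image wave by the reduced-noise
    # reference wave; the shared illumination term cancels, leaving the
    # object's own phase (the reduced-smearing object wave).
    object_wave = true_ref * np.exp(1j * 0.3)  # object adds a uniform 0.3 rad shift
    corrected = object_wave / reduced_noise_ref

    print(round(float(np.angle(corrected).mean()), 2))
    ```

    With 16 averaged references, the residual noise in the recovered phase drops roughly fourfold relative to a single reference hologram.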

  7. A strategy to load balancing for non-connectivity MapReduce job

    NASA Astrophysics Data System (ADS)

    Zhou, Huaping; Liu, Guangzong; Gui, Haixia

    2017-09-01

    MapReduce has been widely used on large-scale and complex datasets as a distributed programming model. The original hash partitioning function in MapReduce often causes data skew when the data distribution is uneven. To address this imbalance in data partitioning, we propose a strategy that changes the partition index of the remaining data when skew is detected. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner can redirect partitions that would cause skew to reducers with less load in the next partitioning process, eventually balancing the load across nodes. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets. The results show that our strategy solves the data skew problem with better stability and efficiency than the Hash and Sampling methods for non-connectivity MapReduce tasks.
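    The dynamic repartitioning idea can be sketched in a few lines: count the load each reducer would receive under default hash partitioning and reroute keys from overloaded partitions to the least-loaded reducer. This is a hypothetical single-process simplification of the JobTracker-driven strategy; the threshold and data are invented:

    ```python
    from collections import Counter

    def skew_aware_partition(keys, n_reducers, skew_factor=1.5):
        """Assign each key to a reducer, rerouting keys whose default hash
        partition is already overloaded (a simplified stand-in for the
        dynamic strategy described in the abstract)."""
        load = Counter()
        assignment = []
        for i, key in enumerate(keys):
            r = hash(key) % n_reducers       # default hash partitioning
            avg = i / n_reducers + 1         # running average load per reducer
            if load[r] > skew_factor * avg:  # partition looks skewed:
                r = min(range(n_reducers), key=lambda j: load[j])  # least loaded
            load[r] += 1
            assignment.append(r)
        return assignment, load

    # Heavily skewed synthetic data: 90% of keys share one value.
    keys = ["hot"] * 900 + [f"k{i}" for i in range(100)]
    _, load = skew_aware_partition(keys, n_reducers=4)
    print(sorted(load.values()))
    ```

    With plain hash partitioning, one reducer would receive over 900 of the 1000 keys; the rerouting keeps the per-reducer loads within a small factor of each other.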

  8. Quick detection and quantification of iron-cyanide complexes using Fourier transform infrared spectroscopy.

    PubMed

    Sut-Lohmann, Magdalena; Raab, Thomas

    2017-08-01

    The continuous release of persistent iron-cyanide (Fe-CN) complexes from various industrial sources poses a high hazard to the environment and indicates the necessity of analyzing a considerable number of samples. Conventional flow injection analysis (FIA) is a time- and cost-consuming method for cyanide (CN) determination; thus, a rapid and economical alternative is needed to quantify Fe-CN complexes. 52 soil samples were collected at a former Manufactured Gas Plant (MGP) site in order to determine the feasibility of diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS). Soil analysis revealed CN concentrations ranging from 8 to 14,809 mg kg-1, where 97% was in the solid form (Fe4[Fe(CN)6]3), which is characterized by a single symmetrical CN band in the range 2092-2084 cm-1. The partial least squares (PLS) calibration-validation model revealed an IR response for CNtot exceeding 2306 mg kg-1 (limit of detection, LOD). Leave-one-out cross-validation (LOO-CV) was performed on soil samples containing low CNtot (<900 mg kg-1); this improved the sensitivity of the model by reducing the LOD to 154 mg kg-1. Finally, LOO-CV conducted on the samples with CNtot > 900 mg kg-1 resulted in an LOD of 3751 mg kg-1. FTIR spectroscopy was found to provide information on the different CN species in the soil samples and to be suitable for quantifying Fe-CN species in matrices with CNtot > 154 mg kg-1. Thus, FTIR spectroscopy, in combination with the statistical approach applied here, appears to be a feasible and quick method for screening contaminated sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
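    Leave-one-out cross-validation of a calibration model, as used above, can be sketched as follows. This is a hedged illustration: the spectral data are synthetic, and ordinary least squares stands in for the paper's PLS model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for DRIFTS data: absorbances at a few wavenumbers
    # linearly related to total cyanide content (all values illustrative).
    n_samples, n_bands = 30, 5
    true_coef = np.array([40.0, 120.0, 10.0, 0.0, 5.0])
    X = rng.uniform(0, 1, size=(n_samples, n_bands))
    y = X @ true_coef + rng.normal(0, 5, n_samples)   # "CNtot" in mg/kg

    # Leave-one-out CV: refit the model without sample i, predict sample i.
    preds = np.empty(n_samples)
    for i in range(n_samples):
        mask = np.arange(n_samples) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        preds[i] = X[i] @ coef

    # Cross-validated root-mean-square error, in the same units as y.
    rmsecv = float(np.sqrt(np.mean((preds - y) ** 2)))
    print(round(rmsecv, 1))
    ```

    In practice the RMSECV (and the derived LOD) is what improves when the calibration set is restricted to the low-concentration samples, as in the abstract.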

  9. Is complexity of work associated with risk of dementia? The Canadian Study of Health And Aging.

    PubMed

    Kröger, Edeltraut; Andel, Ross; Lindsay, Joan; Benounissa, Zohra; Verreault, René; Laurin, Danielle

    2008-04-01

    The authors evaluated the association of complexity of work with data, people, and things with the incidence of dementia, Alzheimer's disease, and vascular dementia in the Canadian Study of Health and Aging, while adjusting for work-related physical activity. The Canadian Study of Health and Aging is a 10-year population study, from 1991 to 2001, of a representative sample of persons aged 65 years or older. Lifetime job history allowed application of complexity scores and classification of work-related physical activity. Analyses included 3,557 subjects, of whom 400 were incident dementia cases, including 299 with Alzheimer's disease and 93 with vascular dementia. In fully adjusted Cox regression models, high complexity of work with people or things reduced the risk of dementia (hazard ratios were 0.66 (95% confidence interval: 0.44, 0.98) and 0.72 (95% confidence interval: 0.52, 0.99), respectively) but not of Alzheimer's disease. For vascular dementia, hazard ratios were 0.36 (95% confidence interval: 0.15, 0.90) for high complexity of work with people and 0.50 (95% confidence interval: 0.25, 1.00) for high complexity of work with things. Subgroup analyses according to median duration (23 years) of principal occupation showed that associations with complexity varied according to duration of employment. High complexity of work appears to be associated with a reduced risk of dementia, but effects may vary according to dementia subtype.

  10. Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples.

    PubMed

    Thorson, Megan K; Ung, Phuc; Leaver, Franklin M; Corbin, Teresa S; Tuck, Kellie L; Graham, Bim; Barrios, Amy M

    2015-10-08

    A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. BCOR regulates myeloid cell proliferation and differentiation

    PubMed Central

    Cao, Qi; Gearhart, Micah D.; Gery, Sigal; Shojaee, Seyedmehdi; Yang, Henry; Sun, Haibo; Lin, De-chen; Bai, Jing-wen; Mead, Monica; Zhao, Zhiqiang; Chen, Qi; Chien, Wen-wen; Alkan, Serhan; Alpermann, Tamara; Haferlach, Torsten; Müschen, Markus; Bardwell, Vivian J.; Koeffler, H. Phillip

    2016-01-01

    BCOR is a component of a variant Polycomb group repressive complex 1 (PRC1). Recently, we and others reported recurrent somatic BCOR loss-of-function mutations in myelodysplastic syndrome and acute myelogenous leukaemia (AML). However, the role of BCOR in normal hematopoiesis is largely unknown. Here, we explored the function of BCOR in myeloid cells using myeloid murine models with Bcor conditional loss-of-function or overexpression alleles. Bcor mutant bone marrow cells showed significantly higher proliferation and differentiation rates with upregulated expression of Hox genes. Mutation of Bcor reduced protein levels of RING1B, an H2A ubiquitin ligase subunit of PRC1 family complexes and reduced H2AK119ub upstream of upregulated HoxA genes. Global RNA expression profiling in murine cells and AML patient samples with BCOR loss-of-function mutation suggested that loss of BCOR expression is associated with enhanced cell proliferation and myeloid differentiation. Our results strongly suggest that BCOR plays an indispensable role in hematopoiesis by inhibiting myeloid cell proliferation and differentiation and offer a mechanistic explanation for how BCOR regulates gene expression such as Hox genes. PMID:26847029

  12. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely-resolved spatio-temporal scales, which further limits data and sample collection. To address those challenges, we propose a novel framework based on general polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the framework to analyze the uncertainties of system behavior in a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
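    A one-dimensional version of the gPC surrogate idea can be sketched with NumPy's probabilists' Hermite polynomials, which are orthogonal under the standard normal weight. The model function, polynomial degree, and node count below are illustrative assumptions, not the paper's actual setup:

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(2)

    # Stand-in for an expensive pore-scale simulation: a smooth response of
    # one uncertain standard-normal parameter xi.
    def model(xi):
        return np.exp(0.3 * xi) + 0.1 * xi**2

    # Fit a degree-4 gPC surrogate from just 9 deterministic model runs at
    # Gauss-Hermite nodes (vs. thousands of Monte Carlo runs).
    degree, n_train = 4, 9
    nodes = He.hermegauss(n_train)[0]          # quadrature nodes
    V = He.hermevander(nodes, degree)          # Vandermonde in Hermite basis
    coeffs, *_ = np.linalg.lstsq(V, model(nodes), rcond=None)

    # Compare the cheap surrogate against direct Monte Carlo sampling.
    xi = rng.standard_normal(20_000)
    mc_mean = float(model(xi).mean())
    surrogate_mean = float(He.hermeval(xi, coeffs).mean())
    print(round(mc_mean, 3), round(surrogate_mean, 3))
    ```

    The surrogate reproduces the Monte Carlo statistics while requiring only the 9 "simulation" evaluations used to fit it, which is the cost saving the abstract describes.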

  13. Design of single phase inverter using microcontroller assisted by data processing applications software

    NASA Astrophysics Data System (ADS)

    Ismail, K.; Muharam, A.; Amin; Widodo Budi, S.

    2015-12-01

    Inverters are widely used for industrial, office, and residential purposes. They support the development of alternative energy sources such as solar cells, wind turbines and fuel cells by converting dc voltage to ac voltage. Inverters have been built with a variety of hardware and software combinations, such as pure analog circuits and various types of microcontrollers as the controller. With a pure analog circuit, modification is difficult because it requires changing the entire hardware. In a microcontroller-based inverter design (with software), the calculations that generate the AC modulation are done in the microcontroller. This increases programming complexity and the amount of code downloaded to the microcontroller chip (the microcontroller's flash memory capacity is limited). This paper discusses the design of a single-phase inverter using unipolar modulation of a sine wave and a triangular wave, computed outside the microcontroller using a data processing software application (Microsoft Excel). Results show that programming complexity was reduced and that the sampling resolution strongly influences the THD; a sampling resolution of half a degree was needed to obtain the best THD (15.8%).
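    The offline unipolar sine-triangle comparison described above can be reproduced outside a spreadsheet as well. A sketch using the half-degree sampling resolution mentioned in the abstract; the modulation index and carrier ratio are assumed values:

    ```python
    import numpy as np

    # Unipolar sine-triangle comparison computed offline (as the paper does
    # in a spreadsheet) to produce a switching table for the microcontroller.
    step_deg = 0.5                   # sampling resolution from the abstract
    theta = np.deg2rad(np.arange(0, 360, step_deg))
    m_a = 0.8                        # amplitude modulation index (assumed)
    carrier_ratio = 21               # triangle cycles per sine cycle (assumed)

    reference = m_a * np.sin(theta)
    # Triangle carrier in [-1, 1] built from a phase-folded sawtooth.
    phase = (carrier_ratio * theta / (2 * np.pi)) % 1.0
    carrier = 4 * np.abs(phase - 0.5) - 1

    # Unipolar modulation: leg A compares +ref, leg B compares -ref.
    leg_a = (reference > carrier).astype(int)
    leg_b = (-reference > carrier).astype(int)
    output = leg_a - leg_b           # three-level bridge output: -1, 0, +1

    print(len(output), sorted(set(int(v) for v in output)))
    ```

    The resulting 720-entry table (one entry per half degree) is what would be stored in, or streamed to, the microcontroller instead of computing the modulation on-chip.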

  14. Comprehensive sample analysis using high performance liquid chromatography with multi-detection.

    PubMed

    Pravadali, Sercan; Bassanese, Danielle N; Conlan, Xavier A; Francis, Paul S; Smith, Zoe M; Terry, Jessica M; Shalliker, R Andrew

    2013-11-25

    Herein we assess the separation space offered by a liquid chromatography system with an optimised uni-dimensional separation for the determination of the key chemical entities in the highly complex matrix of a tobacco leaf extract. Multiple modes of detection, including UV-visible absorbance, chemiluminescence (acidic potassium permanganate, manganese(IV), and tris(2,2'-bipyridine)ruthenium(III)), mass spectrometry and DPPH radical scavenging were used in an attempt to systematically reduce the data complexity of the sample whilst obtaining a greater degree of molecule-specific information. A large amount of chemical data was obtained, but several limitations in the ability to assign detector responses to particular compounds, even with the aid of complementary detection systems, were observed. Thirty-three compounds were detected via MS in the tobacco extract, and 12 out of 32 compounds gave a peak height ratio (PHR) greater than 0.33 on one or more detectors. This paper serves as a case study of these limitations, illustrating why multidimensional chromatography is an important consideration when developing a comprehensive chemical detection system. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Rapid determination of trace copper in animal feed based on micro-plate colorimetric reaction and statistical partitioning correction.

    PubMed

    Niu, Yiming; Wang, Jiayi; Zhang, Chi; Chen, Yiqiang

    2017-04-15

    The objective of this study was to develop a micro-plate based colorimetric assay for rapid and high-throughput detection of copper in animal feed. Copper ion in animal feed was extracted by trichloroacetic acid solution and reduced to cuprous ion by hydroxylamine. The cuprous ion chelates with 2,2'-bicinchoninic acid to form a Cu-BCA complex, which was detected with high sensitivity by a micro-plate reader at 354 nm. The whole assay procedure can be completed within 20 min. To eliminate matrix interference, a statistical partitioning correction approach was proposed, which makes the detection of copper in complex samples possible. The limit of detection was 0.035 μg/mL and the detection range was 0.1-10 μg/mL of copper in buffer solution. Actual sample analysis indicated that this colorimetric assay produced results consistent with atomic absorption spectrometry analysis. These results demonstrate that the developed assay can be used for rapid determination of copper in animal feed. Copyright © 2016 Elsevier Ltd. All rights reserved.
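    Converting a measured absorbance into a concentration via a linear calibration curve, as such colorimetric assays do, can be sketched as follows; the standard concentrations and absorbance readings are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical calibration of a Cu-BCA colorimetric assay: absorbance at
    # 354 nm vs copper concentration over the stated 0.1-10 ug/mL range.
    conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])                  # ug/mL
    absorbance = np.array([0.012, 0.055, 0.108, 0.214, 0.532, 1.061])  # a.u.

    # Least-squares fit of the calibration line A = slope * c + intercept.
    slope, intercept = np.polyfit(conc, absorbance, 1)

    def copper_conc(a354):
        """Convert a measured absorbance into ug/mL via the calibration line."""
        return (a354 - intercept) / slope

    print(round(float(copper_conc(0.53)), 2))  # ~5 ug/mL for A near 0.532
    ```

    A matrix-interference correction, like the statistical partitioning the paper proposes, would adjust either the measured absorbance or the calibration parameters before this conversion.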

  16. A developmental approach to complex PTSD: childhood and adult cumulative trauma as predictors of symptom complexity.

    PubMed

    Cloitre, Marylene; Stolbach, Bradley C; Herman, Judith L; van der Kolk, Bessel; Pynoos, Robert; Wang, Jing; Petkova, Eva

    2009-10-01

    Exposure to multiple traumas, particularly in childhood, has been proposed to result in a complex of symptoms that includes posttraumatic stress disorder (PTSD) as well as a constrained, but variable group of symptoms that highlight self-regulatory disturbances. The relationship between accumulated exposure to different types of traumatic events and total number of different types of symptoms (symptom complexity) was assessed in an adult clinical sample (N = 582) and a child clinical sample (N = 152). Childhood cumulative trauma but not adulthood trauma predicted increasing symptom complexity in adults. Cumulative trauma predicted increasing symptom complexity in the child sample. Results suggest that Complex PTSD symptoms occur in both adult and child samples in a principled, rule-governed way and that childhood experiences significantly influenced adult symptoms. Copyright © 2009 International Society for Traumatic Stress Studies.

  17. Speckle-field digital holographic microscopy.

    PubMed

    Park, YongKeun; Choi, Wonshik; Yaqoob, Zahid; Dasari, Ramachandra; Badizadegan, Kamran; Feld, Michael S

    2009-07-20

    The use of coherent light in conventional holographic phase microscopy (HPM) poses three major drawbacks: poor spatial resolution, weak depth sectioning, and fixed pattern noise due to unwanted diffraction. Here, we report a technique which overcomes these drawbacks while maintaining the advantages of phase microscopy: high-contrast live cell imaging and 3D imaging. A speckle beam with a complex spatial pattern is used for illumination to reduce fixed pattern noise and to improve optical sectioning capability. By recording the electric field of the speckle, we demonstrate high-contrast 3D live cell imaging without the need for axial scanning of either the objective lens or the sample stage. This technique has great potential for studying biological samples with improved sensitivity, resolution and optical sectioning capability.

  18. Effects of structural complexity on within-canopy light environments and leaf traits in a northern mixed deciduous forest.

    PubMed

    Fotis, Alexander T; Curtis, Peter S

    2017-10-01

    Canopy structure influences forest productivity through its effects on the distribution of radiation and the light-induced changes in leaf physiological traits. Due to the difficulty of accessing and measuring forest canopies, few field-based studies have quantitatively linked these divergent scales of canopy functioning. The objective of our study was to investigate how canopy structure affects light profiles within a forest canopy and whether leaves of mature trees adjust morphologically and biochemically to the light environments characteristic of canopies with different structural complexity. We used a combination of light detection and ranging (LiDAR) data and hemispherical photographs to quantify canopy structure and light environments, respectively, and a telescoping pole to sample leaves. Leaf mass per area (LMA), nitrogen on an area basis (Narea) and chlorophyll on a mass basis (Chlmass) were measured in red maple (Acer rubrum), American beech (Fagus grandifolia), white pine (Pinus strobus), and northern red oak (Quercus rubra) at different heights in plots with similar leaf area index but contrasting canopy complexity (rugosity). We found that more complex canopies had greater porosity and reduced light variability in the midcanopy while total light interception was unchanged relative to less complex canopies. Leaf phenotypes of F. grandifolia, Q. rubra and P. strobus were more sun-acclimated in the midstory of structurally complex canopies, while leaf phenotypes of A. rubrum were more shade-acclimated (lower LMA) in the upper canopy of more complex stands, despite no differences in total light interception. Broadleaf species showed further differences in acclimation, with increased Narea and reduced Chlmass in leaves with higher LMA, while P. strobus showed no change in Narea and Chlmass with higher LMA.
Our results provide new insight into how light distribution and leaf acclimation in mature trees might be altered when natural and anthropogenic disturbances cause structural changes in the canopy. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Simpler Alternative to an Optimum FQPSK-B Viterbi Receiver

    NASA Technical Reports Server (NTRS)

    Lee, Dennis; Simon, Marvin; Yan, Tsun-Yee

    2003-01-01

    A reduced-complexity alternative to an optimum FQPSK-B Viterbi receiver has been invented. As described, the reduction in complexity is achieved at the cost of only a small reduction in power performance [performance expressed in terms of a bit-energy-to-noise-energy ratio (Eb/N0) for a given bit-error rate (BER)]. The term "FQPSK-B" denotes a baseband-filtered version of Feher quadrature-phase-shift keying, which is a patented, bandwidth-efficient phase-modulation scheme named after its inventor. Heretofore, commercial FQPSK-B receivers have performed symbol-by-symbol detection, in each case using a detection filter (either the proprietary FQPSK-B filter for better BER performance, or a simple integrate-and-dump filter with degraded performance) and a sample-and-hold circuit.
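    Symbol-by-symbol detection with an integrate-and-dump filter, the simpler receiver structure discussed above, can be sketched for plain QPSK. FQPSK-B's proprietary pulse shaping is replaced here by rectangular pulses, and all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_sym, sps = 500, 16                 # symbols and samples per symbol
    bits_i = rng.integers(0, 2, n_sym)
    bits_q = rng.integers(0, 2, n_sym)
    symbols = (2 * bits_i - 1) + 1j * (2 * bits_q - 1)

    # Transmit rectangular pulses and add white Gaussian noise.
    tx = np.repeat(symbols, sps)
    noise = 0.5 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))
    rx = tx + noise

    # Integrate-and-dump: sum over each symbol interval ("integrate"), make a
    # sign decision, then discard the accumulator ("dump") for the next symbol.
    integrated = rx.reshape(n_sym, sps).sum(axis=1)
    det_i = (integrated.real > 0).astype(int)
    det_q = (integrated.imag > 0).astype(int)

    ber = float(np.mean(np.concatenate([det_i != bits_i, det_q != bits_q])))
    print(ber)
    ```

    At this (generous) SNR the detector makes essentially no errors; the Eb/N0 penalty of integrate-and-dump relative to a matched or Viterbi receiver only becomes visible when the pulse shaping is non-rectangular, which is the motivation for the reduced-complexity receiver above.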

  20. LIGHT-SABRE enables efficient in-magnet catalytic hyperpolarization.

    PubMed

    Theis, Thomas; Truong, Milton; Coffey, Aaron M; Chekmenev, Eduard Y; Warren, Warren S

    2014-11-01

    Nuclear spin hyperpolarization overcomes the sensitivity limitations of traditional NMR and MRI, but the most general method demonstrated to date (dynamic nuclear polarization) has significant limitations in scalability, cost, and complex apparatus design. As an alternative, signal amplification by reversible exchange (SABRE) of parahydrogen on transition metal catalysts can hyperpolarize a variety of substrates, but to date this scheme has required transfer of the sample to low magnetic field or very strong RF irradiation. Here we demonstrate "Low-Irradiation Generation of High Tesla-SABRE" (LIGHT-SABRE) which works with simple pulse sequences and low power deposition; it should be usable at any magnetic field and for hyperpolarization of many different nuclei. This approach could drastically reduce the cost and complexity of producing hyperpolarized molecules. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. [Sexual behavior and associated factors among Korean junior high school students].

    PubMed

    Lee, Gyuyoung; Song, Seunghun

    2015-01-01

    The study purpose was to identify the sexual behavior and associated factors of Korean junior high school students. Raw data from the 2013 Korean Youth Risk Behavior Web-Based Survey were used. Among the data from 72,435 students, 36,530 junior high school students were analyzed. Complex sample frequency analysis and complex sample chi-square tests were used to describe sexual behavior, and complex sample logistic regression was used to examine the factors related to sexual behavior. Among the students, 3.8% responded that they had experienced sexual intercourse, and the prevalence of sexual intercourse was higher among male students (5.0%) than female students (2.5%). Among male students, those with the following characteristics were more likely to have had sexual intercourse: perceived high economic status, living with a relative, experience with a part-time job, a foreign father, experience with smoking and drinking during the past month, experience with drug use, and depression during the past 12 months. Among female students, the corresponding characteristics were higher grades, perceived high economic status, living with a relative or in childcare facilities, experience with a part-time job, a foreign father or mother, experience with smoking and drinking during the past month, and experience with drug use. The results suggest that it is important to develop a comprehensive program not only focused on sexual behavior but also addressing mental health and other health behaviors to effectively reduce the likelihood of sexual intercourse among Korean junior high school students.

  2. SAS procedures for designing and analyzing sample surveys

    USGS Publications Warehouse

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

    Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
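    Stratified sampling with expansion to a population total, one of the strategies mentioned above, can be sketched in a few lines (in Python rather than SAS; the two-stratum population and sample sizes are invented for illustration):

    ```python
    import random

    random.seed(42)

    # Toy population: plots in two habitat strata with different densities.
    population = {
        "wetland": [random.randint(5, 15) for _ in range(200)],  # counts per plot
        "upland":  [random.randint(0, 3)  for _ in range(800)],
    }
    sample_sizes = {"wetland": 20, "upland": 40}

    total_estimate = 0.0
    for stratum, plots in population.items():
        n = sample_sizes[stratum]
        sample = random.sample(plots, n)        # SRS within the stratum
        # Expand the stratum sample mean to the stratum total: with
        # equal-probability selection within strata, the design weight
        # is N_h / n_h.
        total_estimate += len(plots) * (sum(sample) / n)

    true_total = sum(sum(p) for p in population.values())
    print(true_total, round(total_estimate))
    ```

    Because the dense wetland stratum is sampled separately rather than diluted into a single simple random sample, the estimator's variance is much lower for the same total sample size, which is the usual argument for stratification.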

  3. Complexity perplexity: a systematic review to describe the measurement of medication regimen complexity.

    PubMed

    Paquin, Allison M; Zimmerman, Kristin M; Kostas, Tia R; Pelletier, Lindsey; Hwang, Angela; Simone, Mark; Skarf, Lara M; Rudolph, James L

    2013-11-01

    Complex medication regimens are error prone and challenging for patients, which may impact medication adherence and safety. No universal method to assess the complexity of medication regimens (CMRx) exists. The authors aim to review literature for CMRx measurements to establish consistencies and, secondarily, describe CMRx impact on healthcare outcomes. A search of EMBASE and PubMed for studies analyzing at least two medications and complexity components, among those self-managing medications, was conducted. Out of 1204 abstracts, 38 studies were included in the final sample. The majority (74%) of studies used one of five validated CMRx scales; their components and scoring were compared. Universal CMRx assessment is needed to identify and reduce complex regimens, and, thus, improve safety. The authors highlight commonalities among five scales to help build consensus. Common components (i.e., regimen factors) included dosing frequency, units per dose, and non-oral routes. Elements (e.g., twice daily) of these components (e.g., dosing frequency) and scoring varied. Patient-specific factors (e.g., dexterity, cognition) were not addressed, which is a shortcoming of current scales and a challenge for future scales. As CMRx has important outcomes, notably adherence and healthcare utilization, a standardized tool has potential for far-reaching clinical, research, and patient-safety impact.

  4. Spectrophotometric total reducing sugars assay based on cupric reduction.

    PubMed

    Başkan, Kevser Sözgen; Tütem, Esma; Akyüz, Esin; Özen, Seda; Apak, Reşat

    2016-01-15

    As the concentration of reducing sugars (RS) is controlled by European legislation for certain specific foods and beverages, a simple and sensitive spectrophotometric method for the determination of RS in various food products is proposed. The method is based on the reduction of Cu(II) to Cu(I) by reducing sugars in alkaline medium in the presence of 2,9-dimethyl-1,10-phenanthroline (neocuproine: Nc), followed by the formation of a colored Cu(I)-Nc charge-transfer complex. All simple sugars tested gave linear regression equations with almost equal slope values. The proposed method was successfully applied to fresh apple juice, commercial fruit juices, milk, honey and onion juice. The interference effect of phenolic compounds in plant samples was eliminated by a solid phase extraction (SPE) clean-up process. The method was proven to have higher sensitivity and precision than the widely used dinitrosalicylic acid (DNS) colorimetric method. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. The use of lactic acid bacteria to reduce mercury bioaccessibility.

    PubMed

    Jadán-Piedra, C; Alcántara, C; Monedero, V; Zúñiga, M; Vélez, D; Devesa, V

    2017-08-01

    Mercury in food is present in either inorganic [Hg(II)] or methylmercury (CH3Hg) form. Intestinal absorption of mercury is influenced by interactions with other food components. The use of dietary components to reduce mercury bioavailability has been previously proposed. The aim of this work is to explore the use of lactic acid bacteria to reduce the amount of mercury solubilized after gastrointestinal digestion and available for absorption (bioaccessibility). Ten strains were tested by adding them to aqueous solutions containing Hg(II) or CH3Hg, or to food samples, and submitting the mixtures to gastrointestinal digestion. All of the strains assayed reduced the soluble fraction from standards of mercury species under gastrointestinal digestion conditions (72-98%). However, their effectiveness is lower in food, and reductions in bioaccessibility are only observed with mushrooms (⩽68%). It is hypothesized that bioaccessible mercury in seafood forms part of complexes that do not interact with lactic acid bacteria. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Xpert® MTB/RIF: Usefulness for the diagnosis of tuberculosis and resistance to rifampicin.

    PubMed

    Vergara Gómez, Andrea; González-Martín, Julià; García-Basteiro, Alberto L

    2017-11-09

    The advent of the Xpert® MTB/RIF technique was a revolution in the diagnosis of tuberculosis, especially in areas with high incidence and low resources. It allows the detection of the Mycobacterium tuberculosis complex and, simultaneously, the most common rifampicin resistance mutations in less than 2 h. For respiratory samples the sensitivity is very high, but it decreases for extrapulmonary samples and in children. Although it is faster and simpler than conventional methods, it has some limitations, and new and better techniques are needed to reduce the number of cases and deaths caused by tuberculosis. This review aims to assess the scientific evidence on the diagnostic performance of Xpert® MTB/RIF in different types of samples and populations, as well as to analyse its strengths and limitations for TB diagnosis. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  7. Microfluidic platform combining droplets and magnetic tweezers: application to HER2 expression in cancer diagnosis.

    PubMed

    Ferraro, Davide; Champ, Jérôme; Teste, Bruno; Serra, Marco; Malaquin, Laurent; Viovy, Jean-Louis; de Cremoux, Patricia; Descroix, Stephanie

    2016-05-09

    The development of precision medicine, together with the multiplication of targeted therapies and associated molecular biomarkers, calls for major progress in genetic analysis methods, allowing increased multiplexing and the implementation of more complex decision trees, without cost increase or loss of robustness. We present a platform combining droplet microfluidics and magnetic tweezers, performing RNA purification, reverse transcription and amplification in a fully automated and programmable way, in droplets of 250 nL sampled directly from a microtiter plate. This platform decreases sample consumption about 100-fold compared to current robotized platforms, and it reduces human manipulation and contamination risk. The platform's performance was first evaluated on cell lines, showing robust operation on RNA quantities corresponding to less than one cell, and then clinically validated with a cohort of 21 breast cancer samples, for the determination of their HER2 expression status, in a blind comparison with an established routine clinical analysis.

  8. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritsenko, Marina A.; Xu, Zhe; Liu, Tao

    Comprehensive, quantitative information on abundances of proteins and their post-translational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples, and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.

  9. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    NASA Astrophysics Data System (ADS)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transduction events, including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI) and protein-nucleic acid interactions (PNI), play critical roles in cell proliferation and differentiation and are directly related to cancer biology. Traditional methods, such as mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The proposed "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach can reduce the processing time and sample volume because the system is composed of microfluidic channels, fluorescence microscopy and computerized data analysis. In this paper, we present an automated mMAPS platform that integrates a microfluidic device, an automated stage and electrical relays for high-throughput clinical screening. Based on these results, we estimate that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical means of analyzing tissue samples in a clinical setting.

  10. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS.

    PubMed

    Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D

    2016-01-01

    Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.

  11. CO hydrogenation on PdCo/NaY catalysts: Effect of ion hydration on metal phases and selectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuangen Yin; Zongchao Zhang; Sachtler, W.M.H.

    1993-02-01

    Exposure of calcined PdCo/NaY catalyst precursors to water vapor, prior to reduction, strongly affects the CO hydrogenation activity and selectivity of the reduced bimetal catalysts. With samples that had been exposed to H2O before reduction, the formation of hydrocarbons prevails; nonhydrated reference samples of the same overall composition are mainly selective for oxygenates. After 6 h of reaction time, PdCo alloy particles of 5.8 nm are detected by XRD in H2O-exposed catalysts, but in the reference samples the metal particles are below the XRD detection limit. The observed effects are attributed to the formation of mobile aquo-complexes of metal ions; after reduction these are converted to alloy particles that are larger and richer in Co than those in the reference samples. Results obtained with NaOH-neutralized and Co-free Pd/NaY catalysts are also discussed. 23 refs., 13 figs., 1 tab.

  12. Sonication-Based Improvement of the Physicochemical Properties of Guar Gum as a Potential Substrate for Modified Drug Delivery Systems

    PubMed Central

    Ansari, Siddique Akber; Cencetti, Claudia; Carafa, Maria; Mazzuca, Claudia; Capitani, Donatella; Coviello, Tommasina

    2013-01-01

    Guar Gum is a natural polysaccharide that, due to its physicochemical properties, is extensively investigated for biomedical applications as a matrix for modified drug delivery, but it is also used in the food industry as well as in cosmetics. A commercial sample of Guar Gum was sonicated for different periods of time, and the reduction in the average molecular weight was monitored by means of viscometric measurements. At the same time, the rheological behaviour was also followed, in terms of viscoelasticity range, flow curves, and mechanical spectra. Sonicated samples were used for the preparation of gels in the presence of borate ions. The effect of borax on the new samples was investigated by recording mechanical spectra, flow curves, and visible absorption spectra of complexes with Congo Red. The anisotropic elongation, observed in previous studies with tablets of Guar Gum and borax, was remarkably reduced when the sonicated samples were used for the preparation of the gels. PMID:23984426

  13. [Automatic adjustment control system for DC glow discharge plasma source].

    PubMed

    Wan, Zhen-zhen; Wang, Yong-qing; Li, Xiao-jia; Wang, Hai-zhou; Shi, Ning

    2011-03-01

    There are three important parameters in the DC glow discharge process: the discharge current, the discharge voltage and the argon pressure in the discharge source. These parameters influence each other during the glow discharge process. This paper presents an automatic control system for a DC glow discharge plasma source. The system collects and controls the discharge voltage automatically by adjusting the source pressure while the discharge current is held constant. The design concept, circuit principle and control program of this automatic control system are described. Accuracy is improved by reducing complex manual operations and the associated operator errors. The system enhances the control accuracy of the glow discharge voltage and reduces the time needed to reach voltage stability. Stability test results are provided as well: the accuracy with automatic control is better than 1% FS, improved from 4% FS under manual control, and the time to reach discharge voltage stability has been shortened to within 30 s from more than 90 s under manual control. Standard samples of middle-low alloy steel and tin bronze were tested with the automatic control system, and the precision of the concentration analysis was significantly improved, with RSDs of all test results better than 3.5%. In the middle-low alloy steel standard sample, the RSD range of the concentration results for Ti, Co and Mn was reduced from 3.0%-4.3% under manual control to 1.7%-2.4% under automatic control, and that for S and Mo from 5.2%-5.9% to 3.3%-3.5%. In the tin bronze standard sample, the RSD range for Sn, Zn and Al was reduced from 2.6%-4.4% to 1.0%-2.4%, and that for Si, Ni and Fe from 6.6%-13.9% to 2.6%-3.5%. The test data are also shown in this paper.
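
    The precision metric quoted throughout this abstract, the relative standard deviation (RSD), is simply the sample standard deviation expressed as a percentage of the mean. A minimal illustration (a generic sketch, not code from the paper):

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation: the sample standard deviation
    expressed as a percentage of the mean (RSD = s / mean * 100)."""
    return stdev(values) / mean(values) * 100

# A narrower RSD range, e.g. 1.7%-2.4% vs 3.0%-4.3%, indicates
# better repeatability of the concentration measurements.
print(rsd_percent([9.0, 10.0, 11.0]))  # → 10.0
```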

  14. Using complex auditory-visual samples to produce emergent relations in children with autism.

    PubMed

    Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P

    2010-03-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.

  15. Complex sample survey estimation in static state-space

    Treesearch

    Raymond L. Czaplewski

    2010-01-01

    Increased use of remotely sensed data is a key strategy adopted by the Forest Inventory and Analysis Program. However, multiple sensor technologies require complex sampling units and sampling designs. The Recursive Restriction Estimator (RRE) accommodates this complexity. It is a design-consistent Empirical Best Linear Unbiased Prediction for the state-vector, which...

  16. Sampling from complex networks using distributed learning automata

    NASA Astrophysics Data System (ADS)

    Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza

    2014-02-01

    A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is considered as a graph of real-world phenomena such as biological networks, ecological networks, technological networks, information networks and, in particular, social networks. Recently, many studies have been reported on the characterization of social networks, owing to a growing trend in the analysis of online social networks as dynamic, complex, large-scale graphs. Because real networks are large and access to them is limited, the network model is characterized using an appropriate part of the network obtained by sampling approaches. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. Experimental results are compared with several sampling methods in terms of different measures and demonstrate the superiority of the proposed algorithm over the others.

  17. The efficacy of two oral hygiene regimens in reducing oral malodour: a randomised clinical trial.

    PubMed

    Feres, Magda; Figueiredo, Luciene Cristina; Faveri, Marcelo; Guerra, Marcelo C; Mateo, Luis R; Stewart, Bernal; Williams, Malcolm; Panagakos, Foti

    2015-12-01

    This study compared the efficacy of two oral hygiene regimens in reducing oral malodour and the proportions of bacterial species involved in the production of volatile sulphur compounds. Seventy subjects who participated in a halitosis-induction phase and achieved an organoleptic score of ≥ 3.0 [time point 0 (T0)] were randomised into two groups: brushing with regular fluoride toothpaste alone (control group) or brushing with regular fluoride toothpaste followed by rinsing with a 0.075% cetylpyridinium chloride (CPC) mouthwash (CPC group). Subjects followed their assigned oral hygiene regimen for 21 days. They then underwent an organoleptic examination and measurement of volatile sulphur compounds (VSCs) using a portable gas chromatograph, 12 hours after their last oral hygiene procedure (T1) and 4 hours after an on-site oral hygiene procedure (T2). Microbiological samples (supragingival biofilm, tongue coating and saliva) were analysed using checkerboard DNA-DNA hybridisation. Both therapies statistically significantly improved the organoleptic scores (P < 0.05), but the VSC levels and/or concentrations were reduced only in the CPC group (P < 0.05). In subjects rinsing with CPC, oral malodour scores were reduced by 49% at the 4-hour assessment (T2) compared with those not rinsing (P < 0.05). Red-complex pathogens were reduced more effectively in the CPC group than in the control group. Brushing followed by rinsing with a 0.075% CPC mouthwash provided statistically significantly greater reductions in oral malodour, measured organoleptically and instrumentally, and in the proportions of red-complex species when compared with brushing alone. © 2015 FDI World Dental Federation.

  18. Generation Scotland: Donor DNA Databank; A control DNA resource.

    PubMed

    Kerr, Shona M; Liewald, David C M; Campbell, Archie; Taylor, Kerrie; Wild, Sarah H; Newby, David; Turner, Marc; Porteous, David J

    2010-11-23

    Many medical disorders of public health importance are complex diseases caused by multiple genetic, environmental and lifestyle factors. Recent technological advances have made it possible to analyse the genetic variants that predispose to complex diseases. Reliable detection of these variants requires genome-wide association studies in sufficiently large numbers of cases and controls. This approach is often hampered by difficulties in collecting appropriate control samples. The Generation Scotland: Donor DNA Databank (GS:3D) aims to help solve this problem by providing a resource of control DNA and plasma samples accessible for research. GS:3D participants were recruited from volunteer blood donors attending Scottish National Blood Transfusion Service (SNBTS) clinics across Scotland. All participants gave full written consent for GS:3D to take spare blood from their normal donation. Participants also supplied demographic data by completing a short questionnaire. Over five thousand complete sets of samples, data and consent forms were collected. DNA and plasma were extracted and stored. The data and samples were unlinked from their original SNBTS identifier number. The plasma, DNA and demographic data are available for research. New data obtained from analysis of the resource will be fed back to GS:3D and will be made available to other researchers as appropriate. Recruitment of blood donors is an efficient and cost-effective way of collecting thousands of control samples. Because the collection is large, subsets of controls can be selected, based on age range, gender, and ethnic or geographic origin. The GS:3D resource should reduce time and expense for investigators who would otherwise have had to recruit their own controls.

  19. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    PubMed

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data and significantly reduce the workload to simulate experimentally relevant virtual samples.

  20. Mass spectrometer having a derivatized sample presentation apparatus

    DOEpatents

    Nelson, Randall W.

    2000-07-25

    A mass spectrometer having a derivatized sample presentation apparatus is provided. The sample presentation apparatus has a complex bound to the surface of the sample presentation apparatus. This complex includes a molecule which may chemically modify a biomolecule.

  1. Characterization of microbial associations with methanotrophic archaea and sulfate-reducing bacteria through statistical comparison of nested Magneto-FISH enrichments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trembath-Reichert, Elizabeth; Case, David H.; Orphan, Victoria J.

    Methane seep systems along continental margins host diverse and dynamic microbial assemblages, sustained in large part through the microbially mediated process of sulfate-coupled Anaerobic Oxidation of Methane (AOM). This methanotrophic metabolism has been linked to consortia of anaerobic methane-oxidizing archaea (ANME) and sulfate-reducing bacteria (SRB). These two groups are the focus of numerous studies; however, less is known about the wide diversity of other seep associated microorganisms. We selected a hierarchical set of FISH probes targeting a range of Deltaproteobacteria diversity. Using the Magneto-FISH enrichment technique, we then magnetically captured CARD-FISH hybridized cells and their physically associated microorganisms from a methane seep sediment incubation. DNA from nested Magneto-FISH experiments was analyzed using Illumina tag 16S rRNA gene sequencing (iTag). Enrichment success and potential bias with iTag was evaluated in the context of full-length 16S rRNA gene clone libraries, CARD-FISH, functional gene clone libraries, and iTag mock communities. We determined commonly used Earth Microbiome Project (EMP) iTAG primers introduced bias in some common methane seep microbial taxa that reduced the ability to directly compare OTU relative abundances within a sample, but comparison of relative abundances between samples (in nearly all cases) and whole community-based analyses were robust. The iTag dataset was subjected to statistical co-occurrence measures of the most abundant OTUs to determine which taxa in this dataset were most correlated across all samples. In addition, many non-canonical microbial partnerships were statistically significant in our co-occurrence network analysis, most of which were not recovered with conventional clone library sequencing, demonstrating the utility of combining Magneto-FISH and iTag sequencing methods for hypothesis generation of associations within complex microbial communities. 
Network analysis pointed to many co-occurrences containing putatively heterotrophic, candidate phyla such as OD1, Atribacteria, MBG-B, and Hyd24-12 and the potential for complex sulfur cycling involving Epsilon-, Delta-, and Gammaproteobacteria in methane seep ecosystems.

  2. Characterization of microbial associations with methanotrophic archaea and sulfate-reducing bacteria through statistical comparison of nested Magneto-FISH enrichments

    DOE PAGES

    Trembath-Reichert, Elizabeth; Case, David H.; Orphan, Victoria J.

    2016-04-18

    Methane seep systems along continental margins host diverse and dynamic microbial assemblages, sustained in large part through the microbially mediated process of sulfate-coupled Anaerobic Oxidation of Methane (AOM). This methanotrophic metabolism has been linked to consortia of anaerobic methane-oxidizing archaea (ANME) and sulfate-reducing bacteria (SRB). These two groups are the focus of numerous studies; however, less is known about the wide diversity of other seep associated microorganisms. We selected a hierarchical set of FISH probes targeting a range of Deltaproteobacteria diversity. Using the Magneto-FISH enrichment technique, we then magnetically captured CARD-FISH hybridized cells and their physically associated microorganisms from a methane seep sediment incubation. DNA from nested Magneto-FISH experiments was analyzed using Illumina tag 16S rRNA gene sequencing (iTag). Enrichment success and potential bias with iTag was evaluated in the context of full-length 16S rRNA gene clone libraries, CARD-FISH, functional gene clone libraries, and iTag mock communities. We determined commonly used Earth Microbiome Project (EMP) iTAG primers introduced bias in some common methane seep microbial taxa that reduced the ability to directly compare OTU relative abundances within a sample, but comparison of relative abundances between samples (in nearly all cases) and whole community-based analyses were robust. The iTag dataset was subjected to statistical co-occurrence measures of the most abundant OTUs to determine which taxa in this dataset were most correlated across all samples. In addition, many non-canonical microbial partnerships were statistically significant in our co-occurrence network analysis, most of which were not recovered with conventional clone library sequencing, demonstrating the utility of combining Magneto-FISH and iTag sequencing methods for hypothesis generation of associations within complex microbial communities. 
Network analysis pointed to many co-occurrences containing putatively heterotrophic, candidate phyla such as OD1, Atribacteria, MBG-B, and Hyd24-12 and the potential for complex sulfur cycling involving Epsilon-, Delta-, and Gammaproteobacteria in methane seep ecosystems.

  3. Characterization of microbial associations with methanotrophic archaea and sulfate-reducing bacteria through statistical comparison of nested Magneto-FISH enrichments.

    PubMed

    Trembath-Reichert, Elizabeth; Case, David H; Orphan, Victoria J

    2016-01-01

    Methane seep systems along continental margins host diverse and dynamic microbial assemblages, sustained in large part through the microbially mediated process of sulfate-coupled Anaerobic Oxidation of Methane (AOM). This methanotrophic metabolism has been linked to consortia of anaerobic methane-oxidizing archaea (ANME) and sulfate-reducing bacteria (SRB). These two groups are the focus of numerous studies; however, less is known about the wide diversity of other seep associated microorganisms. We selected a hierarchical set of FISH probes targeting a range of Deltaproteobacteria diversity. Using the Magneto-FISH enrichment technique, we then magnetically captured CARD-FISH hybridized cells and their physically associated microorganisms from a methane seep sediment incubation. DNA from nested Magneto-FISH experiments was analyzed using Illumina tag 16S rRNA gene sequencing (iTag). Enrichment success and potential bias with iTag was evaluated in the context of full-length 16S rRNA gene clone libraries, CARD-FISH, functional gene clone libraries, and iTag mock communities. We determined commonly used Earth Microbiome Project (EMP) iTAG primers introduced bias in some common methane seep microbial taxa that reduced the ability to directly compare OTU relative abundances within a sample, but comparison of relative abundances between samples (in nearly all cases) and whole community-based analyses were robust. The iTag dataset was subjected to statistical co-occurrence measures of the most abundant OTUs to determine which taxa in this dataset were most correlated across all samples. Many non-canonical microbial partnerships were statistically significant in our co-occurrence network analysis, most of which were not recovered with conventional clone library sequencing, demonstrating the utility of combining Magneto-FISH and iTag sequencing methods for hypothesis generation of associations within complex microbial communities. 
Network analysis pointed to many co-occurrences containing putatively heterotrophic, candidate phyla such as OD1, Atribacteria, MBG-B, and Hyd24-12 and the potential for complex sulfur cycling involving Epsilon-, Delta-, and Gammaproteobacteria in methane seep ecosystems.
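
    The co-occurrence screening described above can be sketched in miniature. This is a toy illustration with hypothetical taxon names; real analyses such as this study's use correlation statistics with significance testing on OTU abundances, not raw shared-sample counts.

```python
from itertools import combinations

def cooccurrence_edges(table, min_shared=2):
    """Toy co-occurrence network from a presence/absence table.

    table: dict mapping taxon name -> set of sample IDs in which it
    was detected. An edge is drawn between two taxa when they are
    detected together in at least `min_shared` samples.
    Returns a list of (taxon_a, taxon_b, shared_count) edges.
    """
    edges = []
    for a, b in combinations(sorted(table), 2):
        shared = len(table[a] & table[b])
        if shared >= min_shared:
            edges.append((a, b, shared))
    return edges

# Hypothetical example: ANME and SRB co-occur in all three samples,
# so they form the only edge at the default threshold.
table = {"ANME": {1, 2, 3}, "SRB": {1, 2, 3}, "OD1": {3}}
print(cooccurrence_edges(table))  # → [('ANME', 'SRB', 3)]
```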

  4. Miniaturized and direct spectrophotometric multi-sample analysis of trace metals in natural waters.

    PubMed

    Albendín, Gemma; López-López, José A; Pinto, Juan J

    2016-03-15

    Trends in the analysis of trace metals in natural waters are mainly based on the development of sample treatment methods to isolate and pre-concentrate the metal from the matrix into a simpler extract for subsequent instrumental analysis. However, direct analysis is often possible using more accessible techniques such as spectrophotometry. In this case a proper ligand is required to form a complex that absorbs radiation in the ultraviolet-visible (UV-Vis) range. In this sense, the hydrazone derivative di-2-pyridylketone benzoylhydrazone (dPKBH) forms complexes with copper (Cu) and vanadium (V) that absorb light at 370 and 395 nm, respectively. Although spectrophotometric methods are considered time- and reagent-consuming, this work focused on their miniaturization by reducing the sample volume as well as the time and cost of analysis. In both methods, a micro-amount of sample is placed into a microplate reader with a capacity for 96 samples, which can be analyzed in 5 to 10 min. The proposed methods were optimized using a Box-Behnken design of experiments. For Cu determination, the concentrations of the phosphate buffer solution at pH 8.33, the masking agents (ammonium fluoride and sodium citrate) and dPKBH were optimized. For V analysis, the sample pH (4.5) was set with an acetic acid/sodium acetate buffer, and the masking agents were ammonium fluoride and 1,2-cyclohexanediaminetetraacetic acid. Under optimal conditions, both methods were applied to the analysis of the certified reference materials TMDA-62 (lake water), LGC-6016 (estuarine water) and LGC-6019 (river water). In all cases, the results proved the accuracy of the methods. Copyright © 2015 Elsevier Inc. All rights reserved.
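
    The Box-Behnken design used for the optimization has a simple combinatorial structure: every pair of factors is run at the four corner combinations of coded levels (-1, +1) while the remaining factors sit at the centre (0), plus centre runs. A generic sketch of how the coded design points are generated (independent of the chemistry, not code from the paper):

```python
from itertools import combinations, product

def box_behnken(k, center_points=1):
    """Coded (-1/0/+1) Box-Behnken design for k >= 3 factors.

    For each pair of factors, run the 2x2 factorial at levels -1/+1
    with all other factors held at 0; then append centre runs.
    Gives 4*C(k, 2) edge runs plus `center_points` centre runs.
    """
    runs = []
    for pair in combinations(range(k), 2):
        for levels in product((-1, 1), repeat=2):
            run = [0] * k
            run[pair[0]], run[pair[1]] = levels
            runs.append(run)
    runs += [[0] * k for _ in range(center_points)]
    return runs

# Three factors (e.g. buffer, masking agent and ligand concentrations)
# give 12 edge runs + 1 centre run = 13 experiments.
print(len(box_behnken(3)))  # → 13
```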

  5. Characterization and Beneficiation Studies of a Low Grade Bauxite Ore

    NASA Astrophysics Data System (ADS)

    Rao, D. S.; Das, B.

    2014-10-01

    A low grade bauxite sample from central India was thoroughly characterized with the help of a stereomicroscope, a reflected light microscope and an electron microscope using QEMSCAN. A few hand-picked samples were collected from different places in the mine and subjected to geochemical characterization studies. The geochemical studies indicated that most of the samples contain high silica and low alumina, except a few of high grade. Mineralogically, the samples consist of bauxite minerals (gibbsite and boehmite), ferruginous phases (goethite and hematite), clay and silicate (quartz), and titanium-bearing minerals such as rutile and ilmenite. The majority of the gibbsite, boehmite and gibbsitic oolites contain clay, quartz, and iron and titanium mineral phases as inclusions. On average, the sample contains 39.1% Al2O3, 12.3% SiO2 and 20.08% Fe2O3. Beneficiation techniques such as size classification, sorting, scrubbing, hydrocyclone separation and magnetic separation were employed to reduce the silica content to a level suitable for the Bayer process. The studies indicated that a product of 50% by weight, with 41% Al2O3 and less than 5% SiO2, could be achieved. The finer sized fraction still contained high silica after physical beneficiation, owing to complex mineralogical associations.

  6. Harpoon-based sample Acquisition System

    NASA Astrophysics Data System (ADS)

    Bernal, Javier; Nuth, Joseph; Wegel, Donald

    2012-02-01

    Acquiring information about the composition of comets, asteroids, and other near Earth objects is very important because they may contain the primordial ooze of the solar system and the origins of life on Earth. Sending a spacecraft is the obvious answer, but once it gets there it needs to collect and analyze samples. Conceptually, a drill or a shovel would work, but both require something extra to anchor them to the comet, adding to the cost and complexity of the spacecraft. Since comets and asteroids are very low gravity objects, drilling becomes a problem: without a grappling mechanism, the drill would push the spacecraft off the surface. Harpoons have been proposed as grappling mechanisms in the past and are currently flying on missions such as ROSETTA. We propose to use a hollow, core-sampling harpoon to act as both the anchoring mechanism and the sample collecting device. By combining these two functions, mass is reduced, more samples can be collected and the spacecraft can carry more propellant. Although challenging, returning the collected samples to Earth allows them to be analyzed in laboratories with much greater detail than is possible on a spacecraft. Bringing the samples back to Earth also allows future generations to study them.

  7. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling, involved in re-sizing and rotation transformations, is an essential building block of a typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
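
    The kind of trace that re-sampling leaves can be shown with a one-dimensional toy, which is not the paper's parametric kernel model: after 2x linear upsampling, every inserted sample is exactly the mean of its two neighbours, so a simple prediction residual vanishes with a fixed period, and that periodicity betrays the interpolation.

```python
def upsample2x(x):
    """2x linear upsampling: insert the midpoint between every pair."""
    out = []
    for a, b in zip(x, x[1:]):
        out += [a, (a + b) / 2]
    out.append(x[-1])
    return out

def neighbour_residual(x):
    """Absolute residual of predicting each interior sample as the mean
    of its two neighbours. Periodic near-zero residuals reveal linear
    interpolation: a toy stand-in for the correlation traces detected
    with a parametric kernel in the paper."""
    return [abs(x[i] - 0.5 * (x[i - 1] + x[i + 1]))
            for i in range(1, len(x) - 1)]
```

    On an upsampled signal, the residual is exactly zero at every inserted position and generally non-zero at the original positions, so the zeros recur with period two.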

  8. Are Complexity Metrics Reliable in Assessing HRV Control in Obese Patients During Sleep?

    PubMed

    Cabiddu, Ramona; Trimer, Renata; Borghi-Silva, Audrey; Migliorini, Matteo; Mendes, Renata G; Oliveira, Antonio D; Costa, Fernando S M; Bianchi, Anna M

    2015-01-01

    Obesity is associated with cardiovascular mortality. Linear methods, including time-domain and frequency-domain analysis, are normally applied to the heart rate variability (HRV) signal to investigate autonomic cardiovascular control, whose imbalance might promote cardiovascular disease in these patients. However, given the non-linearities of cardiac activity, non-linear methods might provide better insight. HRV complexity was hereby analyzed during wakefulness and different sleep stages in healthy and obese subjects. Given the short duration of each sleep stage, complexity measures, normally extracted from long-period signals, needed to be calculated on short-term signals. Sample entropy, Lempel-Ziv complexity and detrended fluctuation analysis were evaluated, and results showed no significant differences between the values calculated over ten-minute signals and those over longer durations, confirming the reliability of such analysis on short-term signals. Complexity parameters were then extracted from ten-minute portions selected during wakefulness and different sleep stages of HRV signals obtained from eighteen obese patients and twenty controls. The obese group presented significantly reduced complexity during light and deep sleep, suggesting a deficiency in the integration of control mechanisms during these sleep stages. To our knowledge, this study reports for the first time on how HRV complexity changes in obesity during wakefulness and sleep. Further investigation is needed to quantify the impact of altered HRV on cardiovascular mortality in obesity.
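
    One of the three measures named, sample entropy, can be sketched in a few lines. This is a minimal illustration with an absolute tolerance r; the function name, parameter defaults and tolerance convention are assumptions, and the study's exact parameterization is not given here.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a sequence x, with r an absolute tolerance.

    Counts pairs of templates of length m (B) and m+1 (A) whose
    Chebyshev distance is below r, excluding self-matches, and
    returns -ln(A/B). Lower values indicate a more regular signal.
    """
    n = len(x)

    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) < r:
                    c += 1
        return c

    b = count(m)
    a = count(m + 1)
    return -math.log(a / b) if a and b else math.inf
```

    A smooth, regular signal (e.g. a slow sinusoid) yields a markedly lower sample entropy than an irregular one, which is the sense in which reduced complexity was reported for the obese group.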

  9. Are Complexity Metrics Reliable in Assessing HRV Control in Obese Patients During Sleep?

    PubMed Central

    Cabiddu, Ramona; Trimer, Renata; Borghi-Silva, Audrey; Migliorini, Matteo; Mendes, Renata G.; Oliveira Jr., Antonio D.; Costa, Fernando S. M.; Bianchi, Anna M.

    2015-01-01

    Obesity is associated with cardiovascular mortality. Linear methods, including time domain and frequency domain analysis, are normally applied to the heart rate variability (HRV) signal to investigate autonomic cardiovascular control, whose imbalance might promote cardiovascular disease in these patients. However, given the non-linearities of cardiac activity, non-linear methods might provide better insight. HRV complexity was hereby analyzed during wakefulness and different sleep stages in healthy and obese subjects. Given the short duration of each sleep stage, complexity measures, normally extracted from long-period signals, needed to be calculated on short-term signals. Sample entropy, Lempel-Ziv complexity and detrended fluctuation analysis were evaluated, and results showed no significant differences between the values calculated over ten-minute signals and those over longer durations, confirming the reliability of such analysis when performed on short-term signals. Complexity parameters were extracted from ten-minute signal portions selected during wakefulness and different sleep stages on HRV signals obtained from eighteen obese patients and twenty controls. The obese group presented significantly reduced complexity during light and deep sleep, suggesting a deficiency in the integration of control mechanisms during these sleep stages. To our knowledge, this study reports for the first time on how HRV complexity changes in obesity during wakefulness and sleep. Further investigation is needed to quantify the impact of altered HRV on cardiovascular mortality in obesity. PMID:25893856

  10. Antigen-Dissociation from Antibody-Modified Nanotransistor Sensor Arrays as a Direct Biomarker Detection Method in Unprocessed Biosamples.

    PubMed

    Krivitsky, Vadim; Zverzhinetsky, Marina; Patolsky, Fernando

    2016-10-12

    The detection of biomolecules is critical for a wide spectrum of applications in life sciences and medical diagnosis. Nonetheless, biosamples are highly complex solutions, which contain an enormous variety of biomolecules, cells, and chemical species. Consequently, the intrinsic chemical complexity of biosamples results in significant analytical background noise and poses an immense challenge to any analytical measurement, especially when applied without prior efficient separation and purification steps. Here, we demonstrate the application of an antigen-dissociation regime, from antibody-modified Si-nanowire sensors, as a simple and effective direct sensing mechanism for biomarkers of interest in complex biosamples, such as serum and untreated blood, which does not require time-consuming ex situ biosample manipulation steps, such as centrifugation, filtering, preconcentration, and desalting, thus overcoming the detrimental Debye screening limitation of nanowire-based biosensors. We found that two key parameters control the capability to perform quantitative biomarker analysis in biosamples: (i) the affinity strength (koff rate) of the antibody-antigen recognition pair, which dictates the time length of the high-affinity slow-dissociation subregime, and (ii) the "flow rate" applied during the solution-exchange dissociation step, which controls the time width of the low-affinity fast-dissociation subregime. Undoubtedly, this is the simplest and most convenient approach for the SiNW FET-based detection of antigens in complex untreated biosamples. The lack of time-consuming ex situ biosample manipulation processes enhances the portability of the sensing platform and reduces to a minimum the required volume of tested sample, as it allows the direct detection of untreated biosamples (5-10 μL blood or serum), while readily reducing the detection cycle duration to less than 5 min, factors of great importance in near-future point-of-care medical applications.
We believe this is the first ever reported demonstration on the real-time, direct label-free sensing of biomarkers from untreated blood samples, using SiNW-based FET devices, while not compromising the ultrasensitive sensing capabilities inherent to these devices.
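
The role of the koff rate can be illustrated with a toy two-component dissociation model (not the paper's model; all rate constants and amplitudes below are arbitrary): the weakly bound background decays quickly, after which the signal tracks only the high-affinity antibody-antigen pair.

```python
import numpy as np

# Illustrative two-component dissociation: slow high-affinity component
# (small k_off) plus fast low-affinity background.  Values are arbitrary.
t = np.linspace(0, 300, 1000)            # seconds
k_off_specific = 1e-3                    # slow: antibody-antigen pair
k_fast = 5e-1                            # fast: weakly bound background species
signal = 0.7 * np.exp(-k_off_specific * t) + 0.3 * np.exp(-k_fast * t)

# After the fast sub-regime has decayed, only the high-affinity pair remains,
# which is what makes the quantitative readout possible.
residual_fast = 0.3 * np.exp(-k_fast * 50)   # effectively zero by t = 50 s
print(residual_fast)
```

The separation of time scales between the two subregimes is exactly the property the abstract exploits: slow dissociation carries the biomarker information, fast dissociation carries the background.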

  11. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered unequal-probability-of-selection sample designs.
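
A much-simplified illustration of the underlying idea, undoing unequal selection probabilities so the data can be analyzed as a simple random sample, is to expand each sampled unit into a pseudo-population in proportion to its design weight. The authors' actual method draws synthetic populations from a posterior predictive distribution via a Bayesian bootstrap; the deterministic expansion below is only a crude stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: two strata with different means, sampled with unequal probabilities.
y = np.concatenate([rng.normal(0, 1, 800), rng.normal(5, 1, 200)])
p_select = np.concatenate([np.full(800, 0.05), np.full(200, 0.5)])  # oversample stratum 2
sampled = rng.random(1000) < p_select
y_s, w = y[sampled], 1.0 / p_select[sampled]        # design weights = 1 / p

# The unweighted sample mean is biased toward the oversampled stratum.
# Expanding each unit w_i times yields a pseudo-population that behaves like a
# simple random sample from the original population (a crude stand-in for the
# paper's Bayesian-bootstrap synthetic populations).
pseudo = np.repeat(y_s, np.round(w).astype(int))
print(y_s.mean(), pseudo.mean(), y.mean())
```

The pseudo-population mean lands near the true population mean, while the naive sample mean does not; the paper's method achieves the same inversion while also propagating sampling uncertainty.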

  12. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered unequal-probability-of-selection sample designs. PMID:29200608

  13. Regulation of Botulinum Neurotoxin Synthesis and Toxin Complex Formation by Arginine and Glucose in Clostridium botulinum ATCC 3502.

    PubMed

    Fredrick, Chase M; Lin, Guangyun; Johnson, Eric A

    2017-07-01

    Botulinum neurotoxin (BoNT), produced by neurotoxigenic clostridia, is the most potent biological toxin known and the causative agent of the paralytic disease botulism. The nutritional, environmental, and genetic regulation of BoNT synthesis, activation, stability, and toxin complex (TC) formation is not well studied. Previous studies indicated that growth and BoNT formation were affected by arginine and glucose in Clostridium botulinum types A and B. In the present study, C. botulinum ATCC 3502 was grown in toxin production medium (TPM) with different levels of arginine and glucose and of three products of arginine metabolism, citrulline, proline, and ornithine. Cultures were analyzed for growth (optical density at 600 nm [OD600]), spore formation, and BoNT and TC formation by Western blotting and immunoprecipitation and for BoNT activity by mouse bioassay. A high level of arginine (20 g/liter) repressed BoNT production approximately 1,000-fold, enhanced growth, slowed lysis, and reduced endospore production by greater than 1,000-fold. Similar effects on toxin production were seen with equivalent levels of citrulline but not ornithine or proline. In TPM lacking glucose, levels of formation of BoNT/A1 and TC were significantly decreased, and extracellular BoNT and TC proteins were partially inactivated after the first day of culture. An understanding of the regulation of C. botulinum growth and BoNT and TC formation should be valuable in defining requirements for BoNT formation in foods and clinical samples, improving the quality of BoNT for pharmaceutical preparations, and elucidating the biological functions of BoNTs for the bacterium. IMPORTANCE Botulinum neurotoxin (BoNT) is a major food safety and bioterrorism concern and is also an important pharmaceutical, and yet the regulation of its synthesis, activation, and stability in culture media, foods, and clinical samples is not well understood.
This paper provides insights into the effects of critical nutrients on growth, lysis, spore formation, BoNT and TC production, and stability of BoNTs of C. botulinum. We show that for C. botulinum ATCC 3502 cultured in a complex medium, a high level of arginine repressed BoNT expression by ca. 1,000-fold and also strongly reduced sporulation. Arginine stimulated growth and compensated for a lack of glucose. BoNT and toxin complex proteins were partially inactivated in a complex medium lacking glucose. This work should aid in optimizing BoNT production for pharmaceutical uses, and furthermore, an understanding of the nutritional regulation of growth and BoNT formation may provide insights into growth and BoNT formation in foods and clinical samples and into the enigmatic function of BoNTs in nature. Copyright © 2017 American Society for Microbiology.

  14. Reducing the Complexity Gap: Expanding the Period of Human Nurturance

    ERIC Educational Resources Information Center

    Kiel, L. Douglas

    2014-01-01

    Socio-techno-cultural reality, in the current historical era, evolves at a faster rate than do the human brain or human institutions. This reality creates a "complexity gap" that reduces human and institutional capacities to adapt to the challenges of late modernity. New insights from the neurosciences may help to reduce the complexity gap.…

  15. Delivery route determines the presence of immune complexes on umbilical cord erythrocytes.

    PubMed

    de Lima, Andrés; Franco, Luis C; Sarmiento, Andrés; González, John M

    2017-11-01

    Umbilical cord blood offers a unique opportunity to study the basal level of immunoglobulin complexes. This study aims to determine the presence of immune complexes and complement deposition on erythrocytes from umbilical cord blood from normal, full-term pregnancies. In vitro pre-formed IgA, IgG, and IgM complexes were used as positive controls for flow cytometry detection and for C3d deposition. Thirty-four umbilical cord blood samples taken from vaginal and cesarean deliveries were tested for the presence of immunoglobulin complexes: fourteen samples from vaginal deliveries and twenty from cesarean deliveries. IgG and IgM complexes were detected on erythrocytes, whereas no IgA complexes or complement deposition was observed. Interestingly, the percentage of IgG complexes was higher on erythrocytes from vaginal delivery samples than on those from cesarean deliveries. No other associations between immune complexes and other maternal or newborn variables were found. IgG and IgM complexes seem to be normally present on umbilical cord erythrocytes. Erythrocytes from vaginal deliveries have a higher percentage of IgG complexes than those from cesarean deliveries. Since no C3d activity was detected, these complexes are non-pathological and should be part of the newborn's initial innate immune response.

  16. Robust Timing Synchronization in Aeronautical Mobile Communication Systems

    NASA Technical Reports Server (NTRS)

    Xiong, Fu-Qin; Pinchak, Stanley

    2004-01-01

    This work details a study of robust synchronization schemes suitable for satellite to mobile aeronautical applications. A new scheme, the Modified Sliding Window Synchronizer (MSWS), is devised and compared with existing schemes, including the traditional Early-Late Gate Synchronizer (ELGS), the Gardner Zero-Crossing Detector (GZCD), and the Sliding Window Synchronizer (SWS). Performance of the synchronization schemes is evaluated by a set of metrics that indicate performance in digital communications systems. The metrics are convergence time, mean square phase error (or root mean-square phase error), lowest SNR for locking, initial frequency offset performance, midstream frequency offset performance, and system complexity. The performance of the synchronizers is evaluated by means of Matlab simulation models. A simulation platform is devised to model the satellite to mobile aeronautical channel, consisting of a Quadrature Phase Shift Keying modulator, an additive white Gaussian noise channel, and a demodulator front end. Simulation results show that the MSWS provides the most robust performance at the cost of system complexity. The GZCD provides a good tradeoff between robustness and system complexity for communication systems that require high symbol rates or low overall system costs. The ELGS has a high system complexity despite its average performance. Overall, the SWS, originally designed for multi-carrier systems, performs very poorly in single-carrier communications systems. Table 5.1 in Section 5 provides a ranking of each of the synchronization schemes in terms of the metrics set forth in Section 4.1. Details of comparison are given in Section 5. Based on the results presented in Table 5, it is safe to say that the most robust synchronization scheme examined in this work is the high-sample-rate Modified Sliding Window Synchronizer. A close second is its low-sample-rate cousin. 
The tradeoff between complexity and lowest mean-square phase error determines the rankings of the Gardner Zero-Crossing Detector and both versions of the Early-Late Gate Synchronizer. The least robust models are the high- and low-sample-rate Sliding Window Synchronizers. Consequently, the recommended replacement synchronizer for NASA's Advanced Air Transportation Technologies mobile aeronautical communications system is the high-sample-rate Modified Sliding Window Synchronizer. By incorporating this synchronizer into their system, NASA can be assured that their system will be operational in extremely adverse conditions. The quick convergence time of the MSWS should allow the use of high-level protocols. However, if NASA feels that reduced system complexity is the most important aspect of their replacement synchronizer, the Gardner Zero-Crossing Detector would be the best choice.
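
The Gardner detector ranked above has a compact textbook form: at two samples per symbol, the timing error is e[n] = (x[n] - x[n-1]) · x[n-1/2]. The sketch below (a generic illustration using linearly interpolated BPSK, not NASA's implementation) shows that the mean detector output vanishes at correct timing and becomes nonzero under a timing offset, which is what drives the tracking loop.

```python
import numpy as np

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=4000)        # BPSK symbol stream
sym_idx = np.arange(symbols.size)

def gardner_error(tau):
    """Mean Gardner timing-error-detector output at timing offset tau
    (in symbols), for a linearly interpolated baseband waveform."""
    n = np.arange(2, symbols.size - 2)
    strobe = np.interp(n + tau, sym_idx, symbols)        # on-time samples
    prev = np.interp(n - 1 + tau, sym_idx, symbols)
    mid = np.interp(n - 0.5 + tau, sym_idx, symbols)     # half-symbol samples
    return np.mean((strobe - prev) * mid)                # classic Gardner TED

print(gardner_error(0.0))    # zero: correctly timed, no correction needed
print(gardner_error(0.25))   # nonzero: late sampling produces a correction signal
```

With correct timing the mid-symbol sample sits on the transition, so the product averages to zero; any offset skews the mid-sample toward one symbol and yields a usable error signal.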

  17. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
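
A minimal sketch of output-layer noise injection, assuming a toy logistic model rather than a CNN. The screening rule that keeps only noise aligned with the error signal is a crude stand-in for the paper's likelihood-based hyperplane condition, not the NEM condition itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary classifier trained with noise injected at the output neuron.
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
t = (X @ w_true > 0).astype(float)            # labels from a linear rule

def loss(w):
    a = 1 / (1 + np.exp(-X @ w))
    return -np.mean(t * np.log(a + 1e-12) + (1 - t) * np.log(1 - a + 1e-12))

w = np.zeros(5)
loss0 = loss(w)
for _ in range(300):
    a = 1 / (1 + np.exp(-X @ w))
    n = 0.05 * rng.standard_normal(a.shape)
    n = np.where(n * (t - a) >= 0, n, 0.0)    # keep noise on the "good" side only
    grad = X.T @ ((a + n) - t) / len(t)       # noisy error signal at the output
    w -= 0.5 * grad
print(loss0, loss(w))
```

Training still converges with the injected noise; demonstrating the paper's claimed average speed-up would require the actual EM-based positivity condition.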

  18. Generalized sample entropy analysis for traffic signals based on similarity measure

    NASA Astrophysics Data System (ADS)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, matches signal patterns in a different way and reveals distinct behaviors of complexity. Simulations are conducted over synthetic data and traffic signals to provide a comparative study showing the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can quantitatively distinguish time series from different complex systems by the similarity measure.
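
The classical sample entropy underlying the method can be written with a pluggable distance function; swapping the Chebyshev distance for another similarity measure mimics the spirit of the modification (the paper's exact similarity distance is not reproduced here).

```python
import numpy as np

def sample_entropy(x, m=2, r=None, dist=None):
    """Sample entropy with a pluggable distance.  The default Chebyshev
    distance gives classical SampEn; passing another `dist` changes the
    pattern-similarity measure (illustrative, not the paper's measure)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    if dist is None:
        dist = lambda a, b: np.max(np.abs(a - b), axis=-1)   # Chebyshev

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = dist(templates[i + 1:], templates[i])
            c += np.sum(d <= r)
        return c

    B, A = count_matches(m), count_matches(m + 1)   # m- and (m+1)-length matches
    return -np.log(A / B)

rng = np.random.default_rng(4)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
irregular = rng.standard_normal(1000)
print(sample_entropy(regular), sample_entropy(irregular))  # irregular is higher
```

A regular signal keeps matching at length m+1 whenever it matched at length m, so A/B stays near one and the entropy stays low; irregular signals lose matches and score higher.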

  19. Meta-analysis of mismatch negativity to simple versus complex deviants in schizophrenia.

    PubMed

    Avissar, Michael; Xie, Shanghong; Vail, Blair; Lopez-Calderon, Javier; Wang, Yuanjia; Javitt, Daniel C

    2018-01-01

    Mismatch negativity (MMN) deficits in schizophrenia (SCZ) have been studied extensively since the early 1990s, with the vast majority of studies using simple auditory oddball task deviants that vary in a single acoustic dimension such as pitch or duration. There has been a growing interest in using more complex deviants that violate more abstract rules to probe higher order cognitive deficits. It is still unclear how sensory processing deficits compare to and contribute to higher order cognitive dysfunction, which can be investigated with later attention-dependent auditory event-related potential (ERP) components such as a subcomponent of P300, P3b. In this meta-analysis, we compared MMN deficits in SCZ using simple deviants to more complex deviants. We also pooled studies that measured MMN and P3b in the same study sample and examined the relationship between MMN and P3b deficits within study samples. Our analysis reveals that, to date, studies using simple deviants demonstrate larger deficits than those using complex deviants, with effect sizes in the range of moderate to large. The difference in effect sizes between deviant types was reduced significantly when accounting for magnitude of MMN measured in healthy controls. P3b deficits, while large, were only modestly greater than MMN deficits (d=0.21). Taken together, our findings suggest that MMN to simple deviants may still be optimal as a biomarker for SCZ and that sensory processing dysfunction contributes significantly to MMN deficit and disease pathophysiology. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Mitochondrial dysfunction in blood cells from amyotrophic lateral sclerosis patients.

    PubMed

    Ehinger, Johannes K; Morota, Saori; Hansson, Magnus J; Paul, Gesine; Elmér, Eskil

    2015-06-01

    Mitochondrial dysfunction is implicated in amyotrophic lateral sclerosis, where the progressive degeneration of motor neurons results in muscle atrophy, paralysis and death. Abnormalities in both central nervous system and muscle mitochondria have previously been demonstrated in patient samples, indicating systemic disease. In this case-control study, venous blood samples were acquired from 24 amyotrophic lateral sclerosis patients and 21 age-matched controls. Platelets and peripheral blood mononuclear cells were isolated and mitochondrial oxygen consumption measured in intact and permeabilized cells with additions of mitochondrial substrates, inhibitors and titration of an uncoupler. Respiratory values were normalized to cell count and to two markers of cellular mitochondrial content, citrate synthase activity and mitochondrial DNA, respectively. Mitochondrial function was correlated with clinical staging of disease severity. Complex IV (cytochrome c oxidase) activity normalized to mitochondrial content was decreased in platelets from amyotrophic lateral sclerosis patients, both when normalized to citrate synthase activity and when normalized to mitochondrial DNA copy number. In mononuclear cells, complex IV activity was decreased when normalized to citrate synthase activity. Mitochondrial content was increased in amyotrophic lateral sclerosis patient platelets. In mononuclear cells, complex I activity declined and mitochondrial content increased progressively with advancing disease stage. The findings are, however, based on small subsets of patients and need to be confirmed. We conclude that, when normalized to mitochondria-specific content, complex IV activity is reduced in blood cells from amyotrophic lateral sclerosis patients and that there is an apparent compensatory increase in cellular mitochondrial content.
This supports systemic involvement in amyotrophic lateral sclerosis and suggests further study of mitochondrial function in blood cells as a future biomarker for the disease.

  1. Diabetic retinopathy and complexity of retinal surgery in a general hospital.

    PubMed

    Mijangos-Medina, Laura Fanny; Hurtado-Noriega, Blanca Esmeralda; Lima-Gómez, Virgilio

    2012-01-01

    Usual retinal surgery (vitrectomy or surgery for retinal detachment) may require additional procedures to deal with complex cases, which increase time and resource use and delay access to treatment. We undertook this study to identify the proportion of primary retinal surgeries that required complex procedures and the associated causes. We carried out an observational, descriptive, cross-sectional, retrospective study. Patients with primary retinal surgery were evaluated (January 2007-December 2010). The proportion and 95% confidence intervals (CI) of preoperative diagnosis and cause of the disease requiring retinal surgery, as well as the causes for complex retinal surgery, were identified. Complex retinal surgery was defined as that requiring lens extraction, intraocular lens implantation, heavy perfluorocarbon liquids, silicone oil tamponade or intravitreal drugs, in addition to the usual surgical retinal procedure. The proportion of complex retinal surgeries was compared among preoperative diagnoses and among causes (χ(2), odds ratio [OR]). We studied 338 eyes. Mean age of subjects was 53.7 years, and 49% were female. The most common diagnoses were vitreous hemorrhage (27.2%) and rhegmatogenous retinal detachment (24.6%). The most common cause was diabetes (50.6%); 273 eyes required complex surgery (80.8%, 95% CI: 76.6-85). The proportion did not differ among diagnoses but was higher in diabetic retinopathy (89%, p <0.001, OR 3.04, 95% CI: 1.63-5.7). Of the total sample, 80.8% of eyes required complex surgical procedures; diabetic retinopathy increased the probability of requiring these complex procedures 3-fold. Early treatment of diabetic retinopathy may reduce the proportion of complex retinal surgery by 56%.
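
The reported interval can be checked directly: a Wald (normal-approximation) 95% CI for 273 complex surgeries out of 338 eyes reproduces the abstract's 80.8% (76.6-85).

```python
import math

# Wald 95% CI for the reported proportion of complex surgeries: 273 of 338 eyes.
n, k = 338, 273
p = k / n
se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(round(100 * p, 1), round(100 * lo, 1), round(100 * hi, 1))
```

This matches the published 80.8% with CI 76.6 to 85.0, confirming the authors used the standard normal approximation.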

  2. Diminished superoxide generation is associated with respiratory chain dysfunction and changes in the mitochondrial proteome of sensory neurons from diabetic rats.

    PubMed

    Akude, Eli; Zherebitskaya, Elena; Chowdhury, Subir K Roy; Smith, Darrell R; Dobrowsky, Rick T; Fernyhough, Paul

    2011-01-01

    Impairments in mitochondrial function have been proposed to play a role in the etiology of diabetic sensory neuropathy. We tested the hypothesis that mitochondrial dysfunction in axons of sensory neurons in type 1 diabetes is due to abnormal activity of the respiratory chain and an altered mitochondrial proteome. Proteomic analysis using stable isotope labeling with amino acids in cell culture (SILAC) determined expression of proteins in mitochondria from dorsal root ganglia (DRG) of control, 22-week-old streptozotocin (STZ)-diabetic rats, and diabetic rats treated with insulin. Rates of oxygen consumption and complex activities in mitochondria from DRG were measured. Fluorescence imaging of axons of cultured sensory neurons determined the effect of diabetes on mitochondrial polarization status, oxidative stress, and mitochondrial matrix-specific reactive oxygen species (ROS). Proteins associated with mitochondrial dysfunction, oxidative phosphorylation, ubiquinone biosynthesis, and the citric acid cycle were downregulated in diabetic samples. For example, cytochrome c oxidase subunit IV (COX IV; a complex IV protein) and NADH dehydrogenase Fe-S protein 3 (NDUFS3; a complex I protein) were reduced by 29 and 36% (P < 0.05), respectively, in diabetes, confirming previous Western blot studies. Respiration and mitochondrial complex activity were significantly decreased by 15 to 32% compared with control. The axons of diabetic neurons exhibited oxidative stress and depolarized mitochondria, an aberrant adaptation to oligomycin-induced mitochondrial membrane hyperpolarization, but reduced levels of intramitochondrial superoxide compared with control. Abnormal mitochondrial function correlated with a downregulation of mitochondrial proteins, with components of the respiratory chain targeted in lumbar DRG in diabetes.
The reduced activity of the respiratory chain was associated with diminished superoxide generation within the mitochondrial matrix and did not contribute to oxidative stress in axons of diabetic neurons. Alternative pathways involving polyol pathway activity appear to contribute to raised ROS in axons of diabetic neurons under high glucose concentration.

  3. Estimation of Variance in the Case of Complex Samples.

    ERIC Educational Resources Information Center

    Groenewald, A. C.; Stoker, D. J.

    In a complex sampling scheme it is desirable to select the primary sampling units (PSUs) without replacement to prevent duplications in the sample. Since the estimation of the sampling variances is more complicated when the PSUs are selected without replacement, L. Kish (1965) recommends that the variance be calculated using the formulas…

  4. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, reducing the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary approach converts the 2D problem into 1D via the Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we instead apply the 2D-SL0 algorithm to the image reconstruction. We show that 2D-SL0 achieves results equivalent to 1D reconstruction methods while reducing the computational complexity and memory usage significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
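
The cost of the Kronecker lifting mentioned above is easy to see numerically: the 2D model Y = A X Bᵀ is equivalent to the 1D system vec(Y) = (B ⊗ A) vec(X), but the lifted dictionary stores the full Kronecker product. Matrix sizes below are arbitrary illustrations, not the paper's radar dimensions.

```python
import numpy as np

rng = np.random.default_rng(5)

A = rng.standard_normal((8, 16))    # range-dimension sensing matrix
B = rng.standard_normal((10, 20))   # azimuth-dimension sensing matrix
X = rng.standard_normal((16, 20))   # 2D scene (reflectivity map)

Y = A @ X @ B.T                     # 2D measurement model
vec = lambda M: M.flatten(order="F")            # column-major vectorization
Y1d = np.kron(B, A) @ vec(X)                    # lifted 1D formulation

print(np.allclose(vec(Y), Y1d))                 # same model, two formulations
print(np.kron(B, A).shape)                      # 80x320 dictionary vs A, B alone
```

The Kronecker dictionary here holds 80 × 320 = 25,600 entries against only 128 + 200 for A and B separately, which is why 2D solvers such as 2D-SL0 that operate on A and B directly save memory and computation.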

  5. Development of a low-cost optical sensor for cupric reducing antioxidant capacity measurement of food extracts.

    PubMed

    Bener, Mustafa; Ozyürek, Mustafa; Güçlü, Kubilay; Apak, Reşat

    2010-05-15

    A low-cost optical sensor using an immobilized chromogenic redox reagent was devised for measuring the total antioxidant level in a liquid sample without requiring sample pretreatment. The reagent, the copper(II)-neocuproine (Cu(II)-Nc) complex, was immobilized onto a cation-exchanger film of Nafion, and the absorbance changes associated with the formation of the highly colored Cu(I)-Nc chelate as a result of reaction with antioxidants were measured at 450 nm. The sensor gave a linear response over a wide concentration range of standard antioxidant compounds. The trolox equivalent antioxidant capacity (TEAC) values of various antioxidants reported in this work using the optical sensor-based "cupric reducing antioxidant capacity" (CUPRAC) assay were comparable to those of the standard solution-based CUPRAC assay, showing that the immobilized Cu(II)-Nc reagent retained its reactivity toward antioxidants. Common food ingredients like oxalate, citrate, fruit acids, and reducing sugars did not interfere with the proposed sensing method. The assay was validated for linearity, additivity, precision, and recovery, demonstrating that it is reliable and robust. The developed optical sensor was used to screen the total antioxidant capacity (TAC) of some commercial fruit juices without preliminary treatment and shows promising potential for preparing antioxidant inventories of a wide range of food plants.

  6. Energy-saving method for technogenic waste processing

    PubMed Central

    Dikhanbaev, Bayandy; Dikhanbaev, Aristan Bayandievich

    2017-01-01

    Dumps of the mining-metallurgical complexes of the post-Soviet republics have accumulated a huge amount of technogenic waste products; Kazakhstan alone has stockpiled about 20 billion tons. In the field of technogenic waste treatment, there is still no technical solution that makes it a profitable process. Recent global trends have prompted scientists to focus on developing an energy-saving and highly efficient melting unit that can significantly reduce specific fuel consumption. This paper reports the development of a new technological method: the smelt layer of inversion phase. The introduced method is characterized by a combination of ideal-stirring and ideal-displacement regimes. Using affine modelling, the pilot plant's test results were recalculated for an industrial-scale sample. Experiments show that, in comparison with bubbling and boiling layers of smelt, the degree of zinc recovery increases in the layer of inversion phase. This indicates a reduced possibility of new formation of zinc silicates and ferrites from recombined molecules of ZnO, SiO2, and Fe2O3. Calculations show that in the industrial-scale version of the pilot plant, natural gas consumption is reduced approximately twofold in comparison with a fuming furnace. Specific fuel consumption is reduced approximately fourfold in comparison with a Waelz kiln. PMID:29281646

  7. Redox-dependent substrate-cofactor interactions in the Michaelis-complex of a flavin-dependent oxidoreductase

    NASA Astrophysics Data System (ADS)

    Werther, Tobias; Wahlefeld, Stefan; Salewski, Johannes; Kuhlmann, Uwe; Zebger, Ingo; Hildebrandt, Peter; Dobbek, Holger

    2017-07-01

    How an enzyme activates its substrate for turnover is fundamental for catalysis but incompletely understood on a structural level. With redox enzymes, one typically analyses structures of enzyme-substrate complexes in the unreactive oxidation state of the cofactor, assuming that the interaction between enzyme and substrate is independent of the cofactor's oxidation state. Here, we investigate the Michaelis complex of the flavoenzyme xenobiotic reductase A with the reactive reduced cofactor bound to its substrates by X-ray crystallography and resonance Raman spectroscopy and compare it to non-reactive oxidized Michaelis complex mimics. We find that substrates bind in different orientations to the oxidized and reduced flavin, in both cases flattening its structure. But only authentic Michaelis complexes display an unexpectedly rich vibrational band pattern, uncovering a strong donor-acceptor complex between reduced flavin and substrate. This interaction likely activates the catalytic ground state of the reduced flavin, accelerating the reaction within a compressed cofactor-substrate complex.

  9. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    PubMed

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis identified the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest-ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with the RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. Focusing on staff training reduced the RPN of the critical failure modes in sample processing by 50%. Three indicators of FMEA success, based on technical skill, competence and traceability, were evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
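
FMEA scores each failure mode with a risk priority number, conventionally the product of severity, occurrence, and detectability ratings. A hedged sketch of that bookkeeping, with hypothetical failure modes and 1-5 scales chosen only to reproduce the 25-to-5 pattern the study attributes to witnessing (none of this is the study's actual scoring table):

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: product of three 1-5 scores
    (a higher detection score = the failure is harder to detect)."""
    return severity * occurrence * detection

# Hypothetical failure modes (all scores are illustrative)
modes = {
    "patient mismatch":           (5, 1, 5),
    "biological sample mismatch": (5, 1, 5),
    "label misread":              (3, 2, 2),
}
before = {name: rpn(*s) for name, s in modes.items()}

# Double-operator witnessing improves detectability of mismatches (5 -> 1)
after = {name: rpn(s[0], s[1], 1) if s[2] == 5 else rpn(*s)
         for name, s in modes.items()}
print(before["patient mismatch"], "->", after["patient mismatch"])  # 25 -> 5
```

On these illustrative scales, witnessing leaves severity and occurrence untouched but makes a mismatch detectable, collapsing its RPN.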

  10. New Laboratory Technique to Determine Thermal Conductivity of Complex Regolith Simulants Under High Vacuum

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2016-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under high vacuum and across a wide range of temperatures. Here, we present our laboratory method, strategy, and initial results. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and cementation. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe and meets grain size requirements for sampling. Our system consists of a cryostat vacuum chamber with an internal liquid nitrogen dewar. A granular sample is contained in a cylindrical cup that is 4 cm in diameter and 1 to 6 cm deep. The surface of the sample is exposed to vacuum and is surrounded by a black liquid nitrogen cold shroud. Once the system has equilibrated at 80 K, the base of the sample cup is rapidly heated to 450 K. An infrared camera observes the sample from above to monitor its temperature change over time. We have built a time-dependent finite element model of the experiment in COMSOL Multiphysics. Boundary temperature conditions and all known material properties (including surface emissivities) are included to replicate the experiment as closely as possible. The Optimization module in COMSOL is specifically designed for parameter estimation. Sample thermal conductivity is assumed to be a quadratic or cubic polynomial function of temperature. We thus use gradient-based optimization methods in COMSOL to vary the polynomial coefficients in an effort to reduce the least squares error between the measured and modeled sample surface temperature.

  11. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there is an increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROMs), which serve as computationally efficient surrogates for high-fidelity models (HFMs), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and is intended to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation, and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time, iteratively selecting the most contributing coefficients. The computational complexity of predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
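
POD itself reduces to a truncated SVD of a snapshot matrix whose columns are flattened model outputs. A self-contained sketch on synthetic snapshots (the sizes and data are illustrative, not NRAP simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "concentration field" snapshots: each column is one model run
# flattened to a vector, built from 3 spatial modes so POD can compress it.
n_cells, n_runs, k = 500, 40, 3
spatial_modes = rng.standard_normal((n_cells, k))
weights = rng.standard_normal((k, n_runs))
snapshots = spatial_modes @ weights + 1e-3 * rng.standard_normal((n_cells, n_runs))

# POD: left singular vectors of the snapshot matrix form the spatial basis
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :k]                  # keep the k most energetic modes
coeffs = basis.T @ snapshots      # reduced coordinates (k x n_runs)
recon = basis @ coeffs

err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
energy = s[:k].sum() / s.sum()
print(f"relative error {err:.1e}, captured singular-value fraction {energy:.3f}")
```

A surrogate then only needs to predict the k coefficients per run instead of all 500 cell values.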

  12. Efficient two-dimensional compressive sensing in MIMO radar

    NASA Astrophysics Data System (ADS)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) offers a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model, based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent (2D-MMDGD) has much lower computational complexity than one-dimensional (1D) methods while performing better than conventional designs such as a Gaussian random measurement matrix.
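
The core of such a design loop is a coherence measure on the sensing matrix plus a descent step. The sketch below is illustrative only: it uses a smooth surrogate (the Frobenius distance of the Gram matrix from the identity) in place of the paper's 2D formulation, and a plain Gaussian start:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

m, n = 16, 32
A = rng.standard_normal((m, n))
before = mutual_coherence(A)

# Gradient descent on the smooth surrogate ||A^T A - I||_F^2,
# whose gradient is 4 A (A^T A - I); columns renormalized each step.
for _ in range(300):
    A /= np.linalg.norm(A, axis=0)
    A -= 0.01 * 4.0 * A @ (A.T @ A - np.eye(n))

after = mutual_coherence(A)
print(f"coherence: {before:.3f} -> {after:.3f}")
```

Lower coherence of the sensing matrix is what makes sparse recovery from the compressed measurements better conditioned.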

  13. Dual-comb spectroscopy of water vapor with a free-running semiconductor disk laser.

    PubMed

    Link, S M; Maas, D J H C; Waldburger, D; Keller, U

    2017-06-16

    Dual-comb spectroscopy offers the potential for high accuracy combined with fast data acquisition. Applications are often limited, however, by the complexity of optical comb systems. Here we present dual-comb spectroscopy of water vapor using a substantially simplified single-laser system. Very good spectroscopy measurements with fast sampling rates are achieved with a free-running dual-comb mode-locked semiconductor disk laser. The absolute stability of the optical comb modes is characterized both for free-running operation and with simple microwave stabilization. This approach drastically reduces the complexity for dual-comb spectroscopy. Band-gap engineering to tune the center wavelength from the ultraviolet to the mid-infrared could optimize frequency combs for specific gas targets, further enabling dual-comb spectroscopy for a wider range of industrial applications. Copyright © 2017, American Association for the Advancement of Science.

  14. Statistical analysis and machine learning algorithms for optical biopsy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.

    2018-02-01

    Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the underlying biology. Robust methods that can utilize the spectral or imaging data and detect the characteristic spectral or spatial signatures of different types of tissue are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. The algorithms, including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network, are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal tissue and cancerous tissue samples. The efficacies of the methods are compared.
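
The pipeline of dimension reduction followed by a linear classifier can be sketched with PCA via SVD on synthetic two-class "spectra". A nearest-centroid rule stands in for the linear-kernel SVM, and all data below are simulated, not the study's Raman measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "Raman spectra": two classes differing in one band's intensity
n_per, n_bins = 40, 200
band = np.exp(-0.5 * ((np.arange(n_bins) - 120) / 6.0) ** 2)
normal = rng.standard_normal((n_per, n_bins)) * 0.3 + 1.0 * band
cancer = rng.standard_normal((n_per, n_bins)) * 0.3 + 2.0 * band
X = np.vstack([normal, cancer])
y = np.array([0] * n_per + [1] * n_per)

# PCA via SVD of the mean-centered data, keeping 2 components
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classifier in PCA space (a simple linear stand-in
# for the linear-kernel SVM used in the study)
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
acc = (pred == y.astype(bool)).mean()
print(f"training accuracy {acc:.2f}")
```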

  15. A method to calibrate phase fluctuation in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2011-06-01

    A phase fluctuation calibration method is presented for polarization-sensitive swept-source optical coherence tomography (PS-SS-OCT) using continuous polarization modulation. The method consists of generating a continuously triggered tone-burst waveform, rather than an asynchronous waveform, with a function generator, and removing the global phases of the measured Jones matrices by matrix normalization. This removes the need for auxiliary optical components for phase fluctuation compensation in the system, reducing system complexity. Phase fluctuation calibration is necessary to obtain the reference Jones matrix by averaging the measured Jones matrices at sample surfaces. Measurements on an equine tendon sample were made with the PS-SS-OCT system to validate the proposed method.

  16. A sub-microwatt asynchronous level-crossing ADC for biomedical applications.

    PubMed

    Li, Yongjia; Zhao, Duan; Serdijn, Wouter A

    2013-04-01

    A continuous-time level-crossing analog-to-digital converter (LC-ADC) for biomedical applications is presented. When compared to uniform-sampling (US) ADCs, LC-ADCs generate fewer samples for various sparse biomedical signals. Lower power consumption and reduced design complexity with respect to conventional LC-ADCs are achieved by: 1) replacing the n-bit digital-to-analog converter (DAC) with a 1-bit DAC; 2) splitting the level-crossing detections; and 3) fixing the comparison window. Designed and implemented in 0.18 μm CMOS technology, the proposed ADC occupies a chip area of 220 × 203 μm². Operating from a supply voltage of 0.8 V, the ADC consumes 313-582 nW from 5 Hz to 5 kHz and achieves an ENOB of up to 7.9 bits.
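
The event-driven sampling rule of an LC-ADC (emit a sample only when the input moves a full level away from the last emitted value) can be simulated directly. A sketch on a synthetic flat-plus-burst signal; the level spacing and waveform are illustrative, not taken from the chip's design:

```python
import numpy as np

def level_crossing_sample(signal, delta):
    """Emit a sample whenever the signal moves one level (delta) away
    from the last emitted value -- the LC-ADC's event-driven rule.
    Assumes the input is densely oversampled (one level step per point)."""
    samples = [(0, signal[0])]
    last = signal[0]
    for i, v in enumerate(signal):
        if abs(v - last) >= delta:
            last += delta * np.sign(v - last)
            samples.append((i, last))
    return samples

# A sparse biomedical-like trace: flat baseline with one short burst
t = np.linspace(0, 1, 2000)
sig = np.where((t > 0.45) & (t < 0.55),
               np.sin(2 * np.pi * 40 * (t - 0.45)), 0.0)

lc = level_crossing_sample(sig, delta=0.1)
print(f"{len(lc)} level-crossing samples vs {len(sig)} uniform samples")
```

On this trace the level-crossing rule emits roughly an order of magnitude fewer samples than the 2000-point uniform grid, which is exactly why LC-ADCs suit sparse biomedical signals.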

  17. Archaeo-directional and -intensity data from burnt structures at the Thracian site of Halka Bunar (Bulgaria): The effect of magnetic mineralogy, temperature and atmosphere of heating in antiquity

    NASA Astrophysics Data System (ADS)

    Herries, A. I. R.; Kovacheva, M.; Kostadinova, M.; Shaw, J.

    2007-07-01

    Archaeomagnetic results are presented from a series of burnt structures at the Thracian site of Halka Bunar. Archaeointensity and archaeodirectional studies were undertaken on three kilns from a pottery production complex. This has been dated to the late 4th and early 3rd century B.C. (325-280 B.C.) based on coins found associated with the kilns [Tonkova, M., 2003. Newly discovered Thracian Centre of the Early Hellenistic Age at the Spring "Halka Bunar" in the Land of C. Gorno Belevo. Annuary of the Institute of Archaeology with Museum. Bulgarian Academy Sci. 2, 148-196 (in Bulgarian)]. These data provide a new point for the Bulgarian archaeomagnetic curve (Dec: 348.70 ± 5.79, Inc: 62.20 ± 2.70, and Fa: 77.23 ± 2.17 μT). The kilns are thought to have been used for producing different types of pottery in a range of heating atmospheres and at different temperatures. Therefore, special attention was paid to the magnetic mineralogy of the samples and its effect on the palaeodata. The orange clay samples of Kiln 3 were dominated by fine to ultra-fine grained single domain and superparamagnetic magnetite, with a small proportion of haematite. These samples were heated in a high temperature oxidising environment. Kiln 2 was probably used to make grey ware pottery. Its samples are light grey and were dominated by stable single domain magnetite formed by high temperature heating in a more reducing environment. The mottled samples of Kiln 4 showed a variable mineralogy with characteristics of both the Kiln 2 and Kiln 3 samples. Kiln 4 was probably used to make traditional, mottled, Thracian ware pottery and was heated to lower temperatures in a mixed heating environment. Samples heated in an oxidising environment gave more reliable Thellier results than samples heated in a reducing environment in antiquity, as the latter altered heavily on re-heating.
A fourth kiln and a destruction feature from different trenches than the kiln complex were also investigated to establish their age. Archaeodirectional data was not recoverable from these two structures due to post-burning disturbance. The mean archaeointensity from Kiln 5 (mean 78.0 ± 1.7 μT) is consistent with that from the main kiln complex (mean 77.23 ± 2.17 μT) and is therefore considered to be contemporary. It was probably not used to make pottery. The destruction feature records much lower archaeointensity values (mean 65.1 ± 1.1 μT). When this value is compared to the existing reference points of the Bulgarian database it suggests this feature is younger than the kilns (250-140 B.C.). Multiple age use of the site is therefore confirmed with a main period of occupation in the late 4th and early 3rd century B.C. and another phase of occupation in the mid 3rd to mid 2nd century B.C.

  18. Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data

    PubMed Central

    Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information becomes a more and more pressing issue. Recent literature has tried to deal with it by dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical-sampling-based sparse representation. First, we sampled the whole brain's signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by ten times without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
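
The idea of learning the dictionary from a small voxel sample and then coding the whole brain with it can be sketched on synthetic low-rank "fMRI" data. An SVD basis and least-squares coding stand in for the paper's online dictionary learning and sparse coding, so this is a structural sketch only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic whole-brain data: 20000 voxel time series that (approximately)
# live in a low-dimensional temporal subspace, plus noise.
n_vox, n_t, k = 20000, 120, 10
networks = rng.standard_normal((k, n_t))      # latent temporal components
loadings = rng.standard_normal((n_vox, k))
X = loadings @ networks + 0.05 * rng.standard_normal((n_vox, n_t))

# Learn the temporal "dictionary" from a 5% random sample of voxels only
idx = rng.choice(n_vox, size=n_vox // 20, replace=False)
_, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
D = Vt[:k]                                    # k temporal atoms

# Represent ALL voxels on the sampled dictionary (orthogonal projection
# here; the study uses sparse coding instead)
coeffs = X @ D.T
err = np.linalg.norm(X - coeffs @ D) / np.linalg.norm(X)
print(f"relative reconstruction error {err:.3f}")
```

Because the dictionary is learned from 5% of the voxels, the expensive learning step shrinks accordingly while the whole-brain representation barely degrades.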

  19. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is mapped through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
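
A stripped-down version of the hierarchical idea is a bridge function that maps low-fidelity outputs, plus a polynomial trend in x, onto a handful of high-fidelity samples. The sketch uses the classic Forrester benchmark pair rather than the article's test cases, and drops the kriging discrepancy term entirely:

```python
import numpy as np

def f_hi(x):   # high-fidelity model (Forrester et al. benchmark)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_lo(x):   # cheap low-fidelity version of the same function
    return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

# Only 4 expensive high-fidelity samples; low-fidelity runs are free here
x_hf = np.array([0.0, 0.4, 0.6, 1.0])
A = np.column_stack([f_lo(x_hf), x_hf, np.ones_like(x_hf)])
rho_c = np.linalg.lstsq(A, f_hi(x_hf), rcond=None)[0]

def surrogate(x):
    """Low-fidelity output corrected through the fitted linear bridge."""
    return rho_c[0] * f_lo(x) + rho_c[1] * x + rho_c[2]

x_test = np.linspace(0, 1, 101)
err = np.max(np.abs(surrogate(x_test) - f_hi(x_test)))
print(f"max error on [0,1]: {err:.2e}")
```

For this particular pair the high-fidelity model is exactly linear in (f_lo, x, 1), so four samples recover it; real problems keep a kriging model for the leftover discrepancy.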

  20. An uncertainty analysis of the hydrogen source term for a station blackout accident in Sequoyah using MELCOR 1.8.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles

    2014-03-01

    A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products, under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah that included 10 uncertain parameters were conducted, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in the results was observed when plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reducing the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
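
Latin Hypercube Sampling divides each parameter's range into N equal-probability strata and places exactly one of the N realizations in every stratum of every dimension. A minimal sketch on the unit hypercube, with sizes chosen to mirror the 40-realization, 10-parameter study (the mapping to actual MELCOR parameter distributions is omitted):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One sample per equal-probability stratum in every dimension:
    permute the strata independently per dimension, then jitter inside."""
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    perms = rng.permuted(strata, axis=1).T           # (n_samples, n_dims)
    return (perms + rng.random((n_samples, n_dims))) / n_samples

rng = np.random.default_rng(4)
pts = latin_hypercube(40, 10, rng)   # e.g. 40 realizations, 10 parameters

# Every dimension has exactly one point in each of the 40 strata
counts = [np.histogram(pts[:, j], bins=40, range=(0, 1))[0] for j in range(10)]
print(all((c == 1).all() for c in counts))  # → True
```

Compared with plain Monte Carlo, this stratification covers each marginal distribution evenly even with only 40 runs.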

  1. Ultimate detectability of volatile organic compounds: how much further can we reduce their ambient air sample volumes for analysis?

    PubMed

    Kim, Yong-Hyun; Kim, Ki-Hyun

    2012-10-02

    To understand the ultimately lowest detection range of volatile organic compounds (VOCs) in air, a high-sensitivity analytical system was investigated by coupling the thermal desorption (TD) technique with gas chromatography (GC) and time-of-flight (TOF) mass spectrometry (MS). The performance of the TD-GC/TOF MS system was evaluated using liquid standards of 19 target VOCs prepared in the range of 35 pg to 2.79 ng per μL. Studies were carried out in both total ion chromatogram (TIC) and extracted ion chromatogram (EIC) modes. EIC mode was used for calibration to reduce background and to improve the signal-to-noise ratio. The detectability of the 19 target VOCs, assessed in terms of the method detection limit (MDL, per the US EPA definition) and the limit of detection (LOD), averaged 5.90 pg and 0.122 pg, respectively, with a mean coefficient of correlation (R²) of 0.9975. The minimum quantifiable mass of target analytes, when determined using real air samples by the TD-GC/TOF MS, is highly comparable to the detection limits determined experimentally with standards. In fact, volumes for the actual detection of the major aromatic VOCs like benzene, toluene, and xylene (BTX) in ambient air samples were as low as 1.0 mL in the 0.11-2.25 ppb range. It was thus possible to demonstrate that most target compounds, including those in low abundance, could be reliably quantified at concentrations down to 0.1 ppb at sample volumes of less than 10 mL. The unique sensitivity of this advanced analytical system can ultimately lead to a shift in field sampling strategy, with smaller air sample volumes facilitating faster, simpler air sampling (e.g., use of gas syringes rather than the relative complexity of pumps or bags/canisters), with greatly reduced risk of analyte breakthrough and minimal interference, e.g., from atmospheric humidity. The improved detection limits offered by this system can also enhance accuracy and measurement precision.
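
The US EPA method detection limit referenced above is computed from replicate low-level spike analyses as the replicate standard deviation times a one-sided 99% Student-t value. A sketch with hypothetical replicate masses; the t value is hard-coded for the common 7-replicate case, and the numbers are illustrative, not the paper's data:

```python
import numpy as np

# EPA-style MDL from n replicate low-level spikes:
# MDL = t_(n-1, 0.99) * s, with s the replicate standard deviation.
T_99_DF6 = 3.143  # Student-t, 6 degrees of freedom, 99% one-sided (n = 7)

def method_detection_limit(replicates):
    assert len(replicates) == 7, "t value above assumes 7 replicates"
    s = np.std(replicates, ddof=1)
    return T_99_DF6 * s

# Hypothetical benzene replicate masses (pg) near the detection limit
reps = np.array([5.1, 6.3, 5.8, 6.9, 5.5, 6.1, 6.5])
print(f"MDL = {method_detection_limit(reps):.2f} pg")
```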

  2. Origin of middle rare earth element enrichments in acid waters of a Canadian high Arctic lake.

    NASA Astrophysics Data System (ADS)

    Johannesson, Kevin H.; Zhou, Xiaoping

    1999-01-01

    Middle rare earth element (MREE) enriched rock-normalized rare earth element (REE) patterns of a dilute acidic lake (Colour Lake) in the Canadian High Arctic were investigated by quantifying whole-rock REE concentrations of rock samples collected from the catchment basin, as well as determining the acid-leachable REE fraction of these rocks. An aliquot of each rock sample was leached with 1 N HNO3 to examine the readily leachable REE fraction of each rock, and an additional aliquot was leached with a 0.04 M NH2OH·HCl in 25% (v/v) CH3COOH solution, designed specifically to reduce Fe-Mn oxides/oxyhydroxides. Rare earth elements associated with the leachates that reacted with clastic sedimentary rock samples containing petrographically identifiable Fe-Mn oxide/oxyhydroxide cements and/or minerals/amorphous phases exhibited whole-rock-normalized REE patterns similar to the lake waters, whereas whole-rock-normalized leachates from mafic igneous rocks and other clastic sedimentary rocks from the catchment basin differed substantially from the lake waters. The whole-rock, leachate, and lake water REE data support acid leaching or dissolution of MREE-enriched Fe-Mn oxides/oxyhydroxides contained and identified within some of the catchment basin sedimentary rocks as the likely source of the unique lake water REE patterns. Solution complexation modelling of the REEs in the inflow streams and lake waters indicates that free metal ions (e.g., Ln3+, where Ln = any REE) and sulfate complexes (LnSO4+) are the dominant forms of dissolved REEs. Consequently, solution complexation reactions involving the REEs during weathering, transport to the lake, or within the lake cannot be invoked to explain the MREE enrichments observed in the lake waters.

  3. Effects of Preretirement Work Complexity and Postretirement Leisure Activity on Cognitive Aging

    PubMed Central

    Finkel, Deborah; Pedersen, Nancy L.

    2016-01-01

    Objectives: We examined the influence of postretirement leisure activity on longitudinal associations between work complexity in main lifetime occupation and trajectories of cognitive change before and after retirement. Methods: Information on complexity of work with data, people, and things, leisure activity participation in older adulthood, and four cognitive factors (verbal, spatial, memory, and speed) was available from 421 individuals in the longitudinal Swedish Adoption/Twin Study of Aging. Participants were followed for an average of 14.2 years (SD = 7.1 years) and up to 23 years across eight cognitive assessments. Most of the sample (88.6%) completed at least three cognitive assessments. Results: Results of growth curve analyses indicated that higher complexity of work with people significantly attenuated cognitive aging in verbal skills, memory, and speed of processing controlling for age, sex, and education. When leisure activity was added, greater cognitive and physical leisure activity was associated with reduced cognitive aging in verbal skills, speed of processing, and memory (for cognitive activity only). Discussion: Engagement in cognitive or physical leisure activities in older adulthood may compensate for cognitive disadvantage potentially imposed by working in occupations that offer fewer cognitive challenges. These results may provide a platform to encourage leisure activity participation in those retiring from less complex occupations. PMID:25975289

  4. Impact of Data Assimilation on Cost-Accuracy Tradeoff in Multi-Fidelity Models at the Example of an Infiltration Problem

    NASA Astrophysics Data System (ADS)

    Sinsbeck, Michael; Tartakovsky, Daniel

    2015-04-01

    Infiltration into topsoil can be described by alternative models with different degrees of fidelity: the Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
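
The tradeoff described above can be made concrete by writing total error as model bias plus a Monte Carlo sampling term that shrinks like sigma/sqrt(N) under a fixed CPU budget. All numbers below are illustrative stand-ins, not values from the study:

```python
import numpy as np

# Total prediction error = model (bias) error + Monte Carlo sampling error,
# with sampling error ~ sigma / sqrt(N). All numbers are illustrative.
sigma = 1.0                      # spread of the uncertain model output
budget = 100.0                   # total CPU budget (arbitrary units)
cost_hf, bias_hf = 10.0, 0.0     # Richards-type high-fidelity model
cost_lf, bias_lf = 0.1, 0.05     # Green-Ampt-type reduced-complexity model

def total_error(bias, cost):
    n = max(int(budget / cost), 1)     # realizations the budget affords
    return bias + sigma / np.sqrt(n)

print(f"high-fidelity total error: {total_error(bias_hf, cost_hf):.3f}")
print(f"low-fidelity total error:  {total_error(bias_lf, cost_lf):.3f}")
```

With these numbers the cheap model's extra bias is more than repaid by the thousandfold increase in affordable realizations; data assimilation, by shrinking the bias term, tilts the balance further toward the reduced-complexity model.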

  5. Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.

    PubMed

    Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liu, Xiuping

    2017-10-06

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods including batch LRR, and significantly outperforms state-of-the-art online methods.
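
The incremental SVD step at the heart of such online updating can be sketched for the case of appending a single column (a Brand-style rank-one update; this is a generic textbook construction, not the authors' exact algorithm):

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update a thin SVD X = U diag(s) Vt when a new column c arrives,
    without recomputing the SVD of the whole data set."""
    p = U.T @ c                   # component of c inside the current subspace
    r = c - U @ p                 # component orthogonal to it
    rn = np.linalg.norm(r)
    k = s.size
    K = np.zeros((k + 1, k + 1))  # small core matrix: [[diag(s), p], [0, rn]]
    K[:k, :k] = np.diag(s)
    K[:k, k] = p
    K[k, k] = rn
    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.column_stack([U, r / rn]) @ Uk
    W = np.zeros((Vt.shape[1] + 1, k + 1))
    W[:-1, :k] = Vt.T
    W[-1, k] = 1.0
    return U_new, sk, (W @ Vtk.T).T

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
c = rng.standard_normal(6)
U2, s2, Vt2 = svd_append_column(U, s, Vt, c)
err = np.linalg.norm(np.column_stack([X, c]) - U2 @ np.diag(s2) @ Vt2)
print(f"reconstruction error {err:.1e}")
```

Only the small (k+1)-by-(k+1) core matrix is decomposed per new sample, which is what keeps the dynamic-updating stage cheap.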

  6. The effect of signal acquisition and processing choices on ApEn values: towards a "gold standard" for distinguishing effort levels from isometric force records.

    PubMed

    Forrest, Sarah M; Challis, John H; Winter, Samantha L

    2014-06-01

    Approximate entropy (ApEn) is frequently used to identify changes in the complexity of isometric force records with ageing and disease. Different signal acquisition and processing parameters have been used, making comparison or confirmation of results difficult. This study determined the effect of sampling and parameter choices by examining changes in ApEn values across a range of submaximal isometric contractions of the first dorsal interosseus. Reducing the sample rate by decimation changed both the value and pattern of ApEn values dramatically. The pattern of ApEn values across the range of effort levels was not sensitive to the filter cut-off frequency or to the criterion used to extract the section of data for analysis. The complexity increased with increasing effort level using a fixed 'r' value (which accounts for measurement noise) but decreased with increasing effort level when 'r' was set to 0.1 of the standard deviation of force. It is recommended that isometric force records be sampled at frequencies above 200 Hz, that the template length ('m') be set to 2, and that 'r' be set to the measurement system noise or to 0.1 SD, depending on the physiological process to be distinguished. It is demonstrated that changes in ApEn across effort levels are related to changes in force gradation strategy.
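    With the recommended settings (template length m = 2, tolerance r = 0.1 SD), ApEn can be computed as below; a minimal textbook-style sketch using the Chebyshev distance with self-matches included, not the authors' implementation.

```python
import numpy as np

def approx_entropy(x, m=2, r_factor=0.1):
    """ApEn(m, r) with r = r_factor * SD, Chebyshev distance,
    self-matches included: ApEn = phi(m) - phi(m + 1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    r = r_factor * np.std(x)
    def phi(mm):
        # All overlapping templates of length mm.
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Pairwise Chebyshev distances between templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        frac = np.mean(dist <= r, axis=1)   # C_i^mm(r)
        return np.mean(np.log(frac))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.tile([1.0, 2.0], 50)   # perfectly periodic record
noisy = rng.standard_normal(100)    # unstructured record
# A periodic signal scores far lower than noise:
print(approx_entropy(regular) < approx_entropy(noisy))   # True
```

    A perfectly periodic record scores near zero while unstructured noise scores higher, matching the notion of complexity used in the abstract.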

  7. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavily quantized noisy complex-valued random linear measurements, e.g., only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
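    The quadrant-only measurement model can be sketched as follows; this shows only the forward model (the recovery algorithm is omitted), and the dimensions, frequencies, and noise level are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spectrally sparse signal: K = 2 complex sinusoids, dimension n = 64.
n, K = 64, 2
freqs = np.array([0.12, 0.37])
amps = np.array([1.0, 0.6])
t = np.arange(n)
x = (amps[:, None] * np.exp(2j * np.pi * freqs[:, None] * t)).sum(axis=0)

# Noisy complex Gaussian random linear measurements, m on the order of K log n.
m = int(4 * K * np.log(n))
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
y = A @ x + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Quadrant (one bit per real/imaginary component) quantization:
# only the signs of the real and imaginary parts are retained.
q = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
```

    Every quantized measurement lies on the unit circle at one of four phases, so all amplitude information must be recovered from the sign pattern alone — the regime the Cramer-Rao analysis quantifies.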

  8. Methodological considerations in using complex survey data: an applied example with the Head Start Family and Child Experiences Survey.

    PubMed

    Hahs-Vaughn, Debbie L; McWayne, Christine M; Bulotsky-Shearer, Rebecca J; Wen, Xiaoli; Faria, Ann-Marie

    2011-06-01

    Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on addressing or ignoring the complex sampling issues. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results.
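    The bias from ignoring unequal selection probabilities can be seen in a toy two-stratum example; a minimal sketch with made-up numbers, using Hájek-style design weights (N_h / n_h, the inverse selection probability).

```python
import numpy as np

# Hypothetical population: stratum A (900 units, mean 10) is undersampled,
# stratum B (100 units, mean 20) is oversampled.
N_A, N_B = 900, 100
n_A, n_B = 30, 70

# Idealized sample values sitting exactly at the stratum means.
y = np.concatenate([np.full(n_A, 10.0), np.full(n_B, 20.0)])
# Design weight = inverse selection probability = N_h / n_h.
w = np.concatenate([np.full(n_A, N_A / n_A), np.full(n_B, N_B / n_B)])

true_mean = (N_A * 10.0 + N_B * 20.0) / (N_A + N_B)   # 11.0
naive = y.mean()                                      # 17.0: badly biased
weighted = np.sum(w * y) / np.sum(w)                  # 11.0: matches truth
```

    The unweighted mean is pulled toward the oversampled stratum; weighting each observation by its inverse selection probability recovers the population mean. Correcting standard errors for nonindependence additionally requires the design information (clusters/strata), which this sketch does not model.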

  9. Airborne Lidar-Based Estimates of Tropical Forest Structure in Complex Terrain: Opportunities and Trade-Offs for REDD+

    NASA Technical Reports Server (NTRS)

    Leitold, Veronika; Keller, Michael; Morton, Douglas C.; Cook, Bruce D.; Shimabukuro, Yosio E.

    2015-01-01

    Background: Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. Results: We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey grade GNSS receivers. The terrain model generated from full-density (approx. 20 returns/sq m) data was highly accurate (mean signed error of 0.19 +/-0.97 m), while those derived from reduced-density datasets (8/sq m, 4/sq m, 2/sq m and 1/sq m) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4/sq m, the bias in height estimates translated into errors of 80-125 Mg/ha in predicted aboveground biomass. Conclusions: Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.

  10. Airborne lidar-based estimates of tropical forest structure in complex terrain: opportunities and trade-offs for REDD+

    PubMed

    Leitold, Veronika; Keller, Michael; Morton, Douglas C; Cook, Bruce D; Shimabukuro, Yosio E

    2015-12-01

    Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey grade GNSS receivers. The terrain model generated from full-density (~20 returns m⁻²) data was highly accurate (mean signed error of 0.19 ± 0.97 m), while those derived from reduced-density datasets (8, 4, 2 and 1 returns m⁻²) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4 returns m⁻², the bias in height estimates translated into errors of 80-125 Mg ha⁻¹ in predicted aboveground biomass. Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.

  11. Quantifying complexity in translational research: an integrated approach.

    PubMed

    Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L

    2014-01-01

    The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Generally, the evidence generated was valuable for understanding various components in translational research. In particular, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some may argue that the results are biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no guidance for assessing complexity. The proposed method aims to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research.
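    The consistency ratio mentioned above is a standard AHP quantity: for an n-by-n pairwise comparison matrix with principal eigenvalue λ_max, CI = (λ_max − n)/(n − 1) and CR = CI/RI, with RI taken from Saaty's random-index table. A minimal sketch (the example matrix values are made up):

```python
import numpy as np

# Saaty's random consistency index, keyed by matrix size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

# A perfectly consistent matrix (a_ij = w_i / w_j) has lambda_max = n, CR = 0.
w = np.array([1.0, 2.0, 4.0])
A = w[:, None] / w[None, :]
print(consistency_ratio(A) < 1e-9)   # True
```

    In practice CR below about 0.1 is taken as acceptably consistent; larger values signal that the subjective pairwise judgments contradict each other.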

  12. Application of Biologically Based Lumping To Investigate the Toxicokinetic Interactions of a Complex Gasoline Mixture.

    PubMed

    Jasper, Micah N; Martin, Sheppard A; Oshiro, Wendy M; Ford, Jermaine; Bushnell, Philip J; El-Masri, Hisham

    2016-03-15

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate the performance of our PBPK model and chemical lumping method. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course toxicokinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 nontarget chemicals. The same biologically based lumping approach can be used to simplify any complex mixture with tens, hundreds, or thousands of constituents.

  13. Solution and Gas-Phase H/D Exchange of Protein-Small-Molecule Complexes: Cex and Its Inhibitors

    NASA Astrophysics Data System (ADS)

    Kang, Yang; Terrier, Peran; Ding, Chuanfan; Douglas, D. J.

    2012-01-01

    The properties of noncovalent complexes of the enzyme exo-1,4-β-D-glycanase ("Cex") with three aza-sugar inhibitors, deoxynojirimycin (X2DNJ), isofagomine lactam (X2IL), and isofagomine (X2IF), have been studied with solution and gas-phase hydrogen deuterium exchange (H/Dx) and measurements of collision cross sections of gas-phase ions. In solution, complexes have lower H/Dx levels than free Cex because binding the inhibitors blocks some sites from H/Dx and reduces fluctuations of the protein. In mass spectra of complexes, abundant Cex ions are seen, which mostly are formed by dissociation of complexes in the ion sampling interface. Both complex ions and Cex ions formed from a solution containing complexes have lower cross sections than Cex ions from a solution of Cex alone. This suggests the Cex ions formed by dissociation "remember" their solution conformations. For a given charge, ions of the complexes have greater gas-phase H/Dx levels than ions of Cex. Unlike cross sections, H/Dx levels of the complexes do not correlate with the relative gas-phase binding strengths measured by MS/MS. Cex ions from solutions with or without inhibitors, which have different cross sections, show the same H/Dx level after 15 s, indicating the ions may fold or unfold on the seconds time scale of the H/Dx experiment. Thus, cross sections show that complexes have more compact conformations than free protein ions on the time scale of ca. 1 ms. The gas-phase H/Dx measurements show that at least some complexes retain different conformations from the Cex ions on a time scale of seconds.

  14. Biocorrosive activity analysis of the oil pipeline soil in the Khanty-Mansiysk Autonomous Region of Ugra and the Krasnodar Territory of the Russian Federation

    NASA Astrophysics Data System (ADS)

    Chesnokova, M. G.; Shalay, V. V.; Kriga, A. S.

    2017-08-01

    The purpose of the study was to assess the biocorrosive activity of oil pipeline soil in the Khanty-Mansiysk Autonomous Region of Yugra and the Krasnodar Territory of the Russian Federation, resulting from the action of a complex of factors, and to analyze the content of sulfate-reducing and thionic bacteria. The number of sulfur-cycle bacteria (autotrophic thionic and sulfate-reducing bacteria), the total concentrations of sulfur and iron in soil samples adjacent to the surface of underground pipelines, and the specific electrical resistivity of the soil were determined. A criterion for the biocorrosive activity of the soil (CBA) was established. The study of the biocorrosive activity of the soil established its features in the areas of oil pipeline construction in the compared territories. In the soil of the Krasnodar Territory pipeline, aggressive samples were recorded in 5.75% of cases, samples with moderate aggressiveness in 49.43%, with weak soil aggressiveness in 42.53% of cases, and samples with potential aggressiveness in 2.30%. On the territory of the Khanty-Mansiysk Autonomous Region of Yugra, samples with weak soil aggressiveness prevailed (55.17% of cases), followed by moderate aggressiveness (34.5% of cases). Multiple regression analysis in the system of variables "factors of soil biocorrosive activity" yielded informative models of the indicator "content of thiobacteria in soil". The results of the research show the need for dynamic monitoring and the development of preventive measures to prevent biocorrosion.

  15. Detecting bacteria and Determining Their Susceptibility to Antibiotics by Stochastic Confinement in Nanoliter Droplets using Plug-Based Microfluidics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boedicker, J.; Li, L; Kline, T

    2008-01-01

    This article describes plug-based microfluidic technology that enables rapid detection and drug susceptibility screening of bacteria in samples, including complex biological matrices, without pre-incubation. Unlike conventional bacterial culture and detection methods, which rely on incubation of a sample to increase the concentration of bacteria to detectable levels, this method confines individual bacteria into droplets nanoliters in volume. When single cells are confined into plugs of small volume such that the loading is less than one bacterium per plug, the detection time is proportional to plug volume. Confinement increases cell density and allows released molecules to accumulate around the cell, eliminating the pre-incubation step and reducing the time required to detect the bacteria. We refer to this approach as stochastic confinement. Using the microfluidic hybrid method, this technology was used to determine the antibiogram - or chart of antibiotic sensitivity - of methicillin-resistant Staphylococcus aureus (MRSA) to many antibiotics in a single experiment and to measure the minimal inhibitory concentration (MIC) of the drug cefoxitin (CFX) against this strain. In addition, this technology was used to distinguish between sensitive and resistant strains of S. aureus in samples of human blood plasma. High-throughput microfluidic techniques combined with single-cell measurements also enable multiple tests to be performed simultaneously on a single sample containing bacteria. This technology may provide a method of rapid and effective patient-specific treatment of bacterial infections and could be extended to a variety of applications that require multiple functional tests of bacterial samples on reduced timescales.

  16. Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand

    PubMed Central

    DeLuca, Samuel; Khar, Karen; Meiler, Jens

    2015-01-01

    RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742

  17. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than optimal ℓ1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which select either too many or too few values per iteration, RMP aims to select a sufficient number of correlation values per iteration, which improves both reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
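    As a point of reference for the greedy-recovery family that RMP belongs to, here is a minimal orthogonal matching pursuit (OMP) sketch — a generic baseline, not the RMP algorithm itself. The demo uses a matrix with orthonormal columns, where OMP recovery is provably exact.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of A,
    re-solving least squares on the selected support each iteration."""
    m, n = A.shape
    support = []
    residual = y.copy()
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Demo: 20x8 sensing matrix with orthonormal columns, 2-sparse signal.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((20, 8)))
x_true = np.zeros(8)
x_true[[1, 5]] = [1.5, -0.7]
x_hat = omp(A, A @ x_true, k=2)
print(np.allclose(x_hat, x_true))   # True
```

    RMP differs from this baseline in how many correlation values it admits per iteration and in pruning wrongly selected atoms, which is where its reported time/error gains come from.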

  18. A Multi-Pumping Flow System for In Situ Measurements of Dissolved Manganese in Aquatic Systems

    PubMed Central

    Meyer, David; Prien, Ralf D.; Dellwig, Olaf; Waniek, Joanna J.; Schuffenhauer, Ingo; Donath, Jan; Krüger, Siegfried; Pallentin, Malte; Schulz-Bull, Detlef E.

    2016-01-01

    A METals In Situ analyzer (METIS) has been used to determine dissolved manganese (II) concentrations in the subhalocline waters of the Gotland Deep (central Baltic Sea). High-resolution in situ measurements of total dissolved Mn were obtained in near real-time by spectrophotometry using 1-(2-pyridylazo)-2-naphthol (PAN). PAN is a complexing agent of dissolved Mn and forms a wine-red complex with a maximum absorbance at a wavelength of 562 nm. Results are presented together with ancillary temperature, salinity, and dissolved O2 data. Lab calibration of the analyzer was performed in a pressure testing tank. A detection limit of 77 nM was obtained. For validation purposes, discrete water samples were taken by using a pump-CTD system. Dissolved Mn in these samples was determined by an independent laboratory based method (inductively coupled plasma–optical emission spectrometry, ICP-OES). Mn measurements from both METIS and ICP-OES analysis were in good agreement. The results showed that the in situ analysis of dissolved Mn is a powerful technique reducing dependencies on heavy and expensive equipment (pump-CTD system, ICP-OES) and is also cost and time effective. PMID:27916898

  19. Placental Proteomics: A Shortcut to Biological Insight

    PubMed Central

    Robinson, John M.; Vandré, Dale D.; Ackerman, William E.

    2012-01-01

    Proteomics analysis of biological samples has the potential to identify novel protein expression patterns and/or changes in protein expression patterns in different developmental or disease states. An important component of successful proteomics research, at least in its present form, is to reduce the complexity of the sample if it is derived from cells or tissues. One method to simplify complex tissues is to focus on a specific, highly purified sub-proteome. Using this approach we have developed methods to prepare highly enriched fractions of the apical plasma membrane of the syncytiotrophoblast. Through proteomics analysis of this fraction we have identified over five hundred proteins, several of which were not previously known to reside in the syncytiotrophoblast. Herein, we focus on two of these, dysferlin and myoferlin. These proteins, largely known from studies of skeletal muscle, might not have been found in the human placenta were it not for discovery-based proteomics analysis. This new knowledge, acquired through a discovery-driven approach, can now be applied to the generation of hypothesis-based experimentation. Thus discovery-based and hypothesis-based research are complementary approaches that, when coupled together, can hasten scientific discoveries. PMID:19070895

  20. Characterization of conducting cellulose acetate based polymer electrolytes doped with "green" ionic mixture.

    PubMed

    Ramesh, S; Shanti, R; Morris, Ezra

    2013-01-02

    Polymer electrolytes were developed by a solution casting technique from cellulose acetate (CA), lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) and a deep eutectic solvent (DES). The DES was synthesized from a 1:2 mixture of choline chloride and urea. Increasing the DES content effectively plasticizes the CA:LiTFSI matrix and gradually improves the ionic conductivity and chemical integrity. The highest conducting sample was identified as the composition CA:LiTFSI:DES (28 wt.%:12 wt.%:60 wt.%), which showed the greatest ability to retain its room-temperature ionic conductivity over the entire 30 days of storage. The changes in FTIR cage peaks upon varying the DES content in CA:LiTFSI:DES prove the complexation. This complexation results in the collapse of CA matrix crystallinity, observed as reduced intensity of the XRD diffraction peaks. The DES-plasticized sample is found to be more heat-stable than pure CA. Nevertheless, the addition of DES diminishes the heat resistance of the CA:LiTFSI matrix, although at minimal addition the thermal stability is enhanced.

  1. Integrative genetic risk prediction using non-parametric empirical Bayes classification.

    PubMed

    Zhao, Sihai Dave

    2017-06-01

    Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa.

  2. An anaerobic field injection experiment in a landfill leachate plume, Grindsted, Denmark: 2. Deduction of anaerobic (methanogenic, sulfate-, and Fe (III)-reducing) redox conditions

    NASA Astrophysics Data System (ADS)

    Albrechtsen, Hans-JøRgen; Bjerg, Poul L.; Ludvigsen, Liselotte; Rügge, Kirsten; Christensen, Thomas H.

    1999-04-01

    Redox conditions are among the environmental factors that may affect the fate of xenobiotic organic compounds. Therefore, the redox conditions were characterized in an anaerobic, leachate-contaminated aquifer 15-60 m downgradient from the Grindsted Landfill, Denmark, where a field injection experiment was carried out. Furthermore, the spatial and temporal stability of the redox conditions was investigated, and different approaches to deducing the redox conditions were evaluated. The redox conditions were evaluated in a set of 20 sediment samples and groundwater samples taken from locations adjacent to the sediment samples. Samples were investigated with respect to groundwater chemistry, including hydrogen and volatile fatty acids (VFAs), and sediment geochemistry, and bioassays were performed. The groundwater chemistry, including redox-sensitive species for a large number of samples, varied over time during the experimental period of 924 days owing to variations in the leachate from the landfill. However, no indication of change in the redox environment resulting from the field injection experiment or natural variation was observed in the individual sampling points. The methane, Fe(II), hydrogen, and VFA groundwater chemistry parameters strongly indicated a Fe(III)-reducing environment. This was further supported by the bioassays, although methane production and sulfate reduction were also observed in a few samples close to the landfill. On the basis of the calculated carbon conversion, Fe(III) was the dominant electron acceptor in the investigated region of the aquifer. Because of the complexity of a landfill leachate plume, several redox processes may occur simultaneously, and an array of methods must be applied for redox characterization in such multicomponent systems.

  3. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomeroy, Marc D

    Biomass pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass, which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated, as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating, or partial condensation or plugging of lines from condensed products. Residence times must be kept at a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters, resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach to sampling system design considerations, as well as lessons learned, is integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  4. The use of PCR technique in the identification of Mycobacterium species responsible for bovine tuberculosis in cattle and buffaloes in Pakistan.

    PubMed

    Akhtar, Farah; Javed, Muhammad Tariq; Aziz-ur-Rehman; Khan, Muhammad Nisar; Akhtar, Pervez; Hussain, Sayed Misdaq; Aslam, Muhammad Sohaib; Kausar, Razia; Qamar, Mehwish; Cagiola, Monica

    2015-08-01

    Bovine tuberculosis is one of the important diseases of dairy and wild animals. The disease is prevalent all over the world, though developed countries have greatly reduced its prevalence through eradication campaigns. The prevalence of the disease in Pakistan, on the basis of tuberculin testing or culture isolation of the organism, has been reported previously. It is, however, important to use the latest diagnostic tools, i.e. PCR, to confirm the type of Mycobacterium infecting the animals in Pakistan. Therefore, the present study was carried out to assess the utility of direct PCR on milk samples and nasal swabs to confirm the type of Mycobacterium infecting the animals. This study was carried out on 215 cattle and buffaloes of more than 2 years of age at two livestock farms. The tuberculin results showed 22.5% prevalence at one farm and 25.9% at the other, with an overall prevalence of 24.7%. Overall, 92.5% of milk samples and/or nasal swabs were PCR-positive for the Mycobacterium genus, 86.8% for the Mycobacterium tuberculosis complex and 77.4% for Mycobacterium bovis. M. bovis was detected by PCR in 13.2% of milk samples, 24.5% of nasal swabs and 39.6% of both milk samples + nasal swabs. The results suggested that a nasal swab is about 60% more likely than a milk sample to yield a positive PCR for M. bovis. It can be concluded from the present study that tuberculin testing is a useful method for studying the prevalence of the disease, as PCR was positive for the Mycobacterium genus in 92.5%, the M. tuberculosis complex in 86.8% and Mycobacterium bovis in 77.4% of cases.

  5. [Isolation of Sporothrix pallida complex in clinical and environmental samples from Chile].

    PubMed

    Cruz Choappa, Rodrigo M; Vieille Oyarzo, Peggy I; Carvajal Silva, Laura C

    2014-01-01

    The isolation of S. pallida complex from medical samples and from the home garden soil of a patient in Chile is herein reported. Fungi of the Sporothrix schenckii complex can cause various infections. In Chile, medical and environmental isolates of this complex are rare. The aim of this study was to identify an unusual agent in a case of onychomycosis and to detect its presence in the patient's home garden. For this purpose, clinical samples were obtained by scraping the subungual region of the patient's first right toenail, as well as by taking soil samples from different areas of her home garden. Species identification was performed by morphophysiology, and one of the strains isolated from the patient's toenail was sent to the CBS for molecular confirmation (14.062). S. pallida complex was identified both from the patient's toenail and from samples taken from her home garden. Copyright © 2014 Asociación Argentina de Microbiología. Publicado por Elsevier España. All rights reserved.

  6. MRM validation of targeted nonglycosylated peptides from N-glycoprotein biomarkers using direct trypsin digestion of undepleted human plasma.

    PubMed

    Lee, Ju Yeon; Kim, Jin Young; Cheon, Mi Hee; Park, Gun Wook; Ahn, Yeong Hee; Moon, Myeong Hee; Yoo, Jong Shin

    2014-02-26

    A rapid, simple, and reproducible MRM-based validation method for serological glycoprotein biomarkers in clinical use was developed by targeting the nonglycosylated tryptic peptides adjacent to N-glycosylation sites. Since changes in protein glycosylation are known to be associated with a variety of diseases, glycoproteins have been major targets in biomarker discovery. We previously found that nonglycosylated tryptic peptides adjacent to N-glycosylation sites differed in concentration between normal and hepatocellular carcinoma (HCC) plasma due to differences in the steric hindrance that the glycan moiety of N-glycoproteins presents to tryptic digestion (Lee et al., 2011). To increase the feasibility and applicability of clinical validation of biomarker candidates (nonglycosylated tryptic peptides), we developed a method to effectively monitor nonglycosylated tryptic peptides from a large number of plasma samples and to reduce the total analysis time while maximizing the effect of steric hindrance by the glycans during digestion of glycoproteins. The AUC values of the targeted nonglycosylated tryptic peptides were excellent (0.955 for GQYCYELDEK, 0.880 for FEDGVLDPDYPR and 0.907 for TEDTIFLR), indicating that these could be effective biomarkers for hepatocellular carcinoma. This method provides the throughput required to validate glycoprotein biomarkers, as well as quantitative accuracy for human plasma analysis, and should be amenable to clinical use. Difficulties in verifying and validating putative protein biomarkers are often caused by the complex sample preparation procedures required to determine their concentrations in a large number of plasma samples. To address these difficulties, we developed MRM-based protein biomarker assays that greatly reduce the complex, time-consuming, and poorly reproducible sample pretreatment steps in plasma for clinical implementation. First, we used undepleted human plasma samples without any enrichment procedures.
Using nanoLC/MS/MS, we targeted nonglycosylated tryptic peptides adjacent to N-linked glycosylation sites in N-linked glycoprotein biomarkers, which could be detected in human plasma samples without depleting highly abundant proteins. Second, human plasma proteins were digested with trypsin without reduction and alkylation procedures to minimize sample preparation. Third, trypsin digestion times were shortened so as to obtain reproducible results with maximization of the steric hindrance effect of the glycans during enzyme digestion. Finally, this rapid and simple sample preparation method was applied to validate targeted nonglycosylated tryptic peptides as liver cancer biomarker candidates for diagnosis in 40 normal and 41 hepatocellular carcinoma (HCC) human plasma samples. This strategy provided the necessary throughput required to monitor protein biomarkers, as well as quantitative accuracy in human plasma analysis. From biomarker discovery to clinical implementation, our method will provide a biomarker study platform that is suitable for clinical deployment, and can be applied to high-throughput approaches. Copyright © 2014 Elsevier B.V. All rights reserved.
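The diagnostic performance figures above (e.g., AUC 0.955 for GQYCYELDEK) are areas under ROC curves comparing peptide abundance in normal versus HCC plasma. A minimal sketch of how such an AUC can be computed from two groups of measurements, using the rank-based (Mann-Whitney) formulation; the abundance values below are synthetic, not the paper's data:

```python
import numpy as np

def auc(neg, pos):
    """AUC = P(pos > neg) + 0.5 * P(pos == neg), over all pairwise comparisons."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
normal = rng.normal(1.0, 0.3, 40)  # synthetic peptide abundances, 40 normal samples
hcc = rng.normal(1.8, 0.4, 41)     # synthetic abundances, 41 HCC samples (elevated)
print(round(auc(normal, hcc), 3))  # close to 1 for well-separated groups
```

An AUC of 0.5 means no discrimination; values approaching 1 indicate a peptide whose abundance cleanly separates the two groups.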

  7. Lateral Flow Immunoassays for Ebola Virus Disease Detection in Liberia.

    PubMed

    Phan, Jill C; Pettitt, James; George, Josiah S; Fakoli, Lawrence S; Taweh, Fahn M; Bateman, Stacey L; Bennett, Richard S; Norris, Sarah L; Spinnler, David A; Pimentel, Guillermo; Sahr, Phillip K; Bolay, Fatorma K; Schoepp, Randal J

    2016-10-15

    Lateral flow immunoassays (LFIs) are point-of-care diagnostic assays that are designed for single use outside a formal laboratory, with in-home pregnancy tests the best-known example of these tests. Although the LFI has some limitations relative to more-complex immunoassay procedures, such as reduced sensitivity and the potential for false-positive results when using complex sample matrices, the assay has the benefits of a rapid time to result and ease of use. These benefits make it an attractive option for obtaining rapid results in an austere environment. In an outbreak of any magnitude, a field-based rapid diagnostic assay would allow proper patient transport and safe burials to be conducted without the delay caused by transport of samples between remote villages and testing facilities. Use of such point-of-care instruments in the ongoing Ebola virus disease (EVD) outbreak in West Africa would have distinct advantages in control and prevention of local outbreaks, but proper understanding of the technology and interpretation of results are important. In this study, an LFI, originally developed by the Naval Medical Research Center for Ebola virus environmental testing, was evaluated for its ability to detect the virus in clinical samples in Liberia. Clinical blood and plasma samples and postmortem oral swabs submitted to the Liberian Institute for Biomedical Research, the National Public Health Reference Laboratory for EVD testing, were tested and compared to results of real-time reverse transcription-polymerase chain reaction (rRT-PCR), using assays targeting Ebola virus glycoprotein and nucleoprotein. The LFI findings correlated well with those of the real-time RT-PCR assays used as benchmarks. Rapid antigen-detection tests such as LFIs are attractive alternatives to traditional immunoassays but have reduced sensitivity and specificity, resulting in increases in false-positive and false-negative results.
An understanding of the strengths, weaknesses, and limitations of a particular assay allows the diagnostician to choose the correct assay for the situation and to properly interpret the results. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  8. Understanding and improving mitigation strategies for reducing catchment scale nutrient loads using high resolution observations and uncertainty analysis approaches

    NASA Astrophysics Data System (ADS)

    Collins, A.; Lloyd, C.; Freer, J. E.; Johnes, P.; Stirling, M.

    2012-12-01

    One of the biggest challenges in catchment water quality management is tackling the problem of reducing water pollution from agriculture whilst ensuring food security nationally. Improvements to catchment management plans are needed if we are to enhance biodiversity and maintain good ecological status in freshwater ecosystems, while producing enough food to support a growing global population. In order to plan for a more sustainable and secure future, research needs to quantify the uncertainties and understand the complexities in the source-mobilisation-delivery-impact continuum of pollution and nutrients at all scales. In the UK the Demonstration Test Catchment (DTC) project has been set up to improve water quality specifically from diffuse pollution from agriculture by enhanced high resolution monitoring and targeted mitigation experiments. The DTC project aims to detect shifts in the baseline trend of the most ecologically-significant pollutants resulting from targeted on-farm measures at field to farm scales and assessing their effects on ecosystem function. The DTC programme involves three catchments across the UK that are indicative of three different typologies and land uses. This paper will focus on the Hampshire Avon DTC, where a total of 12 parameters are monitored by bank-side stations at two sampling sites, including flow, turbidity, phosphate and nitrate concentrations at 30 min resolution. This monitoring is supported by daily resolution sampling at 5 other sites and storm sampling at all locations. Part of the DTC project aims to understand how observations of water quality within river systems at different temporal resolutions and types of monitoring strategies enable us to understand and detect changes over and above the natural variability. Baseline monitoring is currently underway and early results show that high-resolution data is essential at this sub-catchment scale to understand important process dynamics. 
This is critical if we are to design cost-efficient and effective management strategies. The high-resolution dataset means that there are new opportunities to explore the associated uncertainties in monitoring water quality and assessing ecological status, and how these relate to current monitoring networks. For example, concurrent grab samples at the high-resolution sampling stations allow assessment of the uncertainties that would be generated by coarser sampling strategies. This is just the beginning of the project; as it progresses, however, the high-resolution dataset will provide higher statistical power than previous data collection schemes and allow the use of more complex methods such as signal decomposition (e.g., wavelet analysis), which can help us begin to decipher the complex interactions occurring at the sub-catchment scale that may not be immediately detectable in bulk signals. In this paper we outline our methodological approach and present some of the initial findings of this research, including how we can quantify changes to nutrient loads while taking into account the main uncertainties and the inherent natural variability.

  9. Volumetric determination of uranium using titanous sulfate as reductant before oxidimetric titration

    USGS Publications Warehouse

    Wahlberg, James S.; Skinner, Dwight L.; Rader, Lewis F.

    1956-01-01

    A new method for determining uranium in samples containing 0.05 percent or more U3O8, using titanous sulfate as reducing agent, is much shorter and faster, and has fewer interferences, than conventional methods using reductor columns. The sample is dissolved with sulfuric, nitric, perchloric, and hydrofluoric acids. Elements that would otherwise form insoluble fluorides are kept in solution by complexing the fluoride ion with boric acid. A precipitation is made with cupferron to remove interfering elements. The solution is filtered to remove the precipitated cupferrates instead of extracting them with chloroform as is usually done. Filtration is preferred to extraction because any niobium that may be in solution forms an insoluble cupferrate that may be removed by filtering but is very difficult to extract with chloroform. Excess cupferron is destroyed by oxidizing with nitric and perchloric acids, and evaporating to dense fumes of sulfuric acid. The uranium is reduced to U(IV) by the addition of titanous sulfate, with cupric sulfate used as an indicator of the completeness of the reduction. Metallic copper is formed when all the uranium is reduced. The reduced copper is then reoxidized by the addition of mercuric perchlorate, an excess of ferric sulfate is added, and the solution is titrated immediately with standard ceric sulfate with ferroin as an indicator. Precision of the method compared favorably with that of methods in common use, both for uranium ores and for most types of uranium-rich materials.

  10. Predictors of actual turnover in a national sample of newly licensed registered nurses employed in hospitals.

    PubMed

    Brewer, Carol S; Kovner, Christine T; Greene, William; Tukov-Shuser, Magdalene; Djukic, Maja

    2012-03-01

    This paper is a report of a study of factors that affect turnover of newly licensed registered nurses in United States hospitals. There is a large body of research related to nursing retention; however, there is little information specific to newly licensed registered nurse turnover. Incidence rates of turnover among new nurses are unknown because most turnover data are not from nationally representative samples of nurses. This study used a longitudinal panel design to obtain data from 1653 registered nurses who were recently licensed by examination for the first time. We mailed surveys to a nationally representative sample of hospital registered nurses 1 year apart. The analytic sample consisted of 1653 nurses who responded to both survey mailings in January of 2006 and 2007. Full-time employment and more sprains and strains (including back injuries) result in more turnover. Higher intent to stay, more hours of voluntary overtime and holding more than one job for pay reduce turnover. When we omitted intent to stay from the probit model, less job satisfaction and organizational commitment led to more turnover, confirming their importance to turnover. Magnet Recognition Award(®) hospitals and several other work attributes had no effect on turnover. Turnover problems are complex, which means that there is no one solution to decreasing turnover. Multiple points of intervention exist. One specific approach that may improve turnover rates is hospital policies that reduce strains and sprains. © 2011 The Authors. Journal of Advanced Nursing © 2011 Blackwell Publishing Ltd.

  11. Simultaneous extraction and determination of trace amounts of diclofenac from whole blood using supported liquid membrane microextraction and fast Fourier transform voltammetry.

    PubMed

    Mofidi, Zahra; Norouzi, Parviz; Sajadian, Masumeh; Ganjali, Mohammad Reza

    2018-04-01

    A novel, simple, and inexpensive analytical technique based on flat sheet supported liquid membrane microextraction coupled with fast Fourier transform stripping cyclic voltammetry on a reduced graphene oxide carbon paste electrode was used for the extraction and online determination of diclofenac in whole blood. First, diclofenac was extracted from blood samples using a polytetrafluoroethylene membrane impregnated with 1-octanol and then into an acceptor solution; subsequently, it was oxidized on a carbon paste electrode modified with reduced graphene oxide nanosheets. The optimal values of the key parameters influencing the method were as follows: scan rate, 6 V/s; stripping potential, 200 mV; stripping time, 5 s; pH of the sample solution, 5; pH of the acceptor solution, 7; and extraction time, 240 min. The calibration curves were plotted for the whole blood samples and the method was found to have good linearity within the range of 1-25 μg/mL with a determination coefficient of 0.99. The limits of detection and quantification were 0.1 and 1.0 μg/mL, respectively. Using this coupled method, the extraction and determination were merged into one step. Accordingly, the speed of detection for sensitive determination of diclofenac in complex samples, such as blood, increased considerably. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
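The figures of merit reported above (linearity over 1-25 μg/mL, determination coefficient 0.99, LOD 0.1 μg/mL) follow from a standard linear calibration workflow. A minimal sketch of that workflow, with entirely hypothetical calibration data and a hypothetical blank standard deviation, using the common LOD = 3.3·s/slope convention:

```python
import numpy as np

conc = np.array([1, 5, 10, 15, 20, 25], dtype=float)    # standards, ug/mL
signal = np.array([0.9, 4.8, 10.3, 14.6, 20.1, 24.7])   # hypothetical responses

# Least-squares calibration line and coefficient of determination (R^2)
slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)

# Detection limit from a hypothetical blank standard deviation
s_blank = 0.03                     # assumed std. dev. of blank measurements
lod = 3.3 * s_blank / slope        # LOD = 3.3 * sigma / slope

print(round(r2, 3), round(lod, 2))
```

With these illustrative numbers the fit is essentially linear (R² near 1) and the LOD lands near 0.1 μg/mL, mirroring the scale of the reported values.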

  12. Complexation Effect on Redox Potential of Iron(III)-Iron(II) Couple: A Simple Potentiometric Experiment

    ERIC Educational Resources Information Center

    Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin

    2011-01-01

    A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials are titrated. In this experiment iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…

  13. Actinomycetal complex of light sierozem on the Kopet-Dag piedmont plain

    NASA Astrophysics Data System (ADS)

    Zenova, G. M.; Zvyagintsev, D. G.; Manucharova, N. A.; Stepanova, O. A.; Chernov, I. Yu.

    2016-10-01

    The population density of actinomycetes in the samples of light sierozem from the Kopet Dag piedmont plain (75 km from Ashkhabad, Turkmenistan) reaches hundreds of thousands of CFU/g of soil. The actinomycetal complex is represented by two genera: Streptomyces and Micromonospora. Representatives of the Streptomyces genus predominate and comprise 73 to 87% of the actinomycetal complex. In one sample, representatives of the Micromonospora genus predominated in the complex (75%). The Streptomyces genus in the studied soil samples is represented by species from several sections and series: the species of section Helvolo-Flavus, series Helvolus, represent the dominant component of the streptomycetal complex; their portion is up to 77% of all isolated actinomycetes. The species of other sections and series are much less abundant. Thus, the percentage of the Cinereus Achromogenes section in the actinomycetal complex does not exceed 28%; representatives of the Albus section Albus series, Roseus section Lavendulae-Roseus series, and Imperfectus section belong to rare species; they were not isolated from all the studied samples of light sierozem, and their portion does not exceed 10% of the actinomycetal complex.

  14. Geochemistry of metal-rich brines from central Mississippi Salt Dome basin, U.S.A.

    USGS Publications Warehouse

    Kharaka, Y.K.; Maest, A.S.; Carothers, W.W.; Law, L.M.; Lamothe, P.J.; Fries, T.L.

    1987-01-01

    Oil-field brines are the most favored ore-forming solutions for the sediment-hosted Mississippi Valley-type ore deposits. Detailed inorganic and organic chemical and isotope analyses of water and gas samples from six oil fields in central Mississippi, one of the very few areas with high-metal brines, were conducted to study the inorganic and organic complexes responsible for the high concentrations of these metals. The samples were obtained from production zones consisting of sandstone and limestone that range in depth from 1900 to 4000 m (70-120 °C) and in age from Late Cretaceous to Late Jurassic. Results show that the waters are dominantly bittern brines related to the Louann Salt. The brines have extremely high salinities that range from 160,000 to 320,000 mg/l total dissolved solids and are Na-Ca-Cl-type waters with very high concentrations of Ca (up to 48,000 mg/l) and other alkaline-earth metals, but with low concentrations of aliphatic acid anions. The concentrations of metals in many water samples are very high, reaching values of 70 mg/l for Pb, 245 mg/l for Zn, 465 mg/l for Fe and 210 mg/l for Mn. The samples with high metal contents have extremely low concentrations (<0.02 mg/l) of H2S. Samples obtained from the Smackover Formation (limestone) have low metal contents that are more typical of oil-field waters, but have very high concentrations (up to 85 mg/l) of H2S. Computations with the geochemical code SOLMINEQ.87 give the following results: (1) both Pb and Zn are present predominantly as aqueous chloride complexes (mainly as PbCl4^2- and ZnCl4^2-, respectively); (2) the concentrations of metals complexed with short-chained aliphatic acid anions and reduced S species are minor; (3) organic acid anions are important in controlling the concentrations of metals because they affect the pH and buffer capacity of the waters at subsurface conditions; and (4) galena and sphalerite solubilities control the concentrations of Pb and Zn in these waters. © 1988.

  15. NASA Tech Briefs, August 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Aligning a Receiving Antenna Array to Reduce Interference; Collecting Ground Samples for Balloon-Borne Instruments; Tethered Pyrotechnic Apparatus for Acquiring a Ground Sample; Enhanced Video-Oculography System; Joint Carrier-Phase Synchronization and LDPC Decoding; Dual-Polarization, Sideband-Separating, Balanced Receiver for 1.5 THz; Modular Battery Charge Controller; Efficient Multiplexer FPGA Block Structures Based on G4FETs; VLSI Microsystem for Rapid Bioinformatic Pattern Recognition; Low-Noise Amplifier for 100 to 180 GHz; Improved Fabrication of Ceramic Matrix Composite/Foam Core Integrated Structures; Inert Welding/Brazing Gas Filters and Dryers; Fabricating Copper Nanotubes by Electrodeposition; Reducing Aerodynamic Drag on Empty Open Cargo Vehicles; Rotary Percussive Auto-Gopher for Deep Drilling and Sampling; More About Reconfigurable Exploratory Robotic Vehicles; Thermostatic Valves Containing Silicone-Oil Actuators; Improving Heat Flux Performance of Flat Surface in Spray-Cooling Systems; Treating Fibrous Insulation to Reduce Thermal Conductivity; Silica-Aerogel Composites Opacified with La0.7Sr0.3MnO3; Cyclic Oxidation Behavior of CuCrAl Cold-Sprayed Coatings for Reusable Launch Vehicles; Ceramic Fiber Structures for Cryogenic Load-Bearing Applications; Elastomer Reinforced with Carbon Nanotubes; Biologically Inspired Purification and Dispersion of SWCNTs; A Technique for Adjusting Eigenfrequencies of WGM Resonators; Low-Pressure, Field-Ionizing Mass Spectrometer; Modifying Operating Cycles to Increase Stability in a LITS; Chamber for Simulating Martian and Terrestrial Environments; Algorithm for Detecting a Bright Spot in an Image; Extreme Programming: Maestro Style; Adaptive Behavior for Mobile Robots; Protocol for Communication Networking for Formation Flying; Planning Complex Sequences Using Compressed Representations; and Self-Supervised Learning of Terrain Traversability from Proprioceptive Sensors.

  16. 230Th-U dating of surficial deposits using the ion microprobe (SHRIMP-RG): A microstratigraphic perspective

    USGS Publications Warehouse

    Maher, K.; Wooden, J.L.; Paces, J.B.; Miller, D.M.

    2007-01-01

    We used the sensitive high-resolution ion microprobe reverse-geometry (SHRIMP-RG) to date pedogenic opal using the 230Th-U system. Due to the high spatial resolution of an ion microprobe (typically 30 µm), regions of pure opal within a sample can be targeted and detrital material can be avoided. In addition, because the technique is non-destructive, the sample can be preserved for other types of analyses, including electron microprobe or other stable isotope or trace element ion microprobe measurements. The technique is limited to material with U concentrations greater than ~50 ppm. However, the high spatial resolution, small sample requirements, and the ability to avoid detrital material make this a suitable technique for dating many Pleistocene deposits formed in semi-arid environments. To determine the versatility of the method, samples from several different deposits were analyzed, including silica-rich pebble coatings from pedogenic carbonate horizons, a siliceous sinter deposit, and opaline silica deposited as a spring mound. U concentrations for 30-µm-diameter spots ranged from 50 to 1000 ppm in these types of materials. The 230Th/232Th activity ratios also ranged from ~100 to 10^6, eliminating the need for detrital Th corrections that reduce the precision of traditional U-Th ages for many milligram- and larger-sized samples. In pedogenic material, layers of high-U opal (ca. 500 ppm) are commonly juxtaposed next to layers of calcite with much lower U concentrations (1-2 ppm). If these types of samples are not analyzed using a technique with the appropriate spatial resolution, the ages may be strongly biased towards the age of the opal.
Comparison with standard TIMS (Thermal Ionization Mass Spectrometry) measurements from separate microdrilled samples suggests that although the analytical precision of the ion microprobe (SHRIMP-RG) measurements is lower than that of TIMS, the high spatial resolution results in better accuracy in the age determination for finely layered or complex deposits. The ion microprobe approach also may be useful for pre-screening samples to determine the age and degree of post-depositional alteration, analyzing finely layered samples or samples with complex growth histories, and obtaining simultaneous measurements of trace elements.

  17. Replica exchange and expanded ensemble simulations as Gibbs sampling: simple improvements for enhanced mixing.

    PubMed

    Chodera, John D; Shirts, Michael R

    2011-11-21

    The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
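The central idea above, redrawing the thermodynamic state index from its full conditional distribution rather than proposing only neighbor swaps, can be sketched in a few lines. This is an illustrative sketch (not the authors' code): for a configuration with potential energy U(x), the state index k is resampled from p(k | x) ∝ exp(g_k − β_k U(x)), where the log-weights g_k and inverse temperatures β_k below are hypothetical:

```python
import numpy as np

def gibbs_state_update(U_x, betas, log_weights, rng):
    """Draw a new state index k from p(k | x) ∝ exp(g_k - beta_k * U(x))."""
    log_p = log_weights - betas * U_x
    log_p -= log_p.max()            # stabilize before exponentiating
    p = np.exp(log_p)
    p /= p.sum()
    return rng.choice(len(betas), p=p)

rng = np.random.default_rng(0)
betas = 1.0 / np.linspace(1.0, 5.0, 8)  # hypothetical inverse temperatures
g = np.zeros(8)                          # flat expanded-ensemble weights, for illustration
k = gibbs_state_update(U_x=2.5, betas=betas, log_weights=g, rng=rng)
print(0 <= k < 8)
```

Because the draw is from the exact conditional, this "independence" update leaves the joint distribution invariant while allowing jumps to any state in a single move, which is how it can mix faster than nearest-neighbor exchange proposals.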

  18. Rinsing paired-agent model (RPAM) to quantify cell-surface receptor concentrations in topical staining applications of thick tissues

    NASA Astrophysics Data System (ADS)

    Xu, Xiaochun; Wang, Yu; Xiang, Jialing; Liu, Jonathan T. C.; Tichauer, Kenneth M.

    2017-06-01

    Conventional molecular assessment of tissue through histology, if adapted to fresh thicker samples, has the potential to enhance cancer detection in surgical margins and monitoring of 3D cell culture molecular environments. However, in thicker samples, substantial background staining is common despite repeated rinsing, which can significantly reduce image contrast. Recently, ‘paired-agent’ methods—which employ co-administration of a control (untargeted) imaging agent—have been applied to thick-sample staining applications to account for background staining. To date, these methods have included (1) a simple ratiometric method that is relatively insensitive to noise in the data but has accuracy that is dependent on the staining protocol and the characteristics of the sample; and (2) a complex paired-agent kinetic modeling method that is more accurate but is more noise-sensitive and requires a precise serial rinsing protocol. Here, a new simplified mathematical model—the rinsing paired-agent model (RPAM)—is derived and tested that offers a good balance between the previous models, is adaptable to arbitrary rinsing-imaging protocols, and does not require calibration of the imaging system. RPAM is evaluated against previous models and is validated by comparison to estimated concentrations of targeted biomarkers on the surface of 3D cell culture and tumor xenograft models. This work supports the use of RPAM as a preferable model to quantitatively analyze targeted biomarker concentrations in topically stained thick tissues, as it was found to match the accuracy of the complex paired-agent kinetic model while retaining the low noise-sensitivity characteristics of the ratiometric method.

  19. Characterization of the Sukinda and Nausahi ultramafic complexes, Orissa, India by platinum-group element geochemistry

    USGS Publications Warehouse

    Page, N.J.; Banerji, P.K.; Haffty, J.

    1985-01-01

    Twenty chromitite, 14 ultramafic and mafic rock, and 9 laterite and soil samples from the Precambrian Sukinda and Nausahi ultramafic complexes, Orissa, India were analyzed for platinum-group elements (PGE). The maximum concentrations are: palladium, 13 parts per billion (ppb); platinum, 120 ppb; rhodium, 21 ppb; iridium, 210 ppb; and ruthenium, 630 ppb. Comparison of chondrite-normalized ratios of PGE for the chromitite samples of lower Proterozoic to Archean age with similar data from Paleozoic and Mesozoic ophiolite complexes strongly implies that these complexes represent Precambrian analogs of ophiolite complexes. This finding is consistent with the geology and petrology of the Indian complexes and suggests that plate-tectonic and ocean basin development models probably apply to some parts of Precambrian shield areas. © 1985.

  20. Electrochemical sample matrix elimination for trace-level potentiometric detection with polymeric membrane ion-selective electrodes.

    PubMed

    Chumbimuni-Torres, Karin Y; Calvo-Marzal, Percy; Wang, Joseph; Bakker, Eric

    2008-08-01

    Potentiometric sensors are today sufficiently well understood and optimized to reach ultratrace level (subnanomolar) detection limits for numerous ions. In many cases of practical relevance, however, a high electrolyte background hampers the attainable detection limits. A particularly difficult sample matrix for potentiometric detection is seawater, where the high saline concentration forms a major interfering background and reduces the activity of most trace metals by complexation. This paper describes for the first time a hyphenated system for the online electrochemically modulated preconcentration and matrix elimination of trace metals, combined with a downstream potentiometric detection with solid contact polymeric membrane ion-selective microelectrodes. Following the preconcentration at the bismuth-coated electrode, the deposited metals are oxidized and released to a medium favorable to potentiometric detection, in this case calcium nitrate. Matrix interferences arising from the saline sample medium are thus circumvented. This concept is successfully evaluated with cadmium as a model trace element and offers potentiometric detection down to low parts per billion levels in samples containing 0.5 M NaCl background electrolyte.
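The potentiometric detection step described above relies on the Nernstian response of the ion-selective electrode: the EMF changes by a fixed slope per decade of ion activity, about 29.6 mV/decade for a divalent ion such as the Cd2+ model analyte at 25 °C. A minimal sketch of that relationship (textbook electrochemistry, not code from the paper):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_slope_mV(z, T=298.15):
    """Theoretical ISE slope in mV per decade of ion activity for charge z."""
    return 1000 * math.log(10) * R * T / (z * F)

def emf_shift_mV(a1, a2, z, T=298.15):
    """EMF change when the ion activity goes from a1 to a2."""
    return nernst_slope_mV(z, T) * math.log10(a2 / a1)

# A divalent ion (z = 2) gives ~29.6 mV per decade at 25 degrees C;
# a tenfold activity increase therefore shifts the EMF by one slope unit.
print(round(nernst_slope_mV(2), 1))
print(round(emf_shift_mV(1e-9, 1e-8, 2), 1))
```

The sub-nanomolar detection limits mentioned above correspond to resolving EMF differences of only a few mV at very low activities, which is why a high interfering background electrolyte, and hence the matrix-elimination step, matters so much.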

  2. Topical Application of Tranexamic Acid to Reduce Blood Loss During Complex Combat Related Spine Trauma Surgery

    DTIC Science & Technology

    2015-10-01

    AWARD NUMBER: W81XWH-14-2-0177. TITLE: Topical Application of Tranexamic Acid to Reduce Blood Loss During Complex Combat-Related Spine Trauma Surgery. PERIOD COVERED: 30 Sep 2014 - 29 Sep 2015. SUBJECT TERMS: Spine; Tranexamic Acid; Perioperative blood loss; Trauma; Antifibrinolytic.

  3. Improvement in Titanium Complexes Bearing Schiff Base Ligands in the Ring-Opening Polymerization of L-Lactide: A Dinuclear System with Hydrazine-Bridging Schiff Base Ligands.

    PubMed

    Tseng, Hsi-Ching; Chen, Hsing-Yin; Huang, Yen-Tzu; Lu, Wei-Yi; Chang, Yu-Lun; Chiang, Michael Y; Lai, Yi-Chun; Chen, Hsuan-Ying

    2016-02-15

    A series of titanium (Ti) complexes bearing hydrazine-bridging Schiff base ligands was synthesized and investigated as catalysts for the ring-opening polymerization (ROP) of L-lactide (LA). Complexes with electron-withdrawing or sterically bulky groups showed reduced catalytic activity. In addition, sterically bulky substituents on the imine groups reduced the space around the Ti atom and hindered LA coordination with the Ti atom, thereby reducing catalytic activity. All the dinuclear Ti complexes exhibited higher catalytic activity (approximately 10-60-fold) than mononuclear L(Cl-H)-TiOPr2 did. The strategy of bridging dinuclear Ti complexes with isopropoxide groups in the ROP of LA was successful; adjusting the crowded heptacoordinated transition state through the bridging isopropoxide groups may be the key to its success.

  4. Original XPCTM Effect on Salmonella Typhimurium and Cecal Microbiota from Three Different Ages of Broiler Chickens When Incubated in an Anaerobic In Vitro Culture System

    PubMed Central

    Park, Si Hong; Kim, Sun Ae; Lee, Sang In; Rubinelli, Peter M.; Roto, Stephanie M.; Pavlidis, Hilary O.; McIntyre, Donald R.; Ricke, Steven C.

    2017-01-01

    Feed supplements are utilized in the poultry industry as a means for improving growth performance and reducing pathogens. The aim of the present study was to evaluate the effects of Diamond V Original XPCTM (XPC, a fermented product generated from yeast cultures) on Salmonella Typhimurium ST 97 along with its potential for modulation of the cecal microbiota by using an anaerobic in vitro mixed culture assay. Cecal slurries obtained from three broiler chickens at each of three sampling ages (14, 28, and 42 days) were generated and exposed to a 24 h pre-incubation period with the various treatments: XPC (1% XPC, ceca, and feeds), CO (ceca only), and a negative control (NC) group consisting of ceca and feeds. The XPC, CO, and NC groups were each challenged with S. Typhimurium and subsequently plated on selective media at 0, 24, and 48 h. Plating results indicated that the XPC treatment significantly reduced the survival of S. Typhimurium at the 24 h plating time point for both the 28- and 42-day sampling ages, while the S. Typhimurium population in the NC eventually reached the same survival level at the 48 h plating time point. For microbiome analysis, Trial 1 revealed that the XPC, CO, and NC groups exhibited similar taxa profiles. However, more Bacteroidetes were observed in the CO group at 24 and 48 h. There were no significant differences (P > 0.05) in alpha diversity among samples based on day, hour, and treatment. For beta diversity analysis, a pattern shift was observed when samples clustered according to sampling hour. In Trial 2, both the XPC and NC groups exhibited the highest Firmicutes level at 0 h, but Bacteroidetes became dominant at 6 h. Alpha diversity was greater in the initial contents from older birds and became less complex after 6 h of incubation. Beta diversity analysis clustered as a function of treatment (NC and XPC groups) and by individual hour (6, 12, 24, and 48 h). Overall, addition of XPC influenced microbiome diversity in a similar fashion to the profile of the NC group. PMID:28659891

  5. Integrated fluorescence correlation spectroscopy device for point-of-care clinical applications

    PubMed Central

    Olson, Eben; Torres, Richard; Levene, Michael J.

    2013-01-01

    We describe an optical system which reduces the cost and complexity of fluorescence correlation spectroscopy (FCS), intended to increase the suitability of the technique for clinical use. Integration of the focusing optics and sample chamber into a plastic component produces a design which is simple to align and operate. We validate the system by measurements on fluorescent dye, and compare the results to a commercial instrument. In addition, we demonstrate its application to measurements of concentration and multimerization of the clinically relevant protein von Willebrand factor (vWF) in human plasma. PMID:23847733

  6. Quantum state estimation when qubits are lost: a no-data-left-behind approach

    DOE PAGES

    Williams, Brian P.; Lougovski, Pavel

    2017-04-06

    We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.

  7. Nondestructive Evaluation (NDE) Results on Sikorsky Aircraft Survivable Affordable Reparable Airframe Program (SARAP) Samples

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Anastasi, Robert F.; Madaras, Eric I.

    2004-01-01

    The Survivable, Affordable, Reparable Airframe Program (SARAP) will develop/produce new structural design concepts with lower structural weight, reduced manufacturing complexity and development time, increased readiness, and improved threat protection. These new structural concepts will require advanced field capable inspection technologies to help meet the SARAP structural objectives. In the area of repair, damage assessment using nondestructive inspection (NDI) is critical to identify repair location and size. The purpose of this work is to conduct an assessment of new and emerging NDI methods that can potentially satisfy the SARAP program goals.

  8. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
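The compressed-sensing idea the abstract relies on — reconstructing a sparse signal from far fewer measurements than samples — can be sketched generically. This is a minimal illustration with an assumed Bernoulli sensing matrix and textbook orthogonal matching pursuit, not the paper's pixel-level design; all sizes here are invented:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom (column) most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit by least squares on the selected support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                  # pixels, measurements, nonzeros (m << n)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = Phi @ x                          # the sensor would output only these m values
x_rec = omp(Phi, y, k)
print("reconstruction error:", np.linalg.norm(x - x_rec))
```

With only m of n values leaving the pixel array, the raw data rate drops by the compression ratio; recovery quality then depends on the sparsity of the scene in the chosen basis.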

  9. High-throughput sequencing enhanced phage display enables the identification of patient-specific epitope motifs in serum.

    PubMed

    Christiansen, Anders; Kringelum, Jens V; Hansen, Christian S; Bøgh, Katrine L; Sullivan, Eric; Patel, Jigar; Rigby, Neil M; Eiwegger, Thomas; Szépfalusi, Zsolt; de Masi, Federico; Nielsen, Morten; Lund, Ole; Dufva, Martin

    2015-08-06

    Phage display is a prominent screening technique with a multitude of applications including therapeutic antibody development and mapping of antigen epitopes. In this study, phages were selected based on their interaction with patient serum and exhaustively characterised by high-throughput sequencing. A bioinformatics approach was developed in order to identify peptide motifs of interest based on clustering and contrasting to control samples. Comparison of patient and control samples confirmed a major issue in phage display, namely the selection of unspecific peptides. The potential of the bioinformatic approach was demonstrated by identifying epitopes of a prominent peanut allergen, Ara h 1, in sera from patients with severe peanut allergy. The identified epitopes were confirmed by high-density peptide micro-arrays. The present study demonstrates that high-throughput sequencing can empower phage display by (i) enabling the analysis of complex biological samples, (ii) circumventing the traditional laborious picking and functional testing of individual phage clones and (iii) reducing the number of selection rounds.

  10. Microfluidic platform combining droplets and magnetic tweezers: application to HER2 expression in cancer diagnosis

    PubMed Central

    Ferraro, Davide; Champ, Jérôme; Teste, Bruno; Serra, Marco; Malaquin, Laurent; Viovy, Jean-Louis; de Cremoux, Patricia; Descroix, Stephanie

    2016-01-01

    The development of precision medicine, together with the multiplication of targeted therapies and associated molecular biomarkers, calls for major progress in genetic analysis methods, allowing increased multiplexing and the implementation of more complex decision trees, without cost increase or loss of robustness. We present a platform combining droplet microfluidics and magnetic tweezers, performing RNA purification, reverse transcription, and amplification in a fully automated and programmable way, in droplets of 250 nL directly sampled from a microtiter plate. This platform decreases sample consumption about 100-fold compared with current robotized platforms, and it reduces human manipulation and contamination risk. The platform’s performance was first evaluated on cell lines, showing robust operation on RNA quantities corresponding to less than one cell, and then clinically validated with a cohort of 21 breast cancer samples for the determination of their HER2 expression status, in a blind comparison with an established routine clinical analysis. PMID:27157697

  11. New Trends in Pesticide Residue Analysis in Cereals, Nutraceuticals, Baby Foods, and Related Processed Consumer Products.

    PubMed

    Raina-Fulton, Renata

    2015-01-01

    Pesticide residue methods have been developed for a wide variety of food products including cereal-based foods, nutraceuticals and related plant products, and baby foods. These cereal, fruit, vegetable, and plant-based products provide the basis for many processed consumer products. For cereal and nutraceuticals, which are dry sample products, a modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method has been used with additional steps to allow wetting of the dry sample matrix and subsequent cleanup using dispersive or cartridge format SPE to reduce matrix effects. More processed foods may have lower pesticide concentrations but higher co-extracts that can lead to signal suppression or enhancement with MS detection. For complex matrixes, GC/MS/MS or LC/electrospray ionization (positive or negative ion)-MS/MS is more frequently used. The extraction and cleanup methods vary with different sample types particularly for cereal-based products, and these different approaches are discussed in this review. General instrument considerations are also discussed.

  12. The presence of the cyanobacterial toxin microcystin in black band disease of corals.

    PubMed

    Richardson, Laurie L; Sekar, Raju; Myers, Jamie L; Gantar, Miroslav; Voss, Joshua D; Kaczmarsky, Longin; Remily, Elizabeth R; Boyer, Gregory L; Zimba, Paul V

    2007-07-01

    Black band disease (BBD) is a migrating, cyanobacterial-dominated, sulfide-rich microbial mat that moves across coral colonies lysing coral tissue. While it is known that BBD sulfate-reducing bacteria contribute to BBD pathogenicity by production of sulfide, additional mechanisms of toxicity may be involved. Using HPLC/MS, the cyanotoxin microcystin was detected in 22 field samples of BBD collected from five coral species on nine reefs of the wider Caribbean (Florida Keys and Bahamas). Two cyanobacterial cultures isolated from BBD, Geitlerinema and Leptolyngbya spp., contained microcystin based on HPLC/MS, with toxic activity confirmed using the protein phosphatase inhibition assay. The gene mcyA from the microcystin synthesis complex was detected in two field samples and in both BBD cyanobacterial cultures. Microcystin was not detected in six BBD samples from a different area of the Caribbean (St Croix, USVI) or from the Philippines, suggesting regional specificity for BBD microcystin. This is the first report of the presence of microcystin in a coral disease.

  13. Microfluidic platform combining droplets and magnetic tweezers: application to HER2 expression in cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Ferraro, Davide; Champ, Jérôme; Teste, Bruno; Serra, Marco; Malaquin, Laurent; Viovy, Jean-Louis; de Cremoux, Patricia; Descroix, Stephanie

    2016-05-01

    The development of precision medicine, together with the multiplication of targeted therapies and associated molecular biomarkers, calls for major progress in genetic analysis methods, allowing increased multiplexing and the implementation of more complex decision trees, without cost increase or loss of robustness. We present a platform combining droplet microfluidics and magnetic tweezers, performing RNA purification, reverse transcription, and amplification in a fully automated and programmable way, in droplets of 250 nL directly sampled from a microtiter plate. This platform decreases sample consumption about 100-fold compared with current robotized platforms, and it reduces human manipulation and contamination risk. The platform’s performance was first evaluated on cell lines, showing robust operation on RNA quantities corresponding to less than one cell, and then clinically validated with a cohort of 21 breast cancer samples for the determination of their HER2 expression status, in a blind comparison with an established routine clinical analysis.

  14. Generation and coherent detection of QPSK signal using a novel method of digital signal processing

    NASA Astrophysics Data System (ADS)

    Zhao, Yuan; Hu, Bingliang; He, Zhen-An; Xie, Wenjia; Gao, Xiaohui

    2018-02-01

    We demonstrate an optical quadrature phase-shift keying (QPSK) signal transmitter and an optical receiver that demodulates optical QPSK signals with homodyne detection and digital signal processing (DSP). DSP is employed in the homodyne detection scheme without locking the phase of the local oscillator (LO). In this paper, we present a down-sampling method that extracts a one-dimensional array of samples to reduce unwanted samples in the constellation-diagram measurement. This scheme offers the following major advantages over conventional optical QPSK detection methods. First, the homodyne detection scheme does not place strict requirements on the LO, in contrast to linear optical sampling, which requires a flat spectral density and phase over the spectral support of the source under test. Second, LabVIEW software is used directly to recover the QPSK signal constellation without employing a complex DSP circuit. Third, the scheme is applicable to multilevel modulation formats, such as M-ary PSK and quadrature amplitude modulation (QAM), or to higher speed signals with only minor changes.
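As a sketch of the DSP side — recovering a QPSK constellation without locking the LO phase — the standard fourth-power (Viterbi–Viterbi) estimator illustrates the principle. This is a generic, noise-free toy, not the authors' LabVIEW implementation; the phase offset and symbol count are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
sym = rng.integers(0, 4, size=400)                 # random symbol indices 0..3
tx = np.exp(1j * (np.pi / 4 + sym * np.pi / 2))    # QPSK constellation points

phase_offset = 0.6                                 # unknown free-running LO phase
rx = tx * np.exp(1j * phase_offset)

# Fourth-power estimator: tx**4 == -1 for every QPSK point, so raising the
# received samples to the 4th power cancels the modulation and leaves 4x the
# common phase (unambiguous for offsets within +/- pi/4).
est = np.angle(-np.mean(rx ** 4)) / 4
rx_corr = rx * np.exp(-1j * est)

# Hard decisions back to symbol indices.
dec = (np.round((np.angle(rx_corr) - np.pi / 4) / (np.pi / 2)) % 4).astype(int)
```

In a real receiver the estimate would be computed over a sliding window and unwrapped to track a slowly drifting LO, but the modulation-cancelling step is the same.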

  15. Characterization of a Track-and-Hold Amplifier for Application to a High Performance SAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DUBBERT, DALE F.; HARDIN, TERRY LYNN; DELAPLAIN, GILBERT G.

    2002-07-01

    A Synthetic Aperture Radar (SAR) which employs direct IF sampling can significantly reduce the complexity of the analog electronics prior to the analog-to-digital converter (ADC). For relatively high frequency IF bands, a wide-bandwidth track-and-hold amplifier (THA) is required prior to the ADC. The THA functions primarily as a means of converting, through bandpass sampling, the IF signal to a baseband signal which can be sampled by the ADC. For a wide-band, high dynamic-range receiver system, such as a SAR receiver, stringent performance requirements are placed on the THA. We first measure the THA parameters such as gain, gain compression, third-order intercept (TOI), signal-to-noise ratio (SNR), spurious-free dynamic-range (SFDR), noise figure (NF), and phase noise. The results are then analyzed in terms of their respective impact on the overall performance of the SAR. The specific THA under consideration is the Rockwell Scientific RTH010.
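The bandpass-sampling step that the THA enables can be illustrated with the usual folding arithmetic: a real signal sampled below the Nyquist rate for its carrier appears at the aliased frequency below fs/2. A minimal sketch (the frequencies here are invented, not the actual radar's):

```python
def alias_frequency(f_in, fs):
    """Apparent (folded) frequency when a real signal at f_in is sampled at fs."""
    f = f_in % fs
    return min(f, fs - f)

# An IF centered at 75 MHz, bandpass-sampled at 100 Msps, lands at 25 MHz.
print(alias_frequency(75e6, 100e6) / 1e6)   # 25.0
```

Choosing fs so the folded band stays clear of DC and fs/2 is what lets the THA-plus-ADC pair replace an explicit analog downconversion stage.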

  16. The Contrasting Effects of Alum-Treated Chicken Manures and KH2PO4 on Phosphorus Behavior in Soils.

    PubMed

    Huang, Lidong; Yang, Junming; Xu, Yuting; Lei, Jiayan; Luo, Xiaoshan; Cade-Menun, Barbara J

    2018-03-01

    Alum [KAl(SO4)2·12H2O] is often added to chicken manure to limit P solubility after land application. This is generally ascribed to the formation of Al-PO4 complexes. However, Al-PO4 complex formation could be affected by the matrix of chicken manure, which varies with animal diet. Alum was added to KH2PO4 (as a reference material) and two manures from typical chicken farms in China, one from an intensive farm (CMIF) and another from free-ranging chickens (CMFR). These were subsequently incubated with soils for 100 d to investigate P transformations. Alum reduced water-soluble colorimetrically reactive phosphorus (RP) more effectively in soils amended with manure than in soils amended with KH2PO4. Alum addition lowered Mehlich-3 RP in soils with CMFR but had no influence on Mehlich-3 RP in CMIF- or KH2PO4-amended soils. A comparison of P in digested Mehlich-3 extracts with RP in undigested samples showed significantly increased P in digests of alum-treated CMFR only. Fractionation data indicated that alum treatment increased P in the NH4F-RP (Al-P) fraction only in soils with KH2PO4, but not in soils with manure treatments. Furthermore, NaOH-extracted nonreactive P was markedly higher in soil with alum-treated CMFR relative to normal CMFR. The CMFR manure was assumed to contain higher concentrations of organic P because these chickens were fed grains only. These results suggest that the formation of alum-organic P complexes may reduce P solubility. By comparing alum-treated KH2PO4 and manures, it appears that organic matter in manure could interfere with the formation of Al-PO4 complexes. Copyright © Her Majesty the Queen in Right of Canada, as represented by the Minister of Agriculture and AgriFood Canada.

  17. Estimating genetic effects and quantifying missing heritability explained by identified rare-variant associations.

    PubMed

    Liu, Dajiang J; Leal, Suzanne M

    2012-10-05

    Next-generation sequencing has led to many complex-trait rare-variant (RV) association studies. Although single-variant association analysis can be performed, it is grossly underpowered. Therefore, researchers have developed many RV association tests that aggregate multiple variant sites across a genetic region (e.g., gene), and test for the association between the trait and the aggregated genotype. After these aggregate tests detect an association, it is only possible to estimate the average genetic effect for a group of RVs. As a result of the "winner's curse," such an estimate can be biased. Although for common variants one can obtain unbiased estimates of genetic parameters by analyzing a replication sample, for RVs it is desirable to obtain unbiased genetic estimates for the study where the association is identified. This is because there can be substantial heterogeneity of RV sites and frequencies even among closely related populations. In order to obtain an unbiased estimate for aggregated RV analysis, we developed bootstrap-sample-split algorithms to reduce the bias of the winner's curse. The unbiased estimates are greatly important for understanding the population-specific contribution of RVs to the heritability of complex traits. We also demonstrate both theoretically and via simulations that for aggregate RV analysis the genetic variance for a gene or region will always be underestimated, sometimes substantially, because of the presence of noncausal variants or because of the presence of causal variants with effects of different magnitudes or directions. Therefore, even if RVs play a major role in the complex-trait etiologies, a portion of the heritability will remain missing, and the contribution of RVs to the complex-trait etiologies will be underestimated. Copyright © 2012 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
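The winner's-curse bias, and why estimating on data independent of the discovery step removes it, can be seen in a small simulation. This is a generic normal-means toy with invented thresholds and sample sizes, not the authors' bootstrap-sample-split algorithm, which refines the same idea:

```python
import numpy as np

rng = np.random.default_rng(7)
true_effect, n, n_rep = 0.1, 100, 2000

naive, split = [], []
for _ in range(n_rep):
    x = rng.normal(true_effect, 1.0, size=n)
    # Naive: report the effect from the same sample that passed the test.
    z = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    if z > 1.645:
        naive.append(x.mean())
    # Sample split: discover on the first half, estimate on the held-out half.
    a, b = x[: n // 2], x[n // 2:]
    za = a.mean() / (a.std(ddof=1) / np.sqrt(n // 2))
    if za > 1.645:
        split.append(b.mean())

print("naive:", round(float(np.mean(naive)), 3),
      "split:", round(float(np.mean(split)), 3),
      "truth:", true_effect)
```

Conditioning on significance inflates the naive estimate well above the truth, while the held-out estimate stays centered on it, at the cost of discovery power on half the data.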

  18. Digital tanlock loop architecture with no delay

    NASA Astrophysics Data System (ADS)

    Al-Kharji AL-Ali, Omar; Anani, Nader; Al-Araji, Saleh; Al-Qutayri, Mahmoud; Ponnapalli, Prasad

    2012-02-01

    This article proposes a new architecture for a digital tanlock loop which eliminates the time-delay block. The π/2 (rad) phase shift relationship between the two channels, which is generated by the delay block in the conventional time-delay digital tanlock loop (TDTL), is preserved using two quadrature sampling signals for the loop channels. The proposed system outperformed the original TDTL architecture when both systems were tested with a frequency-shift-keying input signal. The new system demonstrated better linearity and acquisition speed as well as improved noise performance compared with the original TDTL architecture. Furthermore, the removal of the time-delay block enables all processing to be performed digitally, which reduces the implementation complexity. Both the original TDTL and the new architecture without the delay block were modelled and simulated using MATLAB/Simulink. Implementation issues, including complexity and its relation to simulation of both architectures, are also addressed.
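The quadrature-sampling idea can be sketched for a single tone: a second channel shifted by a quarter carrier period supplies the π/2-shifted signal the delay block used to provide, and the tanlock phase detector is then just a four-quadrant arctangent. This is a toy illustration with arbitrary frequencies, not the article's loop model:

```python
import numpy as np

fs, f = 1e6, 123e3                  # sampling rate and input tone (arbitrary values)
t = np.arange(8) / fs

i_ch = np.cos(2 * np.pi * f * t)              # in-phase channel
q_ch = np.cos(2 * np.pi * f * t - np.pi / 2)  # quadrature channel: the pi/2 shift
                                              # formerly produced by the delay block
phase = np.arctan2(q_ch, i_ch)                # tanlock phase-detector output
```

The detector output tracks the instantaneous carrier phase (wrapped to ±π), which the loop filter then drives to lock.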

  19. The presence of opportunistic pathogens, Legionella spp., L. pneumophila and Mycobacterium avium complex, in South Australian reuse water distribution pipelines.

    PubMed

    Whiley, H; Keegan, A; Fallowfield, H; Bentham, R

    2015-06-01

    Water reuse has become increasingly important for sustainable water management. Currently, its application is primarily constrained by the potential health risks. Presently there is limited knowledge regarding the presence and fate of opportunistic pathogens along reuse water distribution pipelines. In this study opportunistic human pathogens Legionella spp., L. pneumophila and Mycobacterium avium complex were detected using real-time polymerase chain reaction along two South Australian reuse water distribution pipelines at maximum concentrations of 10⁵, 10³ and 10⁵ copies/mL, respectively. During the summer period of sampling the concentration of all three organisms significantly increased (P < 0.05) along the pipeline, suggesting multiplication and hence viability. No seasonality in the decrease in chlorine residual along the pipelines was observed. This suggests that the combination of reduced chlorine residual and increased water temperature promoted the presence of these opportunistic pathogens.

  20. Label-Free Molecular Imaging of Biological Cells and Tissues by Linear and Nonlinear Raman Spectroscopic Approaches.

    PubMed

    Krafft, Christoph; Schmitt, Michael; Schie, Iwan W; Cialla-May, Dana; Matthäus, Christian; Bocklitz, Thomas; Popp, Jürgen

    2017-04-10

    Raman spectroscopy is an emerging technique in bioanalysis and imaging of biomaterials owing to its unique capability of generating spectroscopic fingerprints. Imaging cells and tissues by Raman microspectroscopy represents a nondestructive and label-free approach. All components of cells or tissues contribute to the Raman signals, giving rise to complex spectral signatures. Resonance Raman scattering and surface-enhanced Raman scattering can be used to enhance the signals and reduce the spectral complexity. Raman-active labels can be introduced to increase specificity and multimodality. In addition, nonlinear coherent Raman scattering methods offer higher sensitivities, which enable the rapid imaging of larger sampling areas. Finally, fiber-based imaging techniques pave the way towards in vivo applications of Raman spectroscopy. This Review summarizes the basic principles behind medical Raman imaging and its progress since 2012. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A Novel Passive Robotic Tool Interface

    NASA Astrophysics Data System (ADS)

    Roberts, Paul

    2013-09-01

    The increased capability of space robotics has seen their uses increase from simple sample gathering and mechanical adjuncts to humans, to sophisticated multi-purpose investigative and maintenance tools that substitute for humans for many external space tasks. As with all space missions, reducing mass and system complexity is critical. A key component of robotic systems mass and complexity is the number of motors and actuators needed. MDA has developed a passive tool interface that, like a household power drill, permits a single tool actuator to be interfaced with many Tool Tips without requiring additional actuators to manage the changing and storage of these tools. MDA's Multifunction Tool interface permits a wide range of Tool Tips to be designed to a single interface that can be pre-qualified to torque and strength limits such that additional Tool Tips can be added to a mission's "tool kit" simply and quickly.

  2. Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems

    NASA Astrophysics Data System (ADS)

    Wu, Sau-Hsuan; Kuo, C.-C. Jay

    2002-11-01

    The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and the channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.

  3. Integrative Analysis of Complex Cancer Genomics and Clinical Profiles Using the cBioPortal

    PubMed Central

    Gao, Jianjiong; Aksoy, Bülent Arman; Dogrusoz, Ugur; Dresdner, Gideon; Gross, Benjamin; Sumer, S. Onur; Sun, Yichao; Jacobsen, Anders; Sinha, Rileen; Larsson, Erik; Cerami, Ethan; Sander, Chris; Schultz, Nikolaus

    2014-01-01

    The cBioPortal for Cancer Genomics (http://cbioportal.org) provides a Web resource for exploring, visualizing, and analyzing multidimensional cancer genomics data. The portal reduces molecular profiling data from cancer tissues and cell lines into readily understandable genetic, epigenetic, gene expression, and proteomic events. The query interface combined with customized data storage enables researchers to interactively explore genetic alterations across samples, genes, and pathways and, when available in the underlying data, to link these to clinical outcomes. The portal provides graphical summaries of gene-level data from multiple platforms, network visualization and analysis, survival analysis, patient-centric queries, and software programmatic access. The intuitive Web interface of the portal makes complex cancer genomics profiles accessible to researchers and clinicians without requiring bioinformatics expertise, thus facilitating biological discoveries. Here, we provide a practical guide to the analysis and visualization features of the cBioPortal for Cancer Genomics. PMID:23550210

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jie; Krems, Roman V.; Li, Zhiying

    We use classical trajectory calculations to study the effects of the interaction strength and the geometry of rigid polyatomic molecules on the formation of long-lived collision complexes at low collision energies. We first compare the results of the calculations for collisions of benzene molecules with rare gas atoms He, Ne, Ar, Kr, and Xe. The comparison illustrates that the mean lifetimes of the collision complexes increase monotonically with the strength of the atom–molecule interaction. We then compare the results of the atom–benzene calculations with those for benzene–benzene collisions. The comparison illustrates that the mean lifetimes of the benzene–benzene collision complexes are significantly reduced due to non-ergodic effects prohibiting the molecules from sampling the entire configuration space. We find that the thermally averaged lifetimes of the benzene–benzene collisions are much shorter than those for Xe with benzene and similar to those for Ne with benzene.

  5. Why Education Predicts Decreased Belief in Conspiracy Theories

    PubMed Central

    2016-01-01

    Summary: People with high education are less likely than people with low education to believe in conspiracy theories. It is yet unclear why these effects occur, however, as education predicts a range of cognitive, emotional, and social outcomes. The present research sought to identify mediators of the relationship between education and conspiracy beliefs. Results of Study 1 revealed three independent mediators of this relationship, namely, belief in simple solutions for complex problems, feelings of powerlessness, and subjective social class. A nationally representative sample (Study 2) replicated these findings except for subjective social class. Moreover, variations in analytic thinking statistically accounted for the path through belief in simple solutions. I conclude that the relationship between education and conspiracy beliefs cannot be reduced to a single mechanism but is the result of the complex interplay of multiple psychological factors that are associated with education. © 2016 The Authors. Applied Cognitive Psychology published by John Wiley & Sons Ltd. PMID:28163371

  6. Weed manipulation for insect pest management in corn

    NASA Astrophysics Data System (ADS)

    Altieri, M. A.; Whitcomb, W. H.

    1980-11-01

    Populations of insect pests and associated predaceous arthropods were sampled by direct observation and other relative methods in simple and diversified corn habitats at two sites in north Florida during 1978 and 1979. Through various cultural manipulations, characteristic weed communities were established selectively in alternate rows within corn plots. Fall armyworm (Spodoptera frugiperda J. E. Smith) incidence was consistently higher in the weed-free habitats than in the corn habitats containing natural weed complexes or selected weed associations. Corn earworm (Heliothis zea Boddie) damage was similar in all weed-free and weedy treatments, suggesting that this insect is not affected greatly by weed diversity. Only the diversification of corn with a strip of soybean significantly reduced corn earworm damage. In one site, distance between plots was reduced. Because predators moved freely between habitats, it was difficult to identify between-treatment differences in the composition of predator communities. In the other site, increased distances between plots minimized such migrations, resulting in greater population densities and diversity of common foliage insect predators in the weed-manipulated corn systems than in the weed-free plots. Trophic relationships in the weedy habitats were more complex than food webs in monocultures. Predator diversity (measured as mean number of species per area) and predator density were higher in corn plots surrounded by mature, complex vegetation than in those surrounded by annual crops. This suggests that diverse areas adjacent to crops provide refuge for predators, thus acting as colonization sources.

  7. Field-testing a new directional passive air sampler for fugitive dust in a complex industrial source environment.

    PubMed

    Ferranti, E J S; Fryer, M; Sweetman, A J; Garcia, M A Solera; Timmis, R J

    2014-01-01

Quantifying the sources of fugitive dusts on complex industrial sites is essential for regulation and effective dust management. This study applied two recently-patented Directional Passive Air Samplers (DPAS) to measure the fugitive dust contribution from a Metal Recovery Plant (MRP) located on the periphery of a major steelworks site. The DPAS can collect separate samples for winds from different directions (12 × 30° sectors), and the collected dust may be quantified using several different measurement methods. The DPASs were located upwind and downwind (with respect to the prevailing wind) of the MRP processing area to (i) identify and measure the contribution made by the MRP processing operation; (ii) monitor this contribution during the processing of a particularly dusty material; and (iii) detect any changes to this contribution following new dust-control measures. Sampling took place over a 12-month period and the amount of dust was quantified using photographic, magnetic and mass-loading measurement methods. The DPASs are able to effectively resolve the incoming dust signal from the wider steelworks complex, and also different sources of fugitive dust from the MRP processing area. There was no confirmable increase in the dust contribution from the MRP during the processing of a particularly dusty material, but dust levels were significantly reduced following the introduction of new dust-control measures. This research was undertaken in a regulatory context, and the results provide a unique evidence base for current and future operational or regulatory decisions.

  8. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    NASA Astrophysics Data System (ADS)

    He, Tongdi; Che, Zongxi

    2018-01-01

This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) that provides improved spectral information, reduces data redundancy and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and an average is taken to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
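The block-wise fusion rule in the second step can be sketched on its own, independently of the dictionary-learning stage. The following minimal sketch is not the authors' implementation; the block size and the energy-ratio weighting are illustrative assumptions. It fuses two registered images block by block, giving more weight to the source block with higher regional energy:

```python
import numpy as np

def regional_energy(block):
    """Regional energy of an image block (sum of squared intensities)."""
    return float(np.sum(block.astype(float) ** 2))

def fuse_blocks(img_a, img_b, size=8):
    """Fuse two registered images block by block, weighting each source
    block by its regional energy so higher-energy blocks contribute more."""
    assert img_a.shape == img_b.shape
    h, w = img_a.shape
    fused = np.zeros((h, w))
    for i in range(0, h, size):
        for j in range(0, w, size):
            a = img_a[i:i + size, j:j + size].astype(float)
            b = img_b[i:i + size, j:j + size].astype(float)
            ea, eb = regional_energy(a), regional_energy(b)
            wa = ea / (ea + eb) if ea + eb > 0 else 0.5
            # convex combination of the two blocks, pixel by pixel
            fused[i:i + size, j:j + size] = wa * a + (1 - wa) * b
    return fused
```

Because each fused block is a convex combination of the two source blocks, every fused pixel lies between the corresponding input pixels, which preserves the dynamic range of the inputs.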

  9. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    PubMed

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.
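The effect being compensated can be illustrated with a first-order SFO model, in which subcarrier k of OFDM symbol n is rotated by 2πkξn(N+Ng)/N for a relative offset ξ. The sketch below uses illustrative parameters, neglects inter-carrier interference, and is not the paper's PRM implementation; it simply applies such a rotation and then removes it by multiplying with the conjugate phase, which is the essence of centralized compensation:

```python
import numpy as np

N, NG = 64, 16          # FFT size and cyclic prefix length (illustrative)
XI = 300e-6             # relative sampling frequency offset, 300 ppm
NSYM = 20               # number of OFDM symbols

rng = np.random.default_rng(1)
# QPSK payload on N subcarriers for NSYM OFDM symbols
bits = rng.integers(0, 4, size=(NSYM, N))
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# First-order SFO model (ICI neglected): subcarrier k of symbol n
# accumulates a phase rotation of 2*pi*k*xi*n*(N+NG)/N
n = np.arange(NSYM)[:, None]
k = np.arange(N)[None, :]
phase = 2 * np.pi * k * XI * n * (N + NG) / N
rx = tx * np.exp(1j * phase)

# Centralized compensation: rotate by the conjugate phase
comp = rx * np.exp(-1j * phase)
residual = float(np.max(np.abs(comp - tx)))   # ideally near zero
```

Note the rotation grows with both the subcarrier index and the symbol index, which is why long OFDM frames are more sensitive to SFO and why the paper adapts the frame length.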

  10. Thin-plate spline analysis of treatment effects of rapid maxillary expansion and face mask therapy in early Class III malocclusions.

    PubMed

    Baccetti, T; Franchi, L; McNamara, J A

    1999-06-01

    An effective morphometric method (thin-plate spline analysis) was applied to evaluate shape changes in the craniofacial configuration of a sample of 23 children with Class III malocclusions in the early mixed dentition treated with rapid maxillary expansion and face mask therapy, and compared with a sample of 17 children with untreated Class III malocclusions. Significant treatment-induced changes involved both the maxilla and the mandible. Major deformations consisted of forward displacement of the maxillary complex from the pterygoid region and of anterior morphogenetic rotation of the mandible, due to a significant upward and forward direction of growth of the mandibular condyle. Significant differences in size changes due to reduced increments in mandibular dimensions were associated with significant shape changes in the treated group.

  11. Development of Flat Roof Construction with Waterproofing from Modified Self-Compacting Concrete

    NASA Astrophysics Data System (ADS)

    Bogdanov, R. R.; Ibragimov, R. A.

    2017-11-01

This article considers increasing the durability of flat building roofs by applying modified self-compacting concrete (SSC). To modify the SSC, a complex modifier was developed, and its composition was optimized using a three-factor experiment. The physico-mechanical properties of the resulting SSC were determined, and the microstructure and phase composition of the modified cement stone were studied. On the basis of X-ray phase analysis and electron microscopy, it was concluded that the reduced calcium hydroxide content in the samples with the complex modifier is due to adsorption of calcium hydroxide on highly dispersed particles and to its reaction with metakaolin, which further reduces the calcium hydroxide content of the cement stone. These data indicate the high operational characteristics of the SSC. At a cone-spread mark of P5, the modified SSC achieves compressive strength class B50, high frost resistance (F600) and water resistance (W16).

  12. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
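A genetic algorithm for ordering processes can be sketched as a search over permutations that minimizes the number of feedback couplings (outputs consumed by a process placed earlier in the sequence). This is a hypothetical illustration of the idea, not DeMAID's implementation; the dependency structure, the order-crossover and swap-mutation operators, and all parameters are assumptions:

```python
import random

def feedback_cost(order, deps):
    """Count couplings that point backward (feedbacks) in this ordering.
    deps[j] is the set of processes whose output process j consumes."""
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for j, srcs in deps.items() for i in srcs if pos[i] > pos[j])

def order_crossover(a, b):
    """Order crossover (OX): copy a slice from parent a, fill the rest
    of the child in the order the remaining genes appear in parent b."""
    n = len(a)
    lo, hi = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[lo:hi] = a[lo:hi]
    fill = [g for g in b if g not in child[lo:hi]]
    empty = [i for i in range(n) if child[i] is None]
    for i, g in zip(empty, fill):
        child[i] = g
    return child

def ga_order(deps, pop_size=40, gens=200, pmut=0.3, seed=0):
    """Evolve a process ordering that minimizes feedback couplings."""
    random.seed(seed)
    procs = sorted(deps)
    pop = [random.sample(procs, len(procs)) for _ in range(pop_size)]
    best = min(pop, key=lambda o: feedback_cost(o, deps))
    for _ in range(gens):
        nxt = [best]                                   # elitism
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a, b = (min(random.sample(pop, 3),
                        key=lambda o: feedback_cost(o, deps)) for _ in range(2))
            child = order_crossover(a, b)
            if random.random() < pmut:                 # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda o: feedback_cost(o, deps))
    return best, feedback_cost(best, deps)
```

For an acyclic dependency set such as `{'A': set(), 'B': {'A'}, 'C': {'A', 'B'}, 'D': {'C'}}`, the search should converge to a topological order with zero feedbacks; with genuine iterative subcycles the minimum cost is instead the irreducible number of feedback couplings.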

  13. Technique for measurement of characteristic impedance and propagation constant for porous materials

    NASA Astrophysics Data System (ADS)

    Jung, Ki Won; Atchley, Anthony A.

    2005-09-01

Knowledge of acoustic properties such as characteristic impedance and the complex propagation constant is useful for characterizing the acoustic behavior of porous materials. Song and Bolton's four-microphone method [J. Acoust. Soc. Am. 107, 1131-1152 (2000)] is one of the most widely employed techniques. In this method two microphones are used to determine the complex pressure amplitudes on each side of a sample. Muehleisen and Beamer [J. Acoust. Soc. Am. 117, 536-544 (2005)] improved upon the four-microphone method by interchanging microphones to reduce errors due to uncertainties in microphone response. In this paper, a multiple-microphone technique is investigated to reconstruct the pressure field inside an impedance tube. Measurements of the acoustic properties of a material having square cross-section pores are used to check the validity of the technique. The values of characteristic impedance and complex propagation constant extracted from the reconstruction agree well with predicted values. Furthermore, this technique is used to investigate the acoustic properties of reticulated vitreous carbon (RVC) in the range of 250-1100 Hz.
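The reconstruction idea can be illustrated with the standard plane-wave decomposition of the tube field, p(x) = A e^{-jkx} + B e^{+jkx}: pressures measured at several microphone positions give an overdetermined linear system for the complex amplitudes A and B, which least squares solves while averaging out individual sensor errors. A minimal sketch with assumed microphone positions and amplitudes (not the paper's actual setup):

```python
import numpy as np

# Hypothetical plane-wave field in an impedance tube,
# p(x) = A*exp(-1j*k*x) + B*exp(+1j*k*x)  (time factor exp(+1j*w*t))
c, f = 343.0, 500.0                  # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * f / c                # wavenumber

x_mics = np.array([0.00, 0.03, 0.07, 0.12, 0.20])   # microphone positions (m)
A_true, B_true = 1.0 + 0.0j, 0.35 - 0.20j           # incident/reflected amplitudes

p = A_true * np.exp(-1j * k * x_mics) + B_true * np.exp(1j * k * x_mics)

# Least-squares reconstruction of (A, B) from the microphone pressures;
# with more than two microphones the system is overdetermined.
M = np.column_stack([np.exp(-1j * k * x_mics), np.exp(1j * k * x_mics)])
(A_est, B_est), *_ = np.linalg.lstsq(M, p, rcond=None)

R = B_est / A_est                    # pressure reflection coefficient at x = 0
```

With noise-free synthetic data the fit recovers A and B essentially exactly; with real microphone data the redundancy of the extra positions is what suppresses sensor-response errors.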

  14. Digital gene expression analysis with sample multiplexing and PCR duplicate detection: A straightforward protocol.

    PubMed

    Rozenberg, Andrey; Leese, Florian; Weiss, Linda C; Tollrian, Ralph

    2016-01-01

    Tag-Seq is a high-throughput approach used for discovering SNPs and characterizing gene expression. In comparison to RNA-Seq, Tag-Seq eases data processing and allows detection of rare mRNA species using only one tag per transcript molecule. However, reduced library complexity raises the issue of PCR duplicates, which distort gene expression levels. Here we present a novel Tag-Seq protocol that uses the least biased methods for RNA library preparation combined with a novel approach for joint PCR template and sample labeling. In our protocol, input RNA is fragmented by hydrolysis, and poly(A)-bearing RNAs are selected and directly ligated to mixed DNA-RNA P5 adapters. The P5 adapters contain i5 barcodes composed of sample-specific (moderately) degenerate base regions (mDBRs), which later allow detection of PCR duplicates. The P7 adapter is attached via reverse transcription with individual i7 barcodes added during the amplification step. The resulting libraries can be sequenced on an Illumina sequencer. After sample demultiplexing and PCR duplicate removal with a free software tool we designed, the data are ready for downstream analysis. Our protocol was tested on RNA samples from predator-induced and control Daphnia microcrustaceans.
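The joint use of sample barcodes and mDBR tags for duplicate removal can be sketched as follows. This is a hypothetical simplification, not the authors' software: reads that share both the degenerate-base tag and the mapping coordinate within a sample are treated as PCR duplicates of a single template molecule, and only one representative is kept. The read layout (`sample`, `mdbr`, `pos`, `seq` fields) is an assumption for illustration:

```python
from collections import defaultdict

def demultiplex_dedup(reads):
    """Group reads by sample barcode, then drop PCR duplicates: within a
    sample, reads sharing both the degenerate-base tag (mDBR) and the
    mapping coordinate are assumed to come from the same template.
    Each read is a dict with 'sample', 'mdbr', 'pos' and 'seq' keys."""
    samples = defaultdict(dict)
    for r in reads:
        key = (r['mdbr'], r['pos'])
        # keep the first read seen for each (tag, position) template
        samples[r['sample']].setdefault(key, r)
    return {s: list(kept.values()) for s, kept in samples.items()}
```

The same (tag, position) pair in two different samples is kept in both, since demultiplexing happens before duplicate collapsing.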

  15. David Brandner | NREL

    Science.gov Websites

Areas of Expertise: analytical analysis of complex and bio-derived samples, including lignin; chemical reaction engineering and transport phenomena.

  16. Determination of aluminium in groundwater samples by GF-AAS, ICP-AES, ICP-MS and modelling of inorganic aluminium complexes.

    PubMed

    Frankowski, Marcin; Zioła-Frankowska, Anetta; Kurzyca, Iwona; Novotný, Karel; Vaculovič, Tomas; Kanický, Viktor; Siepak, Marcin; Siepak, Jerzy

    2011-11-01

The paper presents the results of aluminium determinations in groundwater samples from the Miocene aquifer in the area of the city of Poznań (Poland). The determined aluminium content ranged from <0.0001 to 752.7 μg L(-1). The aluminium determinations were performed using three analytical techniques: graphite furnace atomic absorption spectrometry (GF-AAS), inductively coupled plasma atomic emission spectrometry (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS), and the results obtained with the particular techniques were compared. The results were used to identify the ascent of groundwater from the Mesozoic aquifer to the Miocene aquifer in the area of the fault graben. Using the Mineql+ program, the occurrence of aluminium and of its hydroxy, fluoride and sulphate complexes was modelled. This chemical modelling, performed here for the first time for these waters, enabled the identification of aluminium complexes in the investigated samples. The study confirms the occurrence of aluminium hydroxy complexes and aluminium fluoride complexes in the analysed groundwater samples. Despite the dominance of sulphates and organic matter in the samples, the modelling did not indicate major participation of complexes with these ligands.

  17. Paper SERS chromatography for detection of trace analytes in complex samples

    NASA Astrophysics Data System (ADS)

    Yu, Wei W.; White, Ian M.

    2013-05-01

We report the application of paper SERS substrates for the detection of trace quantities of multiple analytes in a complex sample in the form of paper chromatography. Paper chromatography separates the different analytes of a complex sample into distinct sections of the chromatogram, which can then be uniquely identified using SERS. As an example, the separation and quantitative detection of heroin in a highly fluorescent mixture is demonstrated. Paper SERS chromatography has obvious applications, including law enforcement, food safety, and border protection, and facilitates the rapid detection of chemical and biological threats at the point of sampling.

  18. HIV Viral RNA Extraction in Wax Immiscible Filtration Assisted by Surface Tension (IFAST) Devices

    PubMed Central

    Berry, Scott M.; LaVanway, Alex J.; Pezzi, Hannah M.; Guckenberger, David J.; Anderson, Meghan A.; Loeb, Jennifer M.; Beebe, David J.

    2015-01-01

    The monitoring of viral load is critical for proper management of antiretroviral therapy for HIV-positive patients. Unfortunately, in the developing world, significant economic and geographical barriers exist, limiting access to this test. The complexity of current viral load assays makes them expensive and their access limited to advanced facilities. We attempted to address these limitations by replacing conventional RNA extraction, one of the essential processes in viral load quantitation, with a simplified technique known as immiscible filtration assisted by surface tension (IFAST). Furthermore, these devices were produced via the embossing of wax, enabling local populations to produce and dispose of their own devices with minimal training or infrastructure, potentially reducing the total assay cost. In addition, IFAST can be used to reduce cold chain dependence during transportation. Viral RNA extracted from raw samples stored at 37°C for 1 week exhibited nearly complete degradation. However, IFAST-purified RNA could be stored at 37°C for 1 week without significant loss. These data suggest that RNA isolated at the point of care (eg, in a rural clinic) via IFAST could be shipped to a central laboratory for quantitative RT-PCR without a cold chain. Using this technology, we have demonstrated accurate and repeatable measurements of viral load on samples with as low as 50 copies per milliliter of sample. PMID:24613822

  19. Complexation of rice starch/flour and maize oil through heat moisture treatment: Structural, in vitro digestion and physicochemical properties.

    PubMed

    Chen, Xu; He, Xiaowei; Fu, Xiong; Zhang, Bin; Huang, Qiang

    2017-05-01

This study investigated the structural, in vitro digestion and physicochemical properties of normal rice starch (NRS)/flour (NRF) complexed with maize oil (MO) through heat-moisture treatment (HMT). The NRS-/NRF-MO complex displayed an increased pasting temperature and a decreased peak viscosity. After HMT, fewer ordered Maltese crosses and more granule fragments were observed for the NRS-/NRF-MO complex. Meanwhile, more aggregation was observed in the HMT samples with higher moisture contents. We found that higher onset temperature and lower enthalpy change and relative crystallinity of the NRS-/NRF-MO complex were associated with a higher moisture content of the HMT samples. A higher moisture content during HMT was also favorable for amylose-lipid complex formation. Differences in starch digestion properties were found between the NRS-MO and NRF-MO complexes. All of the NRS/NRF complexed with MO after cooking showed lower rapidly digestible starch (RDS) contents compared with the control sample, with NRS-/NRF-MO 20% exhibiting the highest sum of slowly digestible starch and resistant starch contents. In general, HMT had a greater impact on the in vitro digestion and physicochemical properties of the flour samples than on their starch counterparts. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Distributional potential of the Triatoma brasiliensis species complex at present and under scenarios of future climate conditions

    PubMed Central

    2014-01-01

Background The Triatoma brasiliensis complex is a monophyletic group comprising three species, one of which includes two subspecific taxa, distributed across 12 Brazilian states in the caatinga and cerrado biomes. Members of the complex are diverse in terms of epidemiological importance, morphology, biology, ecology, and genetics. Triatoma b. brasiliensis is the most disease-relevant member of the complex in terms of epidemiology, extensive distribution, broad feeding preferences, broad ecological distribution, and high rates of infection with Trypanosoma cruzi; consequently, it is considered the principal vector of Chagas disease in northeastern Brazil. Methods We used ecological niche models to estimate potential distributions of all members of the complex, and evaluated the potential for suitable adjacent areas to be colonized; we also present first evaluations of the potential for climate change-mediated distributional shifts. Models were developed using the GARP and Maxent algorithms. Results Models for three members of the complex (T. b. brasiliensis, N = 332; T. b. macromelasoma, N = 35; and T. juazeirensis, N = 78) had significant distributional predictivity; however, models for T. sherlocki and T. melanica, both with very small sample sizes (N = 7), did not yield predictions that performed better than random. Model projections onto future-climate scenarios indicated little broad-scale potential for change in the potential distribution of the complex through 2050. Conclusions This study suggests that T. b. brasiliensis is the member of the complex with the greatest distributional potential to colonize new areas; overall, however, the distribution of the complex appears relatively stable. These analyses offer key information to guide proactive monitoring and remediation activities to reduce the risk of Chagas disease transmission. PMID:24886587

  1. A XPS Study of the Passivity of Stainless Steels Influenced by Sulfate-Reducing Bacteria.

    NASA Astrophysics Data System (ADS)

    Chen, Guocun

The influence of sulfate-reducing bacteria (SRB) on the passivity of type 304 and 317L stainless steels (SS) was investigated by x-ray photoelectron spectroscopy (XPS) and by microbiological and electrochemical techniques. Samples were exposed to SRB; the resultant surfaces were then analyzed by XPS, and the corrosion resistance was assessed by potentiodynamic polarization in deaerated 0.1 M HCl. To further probe their passivity, the SRB-exposed samples were analyzed by XPS after potentiostatic polarization at a passive potential in the hydrochloric solution. The characterization was performed under two surface conditions: unrinsed, and rinsed with deaerated alcohol and deionized water. Comparisons were made with control samples immersed in uninoculated medium. SRB caused a severe loss of the passivity of 304 SS through sulfide formation and possible additional activation to form hexavalent chromium. The sulfides included FeS, FeS2, Cr2S3, NiS and possibly Fe(1-x)S. The interaction took place nonuniformly, resulting in undercutting of the passive film and preferential hydration of inner surface layers. The bacterial activation of the Cr(6+) species was magnified by subsequent potentiostatic polarization. In contrast, 317L SS exhibited only a limited loss of passivity. The sulfides were formed mainly in the outer layers. Although Cr(6+) species were observed after the exposure, they were dissolved upon polarization. Since 317L SS has a higher Mo content, its higher passivity was ascribed to Mo existing as molybdate on the surface and as Mo(5+) species in the biofilm. Consequently, the interaction of SRB with Mo was studied. It was observed that molybdate could be retained on the surfaces of Mo coupons by corrosion products. In the presence of SRB, however, a considerable portion of the molybdate interacted with intermediate sulfur-containing proteins, forming Mo(V)-S complexes and reducing bacterial growth and sulfate reduction.
The limited solubility of the Mo(V)-S complexes in 0.1 M HCl provided a degree of protection, so that the pitting potential of the SRB-exposed Mo coupons was not considerably decreased. The interaction of the sulfur-containing proteins with Mo also provided mechanistic information about the adhesion of biofilms to Mo-bearing steels. Additionally, the interactions of SRB with the other alloying elements, Cr and Ni, were investigated.

  2. Evaluation of the nephrotoxicity of complex mixtures containing organics and metals: advantages and disadvantages of the use of real-world complex mixtures.

    PubMed

    Simmons, J E; Yang, R S; Berman, E

    1995-02-01

    As part of a multidisciplinary health effects study, the nephrotoxicity of complex industrial waste mixtures was assessed. Adult, male Fischer 344 rats were gavaged with samples of complex industrial waste and nephrotoxicity evaluated 24 hr later. Of the 10 tested samples, 4 produced increased absolute or relative kidney weight, or both, coupled with a statistically significant alteration in at least one of the measured serum parameters (urea nitrogen (BUN), creatinine (CREAT), and BUN/CREAT ratio). Although the waste samples had been analyzed for a number of organic chemicals and 7 of the 10 samples were analyzed also for 12 elemental metals and metalloids, their nephrotoxicity was not readily predicted from the partial chemical characterization data. Because the chemical form or speciation of the metals was unknown, it was not possible to estimate their contribution to the observed biological response. Various experimental approaches, including use of real-world complex mixtures, chemically defined synthetic mixtures, and simple mixtures, will be necessary to adequately determine the potential human health risk from exposure to complex chemical mixtures.

  3. Variance Estimation Using Replication Methods in Structural Equation Modeling with Complex Sample Data

    ERIC Educational Resources Information Center

    Stapleton, Laura M.

    2008-01-01

    This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…
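As one concrete example of these techniques, a delete-one-cluster (JK1) jackknife variance estimate for a mean under cluster sampling can be sketched as follows. This is a textbook-style illustration, not tied to any particular survey package; unweighted observations grouped by primary sampling unit are assumed:

```python
import statistics

def jackknife_cluster_variance(clusters):
    """Delete-one-cluster jackknife (JK1) variance estimate for the
    overall mean. `clusters` is a list of lists of observations, one
    inner list per primary sampling unit (PSU)."""
    n = len(clusters)
    full = [x for c in clusters for x in c]
    theta = statistics.fmean(full)
    # recompute the estimate with each cluster deleted in turn
    pseudo = []
    for i in range(n):
        rest = [x for j, c in enumerate(clusters) if j != i for x in c]
        pseudo.append(statistics.fmean(rest))
    # JK1: ((n-1)/n) * sum of squared deviations of the replicates
    return (n - 1) / n * sum((t - theta) ** 2 for t in pseudo)
```

The same replicate-and-recompute pattern generalizes to regression coefficients or SEM parameters: refit the model on each replicate sample and combine the replicate estimates with the appropriate scaling factor.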

  4. Method of tagging excipients with (99m)Tc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardy, A.; Beydon, J.; Gobin, R.

    1977-11-08

A method of using (99m)technetium for tagging excipients in medical diagnosis by scintigraphy comprises mixing, in an aqueous solution of alkali-metal pertechnetate, an excipient and a reducing agent in the form of a complex, the complex being such that the association constant of its anion with reduced technetium is less than the association constant of the excipient with reduced technetium, thereby forming a radiopharmaceutical substance which is a complex between the excipient and (99m)technetium.

  5. An Improved Model Predictive Current Controller of Switched Reluctance Machines Using Time-Multiplexed Current Sensor

    PubMed Central

    Li, Bingchu; Ling, Xiao; Huang, Yixiang; Gong, Liang; Liu, Chengliang

    2017-01-01

This paper presents a fixed-switching-frequency model predictive current controller using a multiplexed current sensor for switched reluctance machine (SRM) drives. The converter was modified to distinguish the currents of simultaneously excited phases during the sampling period. The only current sensor installed in the converter was time-division multiplexed for phase current sampling. During the commutation stage, the control steps of adjacent phases were shifted so that their sampling times were staggered. The maximum and minimum duty ratios of the pulse width modulation (PWM) were limited to keep enough sampling time for analog-to-digital (A/D) conversion. Current sensor multiplexing was thus realized without complex adjustment of either the driver circuit or the control algorithms, while helping to reduce the cost and the errors introduced in current sampling by inconsistency between sensors. The proposed controller is validated by both simulation and experimental results with a 1.5 kW three-phase 12/8 SRM. Satisfactory current sampling is achieved, with little difference compared with independent phase current sensors for each phase. The proposed controller tracks the reference current profile as accurately as a model predictive current controller with independent phase current sensors, while having smaller tracking errors than a hysteresis current controller. PMID:28513554

  6. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources

    PubMed Central

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.

    2016-01-01

Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323

  7. Towards point of care testing for C. difficile infection by volatile profiling, using the combination of a short multi-capillary gas chromatography column with metal oxide sensor detection

    NASA Astrophysics Data System (ADS)

    McGuire, N. D.; Ewen, R. J.; de Lacy Costello, B.; Garner, C. E.; Probert, C. S. J.; Vaughan, K.; Ratcliffe, N. M.

    2014-06-01

Rapid volatile profiling of stool sample headspace was achieved using a combination of a short multi-capillary chromatography column (SMCC), a highly sensitive heated metal oxide semiconductor sensor and artificial neural network software. For direct analysis of biological samples this prototype offers an alternative to conventional gas chromatography (GC) detectors and electronic nose technology. The performance was compared to an identical instrument incorporating a long single capillary column (LSCC). The ability of the prototypes to separate complex mixtures was assessed using gas standards and homogenized in-house ‘standard’ stool samples, with both capable of detecting more than 24 peaks per sample. The elution time was considerably faster with the SMCC, resulting in a run time of 10 min compared to 30 min for the LSCC. The diagnostic potential of the prototypes was assessed using 50 C. difficile positive and 50 negative samples. The prototypes demonstrated similar capability of discriminating between positive and negative samples, with sensitivity and specificity of 85% and 80% respectively. C. difficile is an important cause of hospital-acquired diarrhoea, with significant morbidity and mortality around the world. A device capable of rapidly diagnosing the disease at the point of care would reduce cases, deaths and financial burden.

  8. Towards point of care testing for C. difficile infection by volatile profiling, using the combination of a short multi-capillary gas chromatography column with metal oxide sensor detection

    PubMed Central

    McGuire, N D; Ewen, R J; de Lacy Costello, B; Garner, C E; Probert, C S J; Vaughan, K.; Ratcliffe, N M

    2016-01-01

Rapid volatile profiling of stool sample headspace was achieved using a combination of a short multi-capillary chromatography column (SMCC), a highly sensitive heated metal oxide semiconductor (MOS) sensor and artificial neural network (ANN) software. For direct analysis of biological samples this prototype offers an alternative to conventional GC detectors and electronic nose technology. The performance was compared to an identical instrument incorporating a long single capillary column (LSCC). The ability of the prototypes to separate complex mixtures was assessed using gas standards and homogenised in-house ‘standard’ stool samples, with both capable of detecting more than 24 peaks per sample. The elution time was considerably faster with the SMCC, resulting in a run time of 10 minutes compared to 30 minutes for the LSCC. The diagnostic potential of the prototypes was assessed using 50 C. difficile positive and 50 negative samples. The prototypes demonstrated similar capability of discriminating between positive and negative samples, with sensitivity and specificity of 85% and 80% respectively. C. difficile is an important cause of hospital-acquired diarrhoea, with significant morbidity and mortality around the world. A device capable of rapidly diagnosing the disease at the point of care would reduce cases, deaths and financial burden. PMID:27212803

  9. Broadband Ftmw Spectroscopy of the Urea-Argon and Thiourea-Argon Complexes

    NASA Astrophysics Data System (ADS)

    Medcraft, Chris; Bittner, Dror M.; Cooper, Graham A.; Mullaney, John C.; Walker, Nick

    2017-06-01

The rotational spectra of the argon-urea, argon-thiourea and water-thiourea complexes have been measured by chirped-pulse Fourier transform microwave spectroscopy from 2-18.5 GHz. The sample was produced via laser vaporisation of a rod containing copper and the organic sample while a stream of argon was passed over the surface and subsequently expanded into the vacuum chamber, cooling the sample. Argon was found to bind to the π system of the carbonyl bond in both the urea and thiourea complexes.

  10. Single-run determination of polybrominated diphenyl ethers (PBDEs) di- to deca-brominated in fish meal, fish oil and fish feed by isotope dilution: application of automated sample purification and gas chromatography/ion trap tandem mass spectrometry (GC/ITMS).

    PubMed

    Blanco, Sonia Lucía; Vieites, Juan M

    2010-07-05

    The present paper describes the application of automated cleanup and fractionation procedures of the Power Prep system (Fluid Management Systems) for the determination of polybrominated diphenyl ethers (PBDEs) in feeding stuffs and fish meal and oil. Gas chromatography (GC) separation followed by ion trap tandem mass spectrometry detection in EI mode (ITMS) allowed the analysis of di- to deca-BDEs in the samples matrices used in fish aquaculture. The method developed enabled the determination of 26 native PBDE congeners and 11 (13)C(12)-labelled congeners, including deca-BDE 209, in a single-run analysis, using isotope dilution. The automated cleanup, consisting of a succession of multilayer silica and basic alumina columns previously applied by Wyrzykowska et al. (2009) [28] in combustion flue gas, was successfully applied in our complex matrices. The method allowed an increase in productivity, i.e. lower time was required to process samples, and simultaneous purification of several samples was achieved at a time, reducing analyst dedication and human error input. Average recoveries of 43-96% were obtained. GC/ITMS can overcome the complexity originating from the sample matrix, eliminating matrix effects by tandem MS, to enable the detection of congeners penta- to nona-BDEs where interferent masses were present. The provisional detection limits, estimated in the samples, were 5-30 pg for di-, tri-, tetra-, and penta-BDEs, 20-65 pg for hexa-, hepta-, octa- and nona-BDEs, and 105 pg for deca-BDE. Reduction of deca-BDE 209 blank values is of concern to ongoing research. Good accuracy was obtained by application of the whole procedure, representing an efficient, low-cost and fast alternative for routine analyses. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
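    The neighbor-matching idea behind Direct Sampling can be sketched for a single time series. This is a minimal illustration, not the authors' implementation; the parameter names `n_neighbors` (pattern length) and `threshold` (normalized distance below which a candidate is accepted) are hypothetical stand-ins for the algorithm's neighborhood and distance-threshold settings.

```python
import random

def direct_sampling(training, n_sim, n_neighbors=3, threshold=0.1, seed=0):
    """Minimal 1-D Direct Sampling sketch: simulate each new value by
    scanning the training series (in random order) for a location whose
    preceding n_neighbors values are sufficiently similar to the last
    simulated values, then copying the value that follows that location."""
    rng = random.Random(seed)
    sim = list(training[:n_neighbors])            # seed with a training snippet
    data_range = max(training) - min(training)
    while len(sim) < n_sim:
        pattern = sim[-n_neighbors:]
        best_i, best_d = None, float("inf")
        for i in rng.sample(range(n_neighbors, len(training)),
                            len(training) - n_neighbors):
            cand = training[i - n_neighbors:i]
            d = sum(abs(a - b) for a, b in zip(pattern, cand)) / (n_neighbors * data_range)
            if d <= threshold:                    # sufficiently similar: accept
                best_i = i
                break
            if d < best_d:                        # otherwise keep the best seen
                best_i, best_d = i, d
        sim.append(training[best_i])
    return sim
```

    Each simulated value is copied directly from the training set, which is how the method preserves marginal and short-range statistics without an explicit probability model.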

  12. Interaction of bisphenol A with dissolved organic matter in extractive and adsorptive removal processes.

    PubMed

    Zhu, Fei-Die; Choo, Kwang-Ho; Chang, Hyun-Shik; Lee, Byunghwan

    2012-05-01

    The fate of endocrine disrupting chemicals (EDCs) in natural and engineered systems is complicated due to their interactions with various water constituents. This study investigated the interaction of bisphenol A (BPA) with dissolved organic matter (DOM) and colloids present in surface water and secondary effluent as well as its adsorptive removal by powdered activated carbons. The solid phase micro-extraction (SPME) method followed by thermal desorption and gas chromatography-mass spectrometry (GC-MS) was utilized for determining the distribution of BPA molecules in water. BPA extraction by SPME decreased with increasing DOM content, where the formation of BPA-DOM complexes in an aqueous matrix was responsible for the reduced extraction of BPA. Colloidal particles in water samples sorbed BPA, leading to a marked reduction of liquid-phase BPA. BPA-DOM complexes had a negative impact on the adsorptive removal of BPA by powdered activated carbons. The complex formation was characterized based on Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, along with the calculation of molecular interactions between BPA and functional groups in DOM. It was found that the hydrogen bonding between DOM and BPA would be preferred over aromatic interactions. A pseudo-equilibrium molecular coordination model for the complexation between a BPA molecule and a hydroxyl group of the DOM was developed, which enabled estimation of the maximum sorption site and complex formation constant as well as prediction of organic complexes at various DOM levels. Copyright © 2012 Elsevier Ltd. All rights reserved.
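    A 1:1 pseudo-equilibrium complexation model of the kind described above reduces to a quadratic in the complex concentration. The sketch below is a generic illustration with hypothetical totals and formation constant, not the study's fitted values.

```python
import math

def complex_fraction(bpa_total, site_total, K):
    """Fraction of BPA bound in a 1:1 model BPA + S <=> BPA.S with
    K = [BPA.S] / ([BPA][S]). Mass balance gives the quadratic
    K*C**2 - (K*(B + S) + 1)*C + K*B*S = 0 in the complex concentration C."""
    a = K
    b = -(K * (bpa_total + site_total) + 1.0)
    c = K * bpa_total * site_total
    # the physical root satisfies C <= min(B, S), i.e. the smaller root
    C = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return C / bpa_total
```

    With a strong formation constant the bound fraction approaches 1; with a weak one it scales roughly as K times the free-site concentration, which is the behavior such a model uses to predict complexation at various DOM levels.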

  13. Game Changing: NASA's Space Launch System and Science Mission Design

    NASA Technical Reports Server (NTRS)

    Creech, Stephen D.

    2013-01-01

    NASA's Marshall Space Flight Center (MSFC) is directing efforts to build the Space Launch System (SLS), a heavy-lift rocket that will carry the Orion Multi-Purpose Crew Vehicle (MPCV) and other important payloads far beyond Earth orbit (BEO). Its evolvable architecture will allow NASA to begin with Moon fly-bys and then go on to transport humans or robots to distant places such as asteroids and Mars. Designed to reduce spacecraft complexity, the SLS rocket will provide improved mass margins and radiation mitigation, and reduced mission durations. These capabilities offer attractive advantages for ambitious missions such as a Mars sample return, by reducing infrastructure requirements, cost, and schedule. For example, if an evolved expendable launch vehicle (EELV) were used for a proposed mission to investigate the Saturn system, a complicated trajectory would be required - with several gravity-assist planetary fly-bys - to achieve the necessary outbound velocity. The SLS rocket, using significantly higher C3 energies, can more quickly and effectively take the mission directly to its destination, reducing trip time and cost. As this paper will report, the SLS rocket will launch payloads of unprecedented mass and volume, such as "monolithic" telescopes and in-space infrastructure. Thanks to its ability to co-manifest large payloads, it also can accomplish complex missions in fewer launches. Future analyses will include reviews of alternate mission concepts and detailed evaluations of SLS figures of merit, helping the new rocket revolutionize science mission planning and design for years to come.

  14. Game changing: NASA's space launch system and science mission design

    NASA Astrophysics Data System (ADS)

    Creech, S. D.

    NASA's Marshall Space Flight Center (MSFC) is directing efforts to build the Space Launch System (SLS), a heavy-lift rocket that will carry the Orion Multi-Purpose Crew Vehicle (MPCV) and other important payloads far beyond Earth orbit (BEO). Its evolvable architecture will allow NASA to begin with Moon fly-bys and then go on to transport humans or robots to distant places such as asteroids and Mars. Designed to reduce spacecraft complexity, the SLS rocket will provide improved mass margins and radiation mitigation, and reduced mission durations. These capabilities offer attractive advantages for ambitious missions such as a Mars sample return, by reducing infrastructure requirements, cost, and schedule. For example, if an evolved expendable launch vehicle (EELV) were used for a proposed mission to investigate the Saturn system, a complicated trajectory would be required - with several gravity-assist planetary fly-bys - to achieve the necessary outbound velocity. The SLS rocket, using significantly higher characteristic energies (C3), can more quickly and effectively take the mission directly to its destination, reducing trip time and cost. As this paper will report, the SLS rocket will launch payloads of unprecedented mass and volume, such as "monolithic" telescopes and in-space infrastructure. Thanks to its ability to co-manifest large payloads, it also can accomplish complex missions in fewer launches. Future analyses will include reviews of alternate mission concepts and detailed evaluations of SLS figures of merit, helping the new rocket revolutionize science mission planning and design for years to come.

  15. Determination of antioxidants by a novel on-line HPLC-cupric reducing antioxidant capacity (CUPRAC) assay with post-column detection.

    PubMed

    Celik, Saliha Esin; Ozyürek, Mustafa; Güçlü, Kubilay; Apak, Reşat

    2010-07-26

    A novel on-line HPLC-cupric reducing antioxidant capacity (CUPRAC) method was developed for the selective determination of polyphenols (flavonoids, simple phenolic and hydroxycinnamic acids) in complex plant matrices. The method combines chromatographic separation, constituent analysis, and post-column identification of antioxidants in plant extracts. The separation of polyphenols was performed on a C18 column using gradient elution with two different mobile phase solutions, i.e., MeOH and 0.2% o-phosphoric acid. The HPLC-separated antioxidant polyphenols in the extracts react with copper(II)-neocuproine (Cu(II)-Nc) reagent in a post-column reaction coil to form a derivative. The reagent is reduced by antioxidants to the copper(I)-neocuproine (Cu(I)-Nc) chelate having maximum absorption at 450 nm. Peaks of antioxidant constituents were monitored by measuring the increase in absorbance due to Cu(I)-Nc formation. The detection limits of polyphenols at 450 nm (in the range of 0.17-3.46 microM) after post-column derivatization were comparable to those at 280 nm UV detection without derivatization. The developed method was successfully applied to the identification of antioxidant compounds in crude extracts of Camellia sinensis, Origanum marjorana and Mentha. The method is rapid, inexpensive, versatile, non-laborious, uses stable reagents, and enables the on-line qualitative and quantitative estimation of antioxidant constituents of complex plant samples. Copyright 2010 Elsevier B.V. All rights reserved.

  16. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD).

    PubMed

    Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor

    2014-05-01

    The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression) when the sample size (n), degree of ME in exposure, and strength of the expected (true) OR (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) There was attenuation of associations that increased with eOR and ME; (2) The FPR was considerable under many scenarios; and (3) The FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME in realistic sample sizes and affirm the importance, for EARLI, of investing in biological samples that help precisely quantify a wide range of environmental exposures.
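    The attenuation pattern in finding (1) is the classical effect of nondifferential measurement error. A minimal linear-model sketch (illustrative only, not the EARLI simulation design): the OLS slope of the outcome on the error-prone exposure W = X + e shrinks toward beta times the reliability ratio var(X)/(var(X)+var(e)).

```python
import random

def simulate_attenuation(beta=0.5, me_sd=0.5, n=20000, seed=1):
    """Simulate classical measurement error in a linear model:
    Y = beta*X + noise, but X is observed as W = X + error.
    Returns the OLS slope of Y on W, which attenuates toward
    beta * var(X) / (var(X) + var(error))."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    w = [xi + rng.gauss(0, me_sd) for xi in x]       # error-prone exposure
    y = [beta * xi + rng.gauss(0, 1) for xi in x]    # outcome from true X
    mw = sum(w) / n
    my = sum(y) / n
    cov = sum((wi - mw) * (yi - my) for wi, yi in zip(w, y)) / n
    var = sum((wi - mw) ** 2 for wi in w) / n
    return cov / var
```

    With `me_sd=0.5` the reliability is 1/(1+0.25) = 0.8, so the recovered slope sits near 0.4 rather than the true 0.5.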

  17. Prevalence and Risk Factors of Human Intestinal Parasites in Roudehen, Tehran Province, Iran.

    PubMed

    Hemmati, Nasrin; Razmjou, Elham; Hashemi-Hafshejani, Saeideh; Motevalian, Abbas; Akhlaghi, Lameh; Meamar, Ahmad Reza

    2017-01-01

    Intestinal parasitic infections are among the most common infections and health problems worldwide. Due to the lack of epidemiologic information on such infections, the prevalence of, and the risk factors for, enteric parasites were investigated in residents of Roudehen, Tehran Province, Iran. In this cross-sectional study, 561 triple fecal samples were collected through a two-stage cluster-sampling protocol from June to December 2014. The samples were examined by formalin-ether concentration, culture, and molecular methods. The prevalence of enteric parasites was 32.7% (95% CI 27.3-38). Blastocystis sp. was the most common intestinal protozoan (28.4%; 95% CI 23.7-33.0). The formalin-ether concentration and culture methods detected Blastocystis sp., Entamoeba coli, Giardia intestinalis, Dientamoeba fragilis, Iodamoeba butschlii, Entamoeba complex cysts or trophozoites, Chilomastix mesnilii, and Enterobius vermicularis. A single-round PCR assay for the Entamoeba complex identified Entamoeba dispar and E. moshkovskii; E. histolytica was not observed in any specimen. Multivariate analysis showed a significant association of parasites with water source and close animal contact. There was no correlation between infections and gender, age, occupation, education, or travel history. Protozoan infections were more common than helminth infections. This study revealed a high prevalence of enteric protozoan parasite infection among citizens of Roudehen. As most of the species detected are transmitted through a water-resistant cyst, public and individual education on personal hygiene should be considered to reduce transmission of intestinal parasites in the population.

  18. Determination of hexavalent chromium in exhaled breath condensate and environmental air among chrome plating workers.

    PubMed

    Goldoni, Matteo; Caglieri, Andrea; Poli, Diana; Vettori, Maria Vittoria; Corradi, Massimo; Apostoli, Pietro; Mutti, Antonio

    2006-03-15

    Chromium speciation has attracted attention because of the different toxicity of Cr(III), which is considered relatively non-toxic, and Cr(VI), which can cross cell membranes mainly as a chromate anion and has been classified as a Group 1 human carcinogen. The aims of the present study were to measure soluble Cr(VI) levels in environmental samples, to develop a simple method of quantifying Cr(VI) in exhaled breath condensate (EBC), and to follow the kinetics of EBC Cr(VI) in chrome plating workers. Personal air samples were collected from 10 chrome platers; EBC was collected from the same workers immediately after the work shift on Tuesday and before the work shift on the following Wednesday. Environmental and EBC Cr(VI) levels were determined by means of colorimetry and electrothermal absorption atomic spectrometry, respectively. The method of detecting Cr(VI) in environmental air was based on the extraction of the Cr(VI)-diphenylcarbazide (Cr(VI)-DPC) complex in 1-butanol, whereas EBC Cr(VI) was determined using a solvent extraction of Cr(VI) as an ion pair with tetrabutylammonium ion, and subsequent direct determination of the complex (Cr(VI)-DPC) in EBC. Kinetic data showed that airborne Cr(VI) was reduced by 50% in airway lining fluid sampled at the end of exposure and that there was a further 50% reduction after about 15 h. The persistence of Cr(VI) in EBC supports the use of EBC in assessing target tissue levels of Cr(VI).

  19. Improved estimates of filtered total mercury loadings and total mercury concentrations of solids from potential sources to Sinclair Inlet, Kitsap County, Washington

    USGS Publications Warehouse

    Paulson, Anthony J.; Conn, Kathleen E.; DeWild, John F.

    2013-01-01

    Previous investigations examined sources and sinks of mercury to Sinclair Inlet based on historic and new data. This included an evaluation of mercury concentrations from various sources and mercury loadings from industrial discharges and groundwater flowing from the Bremerton naval complex to Sinclair Inlet. This report provides new data from four potential sources of mercury to Sinclair Inlet: (1) filtered and particulate total mercury concentrations of creek water during the wet season, (2) filtered and particulate total mercury releases from the Navy steam plant following changes in the water softening process and discharge operations, (3) release of mercury from soils to groundwater in two landfill areas at the Bremerton naval complex, and (4) total mercury concentrations of solids in dry dock sumps that were not affected by bias from sequential sampling. The previous estimate of the loading of filtered total mercury from Sinclair Inlet creeks was based solely on dry season samples. Concentrations of filtered total mercury in creek samples collected during wet weather were significantly higher than dry weather concentrations, which increased the estimated loading of filtered total mercury from creek basins from 27.1 to 78.1 grams per year. Changes in the concentrations and loading of filtered and particulate total mercury in the effluent of the steam plant were investigated after the water softening process was changed from ion-exchange to reverse osmosis and the discharge of stack blow-down wash began to be diverted to the municipal water-treatment plant. These changes reduced the concentrations of filtered and particulate total mercury from the steam plant of the Bremerton naval complex, which resulted in reduced loadings of filtered total mercury from 5.9 to 0.15 grams per year. Previous investigations identified three fill areas on the Bremerton naval complex, of which the western fill area is thought to be the largest source of mercury on the base. 
    Studies of groundwater in the other two fill areas were conducted under worst-case higher high tidal conditions. A December 2011 study found that concentrations of filtered total mercury in the well in the fill area on the eastern boundary of the Bremerton naval complex were less than or equal to 11 nanograms per liter, indicating that releases from the eastern area were unlikely. In addition, concentrations of total mercury of solids were low (<3 milligrams per kilogram). In contrast, data from the November 2011 study indicated that the concentrations of filtered total mercury in the well located in the central fill area had tidally influenced concentrations of up to 500 nanograms per liter and elevated concentrations of total mercury of solids (29-41 milligrams per kilogram). This suggests that releases from this area, which has not been previously studied in detail, may be substantial. Previous measurements of total mercury of suspended solids in the dry dock discharges revealed high concentrations of total mercury when suspended-solids concentrations were low. However, this result could have been due to bias from sequential sampling during changing suspended-solids concentrations. Sampling of two dry dock systems on the complex in a manner that precluded this bias confirmed that suspended-solids concentrations and total mercury concentrations of suspended solids varied considerably during pumping cycles. These new data result in revised estimates of solids loadings from the dry docks. Although most of the solids discharged by the dry docks seem to be recycled Operable Unit B Marine sediment, a total of about 3.2 metric tons of solids per year containing high concentrations of total mercury were estimated to be discharged by the two dry dock systems.
A simple calculation, in which solids (from dry docks, the steam plant, and tidal flushing of the largest stormwater drain) are widely dispersed throughout Operable Unit B Marine, suggests that Bremerton naval complex solids would likely have little effect on Operable Unit B Marine sediments because of high concentrations of mercury already present in the sediment.

  20. An application of sample entropy to precipitation in Paraíba State, Brazil

    NASA Astrophysics Data System (ADS)

    Xavier, Sílvio Fernando Alves; da Silva Jale, Jader; Stosic, Tatijana; dos Santos, Carlos Antonio Costa; Singh, Vijay P.

    2018-05-01

    A climate system is characterized as a complex non-linear system. In order to describe the complex characteristics of precipitation series in Paraíba State, Brazil, we apply sample entropy, an entropy-based algorithm, to evaluate the complexity of precipitation series. Sixty-nine meteorological stations are distributed over four macroregions: Zona da Mata, Agreste, Borborema, and Sertão. The results of the analysis show that the complexity of monthly average precipitation differs among the macroregions. Sample entropy is able to reflect the dynamic change of precipitation series, providing a new way to investigate the complexity of hydrological series. The complexity exhibits areal variation across local water resource systems, which can inform the utilization and development of water resources in dry areas.
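    Sample entropy SampEn(m, r) is defined as -ln(A/B), where B counts template pairs of length m that match within a tolerance of r times the series standard deviation, and A counts those pairs that still match when extended to length m+1. A minimal pure-Python sketch (the study's station-level preprocessing is not reproduced here):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m that
    match within tolerance r*std (Chebyshev distance); A counts the same
    pairs still matching when extended by one point. Lower = more regular."""
    n = len(series)
    mu = sum(series) / n
    tol = r * (sum((x - mu) ** 2 for x in series) / n) ** 0.5
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(series[i + k] - series[j + k]) for k in range(m)) <= tol:
                B += 1
                if abs(series[i + m] - series[j + m]) <= tol:
                    A += 1
    return -math.log(A / B)
```

    A regular series (e.g. a sampled sinusoid) yields a lower SampEn than an irregular one, which is exactly the property used above to discriminate precipitation regimes.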

  1. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described using the scattering process through an inhomogeneous medium. An inherent nonlinearity, due to multiple scattering, relates the scattering medium to the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT combined with an iterative reconstruction scheme. The iterative error reduction scheme and the multi-layer structure of BPM are similar to neural networks. Therefore, we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data of a biological cell.
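    The iterative error-reduction scheme has the familiar form "propagate a guess through the forward model, compare with the measurement, back-propagate the error". As a hedged illustration only, the sketch below uses a Landweber iteration with a *linear* forward operator A standing in for the nonlinear BPM forward model used in LT:

```python
def landweber(A, y, n_iter=200):
    """Landweber-style iterative error reduction for a linear forward model
    A (list of rows): repeat x <- x + step * A^T (y - A x). The step
    1/(||A||_1 * ||A||_inf) bounds 1/sigma_max^2, guaranteeing convergence."""
    m, n = len(A), len(A[0])
    norm1 = max(sum(abs(A[i][j]) for i in range(m)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    step = 1.0 / (norm1 * norminf)
    x = [0.0] * n
    for _ in range(n_iter):
        # forward model, residual, then back-projection of the error
        residual = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += step * sum(A[i][j] * residual[i] for i in range(m))
    return x
```

    In LT the forward pass is the multi-layer BPM rather than a matrix product, which is what gives the scheme its neural-network flavor; the error-feedback structure is the same.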

  2. Contribution of PCR Denaturing Gradient Gel Electrophoresis Combined with Mixed Chromatogram Software Separation for Complex Urinary Sample Analysis.

    PubMed

    Kotásková, Iva; Mališová, Barbora; Obručová, Hana; Holá, Veronika; Peroutková, Tereza; Růžička, Filip; Freiberger, Tomáš

    2017-01-01

    Complex samples are a challenge for sequencing-based broad-range diagnostics. We analysed 19 urinary catheter, ureteral Double-J catheter, and urine samples using 3 methodological approaches. Out of the total 84 operational taxonomic units, 37, 61, and 88% were identified by culture, PCR-DGGE-SS (PCR denaturing gradient gel electrophoresis followed by Sanger sequencing), and PCR-DGGE-RM (PCR-DGGE combined with software chromatogram separation by RipSeq Mixed tool), respectively. The latter approach was shown to be an efficient tool to complement culture in complex sample assessment. © 2017 S. Karger AG, Basel.

  3. Effects of Spatial Scale on Cognitive Play in Preschool Children.

    ERIC Educational Resources Information Center

    Delong, Alton J.; And Others

    1994-01-01

    Examined effects of a reduced-scale play environment on the temporal aspects of complex play behavior. Children playing with playdough in a 7 x 5 x 5-foot structure began complex play more quickly, played in longer segments, and spent slightly more time in complex play than when in full-size conditions, suggesting that scale-reduced environments…

  4. Sorption of cadmium and lead by clays from municipal incinerator ash-water suspensions

    USGS Publications Warehouse

    Roy, W.R.; Krapac, I.G.; Steele, J.D.

    1993-01-01

    The effect of Cl complexation in extracts of a flue gas-scrubber incinerator fly ash sample on the sorption of Cd and Pb by kaolinite and illite was investigated using batch-sorption methods. In the pH range of 5 to 9, Cl complexation may reduce sorption and thus increase the mobility of these metals. When an ash-water suspension was acidified to pH 6.85, the dissolution of Cl and Ca essentially eliminated Cd sorption because of complexation and cationic competition. Cadmium would be considered as either mobile or very mobile under these conditions. Lead was not soluble in the pH-6.85 suspension. At pH 12, the approximate pH of water in contact with flue gas-scrubber fly ash, Cd was essentially insoluble and Pb occurred as anionic Pb hydroxide. Anionic Pb was sorbed by the two clays, and the extent of sorption was not influenced by Cl or carbonate complexation. Sorption constants, derived from isotherms, suggested that Pb would be relatively immobile in saturated soil-water systems. The recent concern that highly alkaline, flue gas-scrubber fly ash may release environmentally significant concentrations of mobile Pb when placed in an ash-disposal site with a soil liner should be reevaluated in light of this study.

  5. Quantifying complexity in translational research: an integrated approach

    PubMed Central

    Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.

    2014-01-01

    Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
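    The consistency ratio mentioned under "Research limitations/implications" is a standard AHP quantity: CR = CI/RI, where CI = (lambda_max - n)/(n - 1) and RI is Saaty's tabulated random index for an n x n matrix. A minimal sketch (power iteration for the priority weights; the QFD integration is not reproduced):

```python
def ahp_weights_and_cr(M, iters=100):
    """Priority weights via power iteration on a pairwise comparison matrix M,
    plus the consistency ratio CR = CI/RI with CI = (lambda_max - n)/(n - 1)
    and RI = Saaty's random index (tabulated here for n = 3..7)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]                 # renormalize the eigenvector
    # lambda_max estimated from M w = lambda_max * w, averaged over components
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    return w, ci / ri
```

    A perfectly consistent matrix (every entry M[i][j] = w_i/w_j) yields CR = 0; the usual rule of thumb accepts judgments with CR below 0.1.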

  6. Benchmarking Water Quality from Wastewater to Drinking Waters Using Reduced Transcriptome of Human Cells.

    PubMed

    Xia, Pu; Zhang, Xiaowei; Zhang, Hanxin; Wang, Pingping; Tian, Mingming; Yu, Hongxia

    2017-08-15

    One of the major challenges in environmental science is monitoring and assessing the risk of complex environmental mixtures. In vitro bioassays with limited key toxicological end points have been shown to be suitable to evaluate mixtures of organic pollutants in wastewater and recycled water. Omics approaches such as transcriptomics can monitor biological effects at the genome scale. However, few studies have applied omics approaches in the assessment of mixtures of organic micropollutants. Here, an omics approach was developed for profiling bioactivity of 10 water samples ranging from wastewater to drinking water in human cells by a reduced human transcriptome (RHT) approach and dose-response modeling. Transcriptional expression of 1200 selected genes was measured by Ampliseq technology in two cell lines, HepG2 and MCF7, that were exposed to eight serial dilutions of each sample. Concentration-effect models were used to identify differentially expressed genes (DEGs) and to calculate effect concentrations (ECs) of DEGs, which could be ranked to investigate low dose response. Furthermore, molecular pathways disrupted by different samples were evaluated by Gene Ontology (GO) enrichment analysis. RHT profiling of bioactivity in both HepG2 and MCF7 cells was shown to be comparable to the results of previous in vitro bioassays. Finally, the relative potencies of the mixtures indicated by RHT analysis were consistent with the chemical profiles of the samples. RHT analysis with human cells provides an efficient and cost-effective approach to benchmarking mixtures of micropollutants and may offer novel insight into the assessment of mixture toxicity in water.
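    The concentration-effect modeling step can be illustrated with a simple log-logistic fit: the sketch below recovers an EC50 and Hill slope by grid search from dilution-series data. The model form and parameter grids are assumptions for illustration, not the study's actual fitting procedure.

```python
def fit_ec50(concs, effects, top=1.0):
    """Grid-search fit of a log-logistic concentration-effect model
    E(c) = top / (1 + (ec50/c)**hill), minimising squared error.
    Returns (ec50, hill). Grids are coarse and purely illustrative."""
    best = (None, None, float("inf"))
    for tenth in range(-30, 31):              # ec50 from 1e-3 to 1e3
        ec50 = 10.0 ** (tenth / 10.0)
        for quarter in range(2, 17):          # hill slope from 0.5 to 4.0
            hill = quarter / 4.0
            sse = sum((top / (1.0 + (ec50 / c) ** hill) - e) ** 2
                      for c, e in zip(concs, effects))
            if sse < best[2]:
                best = (ec50, hill, sse)
    return best[0], best[1]
```

    Fitting one such curve per gene and ranking the resulting ECs is the kind of analysis that lets low-dose responders be identified across the dilution series.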

  7. Accelerated x-ray scatter projection imaging using multiple continuously moving pencil beams

    NASA Astrophysics Data System (ADS)

    Dydula, Christopher; Belev, George; Johns, Paul C.

    2017-03-01

    Coherent x-ray scatter varies with angle and photon energy in a manner dependent on the chemical composition of the scattering material, even for amorphous materials. Therefore, images generated from scattered photons can have much higher contrast than conventional projection radiographs. We are developing a scatter projection imaging prototype at the BioMedical Imaging and Therapy (BMIT) facility of the Canadian Light Source (CLS) synchrotron in Saskatoon, Canada. The best images are obtained using step-and-shoot scanning with a single pencil beam and area detector to capture sequentially the scatter pattern for each primary beam location on the sample. Primary x-ray transmission is recorded simultaneously using photodiodes. The technological challenge is to acquire the scatter data in a reasonable time. Using multiple pencil beams producing partially-overlapping scatter patterns reduces acquisition time but increases complexity due to the need for a disentangling algorithm to extract the data. Continuous sample motion, rather than step-and-shoot, also reduces acquisition time at the expense of introducing motion blur. With a five-beam (33.2 keV, 3.5 mm2 beam area) continuous sample motion configuration, a rectangular array of 12 x 100 pixels with 1 mm sampling width has been acquired in 0.4 minutes (3000 pixels per minute). The acquisition speed is 38 times the speed for single beam step-and-shoot. A system model has been developed to calculate detected scatter patterns given the material composition of the object to be imaged. Our prototype development, image acquisition of a plastic phantom and modelling are described.

  8. Single Particle-Inductively Coupled Plasma Mass Spectroscopy Analysis of Metallic Nanoparticles in Environmental Samples with Large Dissolved Analyte Fractions.

    PubMed

    Schwertfeger, D M; Velicogna, Jessica R; Jesmer, Alexander H; Scroggins, Richard P; Princz, Juliska I

    2016-10-18

    There is an increasing interest to use single particle-inductively coupled plasma mass spectroscopy (SP-ICPMS) to help quantify exposure to engineered nanoparticles, and their transformation products, released into the environment. Hindering the use of this analytical technique for environmental samples is the presence of high levels of dissolved analyte, which impedes resolution of the particle signal from the dissolved. While sample dilution is often necessary to achieve the low analyte concentrations necessary for SP-ICPMS analysis, and to reduce the occurrence of matrix effects on the analyte signal, it is used here to also reduce the dissolved signal relative to the particulate, while maintaining a matrix chemistry that promotes particle stability. We propose a simple, systematic dilution series approach whereby the first dilution is used to quantify the dissolved analyte, the second is used to optimize the particle signal, and the third is used as an analytical quality control. Using simple suspensions of well characterized Au and Ag nanoparticles spiked with the dissolved analyte form, as well as suspensions of complex environmental media (i.e., extracts from soils previously contaminated with engineered silver nanoparticles), we show how this dilution series technique improves resolution of the particle signal which in turn improves the accuracy of particle counts, quantification of particulate mass and determination of particle size. The technique proposed here is meant to offer a systematic and reproducible approach to the SP-ICPMS analysis of environmental samples and improve the quality and consistency of data generated from this relatively new analytical tool.
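    Resolving the particle signal from the dissolved background in SP-ICPMS time scans is commonly done with an iterative mean-plus-k-sigma cut on the intensity readings; whether the authors used this exact criterion is not stated, so the sketch below is a generic illustration.

```python
def split_particle_events(intensities, k=3.0, max_iter=50):
    """Iteratively separate single-particle events from the dissolved
    background: flag readings above mean + k*std of the remaining
    'dissolved' population, recompute, and repeat until stable."""
    dissolved = list(intensities)
    for _ in range(max_iter):
        n = len(dissolved)
        mu = sum(dissolved) / n
        sd = (sum((x - mu) ** 2 for x in dissolved) / n) ** 0.5
        cut = mu + k * sd
        kept = [x for x in dissolved if x <= cut]
        if len(kept) == len(dissolved):       # no new outliers: converged
            break
        dissolved = kept
    particles = [x for x in intensities if x > cut]
    return particles, cut
```

    Dilution lowers the dissolved mean and standard deviation, which pulls the cut down and makes particle spikes easier to resolve, the effect the dilution-series approach exploits.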

  9. Monte Carlo isotopic inventory analysis for complex nuclear systems

    NASA Astrophysics Data System (ADS)

    Phruksarojanakun, Phiphat

    Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current, deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools provide unique capabilities of MCise to improve statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling aims at increasing frequencies of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the uses of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope (1/R_T^2) and the other on the overall detection limit corrected by the relative error (1/(D_k R_T^2)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both precision and accuracy of a target result in an efficient manner.
Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The results affirm that MCise is a reliable tool for inventory analysis of complex nuclear systems.
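The figure of merit used above to rank variance-reduction techniques, FOM = 1/(R^2 T) with R the relative error of the target result and T the computing time, can be sketched as follows. The toy estimator below is illustrative only and is not part of MCise:

```python
import random

def mc_estimate(n_samples, p_true=0.3, seed=1):
    """Toy Monte Carlo estimate of a reaction probability, returning
    (mean, relative error). Illustrative stand-in, not MCise itself."""
    random.seed(seed)
    hits = [1 if random.random() < p_true else 0 for _ in range(n_samples)]
    mean = sum(hits) / n_samples
    var = sum((h - mean) ** 2 for h in hits) / (n_samples - 1)
    rel_err = (var / n_samples) ** 0.5 / mean   # R = standard error / mean
    return mean, rel_err

def figure_of_merit(rel_err, cpu_time):
    """FOM = 1 / (R^2 * T): roughly constant for a given technique as T
    grows, and larger for more efficient variance-reduction schemes."""
    return 1.0 / (rel_err ** 2 * cpu_time)
```

Since R^2 scales as 1/T in an analog simulation, comparing FOM values ranks techniques by how quickly each reaches a target precision.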

  10. Bayesian Modeling of NMR Data: Quantifying Longitudinal Relaxation in Vivo, and in Vitro with a Tissue-Water-Relaxation Mimic (Crosslinked Bovine Serum Albumin).

    PubMed

    Meinerz, Kelsey; Beeman, Scott C; Duan, Chong; Bretthorst, G Larry; Garbow, Joel R; Ackerman, Joseph J H

    2018-01-01

    Recently, a number of MRI protocols have been reported that seek to exploit the effect of dissolved oxygen (O2, paramagnetic) on the longitudinal 1H relaxation of tissue water, thus providing image contrast related to tissue oxygen content. However, tissue water relaxation is dependent on a number of mechanisms, and this raises the issue of how best to model the relaxation data. This problem, the model selection problem, occurs in many branches of science and is optimally addressed by Bayesian probability theory. High signal-to-noise, densely sampled, longitudinal 1H relaxation data were acquired from rat brain in vivo and from a cross-linked bovine serum albumin (xBSA) phantom, a sample that recapitulates the relaxation characteristics of tissue water in vivo. Bayesian-based model selection was applied to a cohort of five competing relaxation models: (i) monoexponential, (ii) stretched-exponential, (iii) biexponential, (iv) Gaussian (normal) R1-distribution, and (v) gamma R1-distribution. Bayesian joint analysis of multiple replicate datasets revealed that water relaxation of both the xBSA phantom and in vivo rat brain was best described by a biexponential model, while xBSA relaxation datasets truncated to remove evidence of the fast relaxation component were best modeled as a stretched exponential. In all cases, estimated model parameters were compared to the commonly used monoexponential model. Reducing the sampling density of the relaxation data and adding Gaussian-distributed noise served to simulate cases in which the data are acquisition-time or signal-to-noise restricted, respectively. As expected, reducing either the number of data points or the signal-to-noise increases the uncertainty in estimated parameters and, ultimately, reduces support for more complex relaxation models.

  11. Aging and cardiovascular complexity: effect of the length of RR tachograms

    PubMed Central

    Nagaraj, Nithin

    2016-01-01

    As we age, our hearts undergo changes that result in a reduction in complexity of physiological interactions between different control mechanisms. This results in a potential risk of cardiovascular diseases which are the number one cause of death globally. Since cardiac signals are nonstationary and nonlinear in nature, complexity measures are better suited to handle such data. In this study, three complexity measures are used, namely Lempel–Ziv complexity (LZ), Sample Entropy (SampEn) and Effort-To-Compress (ETC). We determined the minimum length of RR tachogram required for characterizing complexity of healthy young and healthy old hearts. All the three measures indicated significantly lower complexity values for older subjects than younger ones. However, the minimum length of heart-beat interval data needed differs for the three measures, with LZ and ETC needing as low as 10 samples, whereas SampEn requires at least 80 samples. Our study indicates that complexity measures such as LZ and ETC are good candidates for the analysis of cardiovascular dynamics since they are able to work with very short RR tachograms. PMID:27957395
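Of the three measures, Lempel–Ziv complexity is the simplest to make concrete. A common LZ78-style phrase-counting variant, applied to an RR series binarized about its median, might look like this (an illustrative sketch, not the authors' implementation):

```python
def binarize(rr):
    """Binarize an RR-interval series about its median, a usual first
    step before estimating Lempel-Ziv complexity."""
    med = sorted(rr)[len(rr) // 2]
    return [1 if x > med else 0 for x in rr]

def lz_complexity(symbols):
    """Count the distinct phrases seen in a left-to-right LZ78-style
    parse; regular series yield few phrases, irregular series many."""
    s = ''.join(map(str, symbols))
    phrases, phrase, count = set(), '', 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ''
    return count + (1 if phrase else 0)
```

A constant tachogram parses into very few phrases, while an irregular (healthy) one parses into many more, which is the property that lets LZ discriminate even on very short windows.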

  12. A Prototype Recombinant-Protein Based Chlamydia pecorum Vaccine Results in Reduced Chlamydial Burden and Less Clinical Disease in Free-Ranging Koalas (Phascolarctos cinereus).

    PubMed

    Waugh, Courtney; Khan, Shahneaz Ali; Carver, Scott; Hanger, Jonathan; Loader, Joanne; Polkinghorne, Adam; Beagley, Kenneth; Timms, Peter

    2016-01-01

    Diseases associated with Chlamydia pecorum infection are a major cause of decline in koala populations in Australia. While koalas in care can generally be treated, a vaccine is considered the only option to effectively reduce the threat of infection and disease at the population level. In the current study, we vaccinated 30 free-ranging koalas with a prototype Chlamydia pecorum vaccine consisting of a recombinant chlamydial MOMP adjuvanted with an immune stimulating complex. An additional cohort of 30 animals did not receive any vaccine and acted as comparison controls. Animals accepted into this study were either uninfected (Chlamydia PCR negative) at time of initial vaccination, or infected (C. pecorum positive) at either urogenital (UGT) and/or ocular sites (Oc), but with no clinical signs of chlamydial disease. All koalas were vaccinated/sampled and then re-released into their natural habitat before re-capturing and re-sampling at 6 and 12 months. All vaccinated koalas produced a strong immune response to the vaccine, as indicated by high titres of specific plasma antibodies. The incidence of new infections in vaccinated koalas over the 12-month period post-vaccination was slightly lower than in the control group; however, the difference was not statistically significant. Importantly though, the vaccine was able to significantly reduce the infectious load in animals that were Chlamydia positive at the time of vaccination. This effect was evident at both the Oc and UGT sites and was stronger at 6 months than at 12 months post-vaccination. Finally, the vaccine was also able to reduce the number of animals that progressed to disease during the 12-month period. While the sample sizes were small (statistically speaking), results were nonetheless striking. This study highlights the potential for successful development of a Chlamydia vaccine for koalas in a wild setting.

  13. Predicting non return to work after orthopaedic trauma: the Wallis Occupational Rehabilitation RisK (WORRK) model.

    PubMed

    Luthi, François; Deriaz, Olivier; Vuistiner, Philippe; Burrus, Cyrille; Hilfiker, Roger

    2014-01-01

    Workers with persistent disabilities after orthopaedic trauma may need occupational rehabilitation. Despite various risk profiles for non-return-to-work (non-RTW), there is no available predictive model. Moreover, injured workers may have various origins (immigrant workers), which may either affect their return to work or their eligibility for research purposes. The aim of this study was to develop and validate a predictive model that estimates the likelihood of non-RTW after occupational rehabilitation using predictors which do not rely on the worker's background. Prospective cohort study (3177 participants, native (51%) and immigrant workers (49%)) with two samples: a) Development sample with patients from 2004 to 2007 with Full and Reduced Models, b) External validation of the Reduced Model with patients from 2008 to March 2010. We collected patients' data and biopsychosocial complexity with an observer-rated interview (INTERMED). Non-RTW was assessed two years after discharge from rehabilitation. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC) and calibration was evaluated with a calibration plot. The model was reduced with random forests. At 2 years, the non-RTW status was known for 2462 patients (77.5% of the total sample). The prevalence of non-RTW was 50%. The full model (36 items) and the reduced model (19 items) had acceptable discrimination performance (AUC 0.75, 95% CI 0.72 to 0.78 and 0.74, 95% CI 0.71 to 0.76, respectively) and good calibration. For the validation model, the discrimination performance was acceptable (AUC 0.73; 95% CI 0.70 to 0.77) and calibration was also adequate. Non-RTW may be predicted with a simple model constructed with variables independent of the patient's education and language fluency. This model is useful for all kinds of trauma in order to adjust for case mix and it is applicable to vulnerable populations like immigrant workers.
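The discrimination statistic reported above, the AUC, has a simple probabilistic reading: the chance that a randomly chosen non-RTW case receives a higher predicted risk than a randomly chosen RTW case. A minimal sketch (the scores below are hypothetical, not from the WORRK cohort):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (positive, negative) pairs ranked correctly, with
    ties counted as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted risks for non-RTW (positive) and RTW cases.
non_rtw = [0.81, 0.65, 0.72, 0.55]
rtw = [0.40, 0.58, 0.35, 0.61]
```

An AUC of 0.5 is chance-level ranking; values in the 0.73-0.75 range, as reported above, indicate acceptable discrimination.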

  14. Predicting Non Return to Work after Orthopaedic Trauma: The Wallis Occupational Rehabilitation RisK (WORRK) Model

    PubMed Central

    Luthi, François; Deriaz, Olivier; Vuistiner, Philippe; Burrus, Cyrille; Hilfiker, Roger

    2014-01-01

    Background Workers with persistent disabilities after orthopaedic trauma may need occupational rehabilitation. Despite various risk profiles for non-return-to-work (non-RTW), there is no available predictive model. Moreover, injured workers may have various origins (immigrant workers), which may either affect their return to work or their eligibility for research purposes. The aim of this study was to develop and validate a predictive model that estimates the likelihood of non-RTW after occupational rehabilitation using predictors which do not rely on the worker’s background. Methods Prospective cohort study (3177 participants, native (51%) and immigrant workers (49%)) with two samples: a) Development sample with patients from 2004 to 2007 with Full and Reduced Models, b) External validation of the Reduced Model with patients from 2008 to March 2010. We collected patients’ data and biopsychosocial complexity with an observer-rated interview (INTERMED). Non-RTW was assessed two years after discharge from rehabilitation. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC) and calibration was evaluated with a calibration plot. The model was reduced with random forests. Results At 2 years, the non-RTW status was known for 2462 patients (77.5% of the total sample). The prevalence of non-RTW was 50%. The full model (36 items) and the reduced model (19 items) had acceptable discrimination performance (AUC 0.75, 95% CI 0.72 to 0.78 and 0.74, 95% CI 0.71 to 0.76, respectively) and good calibration. For the validation model, the discrimination performance was acceptable (AUC 0.73; 95% CI 0.70 to 0.77) and calibration was also adequate. Conclusions Non-RTW may be predicted with a simple model constructed with variables independent of the patient’s education and language fluency. This model is useful for all kinds of trauma in order to adjust for case mix and it is applicable to vulnerable populations like immigrant workers. PMID:24718689

  15. Mercury methylation and sulfate reduction rates in mangrove sediments, Rio de Janeiro, Brazil: The role of different microorganism consortia.

    PubMed

    Correia, Raquel Rose Silva; Guimarães, Jean Remy Davée

    2017-01-01

    Recent studies have shown Hg methylation in mangrove sediments; however, little is known about the different microorganism consortia involved. We investigated the participation of prokaryotes in general, iron-reducing bacteria (IRB), sulfate-reducing bacteria (SRB), methanogens and fungi in Hg methylation and sulfate reduction rates (SRR) in mangrove sediments, using iron amendments for IRB and specific inhibitors for the other microorganisms. Sediment samples were collected from two mangrove zones, tidal flat and mangrove forest (named root sediments). Samples were incubated with 203Hg or 35SO4(2-), and Me203Hg or 35S was measured by liquid scintillation. Methylmercury (MeHg) formation was significantly reduced when SRB (87.7%), prokaryotes (76%) and methanogens (36.5%) were inhibited in root sediments, but only SRB (51.6%) and prokaryotes (57.3%) in the tidal flat. However, in the tidal flat, inhibition of methanogens doubled Hg methylation (104.5%). All inhibitors (except the fungicide) significantly reduced SRR in both zones. In iron-amended tidal flat samples, Hg methylation increased by 56.5% at 100 μg g-1 and decreased at 500 and 1000 μg g-1 (57.8 and 82%). In the root sediments, however, MeHg formation gradually decreased in response to Fe amendments, from 100 μg g-1 (37.7%) to 1000 μg g-1 (93%). SRR decreased in all iron amendments. This first simultaneous evaluation of Hg methylation and sulfate reduction, and of the effect of iron and inhibitors on both processes, suggests that SRB are important Hg methylators in mangrove sediments. However, it also suggests that SRB activity could not explain all MeHg formation. This implies the direct or indirect participation of other microorganisms such as IRB and methanogens and a complex relationship among these groups. Copyright © 2016. Published by Elsevier Ltd.

  16. Novel Insights in the Fecal Egg Count Reduction Test for Monitoring Drug Efficacy against Soil-Transmitted Helminths in Large-Scale Treatment Programs

    PubMed Central

    Levecke, Bruno; Speybroeck, Niko; Dobson, Robert J.; Vercruysse, Jozef; Charlier, Johannes

    2011-01-01

    Background The fecal egg count reduction test (FECRT) is recommended to monitor drug efficacy against soil-transmitted helminths (STHs) in public health. However, the impact of factors inherent to study design (sample size and detection limit of the fecal egg count (FEC) method) and host-parasite interactions (mean baseline FEC and aggregation of FEC across host population) on the reliability of FECRT is poorly understood. Methodology/Principal Findings A simulation study was performed in which FECRT was assessed under varying conditions of the aforementioned factors. Classification trees were built to explore critical values for these factors required to obtain conclusive FECRT results. The outcome of this analysis was subsequently validated on five efficacy trials across Africa, Asia, and Latin America. Unsatisfactory (<85.0%) sensitivity and specificity results to detect reduced efficacy were found if sample sizes were small (<10) or if sample sizes were moderate (10–49) combined with highly aggregated FEC (k<0.25). FECRT remained inconclusive under any evaluated condition for drug efficacies ranging from 87.5% to 92.5% for a reduced-efficacy-threshold of 90% and from 92.5% to 97.5% for a threshold of 95%. The most discriminatory study design required 200 subjects independent of STH status (including subjects who are not excreting eggs). For this sample size, the detection limit of the FEC method and the level of aggregation of the FEC did not affect the interpretation of the FECRT. Only for a threshold of 90%, mean baseline FEC <150 eggs per gram of stool led to a reduced discriminatory power. Conclusions/Significance This study confirms that the interpretation of FECRT is affected by a complex interplay of factors inherent to both study design and host-parasite interactions. The results also highlight that revision of the current World Health Organization guidelines to monitor drug efficacy is indicated. 
We, therefore, propose novel guidelines to support future monitoring programs. PMID:22180801
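The quantity being simulated above reduces to a simple group-level formula, FECR = 100 × (1 − mean(post-treatment FEC) / mean(baseline FEC)), computed from arithmetic means. A minimal sketch (the counts below are hypothetical):

```python
def fecrt(pre_counts, post_counts):
    """Fecal egg count reduction (%) from group arithmetic means:
    FECR = 100 * (1 - mean(post) / mean(pre))."""
    mean_pre = sum(pre_counts) / len(pre_counts)
    mean_post = sum(post_counts) / len(post_counts)
    return 100.0 * (1.0 - mean_post / mean_pre)

# Hypothetical eggs-per-gram counts before and after treatment.
reduction = fecrt([100, 200, 300], [10, 20, 30])
```

Here the reduction is 90%: against a 95% threshold this would flag reduced efficacy, while against a 90% threshold it sits exactly on the boundary, inside the inconclusive zone the simulation study quantifies.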

  17. Real-Time Cellular Exometabolome Analysis with a Microfluidic-Mass Spectrometry Platform

    PubMed Central

    Marasco, Christina C.; Enders, Jeffrey R.; Seale, Kevin T.; McLean, John A.; Wikswo, John P.

    2015-01-01

    To address the challenges of tracking the multitude of signaling molecules and metabolites that is the basis of biological complexity, we describe a strategy to expand the analytical techniques for dynamic systems biology. Using microfluidics, online desalting, and mass spectrometry technologies, we constructed and validated a platform well suited for sampling the cellular microenvironment with high temporal resolution. Our platform achieves success in: automated cellular stimulation and microenvironment control; reduced non-specific adsorption to polydimethylsiloxane due to surface passivation; real-time online sample collection; near real-time sample preparation for salt removal; and real-time online mass spectrometry. When compared against the benchmark of “in-culture” experiments combined with ultraperformance liquid chromatography-electrospray ionization-ion mobility-mass spectrometry (UPLC-ESI-IM-MS), our platform alleviates the volume challenge issues caused by dilution of autocrine and paracrine signaling and dramatically reduces sample preparation and data collection time, while reducing undesirable external influence from various manual methods of manipulating cells and media (e.g., cell centrifugation). To validate this system biologically, we focused on cellular responses of Jurkat T cells to microenvironmental stimuli. Application of these stimuli, in conjunction with the cell’s metabolic processes, results in changes in consumption of nutrients and secretion of biomolecules (collectively, the exometabolome), which enable communication with other cells or tissues and elimination of waste. Naïve and experienced T-cell metabolism of cocaine is used as an exemplary system to confirm the platform’s capability, highlight its potential for metabolite discovery applications, and explore immunological memory of T-cell drug exposure. 
Our platform proved capable of detecting metabolomic variations between naïve and experienced Jurkat T cells and highlighted the dynamics of the exometabolome over time. Upregulation of the cocaine metabolite, benzoylecgonine, was noted in experienced T cells, indicating potential cellular memory of cocaine exposure. These metabolomic distinctions were absent from the analogous, traditional “in-culture” UPLC-ESI-IM-MS experiment, further demonstrating this platform’s capabilities. PMID:25723555

  18. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive, real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance.
PLDC leaves the reduced dataset with the soft accumulative class weights allowing for efficient online updates and as shown in a series of experiments if coupled with Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the comparable compression levels.
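The classifier side of this framework can be illustrated with a weighted Parzen (kernel) density estimate, where accumulated class weights on the condensed points stand in for raw sample counts. A 1-D sketch under Gaussian kernels (illustrative only, not the authors' PLDC/PDC code):

```python
import math

def parzen_density(x, samples, weights, h):
    """Weighted Parzen estimate at x with Gaussian kernels of
    bandwidth h; weights mimic PLDC's accumulated class weights."""
    norm = h * math.sqrt(2.0 * math.pi)
    total = sum(w * math.exp(-0.5 * ((x - s) / h) ** 2) / norm
                for s, w in zip(samples, weights))
    return total / sum(weights)

def classify(x, class_data, h=0.5):
    """Assign x to the class whose weighted density at x is highest."""
    return max(class_data,
               key=lambda c: parzen_density(x, *class_data[c], h))

# Two condensed classes: (merged points, accumulated weights).
classes = {'a': ([0.0, 0.2], [3.0, 2.0]), 'b': ([5.0], [5.0])}
```

Because the condensed points carry weights rather than unit counts, updating the model online amounts to adjusting weights instead of re-storing raw data.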

  19. Application of a multivariate analysis method for non-target screening detection of persistent transformation products during the cork boiling wastewater treatment.

    PubMed

    Ponce-Robles, L; Oller, I; Agüera, A; Trinidad-Lozano, M J; Yuste, F J; Malato, S; Perez-Estrada, L A

    2018-08-15

    Cork boiling wastewater is a very complex mixture of naturally occurring compounds leached and partially oxidized during the boiling cycles. The effluent generated is recalcitrant and could cause a significant environmental impact. Moreover, if this untreated industrial wastewater enters a municipal wastewater treatment plant it could hamper or reduce the efficiency of most activated sludge degradation processes. Despite efforts to treat cork boiling wastewater for reuse, it is still not well known how safe these compounds (original compounds and oxidation by-products) are. The purpose of this work was to apply an HPLC-high resolution mass spectrometry method and subsequent non-target screening using a multivariate analysis method (PCA), to explore relationships between samples (treatments) and spectral features (masses or compounds) that could indicate changes in formation, degradation or polarity during coagulation/flocculation (C/F) and photo-Fenton (PhF). Although most of the signal intensities were reduced along the treatment line, 16 and 4 new peaks were found to form after the C/F and PhF processes, respectively. The use of this non-target approach proved to be an effective strategy to explore, classify and detect transformation products during the treatment of an unknown complex mixture. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Biomass recalcitrance: a multi-scale, multi-factor, and conversion-specific property.

    PubMed

    McCann, Maureen C; Carpita, Nicholas C

    2015-07-01

    Recalcitrance of plant biomass to enzymatic hydrolysis for biofuel production is thought to be a property conferred by lignin or lignin-carbohydrate complexes. However, chemical catalytic and thermochemical conversion pathways, either alone or in combination with biochemical and fermentative pathways, now provide avenues to utilize lignin and to expand the product range beyond ethanol or butanol. To capture all of the carbon in renewable biomass, both lignin-derived aromatics and polysaccharide-derived sugars need to be transformed by catalysts to liquid hydrocarbons and high-value co-products. We offer a new definition of recalcitrance as those features of biomass which disproportionately increase energy requirements in conversion processes, increase the cost and complexity of operations in the biorefinery, and/or reduce the recovery of biomass carbon into desired products. The application of novel processing technologies applied to biomass reveal new determinants of recalcitrance that comprise a broad range of molecular, nanoscale, and macroscale factors. Sampling natural genetic diversity within a species, transgenic approaches, and synthetic biology approaches are all strategies that can be used to select biomass for reduced recalcitrance in various pretreatments and conversion pathways. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  1. Antecedents and consequences of intra-group conflict among nurses.

    PubMed

    Almost, Joan; Doran, Diane M; McGillis Hall, Linda; Spence Laschinger, Heather K

    2010-11-01

    To test a theoretical model linking selected antecedent variables to intra-group conflict among nurses, and subsequently conflict management style, job stress and job satisfaction. A contributing factor to the nursing shortage is job dissatisfaction as a result of conflict among nurses. To develop strategies to reduce conflict, research is needed to understand the causes and outcomes of conflict in nursing work environments. A predictive, non-experimental design was used in a random sample of 277 acute care nurses. Structural equation modelling was used to analyse the hypothesised model. Nurses' core self-evaluations, complexity of care and relationships with managers and nursing colleagues influenced their perceived level of conflict. Conflict management style partially mediated the relationship between conflict and job satisfaction. Job stress had a direct effect on job satisfaction and core self-evaluation had a direct effect on job stress. Conflict and its associated outcomes form a complex process, affected by dispositional, contextual and interpersonal factors. How nurses manage conflict may not prevent the negative effects of conflict; however, learning to manage conflict using collaboration and accommodation may help nurses experience greater job satisfaction. Strategies to manage and reduce conflict include building interactional justice practices and positive interpersonal relationships. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.

  2. A genome-wide association study implicates diacylglycerol kinase eta (DGKH) and several other genes in the etiology of bipolar disorder

    PubMed Central

    Baum, AE; Akula, N; Cabanero, M; Cardona, I; Corona, W; Klemens, B; Schulze, TG; Cichon, S; Rietschel, M; Nöthen, MM; Georgi, A; Schumacher, J; Schwarz, M; Jamra, R Abou; Höfels, S; Propping, P; Satagopan, J; Detera-Wadleigh, SD; Hardy, J; McMahon, FJ

    2008-01-01

    The genetic basis of bipolar disorder has long been thought to be complex, with the potential involvement of multiple genes, but methods to analyze populations with respect to this complexity have only recently become available. We have carried out a genome-wide association study of bipolar disorder by genotyping over 550,000 SNPs in two independent case-control samples of European origin. The initial association screen was performed using pooled DNA; selected SNPs were confirmed by individual genotyping. While DNA pooling reduces power to detect genetic associations, there is a substantial cost savings and gain in efficiency. A total of 88 SNPs representing 80 different genes met the prior criteria for replication in both samples. Effect sizes were modest: no single SNP of large effect was detected. Of 37 SNPs selected for individual genotyping, the strongest association signal was detected at a marker within the first intron of DGKH (p = 1.5 × 10−8, experiment-wide p<0.01, OR= 1.59). This gene encodes diacylglycerol kinase eta, a key protein in the lithium-sensitive phosphatidyl inositol pathway. This first genome-wide association study of bipolar disorder shows that several genes, each of modest effect, reproducibly influence disease risk. Bipolar disorder may be a polygenic disease. PMID:17486107

  3. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate: H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
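The degenerate functional and one of the example processes are easy to make concrete. The sketch below computes H(p) = -∑_i p_i log p_i and simulates a sample-space-reducing (SSR) process, whose visit frequencies approach Zipf's law p_i ∝ 1/i (an illustrative toy, not the paper's calculations):

```python
import math
import random

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i (natural log), the common functional
    form shared by the thermodynamic, Shannon, and Jaynes entropies."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def ssr_visit_counts(n_states, n_restarts, seed=0):
    """Sample-space-reducing process: each jump lands uniformly below
    the current state; on reaching state 1 the process restarts."""
    random.seed(seed)
    counts = [0] * (n_states + 1)
    for _ in range(n_restarts):
        x = random.randint(1, n_states)
        counts[x] += 1
        while x > 1:
            x = random.randint(1, x - 1)   # sample space shrinks each step
            counts[x] += 1
    return counts[1:]                      # counts[i-1] = visits to state i
```

For a uniform distribution H attains its maximum, log n; the normalized SSR visit histogram yields a markedly lower value, reflecting its power-law statistics.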

  4. Antifouling and ultrasensitive biosensing interface based on self-assembled peptide and aptamer on macroporous gold for electrochemical detection of immunoglobulin E in serum.

    PubMed

    Wang, Yu; Cui, Min; Jiao, Mingxia; Luo, Xiliang

    2018-06-25

    Accurate detection of protein biomarkers in complex media remains a challenge due to severe nonspecific adsorption and biofouling, and sensing interfaces that combine high sensitivity with antifouling ability are highly desirable. Herein, an antifouling sensing interface capable of sensitively assaying immunoglobulin E (IgE) in biological samples was constructed. The sensing interface was fabricated through the self-assembly of a zwitterionic peptide and the IgE aptamer onto a macroporous Au substrate, which was electrochemically fabricated with the aid of multilayer polystyrene nanospheres self-assembled on a glassy carbon electrode. Due to the huge surface area arising from the porous morphology and the high specificity of the aptamer, the developed electrochemical biosensor exhibits ultrahigh sensitivity and selectivity towards IgE, with a linear range of 0.1-10 pg mL-1 and a very low limit of detection of 42 fg mL-1. Interestingly, owing to the presence of the zwitterionic peptide, the biosensing interface effectively reduces nonspecific adsorption and fouling. Consequently, the biosensor was successfully applied to detect IgE in complex biological samples, indicating great promise of this peptide-based sensing interface for antifouling assays.

  5. High-Throughput Effect-Directed Analysis Using Downscaled in Vitro Reporter Gene Assays To Identify Endocrine Disruptors in Surface Water

    PubMed Central

    2018-01-01

    Effect-directed analysis (EDA) is a commonly used approach for effect-based identification of endocrine disruptive chemicals in complex (environmental) mixtures. However, for routine toxicity assessment of, for example, water samples, current EDA approaches are considered time-consuming and laborious. We achieved faster EDA and identification by downscaling of sensitive cell-based hormone reporter gene assays and increasing fractionation resolution to allow testing of smaller fractions with reduced complexity. The high-resolution EDA approach is demonstrated by analysis of four environmental passive sampler extracts. Downscaling of the assays to a 384-well format allowed analysis of 64 fractions in triplicate (or 192 fractions without technical replicates) without affecting sensitivity compared to the standard 96-well format. Through a parallel exposure method, agonistic and antagonistic androgen and estrogen receptor activity could be measured in a single experiment following a single fractionation. From 16 selected candidate compounds, identified through nontargeted analysis, 13 could be confirmed chemically and 10 were found to be biologically active, of which the most potent nonsteroidal estrogens were identified as oxybenzone and piperine. The increased fractionation resolution and the higher throughput that downscaling provides allow for future application in routine high-resolution screening of large numbers of samples in order to accelerate identification of (emerging) endocrine disruptors. PMID:29547277

  6. Earthquake sequence simulations with measured properties for JFAST core samples

    NASA Astrophysics Data System (ADS)

    Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro

    2017-08-01

    Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a-b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.
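    The quadratic generalization described above can be sketched as follows. The functional form and all coefficient values here are illustrative assumptions, not the fits reported in the paper; the essential point is only that the effective a and a - b values become slip-rate dependent.

```python
import math

# Illustrative coefficients only; the actual JFAST fits differ.
MU0 = 0.6               # reference friction coefficient
V0 = 1e-6               # reference slip rate, m/s
A1, A2 = 0.010, 0.002   # direct-effect terms (linear, quadratic in log V)
B1, B2 = 0.012, 0.001   # state-evolution terms (linear, quadratic in log V)

def steady_state_friction(v):
    """Steady-state friction for a rate-and-state law generalized to a
    quadratic form in x = ln(v / V0):
        mu_ss(v) = MU0 + (A1 - B1) * x + (A2 - B2) * x**2
    """
    x = math.log(v / V0)
    return MU0 + (A1 - B1) * x + (A2 - B2) * x ** 2

def effective_a_minus_b(v):
    """d(mu_ss)/d(ln v) = (A1 - B1) + 2 * (A2 - B2) * ln(v / V0).
    The effective (a - b) changes with slip rate, so a single fault patch
    can be velocity-weakening at slow rates and strengthening at fast ones."""
    x = math.log(v / V0)
    return (A1 - B1) + 2 * (A2 - B2) * x
```

    With these invented numbers the sign of the effective (a - b) flips at x = 1, i.e. at v = e * V0; in the classical linear law that sign is fixed, which is why the quadratic term is needed to express the complex rate dependences seen in the JFAST experiments.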

  7. The development of a short domain-general measure of working memory capacity.

    PubMed

    Oswald, Frederick L; McAbee, Samuel T; Redick, Thomas S; Hambrick, David Z

    2015-12-01

    Working memory capacity is one of the most frequently measured individual difference constructs in cognitive psychology and related fields. However, implementation of complex span and other working memory measures is generally time-consuming for administrators and examinees alike. Because researchers often must manage the tension between limited testing time and measuring numerous constructs reliably, a short and effective measure of working memory capacity would often be a major practical benefit in future research efforts. The current study developed a shortened computerized domain-general measure of working memory capacity by representatively sampling items from three existing complex working memory span tasks: operation span, reading span, and symmetry span. Using a large archival data set (Study 1, N = 4,845), we developed and applied a principled strategy for developing the reduced measure, based on testing a series of confirmatory factor analysis models. Adequate fit indices from these models lent support to this strategy. The resulting shortened measure was then administered to a second independent sample (Study 2, N = 172), demonstrating that the new measure saves roughly 15 min (30%) of testing time on average, and even up to 25 min depending on the test-taker. On the basis of these initial promising findings, several directions for future research are discussed.

  8. A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare.

    PubMed

    Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan

    2015-01-01

    The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid for improving the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as the video capture/compression, the acquisition/compression of each physiological signal, and the video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and reduces the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signal samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.

  9. Incidence, Molecular Characteristics and Pathogenicity of Gibberella fujikuroi Species Complex Associated with Rice Seeds from Asian Countries

    PubMed Central

    Jeon, Young-Ah; Yu, Seung-Hun; Lee, Young Yi; Park, Hong-Jae; Lee, Sokyoung; Sung, Jung Sook; Kim, Yeon-Gyu

    2013-01-01

    Gibberella fujikuroi species complex (GFSC) was isolated from rice (Oryza sativa L.) seed samples from ten Asian countries and investigated for incidence of GFSC, molecular characteristics, and pathogenicity. Regardless of geographic origin, GFSC was detected with incidences ranging from 3% to 80%. Four species, Fusarium fujikuroi, F. concentricum, F. proliferatum, and F. verticillioides, were found to show an association with rice seeds, with F. fujikuroi being the predominant species. In phylogenetic analyses of DNA sequences, no relationship was found between species, isolates, and geographic sources of samples. Unidentified fragments of the β-tubulin gene were observed in ten isolates of F. fujikuroi and F. verticillioides. With the exception of three isolates of F. fujikuroi, F. fujikuroi, F. proliferatum, and F. verticillioides were found to have FUM1 (the fumonisin biosynthetic gene); however, FUM1 was not found in isolates of F. concentricum. Results of pathogenicity testing showed that all isolates caused reduced germination of rice seed. In addition, F. fujikuroi and F. concentricum caused typical symptoms of bakanae, leaf elongation and chlorosis, whereas F. proliferatum and F. verticillioides only caused stunting of seedlings. These findings provide insight into the characteristics of GFSC associated with rice seeds and might be helpful in development of strategies for management of bakanae. PMID:24493944

  10. Earthquake sequence simulations with measured properties for JFAST core samples.

    PubMed

    Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro

    2017-09-28

    Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a - b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'. © 2017 The Author(s).

  11. Metal–organic complexation in the marine environment

    PubMed Central

    Luther, George W; Rozan, Timothy F; Witter, Amy; Lewis, Brent

    2001-01-01

    We discuss the voltammetric methods that are used to assess metal–organic complexation in seawater. These consist of titration methods using anodic stripping voltammetry (ASV) and cathodic stripping voltammetry competitive ligand experiments (CSV-CLE). These approaches and a kinetic approach using CSV-CLE give similar information on the amount of excess ligand to metal in a sample and the conditional metal ligand stability constant for the excess ligand bound to the metal. CSV-CLE data using different ligands to measure Fe(III) organic complexes are similar. All these methods give conditional stability constants for which the side reaction coefficient for the metal can be corrected but not that for the ligand. Another approach, pseudovoltammetry, provides information on the actual metal–ligand complex(es) in a sample by doing ASV experiments where the deposition potential is varied more negatively in order to destroy the metal–ligand complex. This latter approach gives concentration information on each actual ligand bound to the metal as well as the thermodynamic stability constant of each complex in solution when compared to known metal–ligand complexes. In this case the side reaction coefficients for the metal and ligand are corrected. Thus, this method may not give identical information to the titration methods because the excess ligand in the sample may not be identical to some of the actual ligands binding the metal in the sample. PMID:16759421

  12. Convenient, inexpensive quantification of elemental sulfur by simultaneous in situ reduction and colorimetric detection.

    PubMed

    Kwasniewski, Misha T; Allison, Rachel B; Wilcox, Wayne F; Sacks, Gavin L

    2011-10-03

    Rapid, inexpensive, and convenient methods for quantifying elemental sulfur (S(0)) with low or sub-μg g(-1) limits of detection would be useful for a range of applications where S(0) can act as a precursor for noxious off-aromas, e.g., S(0) in pesticide residues on winegrapes or as a contaminant in drywall. However, existing quantification methods rely on toxic reagents, expensive and cumbersome equipment, or demonstrate poor selectivity. We have developed and optimized an inexpensive, rapid method (∼15 min per sample) for quantifying S(0) in complex matrices. Following dispersion of the sample in PEG-400 and buffering, S(0) is quantitatively reduced to H(2)S in situ by dithiothreitol and simultaneously quantified by commercially available colorimetric H(2)S detection tubes. By employing multiple tubes, the method demonstrated linearity from 0.03 to 100 μg S(0) g(-1) for a 5 g sample (R(2)=0.994, mean CV=6.4%), and the methodological detection limit was 0.01 μg S(0) g(-1). Interferences from sulfite or sulfate were not observed. Mean recovery of an S(0) containing sulfur fungicide in grape macerate was 84.7% with a mean CV of 10.4%. Mean recovery of S(0) in a colloidal sulfur preparation from a drywall matrix was 106.6% with a mean CV of 6.9%. Comparable methodological detection limits, sensitivity, and recoveries were achieved in grape juice, grape macerate and with 1 g drywall samples, indicating that the methodology should be robust across a range of complex matrices. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Spatial complexity reduces interaction strengths in the meta-food web of a river floodplain mosaic

    USGS Publications Warehouse

    Bellmore, James Ryan; Baxter, Colden Vance; Connolly, Patrick J.

    2015-01-01

    Theory states that both the spatial complexity of landscapes and the strength of interactions between consumers and their resources are important for maintaining biodiversity and the 'balance of nature.' Spatial complexity is hypothesized to promote biodiversity by reducing potential for competitive exclusion; whereas, models show weak trophic interactions can enhance stability and maintain biodiversity by dampening destabilizing oscillations associated with strong interactions. Here we show that spatial complexity can reduce the strength of consumer-resource interactions in natural food webs. By sequentially aggregating food webs of individual aquatic habitat patches across a floodplain mosaic, we found that increasing spatial complexity resulted in decreases in the strength of interactions between predators and prey, owing to a greater proportion of weak interactions and a reduced proportion of strong interactions in the meta-food web. The main mechanism behind this pattern was that some patches provided predation refugia for species which were often strongly preyed upon in other patches. If weak trophic interactions do indeed promote stability, then our findings may signal an additional mechanism by which complexity and stability are linked in nature. In turn, this may have implications for how the values of landscape complexity, and the costs of biophysical homogenization, are assessed.

  14. Determination of the total concentration and speciation of metal ions in river, estuarine and seawater samples.

    PubMed

    Alberti, Giancarla; Biesuz, Raffaela; Pesavento, Maria

    2008-12-01

    Different natural water samples were investigated to determine the total concentration and the distribution of species for Cu(II), Pb(II), Al(III) and U(VI). The proposed method, named resin titration (RT), was developed in our laboratory to investigate the distribution of species for metal ions in complex matrices. It is a competition method, in which a complexing resin competes with natural ligands present in the sample to combine with the metal ions. In the present paper, river, estuarine and seawater samples, collected during a cruise in the Adriatic Sea, were investigated. For each sample, two RTs were performed, using different complexing resins: the iminodiacetic Chelex 100 and the carboxylic Amberlite CG50. In this way, it was possible to detect different classes of ligands. Satisfactory results have been obtained and are commented on critically. They were summarized by principal component analysis (PCA), and the correlations with physicochemical parameters allowed one to follow the evolution of the metals along the considered transect. It should be pointed out that, according to our findings, the ligands responsible for metal ion complexation are not the major components of the water system, since they form considerably weaker complexes.

  15. Mass spectrometer with electron source for reducing space charge effects in sample beam

    DOEpatents

    Houk, Robert S.; Praphairaksit, Narong

    2003-10-14

    A mass spectrometer includes an ion source which generates a beam including positive ions, a sampling interface which extracts a portion of the beam from the ion source to form a sample beam that travels along a path and has an excess of positive ions over at least part of the path, thereby causing space charge effects to occur in the sample beam due to the excess of positive ions in the sample beam, an electron source which adds electrons to the sample beam to reduce space charge repulsion between the positive ions in the sample beam, thereby reducing the space charge effects in the sample beam and producing a sample beam having reduced space charge effects, and a mass analyzer which analyzes the sample beam having reduced space charge effects.

  16. Revealing hidden clonal complexity in Mycobacterium tuberculosis infection by qualitative and quantitative improvement of sampling.

    PubMed

    Pérez-Lago, L; Palacios, J J; Herranz, M; Ruiz Serrano, M J; Bouza, E; García-de-Viedma, D

    2015-02-01

    The analysis of microevolution events, their functional relevance, and their impact on molecular epidemiology strategies constitutes one of the most challenging aspects of the study of clonal complexity in infection by Mycobacterium tuberculosis. In this study, we retrospectively evaluated whether two improved sampling schemes could provide access to the clonal complexity that is undetected by the current standard (analysis of one isolate from one sputum sample). We evaluated in 48 patients the analysis by mycobacterial interspersed repetitive unit-variable number tandem repeat of M. tuberculosis isolates cultured from bronchial aspirate (BAS) or bronchoalveolar lavage (BAL) and, in another 16 cases, the analysis of a higher number of isolates from independent sputum samples. Analysis of the isolates from BAS/BAL specimens revealed clonal complexity in a high proportion of cases (5/48); in most of these cases, complexity was not detected when the isolates from sputum samples were analysed. Systematic analysis of isolates from multiple sputum samples also improved the detection of clonal complexity. We found coexisting clonal variants in two of 16 cases that would have gone undetected in the analysis of the isolate from a single sputum specimen. Our results suggest that analysis of isolates from BAS/BAL specimens is highly efficient for recording the true clonal composition of M. tuberculosis in the lungs. When these samples are not available, we recommend increasing the number of isolates from independent sputum specimens, because they might not harbour the same pool of bacteria. Our data suggest that the degree of clonal complexity in tuberculosis has been underestimated because of the deficiencies inherent in a simplified procedure. Copyright © 2014 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  17. Evaluation of counting methods for oceanic radium-228

    NASA Astrophysics Data System (ADS)

    Orr, James C.

    1988-07-01

    Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting methods, conventional 228Ra counting methods have been compared with some promising techniques currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (~12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. After allowing for ingrowth and then counting the 224Ra great-granddaughter, 228Ra could be back-calculated, thereby yielding a method with high efficiency in which no sample processing is required. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
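    The counter comparison above can be reproduced from the quoted efficiency and background figures. The abstract does not state which figure-of-merit definition it uses; the sketch below assumes the conventional counting choice, E²/B, and takes the α-spectrometry efficiency at the midpoint of its quoted 3-9% range.

```python
def figure_of_merit(efficiency, background_cpm):
    """Conventional counting figure of merit, E**2 / B (assumed definition):
    halving the background is worth about as much as a 1.4x efficiency gain."""
    return efficiency ** 2 / background_cpm

# (efficiency, background in cpm) pairs quoted in the abstract.
counters = {
    "alpha spectrometry": (0.06, 0.0015),      # 3-9% range, midpoint assumed
    "gamma spectrometry": (0.048, 0.16),
    "beta-gamma coincidence": (0.053, 0.0054),
}

fom = {name: figure_of_merit(e, b) for name, (e, b) in counters.items()}
```

    Under this assumed metric, α spectrometry's very low background dominates despite its processing burden, and β-γ coincidence comfortably beats γ spectrometry, consistent with the cost and background comparison in the abstract.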

  18. New spectrophotometric methods for the determinations of hydrogen sulfide present in the samples of lake water, industrial effluents, tender coconut, sugarcane juice and egg

    NASA Astrophysics Data System (ADS)

    Shyla, B.; Nagendrappa, G.

    2012-10-01

    The new methods work on the principle that iron(III) is reduced to iron(II) by hydrogen sulfide together with catechol and p-toluidine (system 1), or by hydrogen sulfide alone (system 2), in acidic medium, followed by the reduced iron forming a complex with 1,10-phenanthroline (λmax 510 nm). The other two methods are based on redox reactions between electrolytically generated manganese(III) sulfate, taken in excess, and hydrogen sulfide, followed by the unreacted oxidant oxidizing diphenylamine (λmax 570 nm, system 3) or barium diphenylamine sulphonate (λmax 540 nm, system 4). The increase/decrease in the color intensity of the dye products of systems 1 and 2, or 3 and 4, is proportional to the concentration of hydrogen sulfide, with quantification ranges of 0.035-1.40 μg ml-1/0.14-1.40 μg ml-1.

  19. Indoor allergens, environmental avoidance, and allergic respiratory disease.

    PubMed

    Bush, Robert K

    2008-01-01

    Indoor allergen exposure to sources such as house-dust mites, pets, fungi, and insects plays a significant role in patients with allergic rhinitis and asthma. The identification of the major allergens has led to methods that can quantitate exposure, e.g., immunoassays for Der p 1 in settled dust samples. Sensitization and the development of allergic respiratory disease result from complex genetic and environmental interactions. New paradigms that examine the role of other environmental factors, including exposure to proteases that can activate eosinophils and initiate Th2 responses, and epigenetics, are being explored. Recommendations for specific environmental allergen avoidance measures are discussed for house-dust mites, cockroaches, animal dander, and fungi. Specific measures to reduce indoor allergen exposure when vigorously applied may reduce the risk of sensitization and symptoms of allergic respiratory disease, although further research will be necessary to establish cost-effective approaches.

  20. Method and apparatus for in-situ characterization of energy storage and energy conversion devices

    DOEpatents

    Christophersen, Jon P [Idaho Falls, ID; Motloch, Chester G [Idaho Falls, ID; Morrison, John L [Butte, MT; Albrecht, Weston [Layton, UT

    2010-03-09

    Disclosed are methods and apparatuses for determining an impedance of an energy-output device using a random noise stimulus applied to the energy-output device. A random noise signal is generated and converted to a random noise stimulus as a current source correlated to the random noise signal. A bias-reduced response of the energy-output device to the random noise stimulus is generated by comparing a voltage at the energy-output device terminal to an average voltage signal. The random noise stimulus and bias-reduced response may be periodically sampled to generate a time-varying current stimulus and a time-varying voltage response, which may be correlated to generate an autocorrelated stimulus, an autocorrelated response, and a cross-correlated response. Finally, the autocorrelated stimulus, the autocorrelated response, and the cross-correlated response may be combined to determine at least one of impedance amplitude, impedance phase, and complex impedance.
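    The final combination step can be sketched numerically. This is a minimal frequency-domain version under simplifying assumptions: a noiseless, purely resistive device stands in for the energy-storage cell so the answer is known, and FFT-based spectra replace the patent's time-domain correlators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
i_stim = rng.standard_normal(n)   # sampled random-noise current stimulus
R = 2.0                           # assumed purely resistive device under test
v_resp = R * i_stim               # bias-reduced voltage response (noiseless)

# Correlate in the frequency domain: the cross-spectrum of response and
# stimulus divided by the stimulus auto-spectrum gives complex impedance.
I = np.fft.rfft(i_stim)
V = np.fft.rfft(v_resp)
S_ii = I * np.conj(I)             # autocorrelated stimulus (power spectrum)
S_vi = V * np.conj(I)             # cross-correlated response
Z = S_vi / S_ii                   # complex impedance, one value per bin

amplitude, phase = np.abs(Z), np.angle(Z)
```

    For the assumed resistor the recovered amplitude is flat at R with zero phase at every frequency; with a real cell, averaging over many noise records would be needed to suppress measurement noise in the same ratio.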

  1. Efficient high-quality volume rendering of SPH data.

    PubMed

    Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger

    2010-01-01

    High-quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid effectively reduces the number of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.

  2. Recent developments and future trends in solid phase microextraction techniques towards green analytical chemistry.

    PubMed

    Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek

    2013-12-20

    Solid phase microextraction techniques find increasing applications in the sample preparation step before chromatographic determination of analytes in samples with a complex composition. These techniques allow for integrating several operations, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument, and the isolation of analytes from the sample matrix. In this work, information about novel methodological and instrumental solutions for different variants of solid phase extraction techniques (solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE), and magnetic solid phase extraction (MSPE)) is presented, including practical applications of these techniques and a critical discussion of their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention was paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia, polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during the sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Development and validation of a multiplex reaction analyzing eight miniSTRs of the X chromosome for identity and kinship testing with degraded DNA.

    PubMed

    Castañeda, María; Odriozola, Adrián; Gómez, Javier; Zarrabeitia, María T

    2013-07-01

    We report the development of an effective system for analyzing X chromosome-linked mini short tandem repeat loci with reduced-size amplicons (less than 220 bp), useful for analyzing highly degraded DNA samples. To generate smaller amplicons, we redesigned primers for eight X-linked microsatellites (DXS7132, DXS10079, DXS10074, DXS10075, DXS6801, DXS6809, DXS6789, and DXS6799) and established efficient conditions for a multiplex PCR system (miniX). The validation tests confirmed that it has good sensitivity, requiring as little as 20 pg of DNA, and performs well with DNA from paraffin-embedded tissues, thus showing potential for improved analysis and identification of highly degraded and/or very limited DNA samples. Consequently, this system may help to solve complex forensic cases, particularly when autosomal markers convey insufficient information.

  4. Western blotting using capillary electrophoresis.

    PubMed

    Anderson, Gwendolyn J; M Cipolla, Cynthia; Kennedy, Robert T

    2011-02-15

    A microscale Western blotting system based on separating sodium-dodecyl sulfate protein complexes by capillary gel electrophoresis followed by deposition onto a blotting membrane for immunoassay is described. In the system, the separation capillary is grounded through a sheath capillary to a mobile X-Y translation stage which moves a blotting membrane past the capillary outlet for protein deposition. The blotting membrane is moistened with a methanol and buffer mixture to facilitate protein adsorption. Although discrete protein zones could be detected, bands were broadened by ∼1.7-fold by transfer to membrane. A complete Western blot for lysozyme was completed in about one hour with 50 pg mass detection limit from low microgram per milliliter samples. These results demonstrate substantial reduction in time requirements and improvement in mass sensitivity compared to conventional Western blots. Western blotting using capillary electrophoresis shows promise to analyze low volume samples with reduced reagents and time, while retaining the information content of a typical Western blot.

  5. Development of force-detected THz-ESR measurement system and its application to metal porphyrin complexes

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideyuki; Okamoto, Tsubasa; Ohmichi, Eiji; Ohta, Hitoshi

    Electron spin resonance spectroscopy in the terahertz region (THz-ESR) is a promising technique to study biological materials such as metalloproteins because it directly probes the metal ion sites that play an important role in the emergence of functionality. By combining THz-ESR with force detection, the required sample mass is reduced to the order of nanograms. This feature is of great advantage because the sample preparation process for biological materials is time-consuming. We developed a force-detected THz-ESR system utilizing optical interferometry for precise measurement of cantilever displacement. In order to suppress sensitivity fluctuations and instability of the cantilever dynamics under high magnetic field, the tuning of the interferometer is feedback-controlled during a measurement. By using this system, we successfully observed the ESR signal of hemin, a model substance of hemoglobin and myoglobin, in the THz region.

  6. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    To balance computational precision and efficiency in the robust optimization of complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate a system-level approximation model relating the overall parameters to the blade-tip clearance; a set of samples of design parameters and objective response mean and/or standard deviation is then generated using this system approximation model together with design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The results demonstrate that the proposed method dramatically reduces the computational cost while maintaining computational precision. The presented research offers an effective way to carry out robust optimization design of turbine blade-tip radial running clearance.

  7. Comparative analysis of toxin detection in biological and environmental samples

    NASA Astrophysics Data System (ADS)

    Ogert, Robert A.; Burans, James; O'Brien, Tom; Ligler, Frances S.

    1994-03-01

    The basic recognition schemes underlying standard enzyme-linked immunosorbent assay (ELISA) and radioimmunoassay (RIA) protocols are increasingly being adapted for use with new detection devices. A direct comparison was made between a fiber optic biosensor that employs evanescent wave detection and an ELISA using avidin-biotin. The assays were developed for the detection of Ricinus communis agglutinin II, also known as ricin or RCA60. Detection limits of the two methods were comparable for ricin in phosphate buffered saline (PBS); however, results in complex samples differed slightly. In PBS, sensitivity for ricin was 1 ng/ml using the fiber optic device and 500 pg/ml using the ELISA. The fiber optic sensor could not detect ricin directly in urine or serum spiked with 5 ng/ml ricin, whereas the ELISA still showed detection, but at reduced levels relative to the PBS control.

  8. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover a signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. To date, work on CS has typically assumed the projection matrix to be random. In this paper, a method is proposed to optimize the projection matrix by minimizing the mutual coherence. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the problem cannot be solved exactly owing to its complexity, an alternating-minimization method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which benefits both basis pursuit and orthogonal matching pursuit.
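    The quantity the paper minimizes, the mutual coherence of the projection/sparsifying pair, is easy to compute directly. Below is a minimal sketch (not the paper's ETF-based optimization; the function name and the random-Gaussian example are our own illustrative choices), assuming NumPy is available:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Mutual coherence of the effective dictionary D = Phi @ Psi:
    largest absolute normalized inner product between distinct columns."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)   # normalize each column
    G = np.abs(D.T @ D)                 # column-correlation (Gram) matrix
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 64))     # random Gaussian projection, the usual CS choice
Psi = np.eye(64)                        # canonical sparsifying basis
mu = mutual_coherence(Phi, Psi)
print(f"coherence of random projection: {mu:.3f}")  # strictly between 0 and 1
```

    Optimizing the projection matrix amounts to reshaping `Phi` so that this scalar shrinks toward the ETF lower bound, rather than accepting whatever a random draw gives.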

  9. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary-parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
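    To see why MCMC is the workhorse here, consider the simplest possible ERGM, whose only statistic is the edge count: P(G) ∝ exp(θ·#edges). The sketch below is a plain Metropolis edge-toggle sampler for that toy model (it is not the authors' auxiliary-parameter algorithm; the function name is hypothetical). With θ = 0 the model is uniform over graphs, so the long-run edge density should hover near 0.5:

```python
import math
import random

def ergm_edge_mcmc(n_nodes, theta, n_steps, seed=0):
    """Metropolis sampler for the edge-count-only ERGM, P(G) ∝ exp(theta * #edges).
    Each step proposes toggling one dyad and applies the Metropolis rule."""
    rng = random.Random(seed)
    dyads = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    edges = set()
    density_trace = []
    for _ in range(n_steps):
        d = rng.choice(dyads)
        delta = -1 if d in edges else 1                  # proposed change in edge count
        if rng.random() < math.exp(min(0.0, theta * delta)):
            edges.symmetric_difference_update({d})       # accept: toggle the dyad
        density_trace.append(len(edges) / len(dyads))
    return density_trace

trace = ergm_edge_mcmc(n_nodes=10, theta=0.0, n_steps=20000)
mean_density = sum(trace[5000:]) / len(trace[5000:])
print(f"mean density under theta=0: {mean_density:.3f}")
```

    Realistic ERGMs add statistics such as triangle counts, which makes the chain mix far more slowly; restricting the set of allowed network states, as the paper proposes, is one way to recover tractability.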

  10. Letter Report: Stable Hydrogen and Oxygen Isotope Analysis of B-Complex Groundwater Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Brady D.; Moran, James J.; Nims, Megan K.

    Report summarizing stable oxygen and hydrogen isotope analysis of two groundwater samples from the B-Complex. Results from analyses were compared to perched water and pore water analyses performed previously.

  11. Dry-season ultraviolet radiation primes litter for wet season decomposition in a Mediterranean grassland

    NASA Astrophysics Data System (ADS)

    Baker, N. R.; Allison, S. D.

    2013-12-01

    Traditional decomposition models developed in mesic ecosystems consistently underestimate rates of decomposition in more arid ecosystems such as deserts and Mediterranean grasslands. Photodegradation of plant litter by ultraviolet radiation (UV) is hypothesized to be one of the mechanisms accounting for the greater-than-expected rates of decomposition observed in these ecosystems. Putatively, photodegradation preferentially degrades complex aromatic compounds in litter such as lignin, whose decomposition is considered a rate-limiting step in the microbial decomposition of plant litter. This study tested the effects of attenuated ultraviolet radiation on the decomposition of two litter types over the course of a year in a Southern California Mediterranean grassland. The two litter types differed primarily in lignin content, to test for a differential effect of UV on high-lignin versus low-lignin litter. Rates of litter mass loss, changes in litter chemistry, and changes in microbial activity and microbial biomass were observed, and assays of extracellular enzymes were conducted at five points through the year, beginning during the dry season and continuing until the end of the following dry season. Litter exposed to attenuated ultraviolet radiation during the dry season had lower rates of mass loss than litter exposed to ambient radiation (6.1% vs. 8.6%, respectively; p < 0.04). Extracellular enzyme activities were significantly affected by UV attenuation: low-lignin samples exposed to attenuated UV displayed elevated potential cellulase activity during the wet season, while high-lignin samples displayed decreased potential oxidative enzyme activity during the wet season. For example, potential activity of the cellulase cellobiohydrolase in low-lignin, ambient-UV samples was 5286 μmol h⁻¹ g⁻¹ during the wet season, compared to 7969 μmol h⁻¹ g⁻¹ in attenuated-UV samples (p < 0.003). Conversely, potential activity of the oxidative enzyme peroxidase in high-lignin, ambient-UV samples was 85.9 μmol h⁻¹ g⁻¹ during the wet season, compared to 44.1 μmol h⁻¹ g⁻¹ in attenuated-UV samples (p < 0.028). The increased potential cellulase activity under attenuated UV may indicate that dry-season photodegradation primes low-lignin litter for wet-season decomposition, reducing the selective pressure for microbial decomposers to invest in costly extracellular enzyme production. Similarly, the reduced potential oxidative enzyme activity in high-lignin samples exposed to attenuated UV may indicate that photodegradation is necessary to facilitate the microbial breakdown of more complex compounds such as lignin. We conclude that abiotic factors such as photodegradation can have a significant effect on the mechanisms of plant matter decomposition in semiarid ecosystems, and that these effects are not restricted to the dry season but may also facilitate wet-season decomposition.

  12. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights appear in physics problems that are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations; this is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold that reproduces the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Browning, Nigel D.

    Traditionally, microscopists have worked with the Nyquist-Shannon theory of sampling, which states that to be able to reconstruct the image fully it needs to be sampled at a rate of at least twice the highest spatial frequency in the image. This sampling rate assumes that the image is sampled at regular intervals and that every pixel contains information that is crucial for the image (it even assumes that noise is important). Images in general, and especially low-dose S/TEM images, contain significantly less information than can be encoded by a grid of pixels (which is why image compression works). Mathematically speaking, the image data has a low-dimensional or sparse representation. Through the application of compressive sensing methods [1,2,3] this representation can be found using pre-designed measurements that are usually random for implementation simplicity. These measurements and the compressive sensing reconstruction algorithms have the added benefit of reducing noise. This reconstruction approach can be extended into higher dimensions, whereby the random sampling in each 2-D image can be extended into: a sequence of tomographic projections (i.e. tilt images); a sequence of video frames (i.e. incorporating temporal resolution and dynamics); spectral resolution (i.e. energy filtering an image to see the distribution of elements); and ptychography (i.e. sampling a full diffraction image at each location in a 2-D grid across the sample). This approach has been employed experimentally for materials science samples requiring low-dose imaging [2], and can be readily applied to biological samples. Figure 1 shows the resolution possible in a complex biological system, mouse pancreatic islet beta cells [4], when tomogram slices are reconstructed using subsampling. Reducing the number of pixels (1/6 pix and 1/3*1/3) shows minimal degradation compared to the reconstructions using all pixels (all data and 1/3 tilt).
Subsampling 1/6 of the tilts (1/6 of the overall dose), however, degrades the reconstruction to the point that the cellular structures cannot be identified. Using 1/3 of both the pixels and the tilts provides a high-quality image at 1/9 the overall dose, even for this most basic and rapid demonstration of the CS methods. Figure 2 demonstrates the theoretical tomogram reconstruction quality (vertical axis) as undersampling (horizontal axis) is increased; we examined subsampling pixels and tilt angles individually and a combined approach in which both pixels and tilts are subsampled. Note that subsampling pixels maintains high-quality reconstructions (solid lines). Using the inpainting algorithm to obtain tomograms can automatically reduce the dose applied to the system by an order of magnitude. Perhaps the best way to understand the impact is to consider that by using inpainting (and with minimal hardware changes), a sample that can normally withstand a dose of ~10 e/Å² can potentially be imaged with an "equivalent quality" to a dose level of 10³ e/Å². To put this in perspective, this is approaching the dose level used for the most advanced images, in terms of spatial resolution, for inorganic systems. While there are issues for biological specimens beyond dose (structural complexity being the most important one), this sampling approach allows the methods that are traditionally used for materials science to be applied to biological systems [5]. References: [1] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), pp. 41. [2] L Kovarik, A Stevens, A Liyu et al. Appl. Phys. Lett. 109, 164102 (2016). [3] A Stevens, L Kovarik, P Abellan et al. Adv. Structural and Chemical Imaging 1(10), (2015), pp. 1. [4] MD Guay, W Czaja, MA Aronova et al. Scientific Reports 6, 27614 (2016). [5] Supported by the Chemical Imaging, Signature Discovery, and Analytics in Motion Initiatives at PNNL. PNNL is operated by Battelle Memorial Inst. for the US DOE; contract DE-AC05-76RL01830.
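    The dose accounting described above (keeping 1/3 of the tilts and 1/3 of the pixels within each kept tilt, giving 1/9 of the full dose) can be sketched with a toy acquisition mask. This is an illustrative sketch only, not the authors' acquisition or inpainting code; the function names are hypothetical:

```python
import random

def subsampling_mask(n_pixels, n_tilts, pixel_frac, tilt_frac, seed=0):
    """Random acquisition mask for a tilt series: keep a fixed fraction of
    tilts, and within each kept tilt a random fixed fraction of pixels."""
    rng = random.Random(seed)
    kept_tilts = rng.sample(range(n_tilts), int(tilt_frac * n_tilts))
    return {t: rng.sample(range(n_pixels), int(pixel_frac * n_pixels))
            for t in kept_tilts}

def dose_fraction(mask, n_pixels, n_tilts):
    """Fraction of the full-acquisition dose actually delivered."""
    measured = sum(len(px) for px in mask.values())
    return measured / (n_pixels * n_tilts)

mask = subsampling_mask(n_pixels=900, n_tilts=60, pixel_frac=1/3, tilt_frac=1/3)
print(f"dose fraction: {dose_fraction(mask, 900, 60):.3f}")  # 1/9 ≈ 0.111
```

    The reconstruction step would then fill in the unmeasured entries from the sparse representation; the mask alone already fixes the dose budget.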

  14. Sample-Clock Phase-Control Feedback

    NASA Technical Reports Server (NTRS)

    Quirk, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy

    2012-01-01

    To demodulate a communication signal, a receiver must recover and synchronize to the symbol timing of a received waveform. In a system that utilizes digital sampling, the fidelity of synchronization is limited by the time between the symbol boundary and the closest sample time location. To reduce this error, one typically uses a sample clock in excess of the symbol rate in order to provide multiple samples per symbol, thereby lowering the error limit to a fraction of a symbol time. For systems with a large modulation bandwidth, the required sample clock rate is prohibitive due to current technological barriers and processing complexity. With precise control of the phase of the sample clock, one can sample the received signal at times arbitrarily close to the symbol boundary, thus obviating the need, from a synchronization perspective, for multiple samples per symbol. Sample-clock phase-control feedback was developed for use in the demodulation of an optical communication signal, where multi-GHz modulation bandwidths would require prohibitively large sample clock frequencies for rates in excess of the symbol rate. A custom mixed-signal (RF/digital) offset phase-locked loop circuit was developed to control the phase of the 6.4-GHz clock that samples the photon-counting detector output. The offset phase-locked loop is driven by a feedback mechanism that continuously corrects for variation in the symbol time due to motion between the transmitter and receiver as well as oscillator instability. This innovation will allow significant improvements in receiver throughput; for example, the throughput of a pulse-position modulation (PPM) with 16 slots can increase from 188 Mb/s to 1.5 Gb/s.

  15. Surface-complexation synthesis of silica-supported high-loading well-dispersed reducible nano-Co3O4 catalysts using Co(III) ammine hydroxo complexes

    NASA Astrophysics Data System (ADS)

    Zhang, Weidong; Pan, Feng; Li, Jinjun; Wang, Zhen; Ding, Wei; Qin, Yi; Wu, Feng

    2018-06-01

    Silica-supported highly dispersed cobalt oxides prepared by adsorption are likely to be poorly reducible Co-phyllosilicates or CoO species. Here we report the synthesis of silica-supported monodispersed spinel nano-Co3O4 catalysts by inner-sphere complexation using Co(III) ammine hydroxo complexes as precursors. The precursors were facilely prepared by stirring ammoniacal Co(II) solutions exposed to air. The cobalt loadings (up to 188 mg/g) and particle sizes (3-10 nm) were tailored by successive complexation-calcination cycles. Such catalysts showed significantly superior reducibility and catalytic activity in complete propane oxidation in comparison to supported Co-phyllosilicates and CoO. A further development of this synthesis process may provide a variety of cobalt-based catalysts for important catalytic applications.

  16. Effects of Moringa oleifera seed extract on rumen fermentation in vitro.

    PubMed

    Hoffmann, E M; Muetzel, S; Becker, K

    2003-02-01

    Moringa oleifera is a pantropical tree of the family Moringaceae. A previously undescribed property of an aqueous extract from the seeds of this plant is the modulation of ruminal fermentation patterns, especially protein degradation, as demonstrated in a short-term batch incubation system. Gas production, short-chain fatty acids (SCFA), and cellulolytic enzyme activities were determined as general fermentation parameters. A dot blot assay able to directly detect true protein in rumen fluid samples was used to quantify protein degradation. For complex substrates, the interpretation of protein degradation profiles was supplemented by polyacrylamide gel electrophoresis (PAGE) of the samples. When incubated with pure carbohydrates at a concentration of 1 mg ml(-1), the extract reduced microbial degradation of the model protein, bovine serum albumin (BSA), such that its concentration was at least 40% above the control after 12 h of incubation. Total protein degradation was thus delayed by approximately 9 h. When fermented along with wheat straw, leaf protein (Rubisco) was almost entirely protected during 12 h of fermentation. The degradation of soy proteins was retarded by at least 4-6 h, depending on the protein band. There were strong side effects on the fermentation of pure cellulose (SCFA yield −60% after 12 h), whereas cellobiose and starch fermentation were less affected (−18 and −8%, respectively). When the complex substrates were fermented, SCFA yield was reduced by approximately 30% after 12 h. In our work we clearly demonstrate the efficacy of the new substance, which is neither a tannin nor a saponin, in an in vitro system, using pure as well as complex substrates. The properties shown in vitro for the crude extract suggest that it could have a positive effect on the protein metabolism of ruminants under intensive management and that negative side effects can be overcome by an optimized dosage.
If the chemical nature of the active substance and its mechanism of action can be clarified, it may provide an alternative to critical synthetic feed additives (such as antibiotics) for high-yielding dairy cows.

  17. Quantitation of lead-210 (210Pb) using lead-203 (203Pb) as a "Massless" yield tracer.

    PubMed

    May, D; Nelson, A N; Schultz, M K

    2017-05-01

    Determination of Pb-210 (²¹⁰Pb) in aqueous solution is a common radioanalytical challenge in environmental science. Widely used methods for these analyses (e.g., ASTM D7535) rely on stable lead (Pb) as a yield tracer to account for the losses of ²¹⁰Pb that inevitably occur during the elemental/radiochemical separations of the procedures. Although effective, these methods introduce technical challenges that can be difficult to track and can introduce uncertainty that is difficult to quantify. Examples of these challenges include interference from endogenous stable Pb in complex sample matrices; contamination of the stable Pb carrier with ²¹⁰Pb; and high detection limits due to counting-efficiency limitations. We hypothesized that many of these challenges could be avoided by using the electron-capture, gamma-emitting isotope ²⁰³Pb as a chemical yield tracer in the analysis of ²¹⁰Pb. A series of experiments was performed to evaluate the efficacy of ²⁰³Pb as a tracer. Four different matrices were analyzed, including a complex matrix (hydraulic-fracturing produced fluids) and samples comprising less complicated matrices (river water, deionized water, and tap water). Separation techniques and counting methodologies were also compared and optimized. Owing to its relatively short half-life (52 h), the ²⁰³Pb tracer is effectively massless for the purposes of chemical separations, allowing for reduced chromatography-column resin bed volumes. Because ²⁰³Pb is a gamma emitter (279 keV; 81% intensity), recovery can be determined non-destructively in a variety of matrices, including liquid scintillation cocktail. The use of liquid scintillation as a counting methodology allowed for determination of ²¹⁰Pb activities via ²¹⁰Pb or ²¹⁰Po, and recoveries of greater than 90% are routinely achievable using this approach.
The improved method for the analysis of ²¹⁰Pb in aqueous matrices allows for the analysis of complex matrices at reduced cost, while providing greater counting flexibility in achieving acceptable detection limits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. What Can Quantum Optics Say about Computational Complexity Theory?

    NASA Astrophysics Data System (ADS)

    Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.

    2015-02-01

    Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from the points of view of both quantum theory and computational complexity theory. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. Approximating permanents of complex matrices in general is believed to be a #P-hard problem. However, we show that these permanents can be approximated by an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
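    Exact computation of a permanent is exponential-time but straightforward, which is what makes the hardness claims above concrete. A minimal sketch of Ryser's inclusion-exclusion formula (our own illustrative code, not from the paper; function names are hypothetical), cross-checked against the defining sum over permutations:

```python
import math
from itertools import permutations

def permanent_ryser(A):
    """Permanent of an n×n matrix via Ryser's inclusion-exclusion formula,
    O(2^n · n^2) instead of the naive O(n · n!)."""
    n = len(A)
    total = 0
    for subset in range(1, 1 << n):                 # all non-empty column subsets
        cols = [j for j in range(n) if subset >> j & 1]
        prod = 1
        for row in A:
            prod *= sum(row[j] for j in cols)
        total += (-1) ** len(cols) * prod
    return (-1) ** n * total

def permanent_naive(A):
    """Defining sum over all permutations, for cross-checking."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(permanent_ryser([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

    Even Ryser's speed-up leaves the cost exponential in n, which is why efficient classical sampling for the thermal-state case (placing approximation in BPP^NP) is a notable structural result.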

  19. Geochemistry of Rock Samples Collected from the Iron Hill Carbonatite Complex, Gunnison County, Colorado

    USGS Publications Warehouse

    Van Gosen, Bradley S.

    2008-01-01

    A study conducted in 2006 by the U.S. Geological Survey collected 57 surface rock samples from nine types of intrusive rock in the Iron Hill carbonatite complex. This intrusive complex, located in Gunnison County of southwestern Colorado, is known for its classic carbonatite-alkaline igneous geology and petrology. The Iron Hill complex is also noteworthy for its diverse mineral resources, including enrichments in titanium, rare earth elements, thorium, niobium (columbium), and vanadium. This study was performed to reexamine the chemistry and metallic content of the major rock units of the Iron Hill complex by using modern analytical techniques, while providing a broader suite of elements than the earlier published studies. The report contains the geochemical analyses of the samples in tabular and digital spreadsheet format, providing the analytical results for 55 major and trace elements.

  20. Chemical fractionation-enhanced structural characterization of marine dissolved organic matter

    NASA Astrophysics Data System (ADS)

    Arakawa, N.; Aluwihare, L.

    2016-02-01

    Describing the molecular fingerprint of dissolved organic matter (DOM) requires sample processing methods and separation techniques that can adequately minimize its complexity. We have employed acid hydrolysis to make the subcomponents of marine solid-phase-extracted (PPL) DOM more accessible to analytical techniques. Using a combination of NMR and chemical derivatization or reduction analyzed by comprehensive (GCxGC) gas chromatography, we observed chemical features strikingly similar to terrestrial DOM. In particular, we observed reduced alicyclic hydrocarbons believed to be the backbone of previously identified carboxylic-rich alicyclic material (CRAM). Additionally, we found carbohydrates, amino acids, and small lipids and acids.

  1. Genetic landscapes GIS Toolbox: tools to map patterns of genetic divergence and diversity.

    USGS Publications Warehouse

    Vandergast, Amy G.; Perry, William M.; Lugo, Roberto V.; Hathaway, Stacie A.

    2011-01-01

    The Landscape Genetics GIS Toolbox contains tools that run in the Geographic Information System software, ArcGIS, to map genetic landscapes and to summarize multiple genetic landscapes as average and variance surfaces. These tools can be used to visualize the distribution of genetic diversity across geographic space and to study associations between patterns of genetic diversity and geographic features or other geo-referenced environmental data sets. Together, these tools create genetic landscape surfaces directly from tables containing genetic distance or diversity data and sample location coordinates, greatly reducing the complexity of building and analyzing these raster surfaces in a Geographic Information System.

  2. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
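    The baseline that the paper's hybrid estimator improves on is plain Monte Carlo estimation of the probability of violating a design requirement. A minimal sketch of that baseline (our own toy example; the function names and the uniform gain model are illustrative assumptions, not the paper's controllers):

```python
import random

def failure_probability(requirement, sampler, n_samples=100_000, seed=0):
    """Plain Monte Carlo estimate of P(requirement violated) under a
    probabilistic parameter model. Hybrid deterministic-sampling/asymptotic
    schemes aim to cut the sample count this brute-force estimate needs."""
    rng = random.Random(seed)
    failures = sum(not requirement(sampler(rng)) for _ in range(n_samples))
    return failures / n_samples

# toy example: uncertain gain k ~ U(0, 1); the design requirement
# (say, a damping bound) is met when k < 0.9, so P(failure) = 0.1
p_fail = failure_probability(
    requirement=lambda k: k < 0.9,
    sampler=lambda rng: rng.random(),
)
print(f"estimated failure probability: {p_fail:.3f}")
```

    The optimization loop then treats estimates like this as objectives, which is why reducing the per-evaluation sampling burden matters so much.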

  3. Collaborative effects of wavefront shaping and optical clearing agent in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, Peter; Jo, YoungJu; Lee, KyeoReh; Tuchin, Valery V.; Jeong, Yong; Park, YongKeun

    2016-12-01

    We demonstrate that simultaneous application of optical clearing agents (OCAs) and complex wavefront shaping in optical coherence tomography (OCT) can provide significant enhancement of penetration depth and imaging quality. OCA reduces optical inhomogeneity of a highly scattering sample, and the wavefront shaping of illumination light controls multiple scattering, resulting in an enhancement of the penetration depth and signal-to-noise ratio. A tissue phantom study shows that concurrent applications of OCA and wavefront shaping successfully operate in OCT imaging. The penetration depth enhancement is further demonstrated for ex vivo mouse ears, revealing hidden structures inaccessible with conventional OCT imaging.

  4. [Transfusion supply optimization in multiple-discipline surgical hospital].

    PubMed

    Solov'eva, I N; Trekova, N A; Krapivkin, I A

    2016-01-01

    To define the optimal approach to supplying a multiple-discipline surgical hospital with blood components, and to decrease donor blood expenditure through the application of blood-preserving technologies. Donor blood component expenditure and the volumes and proportions of hemotransfusions for the period 2012-2014 were analyzed. The numbers of recipients of packed red cells, fresh-frozen plasma, and packed platelets decreased by 18.5%, 25%, and 80%, respectively. The need for donor plasma decreased by 35%. Autologous plasma accounted for 76% of the overall plasma volume used in cardiac surgery. Preoperative plasma sampling was introduced in patients with aortic aneurysm. The number of cardiac interventions performed without donor blood increased by 7-31%, depending on their complexity.

  5. Sorption of trivalent lanthanides and actinides onto montmorillonite: Macroscopic, thermodynamic and structural evidence for ternary hydroxo and carbonato surface complexes on multiple sorption sites.

    PubMed

    Fernandes, M Marques; Scheinost, A C; Baeyens, B

    2016-08-01

    The credibility of long-term safety assessments of radioactive waste repositories may be greatly enhanced by a molecular level understanding of the sorption processes onto individual minerals present in the near- and far-fields. In this study we couple macroscopic sorption experiments to surface complexation modelling and spectroscopic investigations, including extended X-ray absorption fine structure (EXAFS) and time-resolved laser fluorescence spectroscopies (TRLFS), to elucidate the uptake mechanism of trivalent lanthanides and actinides (Ln/An(III)) by montmorillonite in the absence and presence of dissolved carbonate. Based on the experimental sorption isotherms for the carbonate-free system, the previously developed 2 site protolysis non electrostatic surface complexation and cation exchange (2SPNE SC/CE) model needed to be complemented with an additional surface complexation reaction onto weak sites. The fitting of sorption isotherms in the presence of carbonate required refinement of the previously published model by reducing the strong site capacity and by adding the formation of Ln/An(III)-carbonato complexes both on strong and weak sites. EXAFS spectra of selected Am samples and TRLFS spectra of selected Cm samples corroborate the model assumptions by showing the existence of different surface complexation sites and evidencing the formation of Ln/An(III) carbonate surface complexes. In the absence of carbonate and at low loadings, Ln/An(III) form strong inner-sphere complexes through binding to three Al(O,OH)6 octahedra, most likely by occupying vacant sites in the octahedral layers of montmorillonite, which are exposed on {010} and {110} edge faces. At higher loadings, Ln/An(III) binds to only one Al octahedron, forming a weaker, edge-sharing surface complex. 
In the presence of carbonate, we identified a ternary mono- or dicarbonato Ln/An(III) complex binding directly to one Al(O,OH)6 octahedron, revealing that type-A ternary complexes form with the one or two carbonate groups pointing away from the surface into the solution phase. Within the spectroscopically observable concentration range these complexes could only be identified on the weak sites, in line with the small strong site capacity suggested by the refined sorption model. When the solubility of carbonates was exceeded, formation of an Am carbonate hydroxide could be identified. The excellent agreement between the thermodynamic model parameters obtained by fitting the macroscopic data, and the spectroscopically identified mechanisms, demonstrates the mature state of the 2SPNE SC/CE model for predicting and quantifying the retention of Ln/An(III) elements by montmorillonite-rich clay rocks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic in complex network research, since the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. Experiments indicate that this novel sampling method keeps the sampled subnet similar to the original network in degree distribution, connectivity rate, and average shortest path. The method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
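    The snowball component that RMSC builds on is simple to state: start from random seeds and repeatedly absorb the neighbours of the current frontier. A minimal sketch of plain snowball sampling (not the full RMSC procedure; the function name and the ring-network example are our own illustrative choices):

```python
import random

def snowball_sample(adj, n_seeds, n_waves, seed=0):
    """Snowball sampling on an adjacency dict: pick random seed nodes,
    then for a fixed number of waves add every neighbour of the frontier."""
    rng = random.Random(seed)
    sampled = set(rng.sample(sorted(adj), n_seeds))
    frontier = set(sampled)
    for _ in range(n_waves):
        frontier = {nb for v in frontier for nb in adj[v]} - sampled
        sampled |= frontier
    return sampled

# toy ring network 0-1-2-...-9-0
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
sub = snowball_sample(ring, n_seeds=1, n_waves=2)
print(sorted(sub))  # the seed plus its 1- and 2-hop neighbours: 5 nodes
```

    Pure snowball sampling over-represents high-degree nodes; mixing in fresh random seeds between snowball stages, as RMSC does, is what helps the subnet track the original degree distribution.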

  7. Local Rademacher Complexity: sharper risk bounds with and without unlabeled samples.

    PubMed

    Oneto, Luca; Ghio, Alessandro; Ridella, Sandro; Anguita, Davide

    2015-05-01

    In this paper we derive a new Local Rademacher Complexity risk bound on the generalization ability of a model; the bound is able to take advantage of the availability of unlabeled samples. Moreover, the new bound improves on state-of-the-art results even when no unlabeled samples are available. Copyright © 2015 Elsevier Ltd. All rights reserved.
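
    For background (these are standard results, not the paper's new bound): the classical global Rademacher bound, which localized analyses sharpen, states that with probability at least 1 - δ over an i.i.d. sample of size n,

```latex
% Empirical Rademacher complexity of a class \mathcal{F} on inputs x_1,\dots,x_n
\hat{\mathfrak{R}}_n(\mathcal{F}) \;=\;
  \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}}
  \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(x_i)\right],
  \qquad \sigma_i \in \{-1,+1\} \text{ i.i.d.\ uniform}.

% Classical global bound, holding uniformly over \mathcal{F}:
L(f) \;\le\; \hat{L}(f) \;+\; 2\,\hat{\mathfrak{R}}_n(\mathcal{F})
  \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}} .
```

    Local Rademacher Complexity restricts the supremum to low-variance functions near the best predictor, yielding faster rates. Note also that the Rademacher average above depends only on the inputs x_i, not on the labels, which is one way unlabeled samples can tighten bounds of this type.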

  8. Overcoming Matrix Effects in a Complex Sample: Analysis of Multiple Elements in Multivitamins by Atomic Absorption Spectroscopy

    ERIC Educational Resources Information Center

    Arnold, Randy J.; Arndt, Brett; Blaser, Emilia; Blosser, Chris; Caulton, Dana; Chung, Won Sog; Fiorenza, Garrett; Heath, Wyatt; Jacobs, Alex; Kahng, Eunice; Koh, Eun; Le, Thao; Mandla, Kyle; McCory, Chelsey; Newman, Laura; Pithadia, Amit; Reckelhoff, Anna; Rheinhardt, Joseph; Skljarevski, Sonja; Stuart, Jordyn; Taylor, Cassie; Thomas, Scott; Tse, Kyle; Wall, Rachel; Warkentien, Chad

    2011-01-01

    A multivitamin tablet and liquid are analyzed for the elements calcium, magnesium, iron, zinc, copper, and manganese using atomic absorption spectrometry. Linear calibration and standard addition are used for all elements except calcium, allowing for an estimate of the matrix effects encountered for this complex sample. Sample preparation using…
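
    The abstract does not spell out the arithmetic; as a generic sketch of the standard-addition calculation (not code from the article), the unknown concentration is the magnitude of the x-intercept of a least-squares fit of signal versus added-standard concentration:

```python
def standard_addition_conc(added, signal):
    """Estimate analyte concentration by the method of standard additions.

    Fit signal = m*added + b by least squares; the unknown concentration
    is the magnitude of the x-intercept, b/m (same units as `added`).
    Dilution corrections are omitted for simplicity.
    """
    n = len(added)
    mean_x = sum(added) / n
    mean_y = sum(signal) / n
    sxx = sum((x - mean_x) ** 2 for x in added)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(added, signal))
    m = sxy / sxx                # slope (sensitivity)
    b = mean_y - m * mean_x      # intercept (signal of unspiked sample)
    return b / m
```

    For example, additions of 0, 1, 2 and 3 concentration units giving signals 10, 12, 14 and 16 extrapolate to an unknown concentration of 5 units.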

  9. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    ERIC Educational Resources Information Center

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  10. Lasercom system architecture with reduced complexity

    NASA Technical Reports Server (NTRS)

    Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homayoon (Inventor)

    1994-01-01

    Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention, a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides the means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.

  11. LaserCom System Architecture With Reduced Complexity

    NASA Technical Reports Server (NTRS)

    Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homa-Yoon (Inventor)

    1996-01-01

    Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.

  12. Role of mitochondrial DNA damage and dysfunction in veterans with Gulf War Illness.

    PubMed

    Chen, Yang; Meyer, Joel N; Hill, Helene Z; Lange, Gudrun; Condon, Michael R; Klein, Jacquelyn C; Ndirangu, Duncan; Falvo, Michael J

    2017-01-01

    Gulf War Illness (GWI) is a chronic multi-symptom illness, not currently diagnosable by standard medical or laboratory tests, that affects 30% of veterans who served during the 1990-1991 Gulf War. The clinical presentation of GWI is comparable to that of patients with certain mitochondrial disorders, i.e., clinically heterogeneous multisystem symptoms. Therefore, we hypothesized that mitochondrial dysfunction may contribute both to the symptoms of GWI and to its persistence over time. We recruited 21 cases of GWI (CDC and Kansas criteria) and 7 controls to participate in this study. Peripheral blood samples were obtained from all participants, and a quantitative polymerase chain reaction (QPCR) based assay was performed to quantify mitochondrial and nuclear DNA lesion frequency and mitochondrial DNA (mtDNA) copy number (mtDNAcn) from peripheral blood mononuclear cells. Samples were also used to analyze enzyme activity for mitochondrial complexes I and IV. Both mtDNA lesion frequency (p = 0.015, d = 1.13) and mtDNAcn (p = 0.001; d = 1.69) were elevated in veterans with GWI relative to controls. Nuclear DNA lesion frequency was also elevated in veterans with GWI (p = 0.344; d = 1.41), but did not reach statistical significance. Complex I and IV activity (p > 0.05) were similar between groups, and greater mtDNA lesion frequency was associated with reduced complex I (r = -0.35, p = 0.007) and IV (r = -0.28, p < 0.01) enzyme activity. In conclusion, veterans with GWI exhibit greater mtDNA damage, which is consistent with mitochondrial dysfunction.

  13. A mismatch between population health literacy and the complexity of health information: an observational study.

    PubMed

    Rowlands, Gillian; Protheroe, Joanne; Winkley, John; Richardson, Marty; Seed, Paul T; Rudd, Rima

    2015-06-01

    Low health literacy is associated with poorer health and higher mortality, and complex health materials are a barrier to health. The aim was to assess the literacy and numeracy skills required to understand and use commonly used English health information materials, and to describe population skills in relation to these, in an observational study comparing English health materials with national working-age population skills. Health materials were sampled using a health literacy framework, and competency thresholds for understanding and using the materials were identified. The proportion of the population above and below these thresholds, and the sociodemographic variables associated with a greater risk of being below them, were described. Sixty-four health materials were sampled, and two competency thresholds were identified: text (literacy) only, and text + numeracy; 2515/5795 participants (43%) were below the text-only threshold, while 2905/4767 (61%) were below the text + numeracy threshold. Univariable analyses of social determinants of health showed that groups at greater risk of socioeconomic deprivation had higher odds of being below the health literacy competency threshold than those at lower risk of deprivation. Multivariable analysis rendered some variables non-significant or reduced in effect. Levels of low health literacy mirror those found in other industrialised countries, with a mismatch between the complexity of health materials and the skills of the English adult working-age population: those most in need of health information have the least access to it. Efficacious strategies include building population skills, improving health professionals' communication, and improving written health information. © British Journal of General Practice 2015.

  14. Simultaneous analysis and retention behavior of major isoflavonoids in Radix Puerariae lobatae and Radix Puerariae thomsonii by high performance liquid chromatography with cyclodextrins as a mobile phase modifier.

    PubMed

    Zeng, Aiguo; Xing, Jianfeng; Wang, Changhe; Song, Jie; Li, Cong; Yang, Xin; Yang, Guangde

    2012-01-27

    In order to differentiate two species of Radix Puerariae (Radix Puerariae lobatae and Radix Puerariae thomsonii) and to determine major isoflavonoids (puerarin, daidzin, daidzein and genistein) in the samples, a simple high performance liquid chromatography (HPLC) method with isocratic elution, employing cyclodextrins (CDs) as mobile phase additives, was developed. Various factors affecting the retention of isoflavonoids on the C(18) reversed-phase column, such as the nature of the CDs, the concentration of hydroxypropyl-β-cyclodextrin (HP-β-CD) and the methanol percentage in the mobile phase, were studied. Experimental results confirmed that HP-β-CD, as a very effective mobile phase additive, markedly reduced the retention of isoflavonoids, especially daidzein and genistein. Elution of the four isoflavonoids was achieved on a Kromasil® C(18) column within 56 min using a methanol-water (25/75, v/v) mixture containing 5 mM HP-β-CD as the mobile phase. The formation of inclusion complexes between isoflavonoids and HP-β-CD explains the modified retention of the analytes. The apparent formation constants determined by HPLC confirmed that the stoichiometry of the HP-β-CD-isoflavonoid complexes was 1:1, and that the stability of the complexes depended on the size and properties of the isoflavonoids. The optimized method was successfully applied to the simultaneous determination of major isoflavonoids in P. lobatae and P. thomsonii samples. This work provides a useful method for the analysis of traditional Chinese herbs. Copyright © 2011 Elsevier B.V. All rights reserved.
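
    The paper's exact fitting procedure is not given in the abstract. A common way to extract an apparent 1:1 formation constant K from retention factors, assuming the complexed analyte is essentially unretained so that k = k0 / (1 + K[CD]), is to fit 1/k linearly against [CD]. The sketch below is ours, under that assumption, with illustrative names:

```python
def formation_constant(cd_conc, k_obs):
    """Estimate an apparent 1:1 formation constant K from retention factors.

    Assumes k = k0 / (1 + K*[CD]) (complexed analyte unretained), so
    1/k is linear in [CD]:  1/k = 1/k0 + (K/k0)*[CD].
    Returns (K, k0), with K = slope/intercept of the least-squares line.
    """
    inv_k = [1.0 / k for k in k_obs]
    n = len(cd_conc)
    mx = sum(cd_conc) / n
    my = sum(inv_k) / n
    sxx = sum((x - mx) ** 2 for x in cd_conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(cd_conc, inv_k))
    slope = sxy / sxx
    intercept = my - slope * mx
    k0 = 1.0 / intercept
    return slope * k0, k0
```

    A retention factor that keeps falling as [CD] rises, with 1/k linear in [CD], is the signature of the 1:1 stoichiometry reported in the abstract.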

  15. Time lens assisted photonic sampling extraction

    NASA Astrophysics Data System (ADS)

    Petrillo, Keith Gordon

    Telecommunication bandwidth demands have dramatically increased in recent years due to Internet-based services like cloud computing and storage, large file sharing, and video streaming. Additionally, sensing systems such as wideband radar, magnetic resonance imaging systems, and the complex modulation formats used to handle large data transfers in telecommunications require high-speed, high-resolution analog-to-digital converters (ADCs) to interpret the data. Accurately acquiring and processing information at next-generation data rates has become challenging for electronic systems. The largest contributors to the electronic bottleneck are bandwidth and timing jitter, which limit speed and reduce accuracy. Optical systems have been shown to have at least three orders of magnitude greater bandwidth capability, and state-of-the-art mode-locked lasers have reduced timing jitter to thousands of attoseconds. Such features have encouraged processing signals without the use of electronics, or using photonics to assist electronics. All-optical signal processing has enabled the processing of telecommunication line rates up to 1.28 Tb/s and high-resolution analog-to-digital conversion in the tens of gigahertz. The major drawback of these optical systems is the high cost of the components. The application of all-optical processing techniques such as a time lens and chirped processing can greatly reduce the bandwidth and cost requirements of optical serial-to-parallel converters and push photonically assisted ADCs into the hundreds of gigahertz. In this dissertation, the building blocks of a high-speed photonically assisted ADC are demonstrated, each providing benefits to its own respective application. A serial-to-parallel converter using a continuously operating time lens as an optical Fourier processor is demonstrated to fully convert a 160-Gb/s optical time division multiplexed signal to 16 10-Gb/s channels with error-free operation. Using chirped processing, an optical sample-and-hold concept is demonstrated and analyzed as a resolution improvement to existing photonically assisted ADCs. Simulations indicate that applying a continuously operating time lens to a photonically assisted sampling system can improve photonically sampled systems by an order of magnitude while acquiring properties similar to an optical sample-and-hold system.

  16. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
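
    For reference, the delete-one jackknife variance estimator for a ratio estimate R = Σy/Σx (the generic unstratified form; the report itself compares several such estimators under complex designs) can be sketched as:

```python
def jackknife_ratio_variance(y, x):
    """Delete-one jackknife variance of the ratio estimate R = sum(y)/sum(x).

    Each pseudo-ratio drops one observation; the variance estimate is
    ((n-1)/n) * sum of squared deviations of the pseudo-ratios
    from their mean.
    """
    n = len(y)
    ty, tx = sum(y), sum(x)
    # Ratio with observation i removed, for each i.
    r_del = [(ty - yi) / (tx - xi) for yi, xi in zip(y, x)]
    r_bar = sum(r_del) / n
    return (n - 1) / n * sum((r - r_bar) ** 2 for r in r_del)
```

    When y is exactly proportional to x, every delete-one ratio is identical and the estimated variance is zero; a large coefficient of variation of the denominator Σx is the warning sign of bias that the report discusses.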

  17. Impacts of road salts on leaching behavior of lead contaminated soil.

    PubMed

    Wu, Jingjing; Kim, Hwidong

    2017-02-15

    Research was conducted to explore the effects of road salts on lead leaching from lead-contaminated soil samples collected in an old residential area in Erie, PA. The synthetic precipitation leaching procedure (SPLP) test was employed to evaluate lead leaching from one of the lead-contaminated soils in the presence of various levels of road salts (5%, 10%, 20%, 30% and 40%). The results showed that lead leaching dramatically increased as the road salt content increased, as a result of the formation of lead-chloride complexes, but different leaching patterns were observed for NaCl- and CaCl2-based road salts at high road salt contents (>20%). Additional leaching tests with 30% road salts and different soil samples showed a variety of leaching patterns across soils. Sequential extraction of each soil sample showed that a high fraction of organic-matter-bound lead was associated with the lead contamination, and the higher this fraction in a soil, the greater the effect of calcium on reducing lead leaching. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Development and psychometric evaluation of the Personal Growth Initiative Scale-II.

    PubMed

    Robitschek, Christine; Ashton, Matthew W; Spering, Cynthia C; Geiger, Nathaniel; Byers, Danielle; Schotts, G Christian; Thoen, Megan A

    2012-04-01

    The original Personal Growth Initiative Scale (PGIS; Robitschek, 1998) was unidimensional, despite theory identifying multiple components (e.g., cognition and behavior) of personal growth initiative (PGI). The present research developed a multidimensional measure of the complex process of PGI, while retaining the brief and psychometrically sound properties of the original scale. Study 1 focused on scale development, including theoretical derivation of items, assessment of the factor structure, reduction of the number of items, and refinement of the scale length, using samples of college students. Study 2 consisted of confirmatory factor analyses with 3 independent samples of college students and community members. Lastly, Study 3 assessed test-retest reliability over 1-, 2-, 4-, and 6-week periods and conducted tests of concurrent and discriminant validity using samples of college students. The final measure, the Personal Growth Initiative Scale-II (PGIS-II), includes 4 subscales: Readiness for Change, Planfulness, Using Resources, and Intentional Behavior. These studies provide exploratory and confirmatory evidence for the 4-factor structure, strong internal consistency for the subscales and overall score across samples, acceptable temporal stability at all assessed intervals, and concurrent and discriminant validity of the PGIS-II. Future directions for research and clinical practice are discussed.

  19. Influence of uranium on bacterial communities: a comparison of natural uranium-rich soils with controls.

    PubMed

    Mondani, Laure; Benzerara, Karim; Carrière, Marie; Christen, Richard; Mamindy-Pajany, Yannick; Février, Laureline; Marmier, Nicolas; Achouak, Wafa; Nardoux, Pascal; Berthomieu, Catherine; Chapon, Virginie

    2011-01-01

    This study investigated the influence of uranium on the indigenous bacterial community structure in natural soils with high uranium content. Radioactive soil samples exhibiting 0.26% - 25.5% U in mass were analyzed and compared with nearby control soils containing trace uranium. EXAFS and XRD analyses of soils revealed the presence of U(VI) and uranium-phosphate mineral phases, identified as sabugalite and meta-autunite. A comparative analysis of bacterial community fingerprints using denaturing gradient gel electrophoresis (DGGE) revealed the presence of a complex population in both control and uranium-rich samples. However, bacterial communities inhabiting uraniferous soils exhibited specific fingerprints that were remarkably stable over time, in contrast to populations from nearby control samples. Representatives of Acidobacteria, Proteobacteria, and seven other phyla were detected in DGGE bands specific to uraniferous samples. In particular, sequences related to iron-reducing bacteria such as Geobacter and Geothrix were identified concomitantly with iron-oxidizing species such as Gallionella and Sideroxydans. Altogether, our results demonstrate that uranium exerts a permanent high pressure on soil bacterial communities and suggest the existence of a uranium redox cycle mediated by bacteria in the soil.

  20. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
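
    The perceptual error criterion and CFA specifics are beyond this abstract; as a generic, hedged illustration of the sequential backward selection strategy it mentions (greedily dropping the candidate whose removal hurts the criterion least), consider:

```python
def sequential_backward_selection(candidates, error, keep):
    """Greedy backward selection: repeatedly drop the element whose removal
    increases the error criterion the least, until `keep` elements remain.

    error(subset) -> float evaluates a candidate arrangement (lower is better).
    """
    current = list(candidates)
    while len(current) > keep:
        best_err, best_i = None, None
        for i in range(len(current)):
            trial = current[:i] + current[i + 1:]
            e = error(trial)
            if best_err is None or e < best_err:
                best_err, best_i = e, i
        current.pop(best_i)   # remove the least harmful element
    return current
```

    The actual design evaluates a perceptual reconstruction error over sampling arrangements and additionally handles periodic patterns, which this sketch abstracts away.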

  1. Effect of preceding crop on Fusarium species and mycotoxin contamination of wheat grains.

    PubMed

    Qiu, Jianbo; Dong, Fei; Yu, Mingzheng; Xu, Jianhong; Shi, Jianrong

    2016-10-01

    The Fusarium graminearum species complex infects several cereals and causes reductions in grain yield and quality. Many factors, including crop rotation, influence the extent of Fusarium infection and mycotoxin levels. In the present study, we explored the effect of rice or maize as the preceding crop on mycotoxin accumulation in wheat grains. More than 97% of samples were contaminated with deoxynivalenol (DON). DON concentrations in wheat grains from rice and maize rotation fields were 884.37 and 235.78 µg kg⁻¹, respectively. Zearalenone (ZEN) was detected in 45% of samples, mainly collected from maize-wheat rotation systems. Fusarium strains were isolated, and more F. graminearum sensu stricto (s. str.) isolates were cultured from wheat samples obtained from maize rotation fields. DON levels produced by Fusarium isolates from rice rotation fields were higher than those of isolates from maize rotation fields. Rice-wheat rotation favours DON accumulation, while more ZEN contamination may occur in maize-wheat rotation systems. Appropriate crop rotation may help to reduce toxin levels in wheat grains. © 2016 Society of Chemical Industry.

  2. Improvement for identification of heterophile antibody interference and AFP hook effect in immunoassays with multiplex suspension bead array system.

    PubMed

    Wang, Yajie; Yu, Jinsheng; Ren, Yuan; Liu, Li; Li, Haowen; Guo, Anchen; Shi, Congning; Fang, Fang; Juehne, Twyla; Yao, Jianer; Yang, Enhuan; Zhou, Xuelei; Kang, Xixiong

    2013-11-15

    A variety of immunoassays including multiplex suspension bead array have been developed for tumor marker detections; however, these assays could be compromised in their sensitivity and specificity by well-known heterophile antibody interference and hook effect. Using Luminex® multiplex suspension bead arrays, we modified protocols with two newly-developed solutions that can identify heterophile antibody interference and AFP hook effect. Effectiveness of the two solutions was assessed in serum samples from patients. Concentrations of 9 tumor markers in heterophile antibody positive samples assayed with Solution A, containing murine monoclonal antibodies and mouse serum, were significantly reduced when compared with those false high signals assayed without Solution A (all p<0.01). With incorporation of Solution H (fluorescent beads linked with AFP antigen), a new strategy for identification of AFP hook effect was established, and with this strategy AFP hook effect was identified effectively in serum samples with very high levels of AFP. Two proprietary solutions improve the identification of heterophile antibody interference and AFP hook effect. With these solutions, multiplex suspension bead arrays provide more reliable testing results in tumor marker detection where complex clinical serum samples are used. © 2013.

  3. Effect of the electrical currents generated by the intestinal smooth muscle layers on pancreatic enzyme activity: an in vitro study.

    PubMed

    Dabek, Marta; Podgurniak, Paweł; Piedra, Jose L Valverde; Szymańczyk, Sylwia; Filip, Rafał; Wojtasz-Pajak, Anna; Werpachowska, Eliza; Podgurniak, Malgorzata; Pierzynowski, Stefan G

    2007-05-01

    Gut enzymes in the small intestine are exposed to extremely low electrical currents (ELEC) generated by the smooth muscle. In the present study, in vitro tests were undertaken to evaluate the effect of these electric currents on the activity of proteolytic pancreatic digestive enzymes. A simulator generating the typical electrical activity of pig gut was used for these studies. The electric current emitted by the simulator was transmitted to the samples, containing enzyme and its substrate, using platinum plate electrodes. All samples were incubated at 37 degrees C for 1 h. The changes in optical density, corresponding to enzyme activity, in samples stimulated for 1 h with ELEC were compared with those in samples not exposed to ELEC. The results show that electrical current with the characteristics of the myoelectrical migrating complex (MMC) influences pancreatic enzyme activity: increased endopeptidase and reduced exopeptidase activity was noticed in samples treated with ELEC. This observation may be important, since factors that alter the enzymatic activity of the gut can thereby also affect feed/food digestibility. (c) 2007 Wiley-Liss, Inc.

  4. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.

    PubMed

    Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each pair of observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs- (K-NNG-) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
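
    The paper pairs a deep autoencoder with an ensemble of K-NNG-based detectors; as a simplified, hedged sketch of the two-stage idea only, the following substitutes PCA for the deep autoencoder (a simplifying assumption, not the authors' model) and a single k-NN distance score for the ensemble:

```python
import numpy as np

def anomaly_scores(X, n_components=2, k=3):
    """Two-stage scoring in the spirit of the hybrid model:
    (1) compress the data to a low-dimensional subspace; here PCA stands in
        for the paper's deep autoencoder (a simplifying assumption), and
    (2) score each point by the mean distance to its k nearest neighbors in
        that subspace (larger score = more anomalous)."""
    Xc = X - X.mean(axis=0)
    # Principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T
    # All pairwise Euclidean distances in the compressed space.
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)   # exclude self-distance
    return np.sort(D, axis=1)[:, :k].mean(axis=1)
```

    Scoring in the compressed space is what sidesteps the concentration of distances in high dimensions that the abstract describes.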

  5. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    PubMed Central

    Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each pair of observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs- (K-NNG-) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity. PMID:29270197

  6. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B.; Royle, J. Andrew; Sillett, T. Scott

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
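
    The open-population hierarchical model itself is beyond a short sketch, but the distance-sampling building block it extends, the canonical estimator N_hat = n / p_bar with a half-normal detection function, can be illustrated as follows (function names are ours):

```python
import math

def halfnormal_avg_detection(sigma, w, n_grid=10_000):
    """Average detection probability within a strip of half-width w for a
    half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)),
    computed by midpoint-rule numerical integration."""
    step = w / n_grid
    total = sum(math.exp(-((i + 0.5) * step) ** 2 / (2 * sigma ** 2))
                for i in range(n_grid))
    return total * step / w

def abundance_estimate(n_detected, sigma, w):
    """Canonical distance-sampling estimator: N_hat = n / p_bar."""
    return n_detected / halfnormal_avg_detection(sigma, w)
```

    As sigma grows relative to w, average detectability approaches 1 and the estimate approaches the raw count; the paper's contribution is linking such survey-level estimates across time through a population dynamics model.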

  7. An open-population hierarchical distance sampling model.

    PubMed

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  8. Influence of Uranium on Bacterial Communities: A Comparison of Natural Uranium-Rich Soils with Controls

    PubMed Central

    Mondani, Laure; Benzerara, Karim; Carrière, Marie; Christen, Richard; Mamindy-Pajany, Yannick; Février, Laureline; Marmier, Nicolas; Achouak, Wafa; Nardoux, Pascal; Berthomieu, Catherine; Chapon, Virginie

    2011-01-01

    This study investigated the influence of uranium on the indigenous bacterial community structure in natural soils with high uranium content. Radioactive soil samples exhibiting 0.26% - 25.5% U in mass were analyzed and compared with nearby control soils containing trace uranium. EXAFS and XRD analyses of soils revealed the presence of U(VI) and uranium-phosphate mineral phases, identified as sabugalite and meta-autunite. A comparative analysis of bacterial community fingerprints using denaturing gradient gel electrophoresis (DGGE) revealed the presence of a complex population in both control and uranium-rich samples. However, bacterial communities inhabiting uraniferous soils exhibited specific fingerprints that were remarkably stable over time, in contrast to populations from nearby control samples. Representatives of Acidobacteria, Proteobacteria, and seven other phyla were detected in DGGE bands specific to uraniferous samples. In particular, sequences related to iron-reducing bacteria such as Geobacter and Geothrix were identified concomitantly with iron-oxidizing species such as Gallionella and Sideroxydans. Altogether, our results demonstrate that uranium exerts a permanent high pressure on soil bacterial communities and suggest the existence of a uranium redox cycle mediated by bacteria in the soil. PMID:21998695

  9. Strategies to Reduce Hospitalizations of Children With Medical Complexity Through Complex Care: Expert Perspectives.

    PubMed

    Coller, Ryan J; Nelson, Bergen B; Klitzner, Thomas S; Saenz, Adrianna A; Shekelle, Paul G; Lerner, Carlos F; Chung, Paul J

    Interventions to reduce disproportionate hospital use among children with medical complexity (CMC) are needed. We conducted a rigorous, structured process to develop intervention strategies aiming to reduce hospitalizations within a complex care program population. A complex care medical home program used 1) semistructured interviews of caregivers of CMC experiencing acute, unscheduled hospitalizations and 2) literature review on preventing hospitalizations among CMC to develop key drivers for lowering hospital utilization and link them with intervention strategies. Using an adapted version of the RAND/UCLA Appropriateness Method, an expert panel rated each model for effectiveness at impacting each key driver and ultimately reducing hospitalizations. The complex care program applied these findings to select a final set of feasible intervention strategies for implementation. Intervention strategies focused on expanding access to familiar providers, enhancing general or technical caregiver knowledge and skill, creating specific and proactive crisis or contingency plans, and improving transitions between hospital and home. Activities aimed to facilitate family-centered, flexible implementation and consideration of all of the child's environments, including school and while traveling. Tailored activities and special attention to the highest utilizing subset of CMC were also critical for these interventions. A set of intervention strategies to reduce hospitalizations among CMC, informed by key drivers, can be created through a structured, reproducible process. Both this process and the results may be relevant to clinical programs and researchers aiming to reduce hospital utilization through the medical home for CMC. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  10. Fundamental study on non-invasive blood glucose sensing.

    PubMed

    Xu, K; Li, Q; Lu, Z; Jiang, J

    2002-01-01

    Diabetes is a disease that severely threatens human health. Unfortunately, current finger-stick monitoring techniques discourage regular use. Noninvasive spectroscopic measurement of blood glucose is a simple and painless technique that requires no reagents and so reduces the long-term health care costs of diabetic patients; it is suitable for home use. Moreover, the methodology applies not only to noninvasive blood glucose measurement but can also be extended to noninvasive measurement of other analytes in body fluids, which is of great significance for the development of clinical analysis techniques. In this paper, we introduce fundamental research achieved in our laboratory in the field of non-invasive blood glucose measurement. 1. Fundamental research on glucose concentrations, from simple to complex samples, with near- and mid-infrared spectroscopy: (1) the relationship between instrument precision and prediction accuracy of glucose measurement; (2) how quantitative results change with sample complexity; (3) attempts to increase the prediction accuracy of glucose measurement by improving the modeling methods. The results showed that non-invasive blood glucose measurement with near- and mid-infrared spectroscopy is theoretically feasible, and experiments from simple to complex samples demonstrated the effectiveness of the methodology, comprising both hardware and software. 2. According to the characteristics of human body measurement, the effects of measuring conditions on measurement results were investigated: (1) the effect of measurement position; (2) the effect of measurement pressure; (3) the effect of measurement site; (4) the effect of the measured individual. This fundamental research solved specific problems of human body measurement, and a practical, effective method of noninvasive human blood glucose measurement was proposed.
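    The model-building and prediction steps described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration assuming a single-wavelength Beer-Lambert relation (absorbance = k * concentration + b); real NIR glucose calibration is multivariate (e.g., PLS over many wavelengths), and the data below are invented.

```python
# Minimal sketch of a univariate spectroscopic calibration: fit a linear
# calibration model on known concentrations, then invert it to predict the
# concentration of an unknown sample from its measured absorbance.

def fit_linear(conc, absorb):
    """Ordinary least squares for absorbance = k*conc + b."""
    n = len(conc)
    mc = sum(conc) / n
    ma = sum(absorb) / n
    k = sum((c - mc) * (a - ma) for c, a in zip(conc, absorb)) / \
        sum((c - mc) ** 2 for c in conc)
    b = ma - k * mc
    return k, b

def predict_conc(absorb, k, b):
    """Invert the calibration to predict concentration from absorbance."""
    return (absorb - b) / k

# Synthetic calibration set (hypothetical glucose concentrations, mmol/L)
conc = [2.0, 4.0, 6.0, 8.0, 10.0]
absorb = [0.11, 0.19, 0.31, 0.40, 0.52]  # measured absorbance (made up)

k, b = fit_linear(conc, absorb)
unknown = predict_conc(0.25, k, b)  # predict an unknown from its absorbance
```

    The abstract's point (2) — quantitative accuracy degrading with sample complexity — would show up here as scatter around this fitted line growing as interfering absorbers are added.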

  11. Protective effects of some creatine derivatives in brain tissue anoxia.

    PubMed

    Perasso, Luisa; Lunardi, Gian Luigi; Risso, Federica; Pohvozcheva, Anna V; Leko, Maria V; Gandolfo, Carlo; Florio, Tullio; Cupello, Aroldo; Burov, Sergey V; Balestrino, Maurizio

    2008-05-01

    Some derivatives more lipophilic than creatine, and thus theoretically better able to cross the blood-brain barrier, were studied for their protective effect in mouse hippocampal slices. We found that N-amidino-piperidine is harmful to brain tissue, and that phosphocreatine is ineffective. Creatine, creatine-Mg-complex (acetate) and phosphocreatine-Mg-complex (acetate) increased the latency to population spike disappearance during anoxia. Creatine and creatine-Mg-complex (acetate) also increased the latency of anoxic depolarization, while the delay induced by phosphocreatine-Mg-complex (acetate) was of borderline significance (P = 0.056). Phosphocreatine-Mg-complex (acetate) significantly reduced neuronal hyperexcitability during anoxia, an effect that no other compound (including creatine itself) showed. For all parameters except reduced hyperexcitability, the effects statistically correlated with tissue levels of creatine or phosphocreatine. Summing up, exogenous phosphocreatine and N-amidino-piperidine are not useful for brain protection, while chelates of both creatine and phosphocreatine do replicate some of the known protective effects of creatine. In addition, phosphocreatine-Mg-complex (acetate) also reduced neuronal hyperexcitability during anoxia.

  12. Structure-guided Protein Transition Modeling with a Probabilistic Roadmap Algorithm.

    PubMed

    Maximova, Tatiana; Plaku, Erion; Shehu, Amarda

    2016-07-07

    Proteins are macromolecules in perpetual motion, switching between structural states to modulate their function. A detailed characterization of the precise yet complex relationship between protein structure, dynamics, and function requires elucidating transitions between functionally-relevant states. Doing so challenges both wet and dry laboratories, as protein dynamics involves disparate temporal scales. In this paper we present a novel, sampling-based algorithm to compute transition paths. The algorithm exploits two main ideas. First, it leverages known structures to initialize its search and define a reduced conformation space for rapid sampling. This is key to address the insufficient sampling issue suffered by sampling-based algorithms. Second, the algorithm embeds samples in a nearest-neighbor graph where transition paths can be efficiently computed via queries. The algorithm adapts the probabilistic roadmap framework that is popular in robot motion planning. In addition to efficiently computing lowest-cost paths between any given structures, the algorithm allows investigating hypotheses regarding the order of experimentally-known structures in a transition event. This novel contribution is likely to open up new avenues of research. Detailed analysis is presented on multiple-basin proteins of relevance to human disease. Multiscaling and the AMBER ff14SB force field are used to obtain energetically-credible paths at atomistic detail.
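    The roadmap idea in the abstract — embed samples in a nearest-neighbor graph, then answer transition-path queries — can be sketched as below. This is a hedged toy version: points are 2-D stand-ins for conformations and edge cost is Euclidean distance, whereas the paper samples a reduced conformation space and weights edges energetically.

```python
import heapq
import math

def knn_graph(points, k=3):
    """Connect each sample to its k nearest neighbors (undirected)."""
    graph = {i: [] for i in range(len(points))}
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        for d, j in dists[:k]:
            graph[i].append((j, d))
            graph[j].append((i, d))  # keep the roadmap undirected
    return graph

def lowest_cost_path(graph, start, goal):
    """Dijkstra over the roadmap; returns (cost, node path)."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node]:
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return math.inf, []

points = [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1)]  # hypothetical samples
g = knn_graph(points, k=2)
cost, path = lowest_cost_path(g, 0, 4)
```

    Because the roadmap is built once, many start/goal queries (e.g., testing hypotheses about the order of known structures along a transition) reuse the same graph.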

  13. Young Stellar Populations in MYStIX Star-forming Regions: Candidate Protostars

    NASA Astrophysics Data System (ADS)

    Romine, Gregory; Feigelson, Eric D.; Getman, Konstantin V.; Kuhn, Michael A.; Povich, Matthew S.

    2016-12-01

    The Massive Young Star-Forming Complex in Infrared and X-ray (MYStIX) project provides a new census of stellar members of massive star-forming regions within 4 kpc. Here the MYStIX Infrared Excess catalog and Chandra-based X-ray photometric catalogs are mined to obtain high-quality samples of Class I protostars using criteria designed to reduce extragalactic and Galactic field star contamination. A total of 1109 MYStIX Candidate Protostars (MCPs) are found in 14 star-forming regions. Most are selected from protoplanetary disk infrared excess emission, but 20% are found from their ultrahard X-ray spectra from heavily absorbed magnetospheric flare emission. Two-thirds of the MCP sample is newly reported here. The resulting samples are strongly spatially associated with molecular cores and filaments on Herschel far-infrared maps. This spatial agreement and other evidence indicate that the MCP sample has high reliability with relatively few “false positives” from contaminating populations. But the limited sensitivity and sparse overlap among the infrared and X-ray subsamples indicate that the sample is very incomplete with many “false negatives.” Maps, tables, and source descriptions are provided to guide further study of star formation in these regions. In particular, the nature of ultrahard X-ray protostellar candidates without known infrared counterparts needs to be elucidated.

  14. Antibody profiling sensitivity through increased reporter antibody layering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apel, William A; Thompson, Vicki S

    A method for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an embodiment of the invention, the analyte is a drug, such as marijuana, cocaine, methamphetamine, methyltestosterone, or mesterolone. The method comprises attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to the antigens in the array to form immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, to form an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to the subject's identity.

  15. Antibody profiling sensitivity through increased reporter antibody layering

    DOEpatents

    Apel, William A.; Thompson, Vicki S.

    2013-02-26

    A method for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an embodiment of the invention, the analyte is a drug, such as marijuana, cocaine, methamphetamine, methyltestosterone, or mesterolone. The method comprises attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to the antigens in the array to form immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, to form an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to the subject's identity.

  16. Rapid classification of biological components

    DOEpatents

    Thompson, Vicki S.; Barrett, Karen B.; Key, Diane E.

    2013-10-15

    A method is disclosed for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an illustrative embodiment of the invention, the analyte is a drug, such as marijuana, cocaine, methamphetamine, methyltestosterone, or mesterolone. The method involves attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein the locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to antigens in the array, thereby forming immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, thereby forming an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to a subject's identity.
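    The profile-matching step these patents describe — reading a binding pattern off the array and comparing an unknown sample against profiles from known sources — can be sketched as follows. The binary patterns, the 0.9 match threshold, and the subject names are hypothetical illustrations, not from the patent.

```python
def match_fraction(profile_a, profile_b):
    """Fraction of array positions where two binding patterns agree."""
    assert len(profile_a) == len(profile_b)
    agree = sum(1 for a, b in zip(profile_a, profile_b) if a == b)
    return agree / len(profile_a)

def identify(unknown, known_profiles, threshold=0.9):
    """Return the best-matching known source, or None if below threshold."""
    best = max(known_profiles,
               key=lambda name: match_fraction(unknown, known_profiles[name]))
    if match_fraction(unknown, known_profiles[best]) >= threshold:
        return best
    return None

# Each profile: 1 = immune complex detected at that antigen spot, 0 = not.
known = {
    "subject_A": [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    "subject_B": [0, 1, 1, 0, 1, 1, 0, 0, 1, 0],
}
unknown = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # differs from A at one spot
print(identify(unknown, known))  # prints subject_A (9/10 positions agree)
```

    A coupled assay result (e.g., a drug test spotted on the same array) could then be reported against the identity returned here, which is the positive correlation the patents describe.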

  17. Rapid classification of biological components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Vicki S.; Barrett, Karen B.; Key, Diane E.

    A method is disclosed for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an illustrative embodiment of the invention, the analyte is a drug, such as marijuana, cocaine, methamphetamine, methyltestosterone, or mesterolone. The method involves attaching antigens to the surface of a solid support in a preselected pattern to form an array wherein the locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to antigens in the array, thereby forming immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, thereby forming an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to the subject's identity.

  18. Antibody profiling sensitivity through increased reporter antibody layering

    DOEpatents

    Apel, William A.; Thompson, Vicki S.

    2017-03-28

    A method for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an embodiment of the invention, the analyte is a drug, such as marijuana, cocaine, methamphetamine, methyltestosterone, or mesterolone. The method comprises attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to the antigens in the array to form immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, to form an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to the subject's identity.

  19. Sequential derivatization of polar organic compounds in cloud water using O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride, N,O-bis(trimethylsilyl)trifluoroacetamide, and gas-chromatography/mass spectrometry analysis.

    PubMed

    Sagona, Jessica A; Dukett, James E; Hawley, Harmonie A; Mazurek, Monica A

    2014-10-03

    Cloud water samples from Whiteface Mountain, NY were used to develop a combined sampling and gas chromatography-mass spectrometry (GCMS) protocol for evaluating the complex mixture of highly polar organic compounds (HPOC) present in this atmospheric medium. Specific HPOC of interest were mono- and di-keto-acids, which are thought to originate from photochemical reactions of volatile unsaturated hydrocarbons from biogenic and manmade emissions and to be a major fraction of atmospheric carbon. To measure HPOC mixtures and the individual keto-acids in cloud water, samples first must be derivatized for clean elution and measurement, and second, have low overall background of the target species as validated by GCMS analysis of field and laboratory blanks. Here, we discuss a dual derivatization method with PFBHA and BSTFA which targets only organic compounds that contain functional groups reacting with both reagents. The method also reduced potential contamination by minimizing the amount of sample processing from the field through the GCMS analysis steps. Once derivatized, only gas chromatographic separation and selected ion monitoring (SIM) are needed to identify and quantify the polar organic compounds of interest. Concentrations of the detected total keto-acids in individual cloud water samples ranged from 27.8 to 329.3 ng mL(-1) (ppb). Method detection limits for the individual HPOC ranged from 0.17 to 4.99 ng mL(-1), and the quantification limits for the compounds ranged from 0.57 to 16.64 ng mL(-1). The keto-acids were compared to the total organic carbon (TOC) results for the cloud water samples, with concentrations of 0.607-3.350 mg L(-1) (ppm). GCMS analysis of all samples and blanks indicated good control of the entire collection and analysis steps. Selected ion monitoring by GCMS of target keto-acids was essential for screening the complex organic carbon mixtures present at low ppb levels in cloud water. It was critical for ensuring high levels of quality assurance and quality control and for the correct identification and quantification of key marker compounds. Copyright © 2014 Elsevier B.V. All rights reserved.
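    Method detection limits like those reported above are commonly derived as MDL = t * s, where s is the standard deviation of replicate low-level spike measurements and t is the one-tailed Student's t at 99% confidence for n-1 degrees of freedom. The sketch below is a generic illustration of that convention, not the paper's own calculation; the replicate values are invented and the t table is truncated.

```python
import statistics

# One-tailed 99% Student's t values by degrees of freedom (partial table).
T_99_ONE_TAILED = {6: 3.143, 7: 2.998, 8: 2.896}

def method_detection_limit(replicates):
    """MDL = t(df=n-1, 99%) * stdev of replicate spike measurements."""
    s = statistics.stdev(replicates)
    t = T_99_ONE_TAILED[len(replicates) - 1]
    return t * s

# Seven hypothetical replicate measurements of a low-level spike, ng/mL.
spikes = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49]
mdl = method_detection_limit(spikes)
```

    A quantification limit is then typically set as a fixed multiple of the same standard deviation (e.g., 10*s), which is consistent with the roughly 3x gap between the MDL and quantification-limit ranges quoted in the abstract.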

  20. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient but also reduces manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
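    The idea behind fault-based interpolation — not letting samples across a fault smooth away the displacement — can be illustrated with a toy sketch. This is a generic illustration under strong simplifying assumptions (a straight fault trace, inverse-distance weighting restricted to one side), not the paper's algorithm; the data are invented.

```python
import math

def side_of_fault(p, a, b):
    """Sign of the 2-D cross product: which side of line a-b point p is on."""
    return (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0]) >= 0

def idw(p, samples, fault_a, fault_b, power=2):
    """Inverse-distance weighting using only samples on p's side of the fault."""
    side = side_of_fault(p, fault_a, fault_b)
    num = den = 0.0
    for (x, y, z) in samples:
        if side_of_fault((x, y), fault_a, fault_b) != side:
            continue  # skip points across the fault trace
        d = math.dist(p, (x, y))
        if d == 0:
            return z  # exact hit on a sample point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Hanging wall (y > 0) sits ~20 m lower than the footwall in this toy data.
samples = [(1, 2, 80.0), (3, 2, 82.0),      # hanging wall elevations (m)
           (1, -2, 100.0), (3, -2, 101.0)]  # footwall elevations (m)
fault_a, fault_b = (0, 0), (4, 0)  # fault trace along y = 0
z = idw((2, 1), samples, fault_a, fault_b)
```

    Without the side test, the footwall points would pull the estimate at (2, 1) toward ~90 m, blurring the throw of the fault; with it, the estimate stays on the hanging-wall trend.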

Top