Sample records for sample processing time

  1. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    PubMed

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction of a time series and the estimation of channel state information. Thus, it is important to study its theoretical properties, including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the i.i.d. sample assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  2. Sampling through time and phylodynamic inference with coalescent and birth–death models

    PubMed Central

    Volz, Erik M.; Frost, Simon D. W.

    2014-01-01

    Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173

  3. Process dependency of radiation hardness of rapid thermal reoxidized nitrided gate oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weishin Lu; Kuanchin Lin; Jenngwo Hwu

    The radiation hardness of MOS capacitors with various reoxidized nitrided oxide (RNO) structures is studied by changing the durations of rapid thermal processes during sample preparation and by applying irradiation-then-anneal (ITA) treatments on samples after preparation. It is found that the initial flatband voltage and midgap interface trap density of MOS capacitors exhibit a "turnaround" dependency on the total time of the nitridation and reoxidation processes. For samples with nitrided oxide (NO) structures, the radiation-induced variations of the above parameters are also "turnaround"-dependent on nitridation time. However, when the reoxidation process is performed, the radiation hardness for all samples is gradually improved with increasing reoxidation time no matter what the nitridation time is. The most radiation-hard process for RNO structures is suggested. Finally, it is found that when ITA treatments are applied on samples after preparation, their radiation hardness is much improved.

  4. Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.

    PubMed

    Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David

    2018-05-01

    This study examines the effectiveness of a project to enhance an autoverification (AV) system through application of Six Sigma (DMAIC) process improvement strategies. Similar AV systems set up at three sites underwent examination and modification to produce improved systems, while monitoring the proportions of samples autoverified, the time required for manual review and verification, sample processing time, and the characteristics of tests not autoverified. This information was used to identify areas for improvement and monitor the impact of changes. Use of reference-range-based criteria had the greatest impact on the proportion of tests autoverified. To improve the AV process, reference-range-based criteria were replaced with extreme-value limits based on a 99.5% test result interval, delta check criteria were broadened, and new specimen consistency rules were implemented. Decision guidance tools were also developed to assist staff using the AV system. The mean proportion of tests and samples autoverified improved from <62% for samples and <80% for tests to >90% for samples and >95% for tests across all three sites. The new AV system significantly decreased turnaround time and total sample review time (to about a third); however, time spent on manual review of held samples almost tripled. There was no evidence of compromise to the quality of the testing process, and <1% of samples held for exceeding delta check or extreme limits required corrective action. The Six Sigma (DMAIC) process improvement methodology was successfully applied to AV systems, resulting in an increase in overall test and sample AV to >90%, improved turnaround time, reduced time for manual verification, and no obvious compromise to quality or error detection. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  5. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
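
    A minimal simulation sketch of the sampling operations described above. The iid exponential node depths and the uniform draw of the sampling probability Y are placeholders for illustration only (the paper derives the actual depth law and the explicit distribution of Y); the max-over-skipped-nodes rule for the reduced tree is the standard CPP property.

      import numpy as np

      rng = np.random.default_rng(0)

      def cpp_depths(n_tips, rate=1.0):
          """Depths of the n_tips-1 interior nodes of a CPP tree.

          Illustrative choice: iid exponential depths (the true depth
          law depends on the birth/death rates of the branching process).
          """
          return rng.exponential(1.0 / rate, size=n_tips - 1)

      def bernoulli_sample(depths, y):
          """Reduce a CPP to the tree spanned by Bernoulli(y)-sampled tips.

          In an ultrametric CPP, the coalescence depth between two
          consecutive retained tips is the maximum of the depths of the
          interior nodes between them.
          """
          n_tips = len(depths) + 1
          keep = np.flatnonzero(rng.random(n_tips) < y)
          reduced = [depths[a:b].max() for a, b in zip(keep[:-1], keep[1:])]
          return np.array(reduced)

      # k-sample as a mixture: draw a random sampling probability Y first,
      # then Bernoulli-sample with that probability.  A uniform draw is
      # only a stand-in for the paper's explicit distribution of Y.
      depths = cpp_depths(n_tips=1000)
      Y = rng.uniform()
      print(bernoulli_sample(depths, Y)[:5])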

  6. Automatic sample changer control software for automation of neutron activation analysis process in Malaysian Nuclear Agency

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Ibrahim, M. M.; Rahman, N. A. A.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.; Lombigit, L.; Azman, A.; Omar, S. A.

    2018-01-01

    Most of the procedures in the neutron activation analysis (NAA) process that have been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s are performed manually. These manual procedures, carried out by the NAA laboratory personnel, are time consuming and inefficient, especially the sample counting and measurement process. The sample needs to be changed and the measurement software needs to be set up for every one-hour counting time, and both of these procedures are performed manually for every sample. Hence, an automatic sample changer (ASC) system consisting of hardware and software was developed to automate the sample counting process for up to 30 samples consecutively. This paper describes the ASC control software for the NAA process, which is designed and developed to control the ASC hardware and call the GammaVision software for sample measurement. The software is developed using the National Instruments LabVIEW development package.

  7. Assessment of laboratory test utilization for HIV/AIDS care in urban ART clinics of Lilongwe, Malawi.

    PubMed

    Palchaudhuri, Sonali; Tweya, Hannock; Hosseinipour, Mina

    2014-06-01

    The 2011 Malawi HIV guidelines promote CD4 monitoring for pre-ART assessment and suggest considering HIVRNA monitoring for ART response assessment, while some clinics used CD4 for both. We assessed clinical ordering practices as compared to guidelines, and determined whether the samples were successfully and promptly processed. We conducted a retrospective review of all patients seen from August 2010 through July 2011 in two urban HIV-care clinics that utilized 6-monthly CD4 monitoring regardless of ART status. We calculated the percentage of patients on whom clinicians ordered CD4 or HIVRNA analysis. For all samples sent, we determined rates of successful lab processing and mean time to returned results. Of 20,581 patients seen, 8,029 (39%) had at least one blood draw for CD4 count. Among pre-ART patients, 2,668/2,844 (93.8%) had CD4 counts performed for eligibility. Of all CD4 samples sent, 8,082/9,207 (89%) were successfully processed. Of those, mean time to processing was 1.6 days (s.d. 1.5), but mean time to results being available to the clinician was 9.3 days (s.d. 3.7). Regarding HIVRNA, 172 patients of 17,737 on ART had a blood draw, and only 118/213 (55%) samples were successfully processed. Mean processing time was 39.5 days (s.d. 21.7); mean time to results being available to the clinician was 43.1 days (s.d. 25.1). During the one year evaluated, there were multiple lapses in processing HIVRNA samples for up to 2 months. Clinicians underutilize CD4 and HIVRNA as monitoring tools in HIV care. Laboratory processing failures and turnaround times are unacceptably high for viral load analysis. Alternative strategies need to be considered in order to meet laboratory monitoring needs.

  8. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
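
    As a small illustration of the chemodynamic screening idea, the sketch below estimates just one of the loss processes named above: equilibrium partitioning into the bottle headspace, computed from a dimensionless Henry's law constant. The constant (roughly that of toluene at 25°C) and the bottle volumes are assumed values, not figures from the paper.

      # Equilibrium volatilization screen: C_air / C_water = H implies
      # that the fraction of analyte mass in the headspace is
      # f = H*Vh / (Vw + H*Vh).

      def headspace_loss_fraction(H_dimensionless, v_water_mL, v_head_mL):
          """Fraction of analyte mass in the headspace at equilibrium."""
          return (H_dimensionless * v_head_mL) / (
              v_water_mL + H_dimensionless * v_head_mL)

      # A 1 L bottle filled to 950 mL leaves a 50 mL headspace; H ~ 0.27
      # (dimensionless, toluene-like) is an illustrative assumption.
      print(f"{headspace_loss_fraction(0.27, 950.0, 50.0):.1%}")  # ~1.4%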

  9. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    NASA Astrophysics Data System (ADS)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness; thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point in one clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used for storing sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6,250,000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
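
    A toy software rendition of the waveform-mapping idea (the paper implements TWM in dedicated hardware): fold repeated acquisitions of a repetitive signal into a time-versus-amplitude hit count whose normalized value drives display brightness, so rare events show up as dim traces. The noisy sine test signal, array sizes, and noise level are arbitrary assumptions.

      import numpy as np

      SAMPLES_PER_WINDOW, AMP_BINS = 500, 256
      t = np.linspace(0, 2 * np.pi, SAMPLES_PER_WINDOW)
      hits = np.zeros((AMP_BINS, SAMPLES_PER_WINDOW), dtype=np.uint32)

      rng = np.random.default_rng(1)
      for _ in range(10_000):                      # 10k trigger windows
          wave = np.sin(t) + 0.05 * rng.standard_normal(SAMPLES_PER_WINDOW)
          rows = np.clip(((wave + 1.5) / 3.0 * AMP_BINS).astype(int),
                         0, AMP_BINS - 1)
          hits[rows, np.arange(SAMPLES_PER_WINDOW)] += 1

      brightness = hits / hits.max()               # third display dimension
      print(brightness.shape, brightness.max())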

  10. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping; School of Electrical and Electronic Engineering, Nanyang Technological University

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness; thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point in one clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used for storing sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6,250,000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  11. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
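
    A minimal numerical sketch of the equivalent-ensemble procedure under stated assumptions: an AR(1) surrogate in place of real turbulence data, and a simple peak-to-peak drift statistic instead of the paper's formal variance tests.

      import numpy as np

      rng = np.random.default_rng(2)
      x = np.empty(200_000)
      x[0] = 0.0
      for i in range(1, x.size):              # stationary AR(1) surrogate
          x[i] = 0.9 * x[i - 1] + rng.standard_normal()

      # Segment one long record into M sample records of length L and
      # treat same-offset samples across segments as an ensemble.  L is
      # chosen much longer than the decorrelation time so the records
      # are approximately statistically independent.
      M, L = 200, 1000
      ensemble = x[: M * L].reshape(M, L)

      mean_t = ensemble.mean(axis=0)          # equivalent-ensemble averages
      var_t = ensemble.var(axis=0)

      # Weak stationarity suggests mean_t and var_t are flat in time.
      print("mean drift:", np.ptp(mean_t), " var drift:", np.ptp(var_t))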

  12. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR

    PubMed Central

    Mobli, Mehdi; Hoch, Jeffrey C.

    2017-01-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
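
    For concreteness, one common way to build such a nonuniform sampling schedule for an indirect dimension is exponentially biased random selection of increments, weighting early times where the decaying signal is strongest. The grid size, retained fraction, and decay constant below are assumptions for illustration; this is just one of the many schedule families the review surveys.

      import numpy as np

      rng = np.random.default_rng(3)
      N, keep = 256, 64                       # keep 25% of 256 increments
      t = np.arange(N)
      w = np.exp(-t / (N / 3.0))              # match an assumed decay rate
      schedule = np.sort(rng.choice(N, size=keep, replace=False,
                                    p=w / w.sum()))
      print(schedule)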

  13. Advancing microwave technology for dehydration processing of biologics.

    PubMed

    Cellemme, Stephanie L; Van Vorst, Matthew; Paramore, Elisha; Elliott, Gloria D

    2013-10-01

    Our prior work has shown that microwave processing can be effective as a method for dehydrating cell-based suspensions in preparation for anhydrous storage, yielding homogeneous samples with predictable and reproducible drying times. In the current work, an optimized microwave-based drying process was developed that expands upon this previous proof-of-concept. Utilization of a commercial microwave (CEM SAM 255, Matthews, NC) enabled continuous drying at variable low power settings. A new turntable was manufactured from Ultra High Molecular Weight Polyethylene (UHMW-PE; Grainger, Lake Forest, IL) to provide for drying of up to 12 samples at a time. The new process enabled rapid and simultaneous drying of multiple samples in containment devices suitable for long-term storage and aseptic rehydration of the sample. To determine sample repeatability and consistency of drying within the microwave cavity, a concentration series of aqueous trehalose solutions was dried for specific intervals and water content assessed using Karl Fischer titration at the end of each processing period. Samples were dried on Whatman S-14 conjugate release filters (Whatman, Maidstone, UK), a glass fiber membrane used currently in clinical laboratories. The filters were cut to size for use in a 13 mm Swinnex® syringe filter holder (Millipore™, Billerica, MA). Samples of 40 μL volume could be dehydrated to the equilibrium moisture content by continuous processing at 20% power with excellent sample-to-sample repeatability. The microwave-assisted procedure enabled high-throughput, repeatable drying of multiple samples, in a manner easily adaptable for drying a wide array of biological samples. Depending on the tolerance for sample heating, the drying time can be altered by changing the power level of the microwave unit.

  14. ISOLOK VALVE ACCEPTANCE TESTING FOR DWPF SME SAMPLING PROCESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Hera, K.; Coleman, C.

    2011-12-05

    Evaluation of the Defense Waste Processing Facility (DWPF) Chemical Process Cell (CPC) cycle time identified several opportunities to improve the CPC processing time. Of these opportunities, a focus area related to optimizing the equipment and efficiency of the sample turnaround time for the DWPF Analytical Laboratory was identified. The Mechanical Systems & Custom Equipment Development (MS&CED) Section of the Savannah River National Laboratory (SRNL) evaluated the possibility of using an Isolok® sampling valve as an alternative to the Hydragard® valve for taking process samples. Previous viability testing was conducted with favorable results using the Isolok sampler and reported in SRNL-STI-2010-00749 (1). This task has the potential to improve operability, reduce maintenance time, and decrease CPC cycle time. This report summarizes the results from acceptance testing which was requested in Task Technical Request (TTR) HLW-DWPF-TTR-2010-0036 (2) and which was conducted as outlined in Task Technical and Quality Assurance Plan (TTQAP) SRNL-RP-2011-00145 (3). The Isolok to be tested is the same model which was tested, qualified, and installed in the Sludge Receipt Adjustment Tank (SRAT) sample system. RW-0333P QA requirements apply to this task. This task was to qualify the Isolok sampler for use in the DWPF Slurry Mix Evaporator (SME) sampling process. The Hydragard, which is the current baseline sampling method, was used for comparison to the Isolok sampling data. The Isolok sampler is an air-powered grab sampler used to 'pull' a sample volume from a process line. The operation of the sampler is shown in Figure 1: the image on the left shows the Isolok's spool extended into the process line, and the image on the right shows the sampler retracted and then dispensing the liquid into the sampling container. To determine tank homogeneity, a Coliwasa sampler was used to grab samples at a high and a low location within the mixing tank. Data from the two locations were compared to determine if the contents of the tank were well mixed. The Coliwasa sampler is a tube with a stopper at the bottom and is designed to obtain grab samples from specific locations within the drum contents. A position paper (4) was issued to address the prototypic flow loop issues and simulant selections. A statistically designed plan (5) was issued to address the total number of samples each sampler needed to pull, to provide the random order in which samples were pulled, and to group samples for elemental analysis. The TTR required that the Isolok sampler perform as well as the Hydragard sampler during these tests to ensure the acceptability of the Isolok sampler for use in the DWPF sampling cells. Procedure No. L9.4-5015 was used to document the sample parameters and process steps. Completed procedures are located in R&D Engineering job folder 23269.

  15. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of a discrete time series of a diffusion process. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show a good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis, and it shows that the detected sampling rate is different from the conventional rate.
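
    A simplified sketch in the same spirit, under strong assumptions: the data come from an Ornstein-Uhlenbeck diffusion with known parameters, and candidate sampling intervals are scored by the exact discrete-time Gaussian log-likelihood rather than the paper's MLE-plus-Kullback-Leibler criterion. It conveys only the flavor of likelihood-based sampling-rate detection.

      import numpy as np

      rng = np.random.default_rng(4)
      theta, sigma, dt_true = 1.0, 0.5, 0.10

      # Exact AR(1) simulation of the OU chain dX = -theta*X dt + sigma dW
      # observed at the (unknown-to-the-detector) true interval dt_true.
      a = np.exp(-theta * dt_true)
      s = sigma * np.sqrt((1 - a * a) / (2 * theta))
      x = np.empty(50_000)
      x[0] = 0.0
      for i in range(1, x.size):
          x[i] = a * x[i - 1] + s * rng.standard_normal()

      def loglik(dt):
          """Exact transition log-likelihood if the interval were dt."""
          a = np.exp(-theta * dt)
          v = sigma**2 * (1 - a * a) / (2 * theta)
          r = x[1:] - a * x[:-1]
          return -0.5 * np.sum(r * r / v + np.log(2 * np.pi * v))

      grid = np.linspace(0.02, 0.30, 29)
      print("detected dt ~", grid[np.argmax([loglik(g) for g in grid])])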

  16. Impact of temperature and time storage on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization method.

    PubMed

    do Nascimento, Cássio; dos Santos, Janine Navarro; Pedrazzi, Vinícius; Pita, Murillo Sucena; Monesi, Nadia; Ribeiro, Ricardo Faria; de Albuquerque, Rubens Ferreira

    2014-01-01

    Molecular diagnostic methods have been largely used in epidemiological and clinical studies to detect and quantify microbial species that may colonize the oral cavity in health or disease. The preservation of genetic material from samples remains the major challenge to ensure the feasibility of these methodologies, and long-term storage may compromise the final result. The aim of this study was to evaluate the effect of temperature and time of storage on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization. Saliva and supragingival biofilm were taken from 10 healthy subjects, aliquoted (n=364) and processed according to the proposed protocols: immediate processing, or processing after 2 or 4 weeks, or 6 or 12 months of storage at 4°C, -20°C and -80°C. Both total and individual microbial counts were lower for samples processed after 12 months of storage, irrespective of the temperatures tested. Samples stored up to 6 months at cold temperatures showed counts similar to those immediately processed. The microbial incidence was also significantly reduced in samples stored for 12 months at all temperatures. Temperature and time of oral sample storage have a relevant impact on the detection and quantification of bacterial and fungal species by the Checkerboard DNA-DNA hybridization method. Samples should be processed immediately after collection or within 6 months if conserved at cold temperatures to avoid false-negative results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    PubMed

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  18. Sampling and sample processing in pesticide residue analysis.

    PubMed

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  19. Role of aging time on the magnetic properties of Sm2Co17 permanent magnets processed through cold isostatic pressing

    NASA Astrophysics Data System (ADS)

    Ramudu, M.; Rajkumar, D. M.

    2018-04-01

    The effect of aging time on the magnetic properties of Sm2Co17 permanent magnets processed through a novel method of cold isostatic pressing was investigated. Sintered Sm2Co17 samples were subjected to different aging times in the range of 10-30 h, and their respective microstructures were correlated with the magnetic properties obtained. The values of remanent magnetization (Br) were observed to be constant in samples aged from 10 to 20 h, beyond which a gradual decrease in Br values was observed. The values of coercivity (Hc) displayed a sharp increase in samples aged from 10 to 20 h, beyond which the coercivity values showed only marginal improvement. Hence, a good combination of magnetic properties could be achieved in samples aged for 20 h. A maximum energy product of 27 MGOe was achieved in the 20 h aged sample processed through this novel route.

  20. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    DTIC Science & Technology

    2016-05-12

    ... proved. Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes ... Keywords: mathematical statistics; time series; Markov chains; random processes.

  1. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.

    PubMed

    Mobli, Mehdi; Hoch, Jeffrey C

    2014-11-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Sampling Operations on Big Data

    DTIC Science & Technology

    2015-11-29

    ...categories. These include edge sampling methods, where edges are selected by a predetermined criterion, and snowball sampling methods, where algorithms start ... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and ...

  3. Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture

    DTIC Science & Technology

    2016-07-10

    ...different array elements at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal ... reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time necessary to compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the ...

  4. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
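
    The replicate-based accounting can be sketched in a few lines. The shift and noise magnitudes below (a 0.03 log holding effect and 0.15 log replicate noise) are invented for illustration and are not the study's estimates; only the 159 x 8 design mirrors the abstract.

      import numpy as np

      rng = np.random.default_rng(5)
      true_log = rng.uniform(1, 4, size=159)          # true log10 counts
      immediate = true_log[:, None] + 0.15 * rng.standard_normal((159, 8))
      held = true_log[:, None] - 0.03 + 0.15 * rng.standard_normal((159, 8))

      # Compare replicate means per sample: how big is the holding effect,
      # and how often does a sample shift by a full log?
      diff = held.mean(axis=1) - immediate.mean(axis=1)
      print("RMSE immediate-vs-held:", np.sqrt(np.mean(diff**2)))
      print("samples changing >= 1 log:", int(np.sum(np.abs(diff) >= 1)))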

  5. Frequency-time coherence for all-optical sampling without optical pulse source

    PubMed Central

    Preußler, Stefan; Raoof Mehrpoor, Gilda; Schneider, Thomas

    2016-01-01

    Sampling is the first step to convert an analogue optical signal into a digital electrical signal. The latter can be further processed and analysed by well-known electrical signal processing methods. Optical pulse sources like mode-locked lasers are commonly incorporated for all-optical sampling, but have several drawbacks. A novel approach for simple all-optical sampling is to utilise the frequency-time coherence of each signal. The method is based on using only two coupled modulators driven with an electrical sine wave. Since no optical pulse source is required, simple integration in appropriate platforms, such as silicon photonics, might be possible. The presented method enables all-optical sampling with electrically tunable bandwidth, repetition rate and time shift. PMID:27687495

  6. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results; furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.

  7. BACKWARD ESTIMATION OF STOCHASTIC PROCESSES WITH FAILURE EVENTS AS TIME ORIGINS

    PubMed Central

    Gary Chan, Kwun Chuen; Wang, Mei-Cheng

    2011-01-01

    Stochastic processes often exhibit sudden systematic changes in pattern a short time before certain failure events. Examples include increases in medical costs before death and decreases in CD4 counts before AIDS diagnosis. To study such terminal behavior of stochastic processes, a natural and direct way is to align the processes using failure events as time origins. This paper studies backward stochastic processes that count time backward from failure events, and proposes one-sample nonparametric estimation of the mean of backward processes when follow-up is subject to left truncation and right censoring. We discuss the benefits of including prevalent cohort data to enlarge the identifiable region, as well as large-sample properties of the proposed estimator and related extensions. A SEER-Medicare linked data set is used to illustrate the proposed methodologies. PMID:21359167

  8. Decision making and sequential sampling from memory

    PubMed Central

    Shadlen, Michael N.; Shohamy, Daphna

    2016-01-01

    Decisions take time, and as a rule more difficult decisions take more time. But this only raises the question of what consumes the time. For decisions informed by a sequence of samples of evidence, the answer is straightforward: more samples are available with more time. Indeed the speed and accuracy of such decisions are explained by the accumulation of evidence to a threshold or bound. However, the same framework seems to apply to decisions that are not obviously informed by sequences of evidence samples. Here we proffer the hypothesis that the sequential character of such tasks involves retrieval of evidence from memory. We explore this hypothesis by focusing on value-based decisions and argue that mnemonic processes can account for regularities in choice and decision time. We speculate on the neural mechanisms that link sampling of evidence from memory to circuits that represent the accumulated evidence bearing on a choice. We propose that memory processes may contribute to a wider class of decisions that conform to the regularities of choice-reaction time predicted by the sequential sampling framework. PMID:27253447
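
    The sequential-sampling framework the authors invoke is easy to make concrete. The sketch below simulates a standard drift-diffusion decision in which evidence samples accumulate to a bound, so harder decisions (smaller drift) take longer; the drift, bound, and noise values are arbitrary illustrative choices, not parameters from the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def trial(drift, bound=1.0, dt=0.001, noise=1.0):
          """One decision: accumulate noisy evidence until |x| hits bound."""
          x, t = 0.0, 0.0
          while abs(x) < bound:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return t, x > 0                      # decision time, choice

      for drift in (0.5, 2.0):                 # hard vs easy decision
          times, correct = zip(*(trial(drift) for _ in range(500)))
          print(f"drift={drift}: mean RT={np.mean(times):.2f}s, "
                f"accuracy={np.mean(correct):.2f}")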

  9. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
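
    A certainty-equivalent sketch of the setting, assuming a double-integrator plant and uniformly distributed hold intervals: the feedback gain comes from a Riccati recursion on the *expected* discretized dynamics over the interval distribution. The paper derives the genuinely optimal expected-cost law; this shortcut only illustrates regulation under stochastic sample times.

      import numpy as np

      rng = np.random.default_rng(7)
      A = np.array([[0.0, 1.0], [0.0, 0.0]])   # continuous double integrator
      B = np.array([[0.0], [1.0]])
      Q, R = np.eye(2), np.array([[0.1]])

      def discretize(dt):
          """Zero-order-hold discretization (exact for this A, B)."""
          Ad = np.array([[1.0, dt], [0.0, 1.0]])
          Bd = np.array([[0.5 * dt * dt], [dt]])
          return Ad, Bd

      # Expected dynamics over dt ~ Uniform(0.05, 0.15), by Monte Carlo.
      draws = [discretize(dt) for dt in rng.uniform(0.05, 0.15, 2000)]
      EA = np.mean([d[0] for d in draws], axis=0)
      EB = np.mean([d[1] for d in draws], axis=0)

      P = Q.copy()
      for _ in range(500):                      # backward Riccati recursion
          K = np.linalg.solve(R + EB.T @ P @ EB, EB.T @ P @ EA)
          P = Q + EA.T @ P @ (EA - EB @ K)

      x = np.array([[1.0], [0.0]])
      for _ in range(100):                      # closed loop with random dt
          Ad, Bd = discretize(rng.uniform(0.05, 0.15))
          x = Ad @ x + Bd @ (-K @ x)
      print("final state norm:", float(np.linalg.norm(x)))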

  10. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
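
    The Markov-chain bookkeeping described above reduces to a short linear-algebra loop. The component list, release probabilities, transport matrix, and step count below are placeholders for illustration, not mission values.

      import numpy as np

      components = ["bit", "arm", "sample"]
      v = np.array([100.0, 10.0, 0.0])          # expected VEMs at start

      release = np.array([0.01, 0.005, 0.0])    # P(release) per step
      # T[i, j] = P(a released VEM moves from component i to component j);
      # row sums below 1 mean the remainder is lost to the environment.
      T = np.array([[0.0, 0.2, 0.1],
                    [0.1, 0.0, 0.05],
                    [0.0, 0.0, 0.0]])

      for _ in range(20):                       # 20 discrete sampling steps
          released = v * release
          v = v - released + T.T @ released     # Markov-chain update

      print(dict(zip(components, np.round(v, 3))))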

  11. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
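
    The two sampling schemes being compared can be stated in a few lines each. The toy sequence, k, window size, and step below are arbitrary, and real pipelines hash k-mers rather than comparing strings lexicographically.

      def fixed_sampling(seq, k, step):
          """Keep every step-th k-mer position."""
          return sorted(range(0, len(seq) - k + 1, step))

      def minimizer_sampling(seq, k, w):
          """Keep, for each window of w consecutive k-mers, the position
          of the lexicographically smallest one (the minimizer)."""
          kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
          picked = set()
          for start in range(len(kmers) - w + 1):
              window = range(start, start + w)
              picked.add(min(window, key=lambda i: kmers[i]))
          return sorted(picked)

      seq = "ACGTACGTTAGCATCGATCGGATCCA"
      print("fixed:    ", fixed_sampling(seq, k=5, step=4))
      print("minimizer:", minimizer_sampling(seq, k=5, w=4))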

  12. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  13. Effects of delayed laboratory processing on platelet serotonin levels.

    PubMed

    Sanner, Jennifer E; Frazier, Lorraine; Udtha, Malini

    2013-01-01

    Despite the availability of established guidelines for measuring platelet serotonin, these guidelines may be difficult to follow in a hospital setting where time to processing may vary from sample to sample. The purpose of this study was to evaluate the effect of the time to processing of human blood samples on the stability of the enzyme-linked immunosorbent assay (ELISA) for the determination of platelet serotonin levels in human plasma. Human blood samples collected from a convenience sample of eight healthy volunteers were analyzed to determine platelet serotonin levels from plasma collected in ethylenediaminetetraacetic acid (EDTA) tubes and stored at 4°C for 3 hr, 5 hr, 8 hr, and 12 hr. Refrigeration storage at 4°C for 3 hr, 5 hr, 8 hr, and 12 hr altered the platelet serotonin measurement when compared to immediate processing. The bias for the samples stored at 4°C for 3 hr was 102.3 (±217.39 ng/10^9 platelets), for 5 hr was 200.1 (±132.76 ng/10^9 platelets), for 8 hr was 146.9 (±221.41 ng/10^9 platelets), and for 12 hr was -67.6 (±349.60 ng/10^9 platelets). Results from this study show that accurate measurement of platelet serotonin levels is dependent on time to processing. Researchers should therefore follow a standardized laboratory guideline for obtaining immediate platelet serotonin levels after blood sample collection.

  14. Experimental Investigations of Non-Stationary Properties In Radiometer Receivers Using Measurements of Multiple Calibration References

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)

    2002-01-01

    Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes; thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.

  15. Precise turnaround time measurement of laboratory processes using radiofrequency identification technology.

    PubMed

    Mayer, Horst; Brümmer, Jens; Brinkmann, Thomas

    2011-01-01

    To implement Lean Six Sigma in our central laboratory, we conducted a project to measure the single pre-analytical steps influencing the turnaround time (TAT) of emergency department (ED) serum samples. The traditional approach of extracting data from the Laboratory Information System (LIS) for a retrospective calculation of a mean TAT is not suitable for this. Therefore, we used radiofrequency identification (RFID) chips for real-time tracking of individual samples at every pre-analytical step. 1,200 serum tubes were labelled with RFID chips and were provided to the emergency department. Three RFID receivers were installed in the laboratory: at the outlet of the pneumatic tube system, at the centrifuge, and in the analyser area. In addition, time stamps of sample entry at the automated sample distributor and of communication of results from the analyser were collected from the LIS. 1,023 labelled serum tubes arrived at our laboratory, and 899 RFID tags were used for TAT calculation. The following transfer times were determined (median/95th percentile in min:sec): pneumatic tube system --> centrifuge (01:25/04:48), centrifuge --> sample distributor (14:06/5:33), sample distributor --> analysis system zone (02:39/15:07), analysis system zone --> result communication (12:42/22:21). Total TAT was calculated at 33:19/57:40 min:sec. Manual processes around centrifugation were identified as a major part of TAT, with 44%/60% (median/95th percentile). RFID is a robust, easy to use, and error-free technology and is not susceptible to interferences in the laboratory environment. With this study design we were able to measure significant variations in a single manual sample transfer process. We showed that TAT is mainly influenced by manual steps around the centrifugation process, and we concluded that centrifugation should be integrated into solutions for total laboratory automation.
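
    Given per-tube timestamps of the kind RFID tracking provides, the reported median/95th-percentile step durations are a one-line computation per step. The synthetic gamma-distributed timings and step names below are schematic stand-ins for the real data layout.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 899                                   # tubes used for TAT, per the study
      arrival = np.zeros(n)                     # toy timestamps, in seconds
      centrifuge = arrival + rng.gamma(2.0, 50.0, n)
      distributor = centrifuge + rng.gamma(3.0, 120.0, n)
      result = distributor + rng.gamma(4.0, 150.0, n)

      for name, t0, t1 in [("tube -> centrifuge", arrival, centrifuge),
                           ("centrifuge -> distributor", centrifuge, distributor),
                           ("distributor -> result", distributor, result)]:
          d = t1 - t0
          med, p95 = np.percentile(d, [50, 95]) / 60.0
          print(f"{name}: median {med:.1f} min, 95th pct {p95:.1f} min")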

  16. An analysis of the impact of pre-analytical factors on the urine proteome: Sample processing time, temperature, and proteolysis.

    PubMed

    Hepburn, Sophie; Cairns, David A; Jackson, David; Craven, Rachel A; Riley, Beverley; Hutchinson, Michelle; Wood, Steven; Smith, Matthew Welberry; Thompson, Douglas; Banks, Rosamonde E

    2015-06-01

    We have examined the impact of sample processing time delay, temperature, and the addition of protease inhibitors (PIs) on the urinary proteome and peptidome, an important aspect of biomarker studies. Ten urine samples from patients with varying pathologies were each divided, with PIs added to one-half, and aliquots of each were then processed and frozen immediately, or after a delay of 6 h at 4°C or room temperature (20-22°C), effectively yielding 60 samples in total. Samples were then analyzed by 2D-PAGE, SELDI-TOF-MS, and immunoassay. Interindividual variability in profiles was the dominant feature in all analyses. Minimal changes were observed by 2D-PAGE as a result of delay in processing, temperature, or PIs, and no changes were seen in IgG, albumin, β2-microglobulin, or α1-microglobulin measured by immunoassay. Analysis of peptides showed clustering of some samples by presence/absence of PIs, but the extent was very patient-dependent, with most samples showing minimal effects. The extent of processing-induced changes and the benefit of PI addition are patient- and sample-dependent. A consistent processing methodology is essential within a study to avoid any confounding of the results. © 2014 The Authors. PROTEOMICS Clinical Applications published by Wiley-VCH Verlag GmbH & Co. KGaA.

  17. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data.

    PubMed

    Matsui, Shigeyuki

    2005-05-15

    This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for the sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test comparing the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rate, and also time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided. Copyright 2004 John Wiley & Sons, Ltd.
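
    A textbook normal-approximation version of such a calculation, inflating a two-sample Poisson-rate formula by an overdispersion factor. This is a simpler cousin of the paper's formula for its nonparametric rate-function test, shown only to fix ideas; the rates, follow-up time, and phi below are hypothetical.

      from scipy.stats import norm

      def n_per_arm(rate1, rate2, t, phi=1.0, alpha=0.05, power=0.8):
          """Approximate patients per arm to detect rate1 vs rate2.

          Events per patient over follow-up t have variance rate*t,
          inflated by the overdispersion factor phi.
          """
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return phi * z**2 * (rate1 + rate2) / (t * (rate1 - rate2) ** 2)

      # e.g. 1.0 vs 0.7 events/patient-year, 2 years follow-up, phi = 1.5
      print(round(n_per_arm(1.0, 0.7, t=2.0, phi=1.5)))   # ~111 per arm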

  18. Detectability of Granger causality for subsampled continuous-time neurophysiological processes.

    PubMed

    Barnett, Lionel; Seth, Anil K

    2017-01-01

    Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity from neurophysiological recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
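
    A toy discrete-time demonstration of the subsampling effect, using a bivariate AR(1) system with a known 1 -> 2 link and a one-lag Granger statistic (log ratio of restricted to full residual variance). The coefficients are arbitrary assumptions, and the paper's analysis is in continuous time; the point here is only that the statistic shrinks as the subsampling factor grows.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 200_000
      x = np.zeros((n, 2))
      for i in range(1, n):                    # channel 0 drives channel 1
          x[i, 0] = 0.6 * x[i - 1, 0] + rng.standard_normal()
          x[i, 1] = 0.6 * x[i - 1, 1] + 0.4 * x[i - 1, 0] \
                    + rng.standard_normal()

      def gc_1_to_2(y):
          """One-lag Granger statistic for channel 0 -> channel 1."""
          past = y[:-1]                        # both channels, one lag
          target = y[1:, 1]
          full, *_ = np.linalg.lstsq(past, target, rcond=None)
          res_full = target - past @ full
          own = past[:, 1:]                    # own past only
          red = own @ np.linalg.lstsq(own, target, rcond=None)[0]
          return np.log(np.var(target - red) / np.var(res_full))

      for k in (1, 2, 5, 10):                  # keep every k-th sample
          print(f"subsample x{k}: GC = {gc_1_to_2(x[::k]):.4f}")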

  19. Time lens assisted photonic sampling extraction

    NASA Astrophysics Data System (ADS)

    Petrillo, Keith Gordon

    Telecommunication bandwidth demands have increased dramatically in recent years due to Internet-based services such as cloud computing and storage, large file sharing, and video streaming. In addition, sensing systems such as wideband radar and magnetic resonance imaging systems, and the complex modulation formats used to handle large data transfers in telecommunications, require high-speed, high-resolution analog-to-digital converters (ADCs) to interpret the data. Accurately acquiring and processing information at next-generation data rates has become challenging for electronic systems. The largest contributors to the electronic bottleneck are bandwidth and timing jitter, which limit speed and reduce accuracy. Optical systems have been shown to offer at least three orders of magnitude more bandwidth, and state-of-the-art mode-locked lasers have reduced timing jitter to thousands of attoseconds. Such features have encouraged processing signals without the use of electronics, or using photonics to assist electronics. All-optical signal processing has enabled the processing of telecommunication line rates up to 1.28 Tb/s and high-resolution analog-to-digital conversion in the tens of gigahertz. The major drawback of these optical systems is the high cost of the components. The application of all-optical processing techniques such as a time lens and chirped processing can greatly reduce the bandwidth and cost requirements of optical serial-to-parallel converters and push photonically assisted ADCs into the hundreds of gigahertz. In this dissertation, the building blocks of a high-speed photonically assisted ADC are demonstrated, each providing benefits in its own respective application. A serial-to-parallel converter using a continuously operating time lens as an optical Fourier processor is demonstrated to fully convert a 160-Gb/s optical time division multiplexed signal to 16 10-Gb/s channels with error-free operation. Using chirped processing, an optical sample-and-hold concept is demonstrated and analyzed as a resolution improvement to existing photonically assisted ADCs. Simulations indicate that applying a continuously operating time lens to a photonically assisted sampling system can increase the sampling rate by an order of magnitude while acquiring properties similar to an optical sample-and-hold system.

  20. Collection, transport and general processing of clinical specimens in Microbiology laboratory.

    PubMed

    Sánchez-Romero, M Isabel; García-Lechuz Moya, Juan Manuel; González López, Juan José; Orta Mira, Nieves

    2018-02-06

    The interpretation and the accuracy of microbiological results still depend to a great extent on the quality of the samples and their processing within the Microbiology laboratory. The type of specimen, the appropriate time to obtain the sample, the way of sampling, and the storage and transport conditions are critical points in the diagnostic process. The availability of new laboratory techniques for unusual pathogens makes it necessary to review and update all the steps involved in the processing of samples. Nowadays, laboratory automation and the availability of rapid techniques provide the precision and turnaround time necessary to help clinicians in decision making. To be efficient, it is very important to obtain clinical information so that the best diagnostic tools can be used. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  1. Two-Dimensional Mathematical Modeling of the Pack Carburizing Process

    NASA Astrophysics Data System (ADS)

    Sarkar, S.; Gupta, G. S.

    2008-10-01

    Pack carburization is the oldest of the case-hardening treatments, and sufficient attempts have not been made to understand this process in terms of heat and mass transfer, the effect of alloying elements, the dimensions of the sample, etc. Thus, in this study a two-dimensional mathematical model in cylindrical coordinates is developed to simulate the pack carburization process for chromium-bearing steel. Heat and mass balance equations are solved simultaneously, where the surface temperature of the sample varies with time but the carbon potential at the surface remains constant during the process. The fully implicit finite volume technique is used to solve the governing equations. Good agreement has been found between the predicted and published data. The effects of temperature, carburizing time, sample dimensions, etc. on the pack carburizing process show some interesting results. It is found that the two-dimensional model gives better insight into the carburizing process.
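
    The diffusion side of such a model can be illustrated with a much simpler sketch: a one-dimensional explicit finite-difference solution of carbon diffusion with a constant surface carbon potential. The paper solves coupled heat and mass balances with a fully implicit finite volume scheme in 2D; the diffusivity and boundary values below are assumed, not taken from the paper.

        import numpy as np

        D = 3.0e-11            # carbon diffusivity in austenite, m^2/s (assumed)
        L, nx = 2.0e-3, 201    # 2 mm analysis depth, number of grid points
        dx = L / (nx - 1)
        dt = 0.4 * dx ** 2 / D           # explicit stability limit: dt <= dx^2/(2D)
        c = np.full(nx, 0.2)             # initial core carbon content, wt%
        c[0] = 1.1                       # constant surface carbon potential, wt%

        t, t_end = 0.0, 4 * 3600.0       # 4 h carburizing time
        while t < t_end:
            # Explicit update of the interior nodes from Fick's second law.
            c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[-1] = c[-2]                # zero-flux condition toward the core
            t += dt

        case_depth = dx * np.argmax(c < 0.4)   # depth where carbon drops below 0.4 wt%
        print(f"case depth ~ {case_depth * 1e3:.2f} mm")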

  2. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  3. Indicator organisms in meat and poultry slaughter operations: their potential use in process control and the role of emerging technologies.

    PubMed

    Saini, Parmesh K; Marks, Harry M; Dreyfuss, Moshe S; Evans, Peter; Cook, L Victor; Dessai, Uday

    2011-08-01

    Measurements of commonly occurring, nonpathogenic organisms on poultry products may be used to design statistical process control systems that could result in reductions of pathogen levels. The extent of pathogen level reduction obtainable from actions based on monitoring these measurements over time depends upon the degree to which the cause-effect relationships between processing variables, selected output variables, and pathogens are understood. For such measurements to be effective for controlling or improving processing to some capability level within the statistical process control context, sufficiently frequent measurements would be needed to help identify processing deficiencies. Ultimately, the correct balance of sampling and resources is determined by those characteristics of deficient processing that are important to identify. We recommend strategies that emphasize flexibility, depending upon sampling objectives. Coupling the measurement of indicator organism levels with practical emerging technologies and suitable on-site platforms that decrease the time between sample collection and the interpretation of results would enhance monitoring for process control.

  4. Method for sampling and analysis of volatile biomarkers in process gas from aerobic digestion of poultry carcasses using time-weighted average SPME and GC-MS.

    PubMed

    Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J

    2017-10-01

    A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
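
    The Fick's-first-law model underlying retracted-SPME time-weighted average sampling reduces to C = nZ/(DAt): the analyte mass n loaded onto the fiber over sampling time t, a diffusion path of length Z and cross-section A, and a gas-phase diffusion coefficient D. A minimal sketch with assumed, illustrative values:

        # Time-weighted average concentration from a retracted SPME fiber,
        # via Fick's first law: C = n * Z / (D * A * t). Values are illustrative.
        n_mass = 2.5e-9        # analyte mass adsorbed on the fiber, g
        Z = 5.0e-3             # fiber retraction depth (diffusion path), m
        D = 8.0e-6             # gas-phase diffusion coefficient, m^2/s
        A = 8.0e-7             # cross-sectional area of the needle opening, m^2
        t = 30 * 60            # sampling time, s

        c_twa = n_mass * Z / (D * A * t)   # g/m^3
        print(f"TWA concentration: {c_twa * 1e6:.1f} ug/m^3")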

  5. Root cause analysis of laboratory turnaround times for patients in the emergency department.

    PubMed

    Fernandes, Christopher M B; Worster, Andrew; Hill, Stephen; McCallum, Catherine; Eva, Kevin

    2004-03-01

    Laboratory investigations are essential to patient care and are conducted routinely in emergency departments (EDs). This study reports the turnaround times at an academic, tertiary care ED, using root cause analysis to identify potential areas of improvement. Our objectives were to compare the laboratory turnaround times with established benchmarks and identify root causes for delays. Turnaround and process event times for a consecutive sample of hemoglobin and potassium measurements were recorded during an 8-day study period using synchronized time stamps. A log transformation (ln [minutes + 1]) was performed to normalize the time data, which were then compared with established benchmarks using one-sample t tests. The turnaround time for hemoglobin was significantly less than the established benchmark (n = 140, t = -5.69, p < 0.001) and that of potassium was significantly greater (n = 121, t = 12.65, p < 0.001). The hemolysis rate was 5.8%, with 0.017% of samples needing recollection. Causes of delays included order-processing time, a high proportion (43%) of tests performed on patients who had been admitted but were still in the ED waiting for a bed, and excessive laboratory process times for potassium. The turnaround time for hemoglobin (18 min) met the established benchmark, but that for potassium (49 min) did not. Root causes for delay were order-processing time, excessive queue and instrument times for potassium, and the volume of tests for admitted patients. Further study of these identified causes of delays is required to see whether laboratory turnaround times can be reduced.
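
    The statistical comparison described here is straightforward to reproduce; the sketch below applies the same ln(minutes + 1) transformation to a set of hypothetical turnaround times and tests them against a benchmark with a one-sample t test.

        import numpy as np
        from scipy import stats

        # Hypothetical turnaround times (min) for one analyte and its benchmark.
        tat = np.array([41, 55, 48, 62, 39, 58, 71, 44, 52, 60], dtype=float)
        benchmark = 40.0

        # Normalize the skewed time data as in the study: ln(minutes + 1),
        # then compare against the transformed benchmark with a one-sample t test.
        log_tat = np.log(tat + 1)
        t_stat, p_val = stats.ttest_1samp(log_tat, np.log(benchmark + 1))
        print(f"t = {t_stat:.2f}, p = {p_val:.4f}")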

  6. Innovative application of the moisture analyzer for determination of dry mass content of processed cheese

    NASA Astrophysics Data System (ADS)

    Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena

    2018-04-01

    The aim of this work was to present an alternative method for determining the total dry mass content of processed cheese. The authors claim that the presented method can be used in industrial quality control laboratories for routine testing and for quick in-process control. For the test purposes, both the reference method for determining dry mass in processed cheese and the moisture analyzer method were used. The tests were carried out on three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at a temperature of 102 °C for about 4 h. The moisture analyzer test required method validation with regard to the drying temperature range and the mass of the analyzed sample. An optimum drying temperature of 110 °C was determined experimentally. For the Hochland cream processed cheese sample, the total dry mass content obtained using the reference method was 38.92%, whereas the moisture analyzer method gave 38.74%. The average analysis time for the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37% and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic, the reference method gave a value of 36.88% and the alternative method 37.02%. The average time of those determinations was 16 min. The results confirmed that use of the moisture analyzer is effective: consistent dry mass values were obtained with both methods. According to the authors, the much shorter measurement time of the moisture analyzer method is a key criterion in selecting methods for in-process control and final quality control.
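
    The underlying calculation is the same for both methods, dry mass percent = 100 * (dried mass / initial mass); a tiny sketch with hypothetical masses chosen to reproduce the analyzer value reported for the cream cheese sample:

        # Dry mass content from initial and dried sample masses. The masses
        # and the 0.5% agreement tolerance are our own illustrative choices.
        def dry_mass_percent(m_initial, m_dried):
            return 100.0 * m_dried / m_initial

        analyzer = dry_mass_percent(5.000, 1.937)   # hypothetical masses, g -> 38.74%
        reference = 38.92                           # reference method result, %
        verdict = "OK" if abs(analyzer - reference) < 0.5 else "investigate"
        print(f"analyzer {analyzer:.2f}% vs reference {reference:.2f}%: {verdict}")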

  7. Capillary absorption spectrometer and process for isotopic analysis of small samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. The method further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows the study of samples taken from the field without modification. The process also permits in vivo sampling, enabling real-time ambient studies of microbial communities.

  8. Bacteria and Bioactivity in Holder Pasteurized and Shelf-Stable Human Milk Products

    PubMed Central

    2017-01-01

    Abstract Background: Historically, Holder pasteurization has been used to pasteurize donor human milk available in a hospital setting. There is extensive research that provides an overview of the impact of Holder pasteurization on bioactive components of human milk. A shelf-stable (SS) human milk product, created using retort processing, recently became available; however, to our knowledge, little has been published about the effect of retort processing on human milk. Objective: We aimed to assess the ability of retort processing to eliminate bacteria and to quantify the difference in lysozyme and secretory immunoglobulin A (sIgA) activity between Holder pasteurized (HP) and SS human milk. Methods: Milk samples from 60 mothers were pooled. From this pool, 36 samples were taken: 12 samples were kept raw, 12 samples were HP, and 12 samples were retort processed to create an SS product. All samples were analyzed for total aerobic bacteria, coliform bacteria, Bacillus cereus, sIgA activity, and lysozyme activity. Raw samples served as the control. Results: One raw sample and 3 HP samples contained B. cereus at the time of culture. There were no detectable bacteria in SS samples at the time of culture. Raw samples had significantly greater lysozyme and sIgA activity than HP and SS samples (P < 0.0001). HP samples retained significantly more lysozyme and sIgA activity (54% and 87%, respectively) than SS samples (0% and 11%, respectively). Conclusions: Human milk processed using Holder pasteurization should continue to be screened for the presence of B. cereus. Clinicians should be aware of the differences in the retention of lysozyme and sIgA activity in HP and SS products when making feeding decisions for medically fragile or immunocompromised infants to ensure that patients are receiving the maximum immune protection. PMID:29955718

  9. Constant strain rate experiments and constitutive modeling for a class of bitumen

    NASA Astrophysics Data System (ADS)

    Reddy, Kommidi Santosh; Umakanthan, S.; Krishnan, J. Murali

    2012-08-01

    The mechanical properties of bitumen vary with the nature of the crude source and the processing methods employed. To understand the role that processing conditions play in the mechanical properties, bitumen samples derived from the same crude source but processed differently (blown and blended) are investigated. The samples are subjected to constant strain rate experiments in a parallel plate rheometer. The torque applied to realize the prescribed angular velocity of the top plate and the normal force applied to maintain the gap between the top and bottom plates are measured. It is found that when the top plate is held stationary, the time taken for the torque to decrease by a certain percentage of its maximum value differs from the time taken for the normal force to decrease by the same percentage of its maximum value. Further, the time at which the maximum torque occurs is different from the time at which the maximum normal force occurs. Since the existing constitutive relations for bitumen cannot capture the difference in the relaxation times for the torque and normal force, a new rate type constitutive model, incorporating this response, is proposed. Although the blended and blown bitumen samples used in this study correspond to the same grade, the mechanical responses of the two samples are not the same. This is also reflected in the difference in the values of the material parameters in the proposed model. The differences in the mechanical properties between the differently processed bitumen samples increase further with aging. This has implications for the long-term performance of the pavement.

  10. Zinc coated sheet steel for press hardening

    NASA Astrophysics Data System (ADS)

    Ghanbari, Zahra N.

    Galvanized steels are of interest for enhancing the corrosion resistance of press-hardened steels, but concerns related to liquid metal embrittlement have been raised. The objective of this study was to assess the soak time and temperature conditions relevant to the hot-stamping process during which Zn penetration did or did not occur in galvanized 22MnB5 press-hardening steel. A Gleeble 3500 was used to heat treat samples using hold times and temperatures similar to those used in industrial hot-stamping. Deformation at both elevated temperature and room temperature was conducted to assess the coating and substrate behavior related to forming (at high temperature) and service (at room temperature). The extent of alloying between the coating and substrate was assessed on undeformed samples heat treated under conditions similar to those of the deformed samples. The coating transitioned from an α + Γ1 composition to an α (bcc Fe-Zn) phase with increased soak time. This transition likely corresponded to a decrease in the availability of Zn-rich liquid in the coating during elevated-temperature deformation. Penetration of Zn into the substrate sheet in the undeformed condition was not observed for any of the processing conditions examined. The number and depth of cracks in the coating and substrate steel were also measured in the hot-ductility samples. The number of cracks appeared to increase, while the depth of cracks appeared to decrease, with increasing soak time and increasing soak temperature. The crack depth appeared to be minimized in the sample soaked at the highest soak temperature (900 °C) for intermediate and extended soak times (300 s or 600 s). Zn penetration into the substrate steel was observed in the hot-ductility samples soaked at each hold temperature for the shortest soak time (10 s) before being deformed at elevated temperature. Reduction of area and elongation measurements showed that the coated sample soaked at the highest temperature and longest soak time maintained the highest ductility when compared to the uncoated sample processed under the same conditions. Fractography of the hot-ductility samples showed features associated with increased ductility with increased soak time for all soak temperatures. Heat treatments (without elevated-temperature deformation) and subsequent room-temperature deformation were conducted to investigate the "in-service" behavior of 22MnB5. The uncoated and coated specimens deformed at room temperature showed similar ultimate tensile strength and ductility values. The only notable differences in the room-temperature mechanical behavior of uncoated and coated samples processed under the same conditions were a result of differences in the substrate microstructure. All samples appeared to have ductile fracture features; features characteristic of liquid metal embrittlement were not observed.

  11. Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.

    PubMed

    Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy

    2015-12-30

    While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) terms. The whole-brain data analysis with more than 100,000 voxels and more than 250 volumes is completed in less than 300 ms, much faster than the time required to acquire the fMRI volume. Real-time processing implementation cannot be identical to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM. We verified that reduced slice-timing correction for real-time analysis had output comparable with off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time and is capable of processing these steps on all available data at a given time, without need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, while the number of samples should be considered in real-time GLM. Copyright © 2015 Elsevier B.V. All rights reserved.
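
    The over-fitting issue with few volumes is easy to demonstrate with a toy GLM; in the sketch below (block design, nuisance regressors standing in for RETROICOR/RVT terms, all values invented) the spread of the estimated task beta shrinks as volumes accumulate.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy GLM on a voxel time course: task regressor plus nuisance
        # regressors. Illustrates why estimates are unstable when only a
        # few volumes have been acquired.
        def glm_beta(n_vol, n_nuisance=8):
            task = (np.arange(n_vol) % 20 < 10).astype(float)    # block design
            X = np.column_stack([np.ones(n_vol), task,
                                 rng.standard_normal((n_vol, n_nuisance))])
            y = 0.8 * task + rng.standard_normal(n_vol)          # true effect 0.8
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta[1]                                       # task estimate

        for n in (20, 50, 100, 250):
            est = [glm_beta(n) for _ in range(200)]
            print(f"{n:4d} volumes: beta = {np.mean(est):.2f} +/- {np.std(est):.2f}")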

  12. Pad ultrasonic batch dyeing of causticized lyocell fabric with reactive dyes.

    PubMed

    Babar, Aijaz Ahmed; Peerzada, Mazhar Hussain; Jhatial, Abdul Khalique; Bughio, Noor-Ul-Ain

    2017-01-01

    Conventionally, cellulosic fabric dyed with reactive dyes requires a significant amount of salt, and the dyeing of a solvent-spun regenerated cellulosic fiber is a critical process. This paper presents the dyeing results of lyocell fabrics dyed with conventional pad batch (CPB) and pad ultrasonic batch (PUB) processes. The dyeing of lyocell fabrics was carried out with two commercial dyes, namely Drimarine Blue CL-BR and Ramazol Blue RGB. Dyeing parameters including the concentrations of sodium hydroxide and sodium carbonate and the dwell time were compared for the two processes. The outcomes show that PUB dyed samples offered reasonably higher color yield and dye fixation than CPB dyed samples. A remarkable reduction of 12 h in batching time, 18 ml/l in NaOH, and 5 g/l in Na2CO3 was observed for PUB processed samples producing similar results to the CPB process, making PUB a more economical, productive, and environmentally friendly process. Color fastness examination showed identical results for both PUB and CPB methods. No significant change in the surface morphology of PUB processed samples was observed through scanning electron microscope (SEM) analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Explosive-induced shock damage in copper and recompression of the damaged region

    DOE PAGES

    Turley, William D.; Stevens, Gerald D.; Hixson, Robert Stewart; ...

    2016-08-31

    Here, we have studied the dynamic spall process for copper samples in contact with detonating low-performance explosives. When a triangular shaped shock wave from detonation moves through a sample and reflects from the free surface, tension develops immediately, one or more damaged layers can form, and a spall scab can separate from the sample and move ahead of the remaining target material. For dynamic experiments, we used time-resolved velocimetry and x-ray radiography. Soft-recovered samples were analyzed using optical imaging and microscopy. Computer simulations were used to guide experiment design. We observe that for some target thicknesses the spall scab continues to run ahead of the rest of the sample, but for thinner samples, the detonation product gases accelerate the sample enough for it to impact the spall scab several microseconds or more after the initial damage formation. Our data also show signatures in the form of a late-time reshock in the time-resolved data, which support this computational prediction. A primary goal of this research was to study the wave interactions and damage processes for explosives-loaded copper and to look for evidence of this postulated recompression event. We found both experimentally and computationally that we could tailor the magnitude of the initial and recompression shocks by varying the explosive drive and the copper sample thickness; thin samples had a large recompression after spall, whereas thick samples did not recompress at all. Samples that did not recompress had spall scabs that completely separated from the sample, whereas samples with recompression remained intact. This suggests that the hypothesized recompression process closes voids in the damage layer or otherwise halts the spall formation process. This is a somewhat surprising and, in some ways, controversial result, and one that warrants further research in the shock compression community.

  14. Explosive-induced shock damage in copper and recompression of the damaged region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turley, William D.; Stevens, Gerald D.; Hixson, Robert Stewart

    Here, we have studied the dynamic spall process for copper samples in contact with detonating low-performance explosives. When a triangular shaped shock wave from detonation moves through a sample and reflects from the free surface, tension develops immediately, one or more damaged layers can form, and a spall scab can separate from the sample and move ahead of the remaining target material. For dynamic experiments, we used time-resolved velocimetry and x-ray radiography. Soft-recovered samples were analyzed using optical imaging and microscopy. Computer simulations were used to guide experiment design. We observe that for some target thicknesses the spall scab continues to run ahead of the rest of the sample, but for thinner samples, the detonation product gases accelerate the sample enough for it to impact the spall scab several microseconds or more after the initial damage formation. Our data also show signatures in the form of a late-time reshock in the time-resolved data, which support this computational prediction. A primary goal of this research was to study the wave interactions and damage processes for explosives-loaded copper and to look for evidence of this postulated recompression event. We found both experimentally and computationally that we could tailor the magnitude of the initial and recompression shocks by varying the explosive drive and the copper sample thickness; thin samples had a large recompression after spall, whereas thick samples did not recompress at all. Samples that did not recompress had spall scabs that completely separated from the sample, whereas samples with recompression remained intact. This suggests that the hypothesized recompression process closes voids in the damage layer or otherwise halts the spall formation process. This is a somewhat surprising and, in some ways, controversial result, and one that warrants further research in the shock compression community.

  15. Explosive-induced shock damage in copper and recompression of the damaged region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turley, W. D., E-mail: turleywd@nv.doe.gov; Stevens, G. D.; La Lone, B. M.

    We have studied the dynamic spall process for copper samples in contact with detonating low-performance explosives. When a triangular shaped shock wave from detonation moves through a sample and reflects from the free surface, tension develops immediately, one or more damaged layers can form, and a spall scab can separate from the sample and move ahead of the remaining target material. For dynamic experiments, we used time-resolved velocimetry and x-ray radiography. Soft-recovered samples were analyzed using optical imaging and microscopy. Computer simulations were used to guide experiment design. We observe that for some target thicknesses the spall scab continues to run ahead of the rest of the sample, but for thinner samples, the detonation product gases accelerate the sample enough for it to impact the spall scab several microseconds or more after the initial damage formation. Our data also show signatures in the form of a late-time reshock in the time-resolved data, which support this computational prediction. A primary goal of this research was to study the wave interactions and damage processes for explosives-loaded copper and to look for evidence of this postulated recompression event. We found both experimentally and computationally that we could tailor the magnitude of the initial and recompression shocks by varying the explosive drive and the copper sample thickness; thin samples had a large recompression after spall, whereas thick samples did not recompress at all. Samples that did not recompress had spall scabs that completely separated from the sample, whereas samples with recompression remained intact. This suggests that the hypothesized recompression process closes voids in the damage layer or otherwise halts the spall formation process. This is a somewhat surprising and, in some ways, controversial result, and one that warrants further research in the shock compression community.

  16. Rapid microscale in-gel processing and digestion of proteins using surface acoustic waves.

    PubMed

    Kulkarni, Ketav P; Ramarathinam, Sri H; Friend, James; Yeo, Leslie; Purcell, Anthony W; Perlmutter, Patrick

    2010-06-21

    A new method for in-gel sample processing and tryptic digestion of proteins is described. Sample preparation, rehydration, in situ digestion, and peptide extraction from gel slices are dramatically accelerated by treating the gel slice with surface acoustic waves (SAWs). Only 30 minutes of total workflow time is required for this new method to produce base peak chromatograms (BPCs) of similar coverage and intensity to those observed for traditional processing and overnight digestion. Simple set-up, good reproducibility, excellent peptide recoveries, rapid turnover of samples, and high-confidence protein identifications put this technology at the forefront of the next generation of proteomics sample processing tools.

  17. Semiautomated Sample Preparation for Protein Stability and Formulation Screening via Buffer Exchange.

    PubMed

    Ying, William; Levons, Jaquan K; Carney, Andrea; Gandhi, Rajesh; Vydra, Vicky; Rubin, A Erik

    2016-06-01

    A novel semiautomated buffer exchange process workflow was developed to enable efficient early protein formulation screening. An antibody fragment protein, BMSdab, was used to demonstrate the workflow. The process afforded 60% to 80% savings in cycle time and scientist time and significant material efficiencies. These efficiencies ultimately facilitated execution of this stability work earlier in the drug development process, allowing this tool to inform the developability of potential candidates from a formulation perspective. To overcome the key technical challenges, the protein solution was buffer-exchanged by centrifuge filtration into formulations for stability screening in a 96-well plate with an ultrafiltration membrane, leveraging automated liquid handling and acoustic volume measurements to allow several cycles of exchanges. The formulations were transferred into a vacuum manifold and sterile filtered into a rack holding 96 glass vials. The vials were sealed with a capmat of individual caps and placed in stability stations. Stability of the samples prepared by this process and by the standard process was demonstrated to be comparable. This process enabled screening of a number of formulations of a protein at an early pharmaceutical development stage with a short sample preparation time. © 2015 Society for Laboratory Automation and Screening.

  18. An analysis of the impact of pre‐analytical factors on the urine proteome: Sample processing time, temperature, and proteolysis

    PubMed Central

    Hepburn, Sophie; Cairns, David A.; Jackson, David; Craven, Rachel A.; Riley, Beverley; Hutchinson, Michelle; Wood, Steven; Smith, Matthew Welberry; Thompson, Douglas

    2015-01-01

    Purpose We have examined the impact of sample processing time delay, temperature, and the addition of protease inhibitors (PIs) on the urinary proteome and peptidome, an important aspect of biomarker studies. Experimental design Ten urine samples from patients with varying pathologies were each divided and PIs added to one‐half, with aliquots of each then processed and frozen immediately, or after a delay of 6 h at 4°C or room temperature (20–22°C), effectively yielding 60 samples in total. Samples were then analyzed by 2D‐PAGE, SELDI‐TOF‐MS, and immunoassay. Results Interindividual variability in profiles was the dominant feature in all analyses. Minimal changes were observed by 2D‐PAGE as a result of delay in processing, temperature, or PIs and no changes were seen in IgG, albumin, β2‐microglobulin, or α1‐microglobulin measured by immunoassay. Analysis of peptides showed clustering of some samples by presence/absence of PIs but the extent was very patient‐dependent with most samples showing minimal effects. Conclusions and clinical relevance The extent of processing‐induced changes and the benefit of PI addition are patient‐ and sample‐dependent. A consistent processing methodology is essential within a study to avoid any confounding of the results. PMID:25400092

  19. Impact of the New Abbott mPLUS Feature on Clinical Laboratory Efficiencies of Abbott RealTime Assays for Detection of HIV-1, Hepatitis C Virus, Hepatitis B Virus, Chlamydia trachomatis, and Neisseria gonorrhoeae

    PubMed Central

    Jones, Sara; Wiesneth, Russ; Barry, Cathy; Webb, Erika; Belova, Larissa; Dolan, Peggy; Ho, Shiaolan; Abravaya, Klara; Cloherty, Gavin

    2013-01-01

    Diagnostic laboratories are under increasing pressure to improve and expand their services. Greater flexibility in sample processing is a critical factor that can improve the time to results while reducing reagent waste, making laboratories more efficient and cost-effective. The introduction of the Abbott mPLUS feature, with the capacity for extended use of amplification reagents, significantly increases the flexibility of the m2000 platform and enables laboratories to customize their workflows based on sample arrival patterns. The flexibility in sample batch size offered by mPLUS enables significant reductions in processing times. For hepatitis B virus tests, a reduction in sample turnaround times of up to 30% (105 min) was observed for batches of 12 samples compared with those for batches of 24 samples; for Chlamydia trachomatis/Neisseria gonorrhoeae tests, the ability to run batches of 24 samples reduced the turnaround time by 83% (54 min) compared with that for batches of 48 samples. Excellent correlations between mPLUS and m2000 standard condition results were observed for all RealTime viral load assays evaluated in this study, with correlation r values of 0.998 for all assays tested. For the qualitative RealTime C. trachomatis/N. gonorrhoeae assay, the overall agreements between the two conditions tested were >98% for C. trachomatis and 100% for N. gonorrhoeae. Comparable precision results were observed for the two conditions tested for all RealTime assays. The enhanced mPLUS capability provides clinical laboratories with increased efficiencies to meet increasingly stringent turnaround time requirements without increased costs associated with discarding partially used amplification reagents. PMID:24088850

  20. Impact of the New Abbott mPLUS feature on clinical laboratory efficiencies of abbott RealTime assays for detection of HIV-1, Hepatitis C Virus, Hepatitis B Virus, Chlamydia trachomatis, and Neisseria gonorrhoeae.

    PubMed

    Lucic, Danijela; Jones, Sara; Wiesneth, Russ; Barry, Cathy; Webb, Erika; Belova, Larissa; Dolan, Peggy; Ho, Shiaolan; Abravaya, Klara; Cloherty, Gavin

    2013-12-01

    Diagnostic laboratories are under increasing pressure to improve and expand their services. Greater flexibility in sample processing is a critical factor that can improve the time to results while reducing reagent waste, making laboratories more efficient and cost-effective. The introduction of the Abbott mPLUS feature, with the capacity for extended use of amplification reagents, significantly increases the flexibility of the m2000 platform and enables laboratories to customize their workflows based on sample arrival patterns. The flexibility in sample batch size offered by mPLUS enables significant reductions in processing times. For hepatitis B virus tests, a reduction in sample turnaround times of up to 30% (105 min) was observed for batches of 12 samples compared with those for batches of 24 samples; for Chlamydia trachomatis/Neisseria gonorrhoeae tests, the ability to run batches of 24 samples reduced the turnaround time by 83% (54 min) compared with that for batches of 48 samples. Excellent correlations between mPLUS and m2000 standard condition results were observed for all RealTime viral load assays evaluated in this study, with correlation r values of 0.998 for all assays tested. For the qualitative RealTime C. trachomatis/N. gonorrhoeae assay, the overall agreements between the two conditions tested were >98% for C. trachomatis and 100% for N. gonorrhoeae. Comparable precision results were observed for the two conditions tested for all RealTime assays. The enhanced mPLUS capability provides clinical laboratories with increased efficiencies to meet increasingly stringent turnaround time requirements without increased costs associated with discarding partially used amplification reagents.

  1. Variability in, variability out: best practice recommendations to standardize pre-analytical variables in the detection of circulating and tissue microRNAs.

    PubMed

    Khan, Jenna; Lieberman, Joshua A; Lockwood, Christina M

    2017-05-01

    microRNAs (miRNAs) hold promise as biomarkers for a variety of disease processes and for determining cell differentiation. These short RNA species are robust, survive harsh treatment and storage conditions and may be extracted from blood and tissue. Pre-analytical variables are critical confounders in the analysis of miRNAs: we elucidate these and identify best practices for minimizing sample variation in blood and tissue specimens. Pre-analytical variables addressed include patient-intrinsic variation, time and temperature from sample collection to storage or processing, processing methods, contamination by cells and blood components, RNA extraction method, normalization, and storage time/conditions. For circulating miRNAs, hemolysis and blood cell contamination significantly affect profiles; samples should be processed within 2 h of collection; ethylene diamine tetraacetic acid (EDTA) is preferred while heparin should be avoided; samples should be "double spun" or filtered; room temperature or 4 °C storage for up to 24 h is preferred; miRNAs are stable for at least 1 year at -20 °C or -80 °C. For tissue-based analysis, warm ischemic time should be <1 h; cold ischemic time (4 °C) <24 h; common fixative used for all specimens; formalin fix up to 72 h prior to processing; enrich for cells of interest; validate candidate biomarkers with in situ visualization. Most importantly, all specimen types should have standard and common workflows with careful documentation of relevant pre-analytical variables.

  2. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    PubMed

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in the analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses the ToF-SIMS principle and instrumentation, including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples for ToF-SIMS, i.e., proteins, cells, frozen and paraffin-embedded tissues, and extracellular matrix, are presented. Multivariate analysis of ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are also discussed in this chapter.

  3. Paraffin wax removal from metal injection moulded cocrmo alloy compact by solvent debinding process

    NASA Astrophysics Data System (ADS)

    Dandang, N. A. N.; Harun, W. S. W.; Khalil, N. Z.; Ahmad, A. H.; Romlay, F. R. M.; Johari, N. A.

    2017-10-01

    One of the most crucial and time-consuming phases in the metal injection moulding (MIM) process is debinding. Early debinding practice relied on thermal binder degradation, which demanded more than 200 hours for complete removal of the binder. Multi-stage debinding techniques have since been introduced to shorten the debinding time. This research studies the solvent debinding variables, namely temperature and soaking time, for samples made from MIM CoCrMo powder. Since wax is the key component of the binder formulation, paraffin wax is removed together with stearic acid from the green bodies. The debinding process is conducted at 50, 60 and 70°C for 30-240 minutes in an n-heptane solution. The percentage weight loss of the binder was measured. Finally, scanning electron microscope (SEM) analysis and visual inspection of the surface of the brown compact were carried out. Samples debound at 70°C exhibited a significant amount of binder loss; nevertheless, sample collapse, brittle surfaces, and cracks were detected. A temperature of 60°C and a time of 4 hours gave the best results, showing sufficient binder loss, no surface cracks, and easy handling. Overall, binder loss is directly related to solvent debinding temperature and time.

  4. Rapid DNA analysis for automated processing and interpretation of low DNA content samples.

    PubMed

    Turingan, Rosemary S; Vasantgadkar, Sameer; Palombo, Luke; Hogan, Catherine; Jiang, Hua; Tan, Eugene; Selden, Richard F

    2016-01-01

    Short tandem repeat (STR) analysis of casework samples with low DNA content includes samples resulting from the transfer of epithelial cells from the skin to an object (e.g., cells on a water bottle or the brim of a cap), blood spatter stains, and small bone and tissue fragments. Low DNA content (LDC) samples are important in a wide range of settings, including disaster response teams to assist in victim identification and family reunification, military operations to identify friend or foe, criminal forensics to identify suspects and exonerate the innocent, and medical examiner and coroner offices to identify missing persons. Processing LDC samples requires experienced laboratory personnel, isolated workstations, and sophisticated equipment, involves complex procedures, and incurs transport time. We present a rapid DNA analysis system designed specifically to generate STR profiles from LDC samples in field-forward settings by non-technical operators. By performing STR analysis in the field, close to the site of collection, rapid DNA analysis has the potential to increase throughput and to provide actionable information in real time. A Low DNA Content BioChipSet (LDC BCS) was developed and manufactured by injection molding. It was designed to function in the fully integrated Accelerated Nuclear DNA Equipment (ANDE) instrument previously designed for analysis of buccal swabs and other high DNA content samples (Investigative Genet. 4(1):1-15, 2013). The LDC BCS performs efficient DNA purification followed by microfluidic ultrafiltration of the purified DNA, maximizing the quantity of DNA available for subsequent amplification and electrophoretic separation and detection of amplified fragments. The system demonstrates accuracy, precision, resolution, signal strength, and peak height ratios appropriate for casework analysis. The LDC rapid DNA analysis system is effective for the generation of STR profiles from a wide range of sample types. The technology broadens the range of sample types that can be processed and minimizes the time between sample collection, sample processing and analysis, and generation of actionable intelligence. The fully integrated Expert System is capable of interpreting a wide range of sample types and input DNA quantities, allowing samples to be processed and interpreted without a technical operator.

  5. DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR THE BENCH STEAM REFORMER TEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BANNING DL

    2010-08-03

    This document describes the data quality objectives used to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.

  6. Explicit analytical tuning rules for digital PID controllers via the magnitude optimum criterion.

    PubMed

    Papadopoulos, Konstantinos G; Yadav, Praveen K; Margaris, Nikolaos I

    2017-09-01

    Analytical tuning rules for digital PID type-I controllers are presented regardless of the process complexity. This explicit solution allows control engineers 1) to make an accurate examination of the effect of the controller's sampling time on the control loop's performance in both the time and frequency domains, 2) to decide when the control action has to be I or PI and when the derivative, D, term has to be added or omitted, and 3) to apply this control action to a series of stable benchmark processes regardless of their complexity. These advantages are considered critical in industrial applications, since 1) the choice of the digital controller's sampling time is most often based on heuristics and past criteria, 2) there is little a priori knowledge of the controlled process, making the choice of controller type a trial-and-error exercise, and 3) model parameters often change with the control loop's operating point, making the retuning of the controller's parameters a challenging issue. The basis of the proposed control law is the principle of PID tuning via the Magnitude Optimum criterion. The final control law incorporates the controller's sampling time Ts within the explicit solution for the controller's parameters. Finally, the potential of the proposed method is justified by comparing its performance with conventional PID tuning when controlling the same process. Further investigation regarding the choice of the controller's sampling time Ts is also presented, and useful conclusions for control engineers are derived. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
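
    The role of the sampling time Ts can be illustrated with a minimal positional digital PID acting on a first-order process (forward-Euler discretization; the gains below are arbitrary, not the magnitude-optimum values derived in the paper).

        import numpy as np

        def simulate(Ts, Kp=2.0, Ki=1.5, Kd=0.1, tau=1.0, t_end=10.0):
            # Digital PID on a first-order process y' = (-y + u) / tau.
            n = int(t_end / Ts)
            y, integ, e_prev = 0.0, 0.0, 0.0
            out = np.empty(n)
            for k in range(n):
                e = 1.0 - y                      # unit step reference
                integ += e * Ts                  # rectangular integration
                deriv = (e - e_prev) / Ts        # backward difference
                u = Kp * e + Ki * integ + Kd * deriv
                y += Ts * (-y + u) / tau         # forward-Euler process model
                e_prev = e
                out[k] = y
            return out

        for Ts in (0.01, 0.05, 0.2):             # effect of the sampling time
            y = simulate(Ts)
            print(f"Ts={Ts}: final y = {y[-1]:.3f}, overshoot = {y.max() - 1:.3f}")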

  7. Innovation leading the way: application of lean manufacturing to sample management.

    PubMed

    Allen, M; Wigglesworth, M J

    2009-06-01

    Historically, sample management successfully focused on providing compound quality and tracking distribution across a diverse geography. However, if a competitive advantage is to be delivered in a changing environment of outsourcing, efficiency and customer service must now improve or face reconstruction. The authors have used discrete event simulation to model the compound process from chemistry to assay and applied lean manufacturing techniques to analyze and improve these processes. In doing so, they identified a value-adding process time of just 11 min within a procedure that took days. Modeling also allowed the analysis of the equipment and human resources necessary to complete the expected demand in an acceptable cycle time. The layout and location of sample management and screening departments are key to allowing process integration, creating rapid flow of work, and delivering these efficiencies. Following this analysis and minor process changes, the authors have demonstrated for 2 programs that solid compounds can be converted to assay-ready plates in less than 4 h. In addition, it is now possible to deliver assay data from these compounds within the same working day, allowing chemistry teams more flexibility and more time to execute the next chemistry round. Additional application of lean manufacturing principles has the potential to further decrease cycle times while using fewer resources.

  8. Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra

    NASA Astrophysics Data System (ADS)

    Rezakhah, Saeid; Maleki, Yasaman

    2016-07-01

    By imposing a flexible sampling scheme, we obtain a discretization of continuous-time discrete scale invariant (DSI) processes, which is itself a subsidiary discrete-time DSI process. Then, by introducing a simple random measure, we provide a second continuous-time DSI process which properly approximates the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous-time process. The time-varying spectral representation of such a continuous-time DSI process is characterized, and its spectrum is estimated. A new method for estimating the time-dependent Hurst parameter of such processes is also provided, which gives a more accurate estimation. The performance of this estimation method is studied via simulation. Finally, the method is applied to real data of the S&P 500 and Dow Jones indices for some specific periods.
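
    For comparison only, a generic aggregated-variance estimator of the Hurst exponent (a standard self-similarity estimator, not the covariance-based method for DSI processes developed in the paper) can be sketched as:

        import numpy as np

        rng = np.random.default_rng(2)

        def hurst_aggvar(x, scales=(2, 4, 8, 16, 32, 64)):
            # Aggregated-variance Hurst estimate: for a self-similar series,
            # the variance of block means scales as m^(2H - 2).
            logs, logv = [], []
            for m in scales:
                n = len(x) // m
                agg = x[:n * m].reshape(n, m).mean(axis=1)
                logs.append(np.log(m))
                logv.append(np.log(agg.var()))
            slope = np.polyfit(logs, logv, 1)[0]
            return 1 + slope / 2

        # White noise has H ~ 0.5; use it as a sanity check.
        print(round(hurst_aggvar(rng.standard_normal(2 ** 14)), 2))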

  9. Microwave Photonic Architecture for Direction Finding of LPI Emitters: Front End Analog Circuit Design and Component Characterization

    DTIC Science & Technology

    Tan, Chew K.

    2016-09-01

    ... design to control the phase shifters was complex, and the calibration process was time consuming. During the redesign process, we carried out ... signals in the time domain with a maximum sampling frequency of 20 gigasamples per second. In the previous tests of the design, the performance of ...

  10. Oxidation and Hydration of U 3O 8 Materials Following Controlled Exposure to Temperature and Humidity

    DOE PAGES

    Tamasi, Alison L.; Boland, Kevin S.; Czerwinski, Kenneth; ...

    2015-03-18

    Chemical signatures correlated with uranium oxide processing are of interest to forensic science for inferring sample provenance. Identification of temporal changes in the chemical structures of process uranium materials as a function of controlled temperatures and relative humidities may provide additional information regarding sample history. In our study, a high-purity α-U3O8 sample and three other uranium oxide samples synthesized via reaction routes used in nuclear conversion processes were stored under controlled conditions over 2-3.5 years, and powder X-ray diffraction analysis and X-ray absorption spectroscopy were employed to characterize chemical speciation. Measured signatures from the α-U3O8 sample indicated that the material oxidized and hydrated after storage under high humidity conditions over time. Impurities, such as uranyl fluoride or schoepites, were initially detectable in the other uranium oxide samples. After storage under controlled conditions, the analyses of the samples revealed oxidation over time, although the signature of the uranyl fluoride impurity diminished. The presence of schoepite phases in older uranium oxide material is likely indicative of storage under high humidity and should be taken into account when assessing sample history. Finally, the absence of a signature from a chemical impurity, such as uranyl fluoride hydrate, in an older material may not preclude its presence at the initial time of production.

  11. Oxidation and Hydration of U 3 O 8 Materials Following Controlled Exposure to Temperature and Humidity

    DOE PAGES

    Tamasi, Alison L.; Boland, Kevin S.; Czerwinski, Kenneth; ...

    2015-03-18

    Chemical signatures correlated with uranium oxide processing are of interest to forensic science for inferring sample provenance. Identification of temporal changes in the chemical structures of process uranium materials as a function of controlled temperatures and relative humidities may provide additional information regarding sample history. In our study, a high-purity α-U3O8 sample and three other uranium oxide samples synthesized via reaction routes used in nuclear conversion processes were stored under controlled conditions over 2-3.5 years, and powder X-ray diffraction analysis and X-ray absorption spectroscopy were employed to characterize chemical speciation. Measured signatures from the α-U3O8 sample indicated that the material oxidized and hydrated after storage under high humidity conditions over time. Impurities, such as uranyl fluoride or schoepites, were initially detectable in the other uranium oxide samples. After storage under controlled conditions, the analyses of the samples revealed oxidation over time, although the signature of the uranyl fluoride impurity diminished. The presence of schoepite phases in older uranium oxide material is likely indicative of storage under high humidity and should be taken into account when assessing sample history. Finally, the absence of a signature from a chemical impurity, such as uranyl fluoride hydrate, in an older material may not preclude its presence at the initial time of production. LA-UR-15-21495.

  12. Signal processor for processing ultrasonic receiver signals

    DOEpatents

    Fasching, George E.

    1980-01-01

    A signal processor is provided which uses an analog integrating circuit in conjunction with a set of digital counters controlled by a precision clock for sampling timing to provide an improved presentation of an ultrasonic transmitter/receiver signal. The signal is sampled relative to the transmitter trigger signal timing at precise times, the selected number of samples are integrated and the integrated samples are transferred and held for recording on a strip chart recorder or converted to digital form for storage. By integrating multiple samples taken at precisely the same time with respect to the trigger for the ultrasonic transmitter, random noise, which is contained in the ultrasonic receiver signal, is reduced relative to the desired useful signal.
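
    The noise-reduction principle of this integrating sampler is the usual coherent-averaging result: averaging N traces sampled at precisely the same delay from the trigger suppresses uncorrelated noise by roughly sqrt(N). A small sketch with a synthetic echo:

        import numpy as np

        rng = np.random.default_rng(3)

        # Coherent averaging of repeated, precisely triggered traces: random
        # noise averages down by ~sqrt(N) while the synchronous echo survives.
        t = np.linspace(0, 1, 500)
        echo = np.exp(-((t - 0.4) / 0.02) ** 2)          # synthetic receiver echo

        def snr(n_traces):
            traces = echo + rng.standard_normal((n_traces, t.size))  # noisy shots
            avg = traces.mean(axis=0)
            noise = avg[t > 0.7].std()                   # echo-free region
            return avg.max() / noise

        for n in (1, 16, 64, 256):
            print(f"{n:4d} samples integrated: SNR ~ {snr(n):.1f}")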

  13. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
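
    The closing step rests on elementary statistics: with effectively uncorrelated samples, the variance of the time-averaged discharge falls as 1/N, where N = T/dt is the number of samples in the exposure time T. A toy sketch with all numbers assumed:

```python
# Uncorrelated-samples variance model: the discharge estimate averages
# N = T/dt effectively independent samples, so its variance is sigma**2 / N.
# All values below are illustrative, not taken from the paper.
import numpy as np

sigma_q = 0.08      # relative std of an instantaneous discharge sample (assumed)
dt = 1.0            # ADCP sampling interval, s (assumed)

for T in (60.0, 180.0, 300.0, 600.0):        # candidate exposure times, s
    n = T / dt
    rel_err = sigma_q / np.sqrt(n)           # std of the T-averaged discharge
    print(f"T = {T:5.0f} s -> relative uncertainty {100 * rel_err:.2f} %")
```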

  14. Data Streaming for Metabolomics: Accelerating Data Processing and Analysis from Days to Minutes

    PubMed Central

    2016-01-01

    The speed and throughput of analytical platforms has been a driving force in recent years in the “omics” technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Further, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition. PMID:27983788

  15. Data streaming for metabolomics: Accelerating data processing and analysis from days to minutes

    DOE PAGES

    Montenegro-Burke, J. Rafael; Aisporna, Aries E.; Benton, H. Paul; ...

    2016-12-16

    The speed and throughput of analytical platforms has been a driving force in recent years in the “omics” technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Here, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition.
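
    A hedged sketch of the just-in-time idea (not the actual XCMS Stream code; the directory paths, file extension, and file-stability check are all assumptions for illustration):

```python
# While the instrument is still acquiring, completed data files are compressed
# and shipped to the processing server so analysis can start on subsets of the
# study instead of waiting for the full run to finish.
import gzip
import shutil
import time
from pathlib import Path

ACQ_DIR = Path("/data/acquisition")      # hypothetical instrument output
OUT_DIR = Path("/mnt/server/incoming")   # hypothetical processing-server mount

def is_complete(path: Path, wait: float = 5.0) -> bool:
    """Assume a file is complete when its size is stable for `wait` seconds."""
    size = path.stat().st_size
    time.sleep(wait)
    return path.stat().st_size == size

def stream_new_files(seen: set) -> None:
    for raw in ACQ_DIR.glob("*.mzML"):
        if raw.name in seen or not is_complete(raw):
            continue
        target = OUT_DIR / (raw.name + ".gz")
        with raw.open("rb") as src, gzip.open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)     # compress-and-stream in one pass
        seen.add(raw.name)

seen: set = set()
while True:                  # poll for new files during the acquisition run
    stream_new_files(seen)
    time.sleep(30)
```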

  16. Data Streaming for Metabolomics: Accelerating Data Processing and Analysis from Days to Minutes.

    PubMed

    Montenegro-Burke, J Rafael; Aisporna, Aries E; Benton, H Paul; Rinehart, Duane; Fang, Mingliang; Huan, Tao; Warth, Benedikt; Forsberg, Erica; Abe, Brian T; Ivanisevic, Julijana; Wolan, Dennis W; Teyton, Luc; Lairson, Luke; Siuzdak, Gary

    2017-01-17

    The speed and throughput of analytical platforms has been a driving force in recent years in the "omics" technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Further, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition.

  17. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications.
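
    The windowing scheme is easy to picture in code. In the sketch below, the average and variance are standard, while the homogeneity and contrast formulas are texture-style stand-ins, not the paper's exact quaternion-based definitions; the sampling rate is also an assumption:

```python
# Slice an EEG channel into 0.5-3 s windows (half-second steps, as in the
# abstract) and extract simple per-window features.
import numpy as np

fs = 250                                  # sampling rate, Hz (assumed)
rng = np.random.default_rng(1)
eeg = rng.normal(size=30 * fs)            # 30 s of synthetic single-channel EEG

def window_features(x: np.ndarray) -> dict:
    dx = np.abs(np.diff(x))
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "homogeneity": np.mean(1.0 / (1.0 + dx)),   # high when signal is smooth
        "contrast": np.mean(dx ** 2),               # high when signal is jumpy
    }

for win_s in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    n = int(win_s * fs)
    feats = [window_features(eeg[i:i + n]) for i in range(0, eeg.size - n, n)]
    print(f"{win_s:3.1f} s windows -> {len(feats)} feature vectors")
```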

  18. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    PubMed Central

    Batres-Mendoza, Patricia; Guerra-Hernandez, Erick I.; Almanza-Ojeda, Dora L.; Montoro-Sanjose, Carlos R.

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications. PMID:29348744

  19. Advanced Engine Health Management Applications of the SSME Real-Time Vibration Monitoring System

    NASA Technical Reports Server (NTRS)

    Fiorucci, Tony R.; Lakin, David R., II; Reynolds, Tracy D.; Turner, James E. (Technical Monitor)

    2000-01-01

    The Real Time Vibration Monitoring System (RTVMS) is a 32-channel high-speed vibration data acquisition and processing system developed at Marshall Space Flight Center (MSFC). It delivers sample rates as high as 51,200 samples/second per channel and performs Fast Fourier Transform (FFT) processing via on-board digital signal processing (DSP) chips in a real-time format. Advanced engine health assessment is achieved by utilizing the vibration spectra to provide accurate sensor validation and enhanced engine vibration redlines. Discrete spectral signatures (such as synchronous) that are indicators of imminent failure can be assessed and utilized to mitigate catastrophic engine failures, a first in rocket engine health assessment. This paper is presented in viewgraph form.
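
    The processing path is conceptually simple: transform a high-rate channel with an FFT and check a discrete spectral line against a redline. The 51,200 samples/second rate comes from the abstract; the synchronous frequency and limit below are assumptions:

```python
# One second of a vibration channel -> FFT -> check the synchronous line.
import numpy as np

fs = 51_200                         # samples/second per channel (from abstract)
t = np.arange(fs) / fs
sync_hz = 400.0                     # hypothetical synchronous frequency
signal = 0.8 * np.sin(2 * np.pi * sync_hz * t) + np.random.normal(0, 0.1, fs)

spectrum = np.abs(np.fft.rfft(signal)) * 2 / fs   # single-sided amplitude
freqs = np.fft.rfftfreq(fs, d=1 / fs)

sync_amp = spectrum[np.argmin(np.abs(freqs - sync_hz))]
REDLINE_G = 0.5                     # hypothetical vibration redline
print(f"synchronous amplitude {sync_amp:.2f} g ->",
      "REDLINE EXCEEDED" if sync_amp > REDLINE_G else "ok")
```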

  20. Total-Evidence Dating under the Fossilized Birth–Death Process

    PubMed Central

    Zhang, Chi; Stadler, Tanja; Klopfstein, Seraina; Heath, Tracy A.; Ronquist, Fredrik

    2016-01-01

    Bayesian total-evidence dating involves the simultaneous analysis of morphological data from the fossil record and morphological and sequence data from recent organisms, and it accommodates the uncertainty in the placement of fossils while dating the phylogenetic tree. Due to the flexibility of the Bayesian approach, total-evidence dating can also incorporate additional sources of information. Here, we take advantage of this and expand the analysis to include information about fossilization and sampling processes. Our work is based on the recently described fossilized birth–death (FBD) process, which has been used to model speciation, extinction, and fossilization rates that can vary over time in a piecewise manner. So far, sampling of extant and fossil taxa has been assumed to be either complete or uniformly at random, an assumption which is only valid for a minority of data sets. We therefore extend the FBD process to accommodate diversified sampling of extant taxa, which is standard practice in studies of higher-level taxa. We verify the implementation using simulations and apply it to the early radiation of Hymenoptera (wasps, ants, and bees). Previous total-evidence dating analyses of this data set were based on a simple uniform tree prior and dated the initial radiation of extant Hymenoptera to the late Carboniferous (309 Ma). The analyses using the FBD prior under diversified sampling, however, date the radiation to the Triassic and Permian (252 Ma), slightly older than the age of the oldest hymenopteran fossils. By exploring a variety of FBD model assumptions, we show that it is mainly the accommodation of diversified sampling that causes the push toward more recent divergence times. Accounting for diversified sampling thus has the potential to close the long-discussed gap between rocks and clocks. We conclude that the explicit modeling of fossilization and sampling processes can improve divergence time estimates, but only if all important model aspects, including sampling biases, are adequately addressed. PMID:26493827

  1. Total-Evidence Dating under the Fossilized Birth-Death Process.

    PubMed

    Zhang, Chi; Stadler, Tanja; Klopfstein, Seraina; Heath, Tracy A; Ronquist, Fredrik

    2016-03-01

    Bayesian total-evidence dating involves the simultaneous analysis of morphological data from the fossil record and morphological and sequence data from recent organisms, and it accommodates the uncertainty in the placement of fossils while dating the phylogenetic tree. Due to the flexibility of the Bayesian approach, total-evidence dating can also incorporate additional sources of information. Here, we take advantage of this and expand the analysis to include information about fossilization and sampling processes. Our work is based on the recently described fossilized birth-death (FBD) process, which has been used to model speciation, extinction, and fossilization rates that can vary over time in a piecewise manner. So far, sampling of extant and fossil taxa has been assumed to be either complete or uniformly at random, an assumption which is only valid for a minority of data sets. We therefore extend the FBD process to accommodate diversified sampling of extant taxa, which is standard practice in studies of higher-level taxa. We verify the implementation using simulations and apply it to the early radiation of Hymenoptera (wasps, ants, and bees). Previous total-evidence dating analyses of this data set were based on a simple uniform tree prior and dated the initial radiation of extant Hymenoptera to the late Carboniferous (309 Ma). The analyses using the FBD prior under diversified sampling, however, date the radiation to the Triassic and Permian (252 Ma), slightly older than the age of the oldest hymenopteran fossils. By exploring a variety of FBD model assumptions, we show that it is mainly the accommodation of diversified sampling that causes the push toward more recent divergence times. Accounting for diversified sampling thus has the potential to close the long-discussed gap between rocks and clocks. We conclude that the explicit modeling of fossilization and sampling processes can improve divergence time estimates, but only if all important model aspects, including sampling biases, are adequately addressed. ©The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  2. Optimizing Urine Processing Protocols for Protein and Metabolite Detection.

    PubMed

    Siddiqui, Nazema Y; DuBois, Laura G; St John-Williams, Lisa; Thompson, J Will; Grenier, Carole; Burke, Emily; Fraser, Matthew O; Amundsen, Cindy L; Murphy, Susan K

    In urine, factors such as timing of voids, and duration at room temperature (RT) may affect the quality of recovered protein and metabolite data. Additives may aid with detection, but can add more complexity in sample collection or analysis. We aimed to identify the optimal urine processing protocol for clinically-obtained urine samples that allows for the highest protein and metabolite yields with minimal degradation. Healthy women provided multiple urine samples during the same day. Women collected their first morning (1st AM) void and another "random void". Random voids were aliquotted with: 1) no additive; 2) boric acid (BA); 3) protease inhibitor (PI); or 4) both BA + PI. Of these aliquots, some were immediately stored at 4°C, and some were left at RT for 4 hours. Proteins and individual metabolites were quantified, normalized to creatinine concentrations, and compared across processing conditions. Sample pools corresponding to each processing condition were analyzed using mass spectrometry to assess protein degradation. Ten Caucasian women between 35-65 years of age provided paired 1st morning and random voided urine samples. Normalized protein concentrations were slightly higher in 1st AM compared to random "spot" voids. The addition of BA did not significantly change proteins, while PI significantly improved normalized protein concentrations, regardless of whether samples were immediately cooled or left at RT for 4 hours. In pooled samples, there were minimal differences in protein degradation under the various conditions we tested. In metabolite analyses, there were significant differences in individual amino acids based on the timing of the void. For comparative translational research using urine, information about void timing should be collected and standardized. For urine samples processed in the same day, BA does not appear to be necessary while the addition of PI enhances protein yields, regardless of 4°C or RT storage temperature.

  3. Rapid-viability PCR method for detection of live, virulent Bacillus anthracis in environmental samples.

    PubMed

    Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R

    2011-09-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
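
    The decision logic of RV-PCR reduces to comparing cycle thresholds before and after incubation: growth of live organisms makes the target amplify earlier, lowering Ct. A minimal sketch of that rule follows; the 6-cycle cutoff and the Ct values are illustrative assumptions, not values taken from this paper:

```python
# RV-PCR call: real-time PCR is run before (T0) and after incubation (here T9,
# matching the 9 h incubation in the abstract); viable growth shows up as a
# drop in cycle threshold.
def rv_pcr_call(ct_t0: float, ct_t9: float, delta_ct_cutoff: float = 6.0) -> str:
    """Return 'live agent detected' if Ct drops by at least the cutoff."""
    delta_ct = ct_t0 - ct_t9   # positive when the target amplified earlier
    return "live agent detected" if delta_ct >= delta_ct_cutoff else "no viable growth"

# A Ct of 45 is often used to mean 'no amplification'; values are illustrative.
print(rv_pcr_call(ct_t0=45.0, ct_t9=31.2))   # large drop -> live agent detected
print(rv_pcr_call(ct_t0=36.0, ct_t9=35.1))   # within noise -> no viable growth
```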

  4. Accurate Sample Time Reconstruction of Inertial FIFO Data.

    PubMed

    Stieber, Sebastian; Dorsch, Rainer; Haubelt, Christian

    2017-12-13

    In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single sample acquisition.
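
    The reconstruction idea can be sketched with a linear host-time model: the sensor's internal timer stamps each FIFO batch, and a least-squares fit of host arrival times against timer ticks absorbs the unknown clock drift. The tick rate and the numbers below are assumptions for illustration, not the paper's algorithm:

```python
# Map internal sensor timer ticks to host time, then assign per-sample
# timestamps inside a FIFO batch.
import numpy as np

NOMINAL_TICK_HZ = 32_768.0            # assumed sensor timer rate

ticks = np.array([0, 33_100, 66_150, 99_240], dtype=float)   # per-batch timer reads
host = np.array([0.000, 1.012, 2.021, 3.034])                # host arrival times, s

# Least-squares line host_time = a * ticks + b estimates the true tick period
# (drift included) and the constant communication offset.
a, b = np.polyfit(ticks, host, 1)
print(f"estimated tick period {a * 1e6:.3f} us "
      f"(nominal {1e6 / NOMINAL_TICK_HZ:.3f} us)")

# Reconstruct per-sample times for one FIFO batch of 16 samples spanning
# the tick interval [66_150, 99_240).
batch_ticks = np.linspace(66_150, 99_240, 16, endpoint=False)
sample_times = a * batch_ticks + b
print("first/last sample time in batch:", sample_times[0], sample_times[-1])
```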

  5. A novel heterogeneous training sample selection method on space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground-target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. First, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Second, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Third, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourth, the distances are sorted by value, and the training samples with larger values are preferentially selected, realizing dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
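
    For orientation, the GIP screening that the paper uses as its baseline (not its own mean-Hausdorff selection) is easy to sketch: each training sample is scored by x^H R^-1 x, and samples with extreme scores are treated as contaminated. Data sizes and the threshold below are illustrative assumptions:

```python
# Generalized inner product (GIP) screening of STAP training samples.
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_samples = 8, 40
X = (rng.normal(size=(n_samples, n_dof)) +
     1j * rng.normal(size=(n_samples, n_dof)))        # clutter training data
X[5] += 4.0 * np.exp(1j * 0.3) * np.ones(n_dof)       # inject a target-like outlier

R = (X.conj().T @ X) / n_samples                      # sample covariance
R_inv = np.linalg.inv(R)

gip = np.real(np.einsum("ij,jk,ik->i", X.conj(), R_inv, X))  # x^H R^-1 x per sample
cutoff = gip.mean() + 2.0 * gip.std()                 # illustrative threshold
keep = gip < cutoff
print("rejected training samples:", np.where(~keep)[0])
```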

  6. High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN.

    PubMed

    Coggins, Brian E; Zhou, Pei

    2008-12-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
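
    The geometry of RCSS can be sketched directly: points are placed on concentric shells in the three indirect dimensions, each shell receives an independent random orthogonal transform to break coherent artifacts, and coordinates are snapped to the acquisition grid so a fast Fourier transform applies. Shell counts, point budgets and grid size below are illustrative choices, not the paper's parameters:

```python
# Generate a randomized-concentric-shell sampling pattern on a 64^3 grid.
import numpy as np

rng = np.random.default_rng(3)
GRID = 64                                  # points per indirect dimension

def random_orthogonal() -> np.ndarray:
    """Random 3-D orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))         # fix column signs

def fibonacci_sphere(n: int) -> np.ndarray:
    """Roughly uniform unit-sphere points (a common shell construction)."""
    k = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * k / n)
    theta = np.pi * (1 + 5 ** 0.5) * k
    return np.column_stack([np.sin(phi) * np.cos(theta),
                            np.sin(phi) * np.sin(theta),
                            np.cos(phi)])

points = []
for shell in range(1, 17):                     # 16 concentric shells
    radius = shell / 16 * (GRID / 2 - 1)
    n_pts = max(8, int(4 * np.pi * shell))     # more points on outer shells
    shell_pts = fibonacci_sphere(n_pts) @ random_orthogonal().T * radius
    points.append(shell_pts)

# Snap to the rectangular grid so the FFT can be used for processing.
grid_pts = np.unique(np.rint(np.vstack(points)).astype(int) + GRID // 2, axis=0)
print(f"sampling {len(grid_pts)} of {GRID**3} grid points "
      f"({100 * len(grid_pts) / GRID**3:.2f} %)")
```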

  7. High Resolution 4-D Spectroscopy with Sparse Concentric Shell Sampling and FFT-CLEAN

    PubMed Central

    Coggins, Brian E.; Zhou, Pei

    2009-01-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise. PMID:18853260

  8. The effect of silica-coating by sol-gel process on resin-zirconia bonding.

    PubMed

    Lung, Christie Ying Kei; Kukk, Edwin; Matinlinna, Jukka Pekka

    2013-01-01

    The effect of silica-coating by a sol-gel process on the bond strength of resin composite to zirconia was evaluated and compared against the sandblasting method. Four groups of zirconia samples were silica-coated by the sol-gel process under varied reagent ratios of ethanol, water, ammonia and tetraethyl orthosilicate and for different deposition times. A control group of zirconia samples was treated by sandblasting. Within each of these five groups, one subgroup of samples was kept in dry storage while another subgroup was aged by 6,000 cycles of thermocycling. Besides shear bond testing, the surface topography and surface elemental composition of the silica-coated zirconia samples were also examined using scanning electron microscopy and X-ray photoelectron spectroscopy. Comparison of silica-coating methods revealed significant differences in bond strength among the Dry groups (p<0.001) and Thermocycled groups (p<0.001). Comparison of sol-gel deposition times also revealed significant differences in bond strength among the Dry groups (p<0.01) and Thermocycled groups (p<0.001). The highest bond strengths were obtained after 141-h deposition: Dry (7.97±3.72 MPa); Thermocycled (2.33±0.79 MPa). It was concluded that silica-coating of zirconia by the sol-gel process resulted in weaker resin bonding than sandblasting.

  9. The experimental research on response characteristics of coal samples under the uniaxial loading process

    NASA Astrophysics Data System (ADS)

    Jia, Bing; Wei, Jian-Ping; Wen, Zhi-Hui; Wang, Yun-Gang; Jia, Lin-Xing

    2017-11-01

    In order to study the response characteristics of infrasound in coal samples under uniaxial loading, coal samples were collected from the GengCun mine. A coal rock stress loading device, an acoustic emission testing system and an infrasound testing system were used to record the infrasonic and acoustic emission signals during uniaxial loading. The recorded signals were analyzed by wavelet filtering, threshold denoising, time-frequency analysis and related methods. The results showed that, during loading, the infrasonic activity changed in stages and could be divided into three phases: an initial stage with a certain number of infrasound events, a middle stage with few infrasound events, and a late stage of gradual decrease. This behavior was highly consistent with the changing characteristics of the acoustic emission. Moreover, the frequency of the infrasound was very low, so it can propagate over very long distances with little attenuation, and the infrasound characteristics before failure of the coal samples were obvious. A method of using infrasound characteristics to predict the failure of coal samples is proposed, which is of great significance for guiding the prediction of geological hazards in coal mines.

  10. Parallel detecting, spectroscopic ellipsometers/polarimeters

    DOEpatents

    Furtak, Thomas E.

    2002-01-01

    The parallel detecting spectroscopic ellipsometer/polarimeter sensor has no moving parts and operates in real-time for in-situ monitoring of the thin film surface properties of a sample within a processing chamber. It includes a multi-spectral source of radiation for producing a collimated beam of radiation directed towards the surface of the sample through a polarizer. The thus polarized collimated beam of radiation impacts and is reflected from the surface of the sample, thereby changing its polarization state due to the intrinsic material properties of the sample. The light reflected from the sample is separated into four separate polarized filtered beams, each having individual spectral intensities. Data about said four individual spectral intensities is collected within the processing chamber, and is transmitted into one or more spectrometers. The data of all four individual spectral intensities is then analyzed using transformation algorithms, in real-time.

  11. Adaptive Sensing of Time Series with Application to Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Cabrol, Nathalie A.; Furlong, Michael; Hardgrove, Craig; Low, Bryan K. H.; Moersch, Jeffrey; Wettergreen, David

    2013-01-01

    We address the problem of adaptive information-optimal data collection in time series. Here a remote sensor or explorer agent throttles its sampling rate in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all collected datapoints lie in the past), but its resource allocation decisions require predicting far into the future. Our solution is to continually fit a Gaussian process model to the latest data and optimize the sampling plan online to maximize information gain. We compare the performance characteristics of stationary and nonstationary Gaussian process models. We also describe an application based on geologic analysis during planetary rover exploration. Here adaptive sampling can improve coverage of localized anomalies and potentially benefit the mission science yield of long autonomous traverses.
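
    A minimal sketch of that loop: fit a Gaussian process to the samples collected so far and place the next measurement where the predictive variance (a common proxy for information gain) is largest. The kernel choice, the synthetic anomaly and the candidate grid are assumptions, not the paper's setup:

```python
# Greedy variance-driven adaptive sampling of a time series with a GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
truth = lambda t: np.sin(0.8 * t) + 2.0 * np.exp(-0.5 * (t - 6.0) ** 2)  # anomaly near t=6

t_obs = np.array([0.0, 2.5, 5.0, 7.5, 10.0])        # initial sparse schedule
y_obs = truth(t_obs) + rng.normal(0, 0.05, t_obs.size)

candidates = np.linspace(0, 10, 201)
for step in range(5):                               # budget of 5 extra samples
    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3)).fit(
        t_obs[:, None], y_obs)
    _, std = gp.predict(candidates[:, None], return_std=True)
    t_next = candidates[np.argmax(std)]             # most informative time
    t_obs = np.append(t_obs, t_next)
    y_obs = np.append(y_obs, truth(t_next) + rng.normal(0, 0.05))
    print(f"step {step}: sampled t = {t_next:.2f}")
```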

  12. [Redesigning the hospital discharge process].

    PubMed

    Martínez-Ramos, M; Flores-Pardo, E; Uris-Sellés, J

    2016-01-01

    The aim of this article is to show that redesigning and planning the hospital discharge process advances the time at which the patient leaves the hospital. Quasi-experimental study conducted from January 2011 to April 2013, in a local hospital. The cases analysed were from medical and surgical nursing units. The process was redesigned to coordinate all the professionals involved in the process. The hospital discharge process improvement was carried out by forming a working group, analysing retrospective data, identifying areas for improvement, and redesigning the process. The dependent variable was the time of administrative patient discharge. The sample was classified as pre-intervention, inter-intervention, and post-intervention, depending on the time point of the study. The final sample included 14,788 patients after applying the inclusion and exclusion criteria. The mean discharge release time decreased significantly by 50 min between the pre-intervention and post-intervention periods. The release time in patients with planned discharge was one hour and 25 min less than in patients with unplanned discharge. Process redesign is a useful strategy to improve the process of hospital discharge. Besides planning the discharge, the patient leaving the hospital before 12 noon is shown to be a key factor. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  13. Synthesis and characterization of nanocrystalline Co-Fe-Nb-Ta-B alloy

    NASA Astrophysics Data System (ADS)

    Raanaei, Hossein; Fakhraee, Morteza

    2017-09-01

    In this research work, the structural and magnetic evolution of a Co57Fe13Nb8Ta4B18 alloy during mechanical alloying has been investigated using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, energy dispersive X-ray spectroscopy, differential thermal analysis and vibrating sample magnetometry. It is observed that after 120 h of milling, the crystallite size reaches about 7.8 nm. Structural analyses show that the solid solution of the initial powder mixture forms after 160 h of milling. The coercivity rises up to 70 h of milling and then tends to decrease toward the final stage of milling. Thermal analysis of the sample milled for 160 h reveals two endothermic peaks. Characterization of the 160 h milled sample after annealing at 427 °C shows crystallite size growth accompanied by an increase in saturation magnetization.

  14. IN SITU BIOREMEDIATION IN A LANDFILL: HOLDING TIME STUDY OF LEACHATE CHEMICAL AND MICROBIAL PARAMETERS

    EPA Science Inventory

    Processing and analyzing solid waste samples from large and costly sampling events in a timely manner is often difficult. As part of a Cooperative Research and Development Agreement (CRADA), the U.S. EPA and Waste Management Inc. (WMI) are investigating the conversion of landfill...

  15. Mechanisms and kinetics of granulated sewage sludge combustion.

    PubMed

    Kijo-Kleczkowska, Agnieszka; Środa, Katarzyna; Kosowska-Golachowska, Monika; Musiał, Tomasz; Wolski, Krzysztof

    2015-12-01

    This paper investigates sewage sludge disposal methods, with particular emphasis on combustion as the priority disposal method. Sewage sludge incineration is an attractive option because it minimizes odour, significantly reduces the volume of the starting material and thermally destroys its organic and toxic components. Additionally, the resulting ash could potentially be used. Currently, as many as 11 plants in Poland use sewage sludge as fuel; this technology must therefore be developed further in Poland, also considering the benefits of co-combustion with other fuels. This paper presents the results of experimental studies aimed at determining the mechanisms (defining the fuel combustion regime by studying the effects of process parameters, including fuel sample size, combustion chamber temperature and air velocity, on combustion) and kinetics (measurement of fuel temperature and mass changes) of fuel combustion in an air stream under different thermal conditions and flow rates. Combustion of the sludge samples in an air flow at temperatures between 800 and 900°C is a kinetic-diffusion process, and the controlling regime depends on the sample size, the ambient temperature and the air velocity. The adopted process parameters had significant impacts on the ignition time and ignition temperature of the volatiles, the combustion time of the volatiles, the time to reach the maximum fuel surface temperature, the maximum fuel surface temperature, the char combustion time, and the total process time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Microfluidic-Based Robotic Sampling System for Radioactive Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jack D. Law; Julia L. Tripp; Tara E. Smith

    A novel microfluidic-based robotic sampling system has been developed for sampling and analysis of liquid solutions in nuclear processes. This system couples the use of a microfluidic sample chip with a robotic system designed to allow remote, automated sampling of process solutions in-cell, and facilitates direct coupling of the microfluidic sample chip with analytical instrumentation. This system provides the capability for near real-time analysis, reduces analytical waste, and minimizes the potential for personnel exposure associated with traditional sampling methods. A prototype sampling system was designed, built and tested. System testing demonstrated operability of the microfluidic-based sample system and identified system modifications to optimize performance.

  17. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2–20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  18. Introducing automation to the molecular diagnosis of Trypanosoma cruzi infection: A comparative study of sample treatments, DNA extraction methods and real-time PCR assays.

    PubMed

    Abras, Alba; Ballart, Cristina; Llovet, Teresa; Roig, Carme; Gutiérrez, Cristina; Tebar, Silvia; Berenguer, Pere; Pinazo, María-Jesús; Posada, Elizabeth; Gascón, Joaquim; Schijman, Alejandro G; Gállego, Montserrat; Muñoz, Carmen

    2018-01-01

    Polymerase chain reaction (PCR) has become a useful tool for the diagnosis of Trypanosoma cruzi infection. The development of automated DNA extraction methodologies and PCR systems is an important step toward the standardization of protocols in routine diagnosis. To date, there are only two commercially available Real-Time PCR assays for the routine laboratory detection of T. cruzi DNA in clinical samples: TCRUZIDNA.CE (Diagnostic Bioprobes Srl) and RealCycler CHAG (Progenie Molecular). Our aim was to evaluate the RealCycler CHAG assay taking into account the whole process. We assessed the usefulness of an automated DNA extraction system based on magnetic particles (EZ1 Virus Mini Kit v2.0, Qiagen) combined with a commercially available Real-Time PCR assay targeting satellite DNA (SatDNA) of T. cruzi (RealCycler CHAG), a methodology used for routine diagnosis in our hospital. It was compared with a well-known strategy combining a commercial DNA isolation kit based on silica columns (High Pure PCR Template Preparation Kit, Roche Diagnostics) with an in-house Real-Time PCR targeting SatDNA. The results of the two methodologies were in almost perfect agreement, indicating they can be used interchangeably. However, when variations in protocol factors were applied (sample treatment, extraction method and Real-Time PCR), the results were less convincing. A comprehensive fine-tuning of the whole procedure is the key to successful results. Guanidine EDTA-blood (GEB) samples are not suitable for DNA extraction based on magnetic particles due to inhibition, at least when samples are not processed immediately. This is the first study to evaluate the RealCycler CHAG assay taking into account the overall process, including three variables (sample treatment, extraction method and Real-Time PCR). Our findings may contribute to the harmonization of protocols between laboratories and to a wider application of Real-Time PCR in molecular diagnostic laboratories associated with health centers.

  19. Microwave Pasteurization of Cooked Pasta: Effect of Process Parameters on Texture and Quality for Heat-and-Eat and Ready-to-Eat Meals.

    PubMed

    Joyner Melito, Helen S; Jones, Kari E; Rasco, Barbara A

    2016-06-01

    Pasta presents a challenge to microwave processing due to its unique cooking requirements. The objective of this study was to determine the effects of microwave processing on pasta physicochemical and mechanical properties. Fettuccine pasta was parboiled for selected times, then pasteurized using a Microwave Assisted Pasteurization System and stored under refrigeration for 1 wk. Samples were analyzed using microscopy, mechanical testing, and chemical analyses after storage. While no significant differences were observed for free amylose among fresh samples, samples parboiled for ≤6 min had significantly higher free amylose, suggesting reduced starch retrogradation. Increased heat treatment increased degree of protein polymerization, observed in microstructures as increased gluten strand thickness and network density. Firmness and extensibility increased with increased parboil time; however, extension data indicated an overall weakening of microwave-treated pasta regardless of total cooking time. Overall, microwave pasteurization was shown to be a viable cooking method for pasta. © 2016 Institute of Food Technologists®

  20. Non-destructive Determination of Disintegration Time and Dissolution in Immediate Release Tablets by Terahertz Transmission Measurements.

    PubMed

    Markl, Daniel; Sauerwein, Johanna; Goodwin, Daniel J; van den Ban, Sander; Zeitler, J Axel

    2017-05-01

    The aim of this study was to establish the suitability of terahertz (THz) transmission measurements to accurately measure and predict the critical quality attributes of disintegration time and the amount of active pharmaceutical ingredient (API) dissolved after 15, 20 and 25 min for commercial tablets processed at production scale. Samples of 18 batches of biconvex tablets from a production-scale design of experiments study into exploring the design space of a commercial tablet manufacturing process were used. The tablet production involved the process steps of high-shear wet granulation, fluid-bed drying and subsequent compaction. The 18 batches were produced using a 4 factor split plot design to study the effects of process changes on the disintegration time. Non-destructive and contactless terahertz transmission measurements of the whole tablets without prior sample preparation were performed to measure the effective refractive index and absorption coefficient of 6 tablets per batch. The disintegration time (R² = 0.86) and API dissolved after 15 min (R² = 0.96) linearly correlate with the effective refractive index, n_eff, measured at terahertz frequencies. In contrast, no such correlation could be established from conventional hardness measurements. The magnitude of n_eff represents the optical density of the sample and thus reflects both changes in tablet porosity and changes in granule density. For the absorption coefficient, α_eff, we observed a better correlation with dissolution after 20 min (R² = 0.96) and a weaker correlation with disintegration (R² = 0.83) compared to n_eff. The measurements of n_eff and α_eff provide promising predictors for the disintegration and dissolution time of tablets. The high penetration power of terahertz radiation makes it possible to sample a significant volume proportion of a tablet without any prior sample preparation. Together with the short measurement time (seconds), the potential to measure content uniformity and the fact that the method requires no chemometric models, this technology shows clear promise to be established as a process analyser to non-destructively predict critical quality attributes of tablets.

  1. Donders revisited: Discrete or continuous temporal processing underlying reaction time distributions?

    PubMed

    Bao, Yan; Yang, Taoxi; Lin, Xiaoxiong; Pöppel, Ernst

    2016-09-01

    Differences of reaction times to specific stimulus configurations are used as indicators of cognitive processing stages. In this classical experimental paradigm, continuous temporal processing is implicitly assumed. Multimodal response distributions indicate, however, discrete time sampling, which is often masked by experimental conditions. Differences in reaction times reflect discrete temporal mechanisms that are pre-semantically implemented and suggested to be based on entrained neural oscillations. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  2. Development of a new protocol for rapid bacterial identification and susceptibility testing directly from urine samples.

    PubMed

    Zboromyrska, Y; Rubio, E; Alejo, I; Vergara, A; Mons, A; Campo, I; Bosch, J; Marco, F; Vila, J

    2016-06-01

    The current gold standard method for the diagnosis of urinary tract infections (UTI) is urine culture that requires 18-48 h for the identification of the causative microorganisms and an additional 24 h until the results of antimicrobial susceptibility testing (AST) are available. The aim of this study was to shorten the time of urine sample processing by a combination of flow cytometry for screening and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) for bacterial identification followed by AST directly from urine. The study was divided into two parts. During the first part, 675 urine samples were processed by a flow cytometry device and a cut-off value of bacterial count was determined to select samples for direct identification by MALDI-TOF-MS at ≥5 × 10^6 bacteria/mL. During the second part, 163 of 1029 processed samples reached the cut-off value. The sample preparation protocol for direct identification included two centrifugation and two washing steps. Direct AST was performed by the disc diffusion method if a reliable direct identification was obtained. Direct MALDI-TOF-MS identification was performed in 140 urine samples; 125 of the samples were positive by urine culture, 12 were contaminated and 3 were negative. Reliable direct identification was obtained in 108 (86.4%) of the 125 positive samples. AST was performed in 102 identified samples, and the results were fully concordant with the routine method among 83 monomicrobial infections. In conclusion, the turnaround time of the protocol described to diagnose UTI was about 1 h for microbial identification and 18-24 h for AST. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  3. Particle-sampling statistics in laser anemometers: Sample-and-hold systems and saturable systems

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1983-01-01

    The effect of the data-processing system on the particle statistics obtained with laser anemometry of flows containing suspended particles is examined. Attention is given to the sample and hold processor, a pseudo-analog device which retains the last measurement until a new measurement is made, followed by time-averaging of the data. The second system considered features a dead time, i.e., a saturable system with a significant reset time with storage in a data buffer. It is noted that the saturable system operates independent of the particle arrival rate. The probabilities of a particle arrival in a given time period are calculated for both processing systems. It is shown that the system outputs are dependent on the mean particle flow rate, the flow correlation time, and the flow statistics, indicating that the particle density affects both systems. The results are significant for instances of good correlation between the particle density and velocity, such as occurs near the edge of a jet.
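
    The dependence on particle arrival statistics can be made concrete with a small simulation: particles arrive as a Poisson process whose rate is correlated with velocity, the processor holds the last measurement between arrivals, and the held signal is averaged on a uniform clock. The velocity/rate coupling below is an illustrative assumption chosen to expose the bias:

```python
# Sample-and-hold vs. raw arrival-weighted averaging in laser anemometry.
import numpy as np

rng = np.random.default_rng(5)
T, dt = 200.0, 0.01
t = np.arange(0.0, T, dt)
u = 1.0 + 0.3 * np.sin(2 * np.pi * 0.05 * t)          # "true" velocity signal

rate = 5.0 * u                                        # arrival rate tracks velocity
arrivals = rng.random(t.size) < rate * dt             # thinned Poisson arrivals

held = np.empty_like(u)
last = u[np.argmax(arrivals)]                         # value at the first arrival
for i, (ui, hit) in enumerate(zip(u, arrivals)):
    if hit:
        last = ui                                     # new measurement replaces hold
    held[i] = last

print("true time-mean velocity      :", u.mean())
print("arrival-weighted sample mean :", u[arrivals].mean())   # biased high
print("sample-and-hold time average :", held.mean())          # closer to true mean
```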

  4. Mobile membrane introduction tandem mass spectrometry for on-the-fly measurements and adaptive sampling of VOCs around oil and gas projects in Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Krogh, E.; Gill, C.; Bell, R.; Davey, N.; Martinsen, M.; Thompson, A.; Simpson, I. J.; Blake, D. R.

    2012-12-01

    The release of hydrocarbons into the environment can have significant environmental and economic consequences. The evolution of smaller, more portable mass spectrometers to the field can provide spatially and temporally resolved information for rapid detection, adaptive sampling and decision support. We have deployed a mobile platform membrane introduction mass spectrometer (MIMS) for the in-field simultaneous measurement of volatile and semi-volatile organic compounds. In this work, we report instrument and data handling advances that produce geographically referenced data in real-time and preliminary data where these improvements have been combined with high precision ultra-trace VOCs analysis to adaptively sample air plumes near oil and gas operations in Alberta, Canada. We have modified a commercially available ion-trap mass spectrometer (Griffin ICX 400) with an in-house temperature controlled capillary hollow fibre polydimethylsiloxane (PDMS) polymer membrane interface and in-line permeation tube flow cell for a continuously infused internal standard. The system is powered by 24 VDC for remote operations in a moving vehicle. Software modifications include the ability to run continuous, interlaced tandem mass spectrometry (MS/MS) experiments for multiple contaminants/internal standards. All data are time and location stamped with on-board GPS and meteorological data to facilitate spatial and temporal data mapping. Tandem MS/MS scans were employed to simultaneously monitor ten volatile and semi-volatile analytes, including benzene, toluene, ethylbenzene and xylene (BTEX), reduced sulfur compounds, halogenated organics and naphthalene. Quantification was achieved by calibrating against a continuously infused deuterated internal standard (toluene-d8). Time referenced MS/MS data were correlated with positional data and processed using Labview and Matlab to produce calibrated, geographical Google Earth data-visualizations that enable adaptive sampling protocols. This real-time approach has been employed in a moving vehicle to identify and track downwind plumes of fugitive VOC emissions near hydrocarbon upgrading and chemical processing facilities in Fort Saskatchewan, Alberta. This information was relayed to a trailing vehicle, which collected stationary grab samples in evacuated canisters for ultra trace analysis of over seventy VOC analytes. In addition, stationary time series data were collected and compared with grab samples co-located with our sampling line. Spatially and temporally resolved, time referenced MS/MS data for several air contaminants associated with oil and gas processing were processed in real time to produce geospatial data for visualization in Google Earth. This information was used to strategically locate grab samples for high precision, ultra trace analysis.

  5. Wastewater Biosolid Composting Optimization Based on UV-VNIR Spectroscopy Monitoring

    PubMed Central

    Temporal-Lara, Beatriz; Melendez-Pastor, Ignacio; Gómez, Ignacio; Navarro-Pedreño, Jose

    2016-01-01

    Conventional wastewater treatment generates large amounts of organic matter–rich sludge that requires adequate treatment to avoid public health and environmental problems. The mixture of wastewater sludge and some bulking agents produces a biosolid to be composted at adequate composting facilities. The composting process is chemically and microbiologically complex and requires an adequate aeration of the biosolid (e.g., with a turner machine) for proper maturation of the compost. Adequate (near) real-time monitoring of the compost maturity process is highly difficult and the operation of composting facilities is not as automatized as other industrial processes. Spectroscopic analysis of compost samples has been successfully employed for compost maturity assessment but the preparation of the solid compost samples is difficult and time-consuming. This manuscript presents a methodology based on a combination of a less time-consuming compost sample preparation and ultraviolet, visible and short-wave near-infrared spectroscopy. Spectroscopic measurements were performed with liquid compost extract instead of solid compost samples. Partial least square (PLS) models were developed to quantify chemical fractions commonly employed for compost maturity assessment. Effective regression models were obtained for total organic matter (residual predictive deviation—RPD = 2.68), humification ratio (RPD = 2.23), total exchangeable carbon (RPD = 2.07) and total organic carbon (RPD = 1.66) with a modular and cost-effective visible and near infrared (VNIR) spectroradiometer. This combination of a less time-consuming compost sample preparation with a versatile sensor system provides an easy-to-implement, efficient and cost-effective protocol for compost maturity assessment and near-real-time monitoring. PMID:27854280

  6. Wastewater Biosolid Composting Optimization Based on UV-VNIR Spectroscopy Monitoring.

    PubMed

    Temporal-Lara, Beatriz; Melendez-Pastor, Ignacio; Gómez, Ignacio; Navarro-Pedreño, Jose

    2016-11-15

    Conventional wastewater treatment generates large amounts of organic matter-rich sludge that requires adequate treatment to avoid public health and environmental problems. The mixture of wastewater sludge and some bulking agents produces a biosolid to be composted at adequate composting facilities. The composting process is chemically and microbiologically complex and requires an adequate aeration of the biosolid (e.g., with a turner machine) for proper maturation of the compost. Adequate (near) real-time monitoring of the compost maturity process is highly difficult and the operation of composting facilities is not as automatized as other industrial processes. Spectroscopic analysis of compost samples has been successfully employed for compost maturity assessment but the preparation of the solid compost samples is difficult and time-consuming. This manuscript presents a methodology based on a combination of a less time-consuming compost sample preparation and ultraviolet, visible and short-wave near-infrared spectroscopy. Spectroscopic measurements were performed with liquid compost extract instead of solid compost samples. Partial least square (PLS) models were developed to quantify chemical fractions commonly employed for compost maturity assessment. Effective regression models were obtained for total organic matter (residual predictive deviation-RPD = 2.68), humification ratio (RPD = 2.23), total exchangeable carbon (RPD = 2.07) and total organic carbon (RPD = 1.66) with a modular and cost-effective visible and near infrared (VNIR) spectroradiometer. This combination of a less time-consuming compost sample preparation with a versatile sensor system provides an easy-to-implement, efficient and cost-effective protocol for compost maturity assessment and near-real-time monitoring.
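
    The chemometric step can be sketched in a few lines: a partial least squares regression maps spectra to a maturity parameter, and the model is scored with the residual predictive deviation, RPD = SD(y_test)/RMSEP, the metric quoted above. The synthetic spectra below stand in for real UV-VNIR measurements of liquid compost extracts:

```python
# PLS regression on spectra with RPD scoring.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_wavelengths = 120, 300
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)  # smooth "spectra"
y = X[:, 80] - 0.5 * X[:, 220] + rng.normal(0, 0.5, n_samples)  # e.g. organic matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=6).fit(X_tr, y_tr)

rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
rpd = y_te.std() / rmsep
print(f"RMSEP = {rmsep:.2f}, RPD = {rpd:.2f}")   # RPD > 2 is commonly called usable
```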

  7. Time-instant sampling based encoding of time-varying acoustic spectrum

    NASA Astrophysics Data System (ADS)

    Sharma, Neeraj Kumar

    2015-12-01

    The inner ear has been shown to characterize an acoustic stimuli by transducing fluid motion in the inner ear to mechanical bending of stereocilia on the inner hair cells (IHCs). The excitation motion/energy transferred to an IHC is dependent on the frequency spectrum of the acoustic stimuli, and the spatial location of the IHC along the length of the basilar membrane (BM). Subsequently, the afferent auditory nerve fiber (ANF) bundle samples the encoded waveform in the IHCs by synapsing with them. In this work we focus on sampling of information by afferent ANFs from the IHCs, and show computationally that sampling at specific time instants is sufficient for decoding of time-varying acoustic spectrum embedded in the acoustic stimuli. The approach is based on sampling the signal at its zero-crossings and higher-order derivative zero-crossings. We show results of the approach on time-varying acoustic spectrum estimation from cricket call signal recording. The framework gives a time-domain and non-spatial processing perspective to auditory signal processing. The approach works on the full band signal, and is devoid of modeling any bandpass filtering mimicking the BM action. Instead, we motivate the approach from the perspective of event-triggered sampling by afferent ANFs on the stimuli encoded in the IHCs. Though the approach gives acoustic spectrum estimation but it is shallow on its complete understanding for plausible bio-mechanical replication with current mammalian auditory mechanics insights.
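
    The event set used above (zero-crossings of the signal and of its first derivative, i.e. local extrema) is straightforward to extract by sign changes; a full spectrum reconstruction from these events is beyond a short sketch, so the code below only locates them on a synthetic amplitude-modulated tone:

```python
# Locate signal zero-crossings and derivative zero-crossings (extrema).
import numpy as np

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 30 * t))  # AM tone

def sign_change_times(y: np.ndarray, tt: np.ndarray) -> np.ndarray:
    """Times where y crosses zero, by linear interpolation between samples."""
    i = np.where(np.diff(np.signbit(y)))[0]
    return tt[i] - y[i] * (tt[i + 1] - tt[i]) / (y[i + 1] - y[i])

zc = sign_change_times(x, t)                     # zero-crossings of the signal
dzc = sign_change_times(np.gradient(x, t), t)    # derivative zero-crossings (extrema)
print(f"{zc.size} zero-crossings and {dzc.size} extrema in {t[-1]:.3f} s")
```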

  8. Development of automation software for neutron activation analysis process in Malaysian nuclear agency

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Rahman, N. A. A.; Ibrahim, M. M.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.

    2017-01-01

    The Neutron Activation Analysis (NAA) process has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s. Most of the established procedures, especially from sample registration to sample analysis, are performed manually. These manual procedures carried out by the NAA laboratory personnel are time-consuming and inefficient. Hence, software to support system automation was developed to provide an effective method to replace redundant manual data entry and produce a faster sample analysis and calculation process. This paper describes the design and development of automation software for the NAA process, which consists of three sub-programs: sample registration; hardware control and data acquisition; and sample analysis. The data flow and connections between the sub-programs are explained. The software was developed using the National Instruments LabVIEW development package.

  9. Qualitative analysis of Pb liquid sample using laser-induced breakdown spectroscopy (LIBS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suyanto, Hery; Rupiasih, Ni Nyoman; Winardi, T. B.

    2013-09-03

    Qualitative analysis of a liquid sample containing 1,000 ppm of Pb was performed using the LIBS technique. To avoid splashing of the liquid sample during laser irradiation, a sample pretreatment was applied: the liquid sample was absorbed into commercially available stomach medicine. Two kinds of absorbent materials were chosen for this experiment, the first containing 125 mg of activated carbon and the second 600 mg of activated attapulgite. These absorbent materials were used because carbon absorbs well the infrared laser irradiation employed in this experiment. To characterize the absorption process, three treatments were conducted: first, without heating the sample but varying the absorption time before laser irradiation; second, varying the heating temperature after a fixed absorption time; and third, varying the temperature only. The maximum emission intensity of Pb I 405.7 nm was found in the second treatment, heating the sample to 85°C after 30 minutes of absorption of the liquid sample, for both absorbent materials.

  10. Genealogical Properties of Subsamples in Highly Fecund Populations

    NASA Astrophysics Data System (ADS)

    Eldon, Bjarki; Freund, Fabian

    2018-03-01

    We consider some genealogical properties of nested samples. The complete sample is assumed to have been drawn from a natural population characterised by high fecundity and sweepstakes reproduction (abbreviated HFSR). The random gene genealogies of the samples are—due to our assumption of HFSR—modelled by coalescent processes which admit multiple mergers of ancestral lineages looking back in time. Among the genealogical properties we consider are the probability that the most recent common ancestor is shared between the complete sample and the subsample nested within the complete sample; we also compare the lengths of `internal' branches of nested genealogies between different coalescent processes. The results indicate how `informative' a subsample is about the properties of the larger complete sample, how much information is gained by increasing the sample size, and how the `informativeness' of the subsample varies between different coalescent processes.

  11. Towards a Mobile Ecogenomic sensor: the Third Generation Environmental Sample Processor (3G-ESP).

    NASA Astrophysics Data System (ADS)

    Birch, J. M.; Pargett, D.; Jensen, S.; Roman, B.; Preston, C. M.; Ussler, W.; Yamahara, K.; Marin, R., III; Hobson, B.; Zhang, Y.; Ryan, J. P.; Scholin, C. A.

    2016-02-01

    Researchers are increasingly using one or more autonomous platforms to characterize ocean processes that change in both space and time. Conceptually, studying processes that change quickly both spatially and temporally seems relatively straightforward: one needs to sample in many locations synoptically over time, or follow a coherent water mass and sample it repeatedly. However, implementing either approach presents many challenges. For example, acquiring samples over days to weeks far from shore, without human intervention, requires multiple systems to work together seamlessly, and the level of autonomy, navigation and communications needed to conduct the work exposes the complexity of these requirements. We are addressing these challenges by developing a new generation of robotic systems that are primarily aimed at studies of microbially mediated processes. As a step towards realizing this new capability, we have taken lessons learned from our second-generation Environmental Sample Processor (2G-ESP), a robotic microbiology "lab-in-a-can", and have re-engineered the system for use on a Tethys-class Long Range AUV (LRAUV). The new instrument is called the third-generation ESP (3G-ESP), and its integration with the LRAUV provides mobility and a persistent presence not seen before in microbial oceanography. The 3G-ESP autonomously filters a water sample and then either preserves that material for eventual return to a laboratory or processes it in real time for downstream molecular analyses. The 3G-ESP separates the hardware needed for the collection and preparation of a sample from the subsequent molecular analyses through the use of self-contained "cartridges". Cartridges currently come in two forms: one for the preservation of a sample, and the other for onboard homogenization and handoff for downstream processing via one or more analytical devices. The 3G-ESP is designed as a stand-alone instrument, and thus could be deployed on a variety of platforms. This presentation will focus on results from early deployments of the prototype 3G-ESP/LRAUV, the challenges encountered in cartridge design and ESP/LRAUV integration, and the operational capabilities that show the potential of mobile ecogenomic sensors in the ocean sciences.

  12. Experimental and numerical study on plasma nitriding of AISI P20 mold steel

    NASA Astrophysics Data System (ADS)

    Nayebpashaee, N.; Vafaeenezhad, H.; Kheirandish, Sh.; Soltanieh, M.

    2016-09-01

    In this study, plasma nitriding was used to fabricate a hard protective layer on AISI P20 steel at three process temperatures (450°C, 500°C, and 550°C), over a range of treatment times (2.5, 5, 7.5, and 10 h), and at a fixed N2:H2 gas ratio of 75vol%:25vol%. The morphology of the samples was studied using optical microscopy and scanning electron microscopy, and the phases formed in each sample were determined by X-ray diffraction. Elemental depth profiles were measured by energy dispersive X-ray spectroscopy, wavelength dispersive spectroscopy, and glow discharge spectroscopy. The microhardness profile from the surface to the sample center was recorded. The results show that ɛ-nitride is the dominant phase after plasma nitriding under all conditions and that the plasma nitriding process increases the hardness more than threefold. It is found that as the time and temperature of the process increase, the hardness and the depth of the hardened diffusion zone increase considerably. Furthermore, artificial neural networks were used to predict the effects of operational parameters on the mechanical properties of the plastic mold steel. The plasma temperature, treatment time, and target-to-sample-surface distance were used as network inputs; Vickers hardness measurements were the output of the model. The model accurately reproduced the experimental outcomes under different operational conditions; therefore, it can be used for effective simulation of the plasma nitriding of AISI P20 steel.
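
    A minimal sketch of an ANN surrogate of the kind described (inputs: temperature, nitriding time, target distance; output: Vickers hardness) is given below. The network size, training data and the distance range are assumptions; the published model's architecture and data differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
T = rng.uniform(450, 550, 200)    # process temperature, deg C
h = rng.uniform(2.5, 10, 200)     # nitriding time, h
d = rng.uniform(40, 80, 200)      # target-to-surface distance, mm (assumed range)
# Mock hardness: rises with temperature and time, falls with distance
hv = 300 + 0.9 * (T - 450) + 25 * np.sqrt(h) - 1.5 * (d - 40) + rng.normal(0, 10, 200)

X = np.column_stack([T, h, d])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
).fit(X, hv)
print("predicted HV at 550 degC, 10 h, 50 mm:", float(model.predict([[550, 10, 50]])[0]))
```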

  13. Modeling and optimization of red currants vacuum drying process by response surface methodology (RSM).

    PubMed

    Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir

    2016-07-15

    Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins, ascorbic acid content and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions of the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It could be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.
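
    The RSM step, fitting a second-order polynomial in temperature, pressure and drying time and then locating the optimum inside the experimental region, can be sketched as follows. The response data are synthetic, chosen only to mirror the shape of the reported optimum.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
T = rng.uniform(48, 78, 60)      # temperature, deg C
P = rng.uniform(30, 330, 60)     # pressure, mbar
t = rng.uniform(8, 16, 60)       # drying time, h
# Mock response peaking near high T, low P, short t (synthetic, for illustration)
y = (-((T - 70) / 10) ** 2 - ((P - 40) / 150) ** 2 - ((t - 8) / 5) ** 2
     + rng.normal(0, 0.05, 60))

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(np.column_stack([T, P, t]), y)

# Locate the predicted optimum on a grid inside the experimental region
Tg, Pg, tg = np.meshgrid(np.linspace(48, 78, 31), np.linspace(30, 330, 31),
                         np.linspace(8, 16, 17))
grid = np.column_stack([Tg.ravel(), Pg.ravel(), tg.ravel()])
print("predicted optimum (T, P, t):", grid[np.argmax(rsm.predict(grid))].round(1))
```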

  14. Rapid-Viability PCR Method for Detection of Live, Virulent Bacillus anthracis in Environmental Samples

    PubMed Central

    Létant, Sonia E.; Murphy, Gloria A.; Alfaro, Teneile M.; Avila, Julie R.; Kane, Staci R.; Raber, Ellen; Bunt, Thomas M.; Shah, Sanjiv R.

    2011-01-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples. PMID:21764960
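
    The decision logic of RV-PCR, in which a sample is called positive for live organisms when the cycle threshold (Ct) drops sufficiently after incubation, can be expressed in a few lines. The 6-cycle cutoff and the handling of non-detects below are illustrative assumptions, not the published acceptance criteria.

```python
def rv_pcr_call(ct_before, ct_after, delta_ct_cutoff=6.0):
    """True if the post-incubation Ct drop indicates growth of live organisms."""
    if ct_after is None:                 # no signal after incubation
        return False
    if ct_before is None:                # template appeared only after growth
        return True
    return (ct_before - ct_after) >= delta_ct_cutoff

print(rv_pcr_call(36.2, 24.8))   # True: large Ct drop, live organisms grew
print(rv_pcr_call(35.0, 34.6))   # False: DNA present but no evidence of growth
```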

  15. Conducting On-orbit Gene Expression Analysis on ISS: WetLab-2

    NASA Technical Reports Server (NTRS)

    Parra, Macarena; Almeida, Eduardo; Boone, Travis; Jung, Jimmy; Lera, Matthew P.; Ricco, Antonio; Souza, Kenneth; Wu, Diana; Richey, C. Scott

    2013-01-01

    WetLab-2 will enable expanded genomic research on orbit by developing tools that support in situ sample collection, processing, and analysis on ISS. This capability will reduce the time-to-results for investigators and define new pathways for discovery on the ISS National Lab. The primary objective is to develop a research platform on ISS that will facilitate real-time quantitative gene expression analysis of biological samples collected on orbit. WetLab-2 will be capable of processing multiple sample types ranging from microbial cultures to animal tissues dissected on orbit. WetLab-2 will significantly expand the analytical capabilities onboard ISS and enhance science return from ISS.

  16. Dynamic Speckle Imaging with Low-Cost Devices

    ERIC Educational Resources Information Center

    Vannoni, Maurizio; Trivi, Marcelo; Arizaga, Ricardo; Rabal, Hector; Molesini, Giuseppe

    2008-01-01

    Light from a rough sample surface illuminated with a laser consists of a speckle pattern. If the surface evolves with time, the pattern becomes dynamic, following the activity of the sample. This phenomenon is used both in research and in industry to monitor processes and systems that change with time. The measuring equipment generally includes…

  17. Current Protocols in Pharmacology

    PubMed Central

    2016-01-01

    Determination of drug or drug metabolite concentrations in biological samples, particularly in serum or plasma, is fundamental to describing the relationships of administered dose, route of administration, and time after dose to the drug concentrations achieved and to the observed effects of the drug. A well-characterized, accurate analytical method is needed, but it must also be established that the analyte concentration in the sample at the time of analysis is the same as the concentration at sample acquisition. Drugs and metabolites may be susceptible to degradation in samples due to metabolism or to physical and chemical processes, resulting in a lower measured concentration than was in the original sample. Careful examination of analyte stability during processing and storage, and adjustment of procedures and conditions to maximize that stability, are a critical part of method validation for the analysis and can ensure the accuracy of the measured concentrations. PMID:27960029

  18. A survey of the use of soy in processed Turkish meat products and detection of genetic modification.

    PubMed

    Ulca, Pelin; Balta, Handan; Senyuva, Hamide Z

    2014-01-01

    To screen for possible illegal use of soybeans in meat products, the performance characteristics of a commercial polymerase chain reaction (PCR) kit for detection of soybean DNA in raw and cooked meat products were established. Minced chicken and beef products containing soybean at levels from 0.1% to 10.0% were analysed by real-time PCR to amplify the soybean lectin gene. The PCR method could reliably detect the addition of soybean at a level of 0.1%. A survey of 38 Turkish processed meat products found only six samples to be negative for the presence of soybean. Of the 32 (84%) positive samples, 13 (34%) contained levels of soy above 0.1%. The soybean-positive samples were further analysed by real-time PCR to determine whether genetically modified (GM) soybean had been used. Of the 32 meat samples containing soybean, two were positive for GM modification.

  19. Capillary absorption spectrometer and process for isotopic analysis of small samples

    DOEpatents

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2016-03-29

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, including isotopologues of carbon and oxygen obtained from gas and biological samples. The system further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows study of samples taken from the field without modification. The method also permits in vivo sampling, enabling real-time ambient studies of microbial communities.

  20. Multivariate survivorship analysis using two cross-sectional samples.

    PubMed

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.

  1. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    PubMed

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
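
    For intuition, the fixation-time distribution of a one-step birth-death process can also be sampled directly, as in the simulation sketch below (far slower than the paper's spectral route). The neutral Moran rates here are an assumption for illustration.

```python
import numpy as np

def sample_absorption_time(N=50, i0=1, rng=None):
    """One draw of the absorption time of a neutral Moran birth-death chain."""
    rng = rng or np.random.default_rng()
    i, t = i0, 0.0
    while 0 < i < N:
        up = i * (N - i) / N**2      # rate of gaining one mutant
        down = i * (N - i) / N**2    # rate of losing one mutant (neutral case)
        total = up + down
        t += rng.exponential(1.0 / total)           # waiting time to next event
        i += 1 if rng.random() < up / total else -1
    return t

rng = np.random.default_rng(3)
times = np.array([sample_absorption_time(rng=rng) for _ in range(5000)])
print("mean:", round(times.mean(), 1), " median:", round(float(np.median(times)), 1))
# The distribution is broad and right-skewed: the mean exceeds the median,
# which is exactly why the mean alone can be a poor summary.
```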

  2. Concreteness of idiographic worry and anticipatory processing.

    PubMed

    McGowan, Sarah Kate; Stevens, Elizabeth S; Behar, Evelyn; Judah, Matt R; Mills, Adam C; Grant, DeMond M

    2017-03-01

    Worry and anticipatory processing are forms of repetitive negative thinking (RNT) that are associated with maladaptive characteristics and negative consequences. One key maladaptive characteristic of worry is its abstract nature (Goldwin & Behar, 2012; Stöber & Borkovec, 2002). Several investigations have relied on inductions of worry that are social-evaluative in nature, which precludes distinctions between worry and RNT about social-evaluative situations. The present study examined similarities and distinctions between worry and anticipatory processing on potentially important maladaptive characteristics. Participants (N = 279) engaged in idiographic periods of uninstructed mentation, worry, and anticipatory processing and provided thought samples during each minute of each induction. Thought samples were assessed for concreteness, degree of verbal-linguistic activity, and degree of imagery-based activity. Both worry and anticipatory processing were characterized by reduced concreteness, increased abstraction of thought over time, and a predominance of verbal-linguistic activity. However, worry was more abstract, more verbal-linguistic, and less imagery-based relative to anticipatory processing. Finally, worry demonstrated reductions in verbal-linguistic activity over time, whereas anticipatory processing demonstrated reductions in imagery-based activity over time. Worry was limited to non-social topics to distinguish worry from anticipatory processing, and may not represent worry that is social in nature. Generalizability may also be limited by use of an undergraduate sample. Results from the present study provide support for Stöber's theory regarding the reduced concreteness of worry, and suggest that although worry and anticipatory processing share some features, they also contain characteristics unique to each process. Published by Elsevier Ltd.

  3. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    PubMed

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

    There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is achieving high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost material and equipment. Results show that sample preparation and the handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images there is a trade-off between data processing time and the number of errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient error-detection algorithms for more reliable replacement of manual interventions.

  4. Metabolic profiling of body fluids and multivariate data analysis.

    PubMed

    Trezzi, Jean-Pierre; Jäger, Christian; Galozzi, Sara; Barkovits, Katalin; Marcus, Katrin; Mollenhauer, Brit; Hiller, Karsten

    2017-01-01

    Metabolome analyses of body fluids are challenging due to pre-analytical variations, such as pre-processing delay and temperature, and the constant dynamic changes of biochemical processes within the samples. Therefore, proper sample handling from the time of collection up to the analysis is crucial to obtain high-quality samples and reproducible results. A metabolomics analysis is divided into 4 main steps: 1) Sample collection, 2) Metabolite extraction, 3) Data acquisition and 4) Data analysis. Here, we describe a protocol for gas chromatography coupled to mass spectrometry (GC-MS) based metabolic analysis of biological matrices, especially body fluids. This protocol can be applied to blood serum/plasma, saliva and cerebrospinal fluid (CSF) samples of humans and other vertebrates. It covers sample collection, sample pre-processing, metabolite extraction, GC-MS measurement and guidelines for the subsequent data analysis. Advantages of this protocol include:
    • Robust and reproducible metabolomics results, taking into account pre-analytical variations that may occur during the sampling process
    • Small sample volume required
    • Rapid and cost-effective processing of biological samples
    • Logistic regression based determination of biomarker signatures for in-depth data analysis

  5. Evaluation of process errors in bed load sampling using a Dune Model

    USGS Publications Warehouse

    Gomez, Basil; Troutman, Brent M.

    1997-01-01

    Reliable estimates of the streamwide bed load discharge obtained using sampling devices are dependent upon good at-a-point knowledge across the full width of the channel. Using field data and information derived from a model that describes the geometric features of a dune train in terms of a spatial process observed at a fixed point in time, we show that sampling errors decrease as the number of samples collected increases, and the number of traverses of the channel over which the samples are collected increases. It also is preferable that bed load sampling be conducted at a pace which allows a number of bed forms to pass through the sampling cross section. The situations we analyze and simulate pertain to moderate transport conditions in small rivers. In such circumstances, bed load sampling schemes typically should involve four or five traverses of a river, and the collection of 20–40 samples at a rate of five or six samples per hour. By ensuring that spatial and temporal variability in the transport process is accounted for, such a sampling design reduces both random and systematic errors and hence minimizes the total error involved in the sampling process.

  6. Assessing NIR & MIR Spectral Analysis as a Method for Soil C Estimation Across a Network of Sampling Sites

    NASA Astrophysics Data System (ADS)

    Spencer, S.; Ogle, S.; Borch, T.; Rock, B.

    2008-12-01

    Monitoring soil C stocks is critical to assess the impact of future climate and land use change on carbon sinks and sources in agricultural lands. A benchmark network for soil carbon monitoring of stock changes is being designed for US agricultural lands, with 3000-5000 sites anticipated and re-sampling on a 5- to 10-year basis. Approximately 1000 sites would be sampled per year, producing around 15,000 soil samples to be processed for total, organic, and inorganic carbon, as well as bulk density and nitrogen. Laboratory processing of soil samples is cost- and time-intensive; therefore we are testing the efficacy of using near-infrared (NIR) and mid-infrared (MIR) spectral methods for estimating soil carbon. As part of an initial implementation of national soil carbon monitoring, we collected over 1800 soil samples from 45 cropland sites in the mid-continental region of the U.S. Samples were processed using standard laboratory methods to determine the variables above. Carbon and nitrogen were determined by dry combustion, and inorganic carbon was estimated with an acid-pressure test. 600 samples are being scanned using a bench-top NIR reflectance spectrometer (30 g of 2 mm oven-dried soil and 30 g of 8 mm air-dried soil) and 500 samples using a MIR Fourier-Transform Infrared Spectrometer (FTIR) with a DRIFT reflectance accessory (0.2 g oven-dried ground soil). Lab-measured carbon will be compared to spectrally estimated carbon contents using a Partial Least Squares (PLS) multivariate statistical approach. PLS attempts to develop a soil C predictive model that can then be used to estimate C in soil samples that are not lab-processed. The spectral analysis of whole or partially processed soil samples can potentially save both funding resources and sample processing time. This is particularly relevant for the implementation of a national monitoring network for soil carbon. This poster will discuss our methods, initial results and the potential for using NIR and MIR spectral approaches to either replace or augment traditional lab-based carbon analyses of soils.

  7. A unified method to process biosolids samples for the recovery of bacterial, viral, and helminths pathogens.

    PubMed

    Alum, Absar; Rock, Channah; Abbaszadegan, Morteza

    2014-01-01

    For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids consist of discrete steps; a separate sample is therefore processed independently to quantify the number of each group of pathogens in biosolids. The aim of this study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing a simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors, such as solids-to-eluent ratio, stir time, and centrifugation conditions. Lastly, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less impacted by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.

  8. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for time-slot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
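
    The sub-sampling idea rests on the standard polyphase identity: filtering at the full rate and then decimating by M equals summing M low-rate branches, each convolving a sub-sampled stream with one polyphase component of the prototype filter. The sketch below checks this identity numerically with an arbitrary FIR prototype; it is a generic illustration, not the reported PPM architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=1024)          # broadband input
h = np.hanning(32); h /= h.sum()   # prototype low-pass FIR
M = 4                              # number of parallel streams

direct = np.convolve(x, h)[::M]    # full-rate filtering, then decimation

branches = []
for p in range(M):
    e_p = h[p::M]                  # p-th polyphase component of the filter
    # Sub-stream feeding branch p holds samples x[j*M - p] (zero for j*M < p)
    x_p = x[::M] if p == 0 else np.concatenate(([0.0], x[M - p::M]))
    branches.append(np.convolve(x_p, e_p))
L = min(map(len, branches))
parallel = sum(b[:L] for b in branches)

print(np.allclose(direct[:L], parallel))   # True: the two routes agree
```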

  9. A generative inference framework for analysing patterns of cultural change in sparse population data with evidence for fashion trends in LBK culture.

    PubMed

    Kandler, Anne; Shennan, Stephen

    2015-12-06

    Cultural change can be quantified by temporal changes in the frequency of different cultural artefacts, and it is a central question to identify which underlying cultural transmission processes could have caused the observed frequency changes. Observed changes, however, often describe the dynamics in samples of the population of artefacts, whereas transmission processes act on the whole population. Here we develop a modelling framework aimed at addressing this inference problem. To do so, we first generate population structures from which the observed sample could have been drawn randomly, and then determine theoretical samples at a later time t2 produced under the assumption that changes in frequencies are caused by a specific transmission process. Thereby we also account for the potential effect of time-averaging processes in the generation of the observed sample. Subsequent statistical comparisons (e.g. using Bayesian inference) of the theoretical and observed samples at t2 can establish which processes could have produced the observed frequency data. In this way, we infer underlying transmission processes directly from the available data without any equilibrium assumption. We apply this framework to a dataset describing pottery from settlements of some of the first farmers in Europe (the LBK culture) and conclude that the observed frequency dynamics of different types of decorated pottery are consistent with age-dependent selection, a preference for 'young' pottery types which is potentially indicative of fashion trends. © 2015 The Author(s).
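
    A toy version of the generative step, proposing a population consistent with the t1 sample, propagating it under a hypothesised transmission process, and drawing a theoretical sample at t2, might look as follows. The counts, population size, generation number and the choice of unbiased (neutral) transmission are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
counts_t1 = np.array([30, 15, 5])   # observed sample: three pottery types at t1
N, g, n_t2 = 1000, 20, 50           # population size, generations, t2 sample size

# 1) a population at t1 consistent with the observed sample (Dirichlet draw)
freqs = rng.dirichlet(counts_t1 + 1)

# 2) unbiased (neutral) transmission: each generation resamples the last
for _ in range(g):
    freqs = rng.multinomial(N, freqs) / N

# 3) a theoretical sample at t2 under this transmission hypothesis; in the
#    framework this would be compared statistically with the observed t2 data
sample_t2 = rng.multinomial(n_t2, freqs)
print("simulated t2 sample:", sample_t2)
```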

  10. The optical properties of α-Fe2O3 nanostructures synthesized with different immersion time

    NASA Astrophysics Data System (ADS)

    Ahmad, W. R. W.; Mamat, M. H.; Zoolfakar, A. S.; Khusaimi, Z.; Yusof, M. M.; Ismail, A. S.; Saidi, S. A.; Rusop, M.

    2018-05-01

    In this study, nanostructured hematite (α-Fe2O3) thin films were successfully prepared by a sonicated immersion method on fluorine-doped tin oxide (FTO) coated glass substrates. The effects of immersion time on the structural and optical properties of the α-Fe2O3 nanostructures were investigated for immersion times ranging from 1 to 4 hours. The characterization results show that the sample prepared with a 4-hour immersion exhibited the highest porosity as well as the highest absorbance. These results suggest that the immersion duration plays an important role in the optical properties of α-Fe2O3 nanostructures.

  11. System for sensing droplet formation time delay in a flow cytometer

    DOEpatents

    Van den Engh, Ger; Esposito, Richard J.

    1997-01-01

    A droplet flow cytometer system which includes a system to optimize the droplet formation time delay based on conditions actually experienced includes an automatic droplet sampler which rapidly moves a plurality of containers stepwise through the droplet stream while simultaneously adjusting the droplet time delay. With this system, sampling of the actual substance to be processed can be used to minimize the effect of the substance's variations on the determination of the optimal time delay. Analysis such as cell counting and the like may be conducted manually or automatically and input to a time delay adjustment, which may then act with analysis equipment to revise the time delay estimate actually applied during processing. The automatic sampler can be controlled through a microprocessor and appropriate programming to bracket an initial droplet formation time delay estimate. When a maximum count, determined through volume, weight, or other types of analysis, is found in one of the containers, the increment may then be reduced for a more accurate ultimate setting. This may be accomplished while actually processing the sample without interruption.

  12. Task-Dependent Behavioral Dynamics Make the Case for Temporal Integration in Multiple Strategies during Odor Processing

    PubMed Central

    Brown, Austin; Mehta, Nisarg; Vujovic, Mark; Amina, Tasneem; Fixsen, Bethany

    2017-01-01

    Differing results in olfactory-based decision-making research regarding the amount of time that rats and mice use to identify odors have led to some disagreements about odor-processing mechanics, including whether or not rodents use temporal integration (i.e., sniffing longer to identify odors better). Reported differences in behavioral strategies may be due to the different types of tasks used in different laboratories. Some researchers have reported that animals performing two-alternative choice (TAC) tasks need only 1–2 sniffs and do not increase performance with longer sampling. Others have reported that animals performing go/no-go (GNG) tasks increase sampling times and performance for difficult discriminations, arguing for temporal integration. We present results from four experiments comparing GNG and TAC tasks over several behavioral variables (e.g., performance, sampling duration). When rats know only one task, they perform better in GNG than in TAC. However, performance was not statistically different when rats learned and were tested in both tasks. Rats sample odors longer in GNG than in TAC, even when they know both tasks and perform them in the same or different sessions. Longer sampling is associated with better performance for both tasks in difficult discriminations, which supports the case for temporal integration over ≥2–6 sniffs in both tasks. These results illustrate that generalizations from a single task about behavioral or cognitive abilities (e.g., processing, perception) do not capture the full range of complexity and can significantly impact inferences about general abilities in sensory perception. SIGNIFICANCE STATEMENT Behavioral tasks and training and testing history affect measured outcomes in cognitive tests. Rats sample odors longer in a go/no-go (GNG) than in a two-alternative choice (TAC) task, performing better in GNG unless they know both tasks. Odor-sampling time is extended in both tasks when the odors to be discriminated are very similar. Rats may extend sampling time to integrate odor information up to ∼0.5 s (2–6 sniffs). Such factors as task, task parameters, and training history affect decision times and performance, making it important to use multiple tasks when making inferences about sensory or cognitive processing. PMID:28336570

  13. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    PubMed

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved, absorbing new understandings of the concept of laboratory error, sample material matrix and assay capability. However, we do not believe at the coalface that either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms, along with the improved stability of reagents, that has reduced systematic and random error, which in turn has minimised the risk of running IQC less frequently. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker responses to, and identification of, out-of-control situations.
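
    A bare-bones Average of Normals check can be sketched as below: a moving mean of patient results is compared against limits derived from a stable baseline period. The analyte, window size and 3-sigma limits are illustrative assumptions, not a recommended configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
baseline = rng.normal(140, 3, 500)   # e.g., sodium results from a stable period
results = rng.normal(140, 3, 300)    # incoming patient results
results[200:] += 2.5                 # simulated calibration drift

win = 20
mu, sd = baseline.mean(), baseline.std(ddof=1)
lower, upper = mu - 3 * sd / np.sqrt(win), mu + 3 * sd / np.sqrt(win)

moving = np.convolve(results, np.ones(win) / win, mode="valid")
flagged = np.where((moving < lower) | (moving > upper))[0]
print("first flagged window starts at result", flagged[0])
```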

  14. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once, and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
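
    The summing step described above is straightforward to express in array form: locate threshold crossings, extract a fixed-length subrecord of 2^n samples at each crossing, and average. The demo signal below (a lightly damped AR(2) response to white noise) and the 1-sigma trigger level are assumptions for illustration; the NASA report describes a hardware realization, not this software one.

```python
import numpy as np
from scipy.signal import lfilter

def random_decrement(x, threshold, n=10):
    """Average fixed-length (2**n) subrecords starting at upward threshold crossings."""
    L = 2 ** n
    starts = np.where((x[:-1] < threshold) & (x[1:] >= threshold))[0] + 1
    starts = starts[starts + L <= len(x)]          # keep complete subrecords only
    return x[starts[:, None] + np.arange(L)].mean(axis=0), len(starts)

# Demo: stationary response of a lightly damped resonator to white noise
rng = np.random.default_rng(7)
x = lfilter([1.0], [1.0, -1.9, 0.95], rng.normal(size=20000))
signature, n_sub = random_decrement(x, threshold=x.std())
print(n_sub, "subrecords averaged into a", signature.size, "-point signature")
```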

  15. Involvement of hippocampal NMDA receptors in encoding and consolidation, but not retrieval, processes of spontaneous object location memory in rats.

    PubMed

    Yamada, Kazuo; Arai, Misaki; Suenaga, Toshiko; Ichitani, Yukio

    2017-07-28

    The hippocampus is thought to be involved in object location recognition memory, yet the contribution of hippocampal NMDA receptors to the memory processes, such as encoding, retention and retrieval, is unknown. First, we confirmed that hippocampal infusion of a competitive NMDA receptor antagonist, AP5 (2-amino-5-phosphonopentanoic acid, 20-40 nmol), impaired performance of the spontaneous object location recognition test but not that of the novel object recognition test in Wistar rats. Next, the effects of hippocampal AP5 treatment on each process of object location recognition memory were examined with three different injection times using a 120-min delay-interposed test: 15 min before the sample phase (Time I), immediately after the sample phase (Time II), and 15 min before the test phase (Time III). The blockade of hippocampal NMDA receptors before and immediately after the sample phase, but not before the test phase, markedly impaired performance of the object location recognition test, suggesting that hippocampal NMDA receptors play an important role in encoding and consolidation/retention, but not retrieval, of spontaneous object location memory. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Atmospheric vs. anaerobic processing of metabolome samples for the metabolite profiling of a strict anaerobic bacterium, Clostridium acetobutylicum.

    PubMed

    Lee, Sang-Hyun; Kim, Sooah; Kwon, Min-A; Jung, Young Hoon; Shin, Yong-An; Kim, Kyoung Heon

    2014-12-01

    Well-established metabolome sample preparation is a prerequisite for reliable metabolomic data. For metabolome sampling of a Gram-positive strict anaerobe, Clostridium acetobutylicum, fast filtration and metabolite extraction with acetonitrile/methanol/water (2:2:1, v/v/v) at -20°C under anaerobic conditions have been commonly used. This anaerobic metabolite processing method is laborious and time-consuming, since it is conducted in an anaerobic chamber. Also, there has not been any systematic method evaluation and development of metabolome sample preparation for strict anaerobes and Gram-positive bacteria. In this study, metabolome sampling and extraction methods were rigorously evaluated and optimized for C. acetobutylicum by using gas chromatography/time-of-flight mass spectrometry-based metabolomics, in which a total of 116 metabolites were identified. When comparing the atmospheric (i.e., in air) and anaerobic (i.e., in an anaerobic chamber) processing of metabolome sample preparation, there was no significant difference in the quality and quantity of the metabolomic data. For metabolite extraction, pure methanol at -20°C was a better solvent than the acetonitrile/methanol/water (2:2:1, v/v/v) at -20°C that is frequently used for C. acetobutylicum, and metabolite profiles differed significantly depending on the extraction solvent. This is the first evaluation of metabolite sample preparation under aerobic processing conditions for an anaerobe. This method can be applied conveniently, efficiently, and reliably to metabolome analysis of strict anaerobes in air. © 2014 Wiley Periodicals, Inc.

  17. Robowell: An automated process for monitoring ground water quality using established sampling protocols

    USGS Publications Warehouse

    Granato, G.E.; Smith, K.P.

    1999-01-01

    Robowell is an automated process for monitoring selected ground water quality properties and constituents by pumping a well or multilevel sampler. Robowell was developed and tested to provide a cost-effective monitoring system that meets the protocols expected for manual sampling. The process uses commercially available electronics, instrumentation, and hardware, so it can be configured to monitor ground water quality using the equipment, purge protocol, and monitoring well design most appropriate for the monitoring site and the contaminants of interest. A Robowell prototype was installed on a sewage treatment plant infiltration bed that overlies a well-studied unconfined sand and gravel aquifer at the Massachusetts Military Reservation, Cape Cod, Massachusetts, during a time when two distinct plumes of constituents were released. The prototype was operated from May 10 to November 13, 1996, and quality-assurance/quality-control measurements demonstrated that the data obtained by the automated method were equivalent to data obtained by manual sampling methods using the same sampling protocols. Water level, specific conductance, pH, water temperature, dissolved oxygen, and dissolved ammonium were monitored by the prototype as the wells were purged according to U.S. Geological Survey (USGS) ground water sampling protocols. Remote access to the data record, via phone modem communications, indicated the arrival of each plume over a few days and the subsequent geochemical reactions over the following weeks. Real-time availability of the monitoring record provided the information needed to initiate manual sampling efforts in response to changes in measured ground water quality, which proved the method and characterized the screened portion of the plume in detail through time. The methods and the case study described are presented to document the process for future use.

  18. Reliable noninvasive prenatal testing by massively parallel sequencing of circulating cell-free DNA from maternal plasma processed up to 24h after venipuncture.

    PubMed

    Buysse, Karen; Beulen, Lean; Gomes, Ingrid; Gilissen, Christian; Keesmaat, Chantal; Janssen, Irene M; Derks-Willemen, Judith J H T; de Ligt, Joep; Feenstra, Ilse; Bekker, Mireille N; van Vugt, John M G; Geurts van Kessel, Ad; Vissers, Lisenka E L M; Faas, Brigitte H W

    2013-12-01

    Circulating cell-free fetal DNA (ccffDNA) in maternal plasma is an attractive source for noninvasive prenatal testing (NIPT). The amount of total cell-free DNA significantly increases 24 h after venipuncture, leading to a relative decrease of the ccffDNA fraction in the blood sample. In this study, we evaluated the downstream effects of extended processing times on the reliability of aneuploidy detection by massively parallel sequencing (MPS). Whole blood from pregnant women carrying normal and trisomy 21 (T21) fetuses was collected in regular EDTA anti-coagulated tubes and processed within 6 h, 24 h and 48 h after venipuncture. Samples from all three time points were further analyzed by MPS using Z-score calculation and the percentage of ccffDNA based on X-chromosome reads. Both T21 samples were correctly identified as such at all time points. However, after 48 h, a higher deviation in Z-scores was noticed. Even though the percentage of ccffDNA in a plasma sample has been shown previously to decrease significantly 24 h after venipuncture, the percentages based on MPS results did not show a significant decrease after 6, 24 or 48 h. The quality and quantity of ccffDNA extracted from plasma samples processed up to 24 h after venipuncture are sufficiently high for reliable downstream NIPT analysis by MPS. Furthermore, we show that it is important to determine the percentage of ccffDNA in the fraction of the sample that is actually used for NIPT, as downstream procedures might influence the fetal or maternal fraction. © 2013.
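
    The Z-score step of MPS-based aneuploidy detection standardises a sample's chromosome-21 read fraction against a euploid reference set. The sketch below illustrates the arithmetic; the reference fractions, read counts and the Z > 3 cutoff are conventional assumptions rather than this study's parameters.

```python
import numpy as np

def t21_zscore(chr21_reads, total_reads, ref_fracs):
    """Standardise the chr21 read fraction against euploid reference samples."""
    frac = chr21_reads / total_reads
    return (frac - np.mean(ref_fracs)) / np.std(ref_fracs, ddof=1)

rng = np.random.default_rng(8)
ref = rng.normal(0.0130, 0.0002, 100)      # chr21 read fractions, euploid refs
z = t21_zscore(chr21_reads=136_500, total_reads=10_000_000, ref_fracs=ref)
print(f"Z = {z:.1f} ->", "T21 suspected" if z > 3 else "no call")
```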

  19. Influence of Surface Finishing on the Oxidation Behaviour of VPS MCrAlY Coatings

    NASA Astrophysics Data System (ADS)

    Fossati, Alessio; di Ferdinando, Martina; Bardi, Ugo; Scrivani, Andrea; Giolli, Carlo

    2012-03-01

    CoNiCrAlY coatings were produced by means of the vacuum plasma spraying (VPS) process onto CMSX-4 single-crystal nickel superalloy disk substrates. As-sprayed samples were annealed at high temperatures in low vacuum. Three kinds of finishing processes were carried out, producing three types of samples: as-sprayed; mechanically smoothed by grinding; and ground and then PVD-coated using aluminum targets in an oxygen atmosphere. Samples were tested under isothermal conditions, in air, at 1000 °C, for up to 5000 h. Morphological, microstructural and compositional analyses were performed on the coated samples in order to assess the high-temperature oxidation behavior provided by the three different surface finishing processes. Several differences were observed: grinding operations decrease the oxidation resistance, whereas the PVD process can increase performance over longer times with respect to the as-sprayed samples.

  20. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    PubMed

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
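
    The hierarchical construction can be written down compactly: a replicate curve is modelled as the sum of cluster-, gene- and replicate-level GP draws, so the covariance between two observations accumulates one kernel term per level of structure they share. The numpy sketch below illustrates this with squared-exponential kernels and made-up hyperparameters; it is not the authors' implementation.

```python
import numpy as np

def rbf(t, var, ls):
    """Squared-exponential kernel on a common set of sample times."""
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def hierarchical_cov(t, gene, rep):
    """Cluster + gene + replicate covariance (made-up hyperparameters)."""
    K = rbf(t, 1.0, 10.0)                                        # shared cluster level
    K = K + rbf(t, 0.5, 5.0) * (gene[:, None] == gene[None, :])  # same gene
    K = K + rbf(t, 0.2, 2.0) * (rep[:, None] == rep[None, :])    # same replicate
    return K

# Two replicates of one gene, each observed at its own irregular times
t = np.array([0.0, 2.0, 5.0, 1.0, 3.0, 8.0])
gene = np.zeros(6, dtype=int)
rep = np.array([0, 0, 0, 1, 1, 1])
K = hierarchical_cov(t, gene, rep)
draw = np.random.default_rng(9).multivariate_normal(np.zeros(6), K + 1e-8 * np.eye(6))
print(draw.round(2))
```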

  1. Lithography hotspot discovery at 70nm DRAM 300mm fab: process window qualification using design base binning

    NASA Astrophysics Data System (ADS)

    Chen, Daniel; Chen, Damian; Yen, Ray; Cheng, Mingjen; Lan, Andy; Ghaskadvi, Rajesh

    2008-11-01

    Identifying hotspots--structures that limit the lithography process window--becomes increasingly important as the industry relies heavily on RET to print sub-wavelength designs. KLA-Tencor's patented Process Window Qualification (PWQ) methodology has been used for this purpose in various fabs. The PWQ methodology has three key advantages: (a) the PWQ layout, for the best sensitivity; (b) Design Based Binning, for pattern repeater analysis; and (c) intelligent sampling, for the best DOI sampling rate. This paper evaluates two different analysis strategies for SEM review sampling successfully deployed at Inotera Memories, Inc. We propose a new approach combining the location repeaters and pattern repeaters. Based on a recent case study, the new sampling flow reduces the data analysis and sampling time from 6 hours to 1.5 hours while maintaining the maximum DOI sample rate.

  2. Effects of the number of people on efficient capture and sample collection: a lion case study.

    PubMed

    Ferreira, Sam M; Maruping, Nkabeng T; Schoultz, Darius; Smit, Travis R

    2013-05-24

    Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  3. Improving clinical laboratory efficiency: a time-motion evaluation of the Abbott m2000 RealTime and Roche COBAS AmpliPrep/COBAS TaqMan PCR systems for the simultaneous quantitation of HIV-1 RNA and HCV RNA.

    PubMed

    Amendola, Alessandra; Coen, Sabrina; Belladonna, Stefano; Pulvirenti, F Renato; Clemens, John M; Capobianchi, M Rosaria

    2011-08-01

    Diagnostic laboratories need automation that facilitates efficient processing and workflow management to meet today's challenges of expanding services and reducing cost while maintaining the highest levels of quality. The processing efficiency of two commercially available automated systems for quantifying HIV-1 and HCV RNA, the Abbott m2000 system and the Roche COBAS AmpliPrep/COBAS TaqMan 96 (docked) system (CAP/CTM), was evaluated in a mid/high-throughput workflow laboratory using a representative daily workload of 24 HCV and 72 HIV samples. Three test scenarios were evaluated: A) one run with four batches on the CAP/CTM system, B) two runs on the Abbott m2000 and C) one run using the Abbott m2000 maxCycle feature (maxCycle) for co-processing these assays. Cycle times for processing, throughput and hands-on time were evaluated. Overall processing cycle time was 10.3, 9.1 and 7.6 h for Scenarios A), B) and C), respectively. Total hands-on time for each scenario was, in order, 100.0 (A), 90.3 (B) and 61.4 min (C). The interface of an automated analyzer to the laboratory workflow, notably system set-up for samples and reagents and clean-up functions, is as important as the automation capability of the analyzer for the overall impact on processing efficiency and operator hands-on time.

  4. System for high throughput water extraction from soil material for stable isotope analysis of water

    USDA-ARS?s Scientific Manuscript database

    A major limitation in the use of stable isotope of water in ecological studies is the time that is required to extract water from soil and plant samples. Using vacuum distillation the extraction time can be less than one hour per sample. Therefore, assembling a distillation system that can process m...

  5. Bioactive lipids in the butter production chain from Parmigiano Reggiano cheese area.

    PubMed

    Verardo, Vito; Gómez-Caravaca, Ana M; Gori, Alessandro; Losi, Giuseppe; Caboni, Maria F

    2013-11-01

    Bovine milk contains hundreds of diverse components, including proteins, peptides, amino acids, lipids, lactose, vitamins and minerals. Specifically, the lipid composition is influenced by different variables such as breed, feed and technological process. In this study the fatty acid and phospholipid compositions of different samples of butter and its by-products from the Parmigiano Reggiano cheese area, produced by industrial and traditional churning processes, were determined. The fatty acid composition of samples manufactured by the traditional method showed higher levels of monounsaturated and polyunsaturated fatty acids compared with industrial samples. In particular, the contents of n-3 fatty acids and conjugated linoleic acids were higher in samples produced by the traditional method than in samples produced industrially. Sample phospholipid composition also varied between the two technological processes. Phosphatidylethanolamine was the major phospholipid in cream, butter and buttermilk samples obtained by the industrial process as well as in cream and buttermilk samples from the traditional process, while phosphatidylcholine was the major phospholipid in traditionally produced butter. This result may be explained by the different churning processes causing different types of membrane disruption. Generally, samples produced traditionally had higher contents of total phospholipids; in particular, butter produced by the traditional method had a total phospholipid content 33% higher than that of industrially produced butter. The samples studied represent the two types of products present in the Parmigiano Reggiano cheese area, where the industrial churning process is widespread compared with the traditional processing of Reggiana cow's milk. This is because Reggiana cow's milk production is lower than that of other breeds and the traditional churning process is time-consuming and economically disadvantageous. However, its products have been demonstrated to contain more bioactive lipids compared with products obtained from other breeds and by the industrial process. © 2013 Society of Chemical Industry.

  6. MStern Blotting-High Throughput Polyvinylidene Fluoride (PVDF) Membrane-Based Proteomic Sample Preparation for 96-Well Plates.

    PubMed

    Berger, Sebastian T; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno

    2015-10-01

    We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate and urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of the major limitations of FASP without compromising protein identification and quantification. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  7. MStern Blotting–High Throughput Polyvinylidene Fluoride (PVDF) Membrane-Based Proteomic Sample Preparation for 96-Well Plates*

    PubMed Central

    Berger, Sebastian T.; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno

    2015-01-01

    We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate and urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of the major limitations of FASP without compromising protein identification and quantification. PMID:26223766

  8. Real-time phase evolution of Selective Laser Melted (SLM) Inconel 718 with temperature through synchrotron X-rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarley, Brooke A.; Manero, Albert; Cotelo, Jose

    2017-01-01

    Selective laser melting (SLM) is an additive manufacturing process that uses laser scanning to achieve melting and solidification of a metal powder bed. This process, when applied to develop high temperature material systems, holds great promise for more efficient manufacturing of turbine components that withstand the extreme temperatures, heat fluxes, and high mechanical stresses associated with engine environments. These extreme operational conditions demand stringent tolerances and an understanding of the material evolution under thermal loading. This work presents a real-time approach to elucidating the evolution of precipitate phases in SLM Inconel 718 (IN718) under high temperatures using high-energy synchrotron x-ray diffraction. Four representative samples (taken along variable build height) were studied in room temperature conditions. Two samples were studied as-processed (samples 1 and 4) and two samples after different thermal treatments (samples 2 and 3). The as-processed samples were found to contain greater amounts of the weakening phase, δ. Precipitation hardening of Sample 2 reduced the detectable volume of δ, while also promoting growth of γ″ in the γ matrix. Conversely, solution treatment of Sample 3 produced an overall decrease in precipitate phases. High-temperature, in-situ synchrotron scans during ramp-up, hold, and cool down of two different thermal cycles show the development of precipitate phases. Sample 1 was held at 870°C and subsequently ramped up to 1100°C, during which the high temperature instability of the strengthening precipitate, γ″, was seen. γ″ dissolution occurred after 15 minutes at 870°C and was followed by an increase of δ-phase. Sample 4 was held at 800°C and exhibited growth of γ″ after 20 minutes at this temperature. These experiments use in-situ observations to understand the intrinsic thermal effect of the SLM process and the use of heat treatment to manipulate the phase composition of SLM IN718.

  9. SamSelect: a sample sequence selection algorithm for quorum planted motif search on large DNA datasets.

    PubMed

    Yu, Qiang; Wei, Dingbang; Huo, Hongwei

    2018-06-18

    Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms have been able to efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this information, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D', rather than take an unfeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
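
    For readers unfamiliar with the qPMS formulation, the following Python sketch implements the brute-force definition stated above: an l-length string is reported if it occurs, with up to d mismatches, in at least qt of the t sequences. It is a reference definition only, feasible for tiny l, and is not the SamSelect algorithm itself.

      from itertools import product

      def hamming(a, b):
          """Number of mismatching positions between equal-length strings."""
          return sum(x != y for x, y in zip(a, b))

      def occurs(motif, seq, d):
          """True if motif matches some window of seq with at most d mismatches."""
          l = len(motif)
          return any(hamming(motif, seq[i:i + l]) <= d
                     for i in range(len(seq) - l + 1))

      def qpms_brute_force(seqs, l, d, q):
          """All l-mers occurring in at least q*t sequences with <= d mismatches.
          Exhaustive over 4^l candidates, so only usable for very small l."""
          quorum = q * len(seqs)
          return {cand for cand in map("".join, product("ACGT", repeat=l))
                  if sum(occurs(cand, s, d) for s in seqs) >= quorum}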

  10. Fast wettability transition from hydrophilic to superhydrophobic laser-textured stainless steel surfaces under low-temperature annealing

    NASA Astrophysics Data System (ADS)

    Ngo, Chi-Vinh; Chun, Doo-Man

    2017-07-01

    Recently, the fabrication of superhydrophobic metallic surfaces by means of pulsed laser texturing has been developed. After laser texturing, samples are typically chemically coated or aged in ambient air for a relatively long time of several weeks to achieve superhydrophobicity. To accelerate the wettability transition from hydrophilicity to superhydrophobicity without the use of additional chemical treatment, a simple annealing post process has been developed. In the present work, grid patterns were first fabricated on stainless steel by a nanosecond pulsed laser, then an additional low-temperature annealing post process at 100 °C was applied. The effect of 100-500 μm step size of the textured grid upon the wettability transition time was also investigated. The proposed post process reduced the transition time from a couple of months to within several hours. All samples except those with a 500 μm step size showed superhydrophobicity, with contact angles greater than 160° and sliding angles smaller than 10°, making the surfaces suitable for potential applications such as self-cleaning and control of water adhesion.

  11. Headspace solid-phase microextraction (HS-SPME) combined with GC-MS as a process analytical technology (PAT) tool for monitoring the cultivation of C. tetani.

    PubMed

    Ghader, Masoud; Shokoufi, Nader; Es-Haghi, Ali; Kargosha, Kazem

    2018-04-15

    Vaccine production is a biological process in which variation in time and output is inevitable, so the application of Process Analytical Technologies (PAT) is important in this field. Headspace solid-phase microextraction (HS-SPME) coupled with GC-MS can be used as a PAT tool for process monitoring; the method is suitable for chemical profiling of volatile organic compounds (VOCs) emitted by microorganisms. Tetanus is a lethal disease caused by the Clostridium tetani (C. tetani) bacterium, and vaccination is the ultimate way to prevent this disease. In this paper, SPME fibers were used to investigate the VOCs emerging from C. tetani during cultivation. Different types of VOCs, such as sulfur-containing compounds, were identified, and some of them were selected as biomarkers for bioreactor monitoring during vaccine production. In the second step, a portable dynamic air sampling (PDAS) device was used as an interface for sampling VOCs with SPME fibers. The sampling procedure was optimized by face-centered central composite design (FC-CCD); the optimized sampling time and inlet gas flow rate were 10 min and 2 mL s⁻¹, respectively. The PDAS was mounted in the exhaust gas line of the bioreactor, and 42 VOC samples were collected on SPME fibers over 7 days of incubation. Simultaneously, pH and optical density (OD) were monitored during the cultivation process and showed good correlations with the identified VOCs (>80%). This method could be used for VOC sampling from the off-gas of a bioreactor to monitor the cultivation process. Copyright © 2018. Published by Elsevier B.V.

  12. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    PubMed

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

    The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data, with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution by its ability to analyze without the ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1max-dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without loss due to severe apodization. NUS and CRAFT are thus complementary techniques to improve intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals the direct observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.
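
    The signal model underlying CRAFT, as described above, represents an FID as a sum of exponentially decaying sinusoids, one table row per component. The Python sketch below synthesizes a signal from such an amplitude frequency table; it illustrates the model only (CRAFT itself solves the much harder inverse problem), and the table values are made up for illustration.

      import numpy as np

      # One row per component: (frequency Hz, amplitude, decay rate 1/s, phase rad).
      table = [(120.0, 1.0, 5.0, 0.0),
               (125.0, 0.6, 8.0, 0.3)]

      def synthesize(table, n=1024, dwell=1e-3):
          """Build a complex FID from an amplitude frequency table."""
          t = np.arange(n) * dwell
          fid = np.zeros(n, dtype=complex)
          for f, a, r, phi in table:
              fid += a * np.exp(1j * (2 * np.pi * f * t + phi)) * np.exp(-r * t)
          return fid

      fid = synthesize(table)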

  13. Timing Recovery Strategies in Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Kovintavewat, Piya

    At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review of timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems, because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that timing recovery strategies based on PSP perform better than conventional ones and are thus worth employing in magnetic recording systems.

  14. Real time viability detection of bacterial spores

    DOEpatents

    Vanderberg, Laura A.; Herdendorf, Timothy J.; Obiso, Richard J.

    2003-07-29

    This invention relates to a process for detecting the presence of viable bacterial spores in a sample and to a spore detection system, the process including placing a sample in a germination medium for a period of time sufficient for commitment of any present viable bacterial spores to occur, mixing the sample with a solution of a lanthanide capable of forming a fluorescent complex with dipicolinic acid, and, measuring the sample for the presence of dipicolinic acid, and the system including a germination chamber having inlets from a sample chamber, a germinant chamber and a bleach chamber, the germination chamber further including an outlet through a filtering means, the outlet connected to a detection chamber, the detection chamber having an inlet from a fluorescence promoting metal chamber and the detection chamber including a spectral excitation source and a means of measuring emission spectra from a sample, the detection chamber further connected to a waste chamber. A germination reaction mixture useful for promoting commitment of any viable bacterial spores in a sample including a combination of L-alanine, L-asparagine and D-glucose is also described.

  15. [Comparison of sulfur fumigation processing and direct hot air heating technology on puerarin contents and efficacy of Puerariae Thomsonii Radix].

    PubMed

    Yu, Hong-Li; Zhang, Qian; Jin, Yang-Ping; Wang, Kui-Long; Lu, Tu-Lin; Li, Lin

    2016-07-01

    In order to compare the effects of sulfur fumigation processing and direct hot air heating technology on the puerarin content and efficacy of Puerariae Thomsonii Radix, fresh roots of Pueraria thomsonii were cut into small pieces and prepared as direct sunshine drying samples, direct hot air drying samples, and sulfur fumigation-hot air drying samples. The moisture contents of the samples were determined, and the puerarin contents of the different samples were compared by HPLC. Moreover, mouse models of drunkenness were established, and with superoxide dismutase (SOD) content as the index, aqueous decoction extracts of sulfur-fumigated and non-sulfur-fumigated Puerariae Thomsonii Radix samples were administered intragastrically; the effects of sulfur fumigation on SOD content in mouse liver and serum were determined, and the sulfur-fumigated and non-fumigated samples were inspected for moth damage and mildew under different packaging and storage conditions. The results showed that sulfur fumigation significantly changed the puerarin content of Puerariae Thomsonii Radix: the puerarin content decreased gradually as the number of fumigation cycles and the amount of sulfur increased. SOD content in drunken mouse liver and serum also decreased significantly with increasing fumigation, showing significant differences from both the direct sunshine drying group and the direct hot air drying group. Moth damage and mildew were not found in the sulfur-fumigated samples or in the direct hot air drying samples whose moisture contents were below the Pharmacopoeia limit. The research showed that sulfur fumigation can significantly reduce the content of the main active ingredients and the efficacy of Puerariae Thomsonii Radix, indicating that its quality is significantly decreased after sulfur fumigation. In contrast, the main active ingredient contents, efficacy and storage results of the direct hot air drying samples were similar to those of the direct sunshine drying samples, so hot air drying is a suitable drying technology that could be promoted for use. Copyright© by the Chinese Pharmaceutical Association.

  16. Soil Gas Sample Handling: Evaluation of Water Removal and Sample Ganging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, Brad G.; Abrecht, David G.; Hayes, James C.

    2016-10-31

    Soil gas sampling is currently conducted in support of Nuclear Test Ban treaty verification. Soil gas samples are collected and analyzed for isotopes of interest. Some issues that can impact sampling and analysis of these samples are excess moisture and sample processing time. Here we discuss three potential improvements to the current sampling protocol: a desiccant for water removal, use of molecular sieve to remove CO2 from the sample during collection, and a ganging manifold to allow composite analysis of multiple samples.

  17. Melting dynamics of ice in the mesoscopic regime

    PubMed Central

    Citroni, Margherita; Fanetti, Samuele; Falsini, Naomi; Foggi, Paolo; Bini, Roberto

    2017-01-01

    How does a crystal melt? How long does it take for melt nuclei to grow? The melting mechanisms have been addressed by several theoretical and experimental works, covering a subnanosecond time window with sample sizes of tens of nanometers and thus suitable to determine the onset of the process but unable to unveil the following dynamics. On the other hand, macroscopic observations of phase transitions, with millisecond or longer time resolution, account for processes occurring at surfaces and time limited by thermal contact with the environment. Here, we fill the gap between these two extremes, investigating the melting of ice in the entire mesoscopic regime. A bulk ice Ih or ice VI sample is homogeneously heated by a picosecond infrared pulse, which delivers all of the energy necessary for complete melting. The evolution of melt/ice interfaces thereafter is monitored by Mie scattering with nanosecond resolution, for all of the time needed for the sample to reequilibrate. The growth of the liquid domains, over distances of micrometers, takes hundreds of nanoseconds, a time orders of magnitude larger than expected from simple H-bond dynamics. PMID:28536197

  18. Cleaning conveyor belts in the chicken-cutting area of a poultry processing plant with 45°C water.

    PubMed

    Soares, V M; Pereira, J G; Zanette, C M; Nero, L A; Pinto, J P A N; Barcellos, V C; Bersot, L S

    2014-03-01

    Conveyor belts are widely used in food handling areas, especially in poultry processing plants. Because conveyor belts are in direct contact with food, the Brazilian health authority requires that they be continuously cleaned with hot water under pressure. The use of water in this procedure has been questioned based on the hypothesis that water may further disseminate microorganisms without effectively reducing the organic material on the surface. Moreover, reducing the use of water in processing may contribute to a reduction in costs and in the emission of effluents. However, no consistent evidence in support of removing water during conveyor belt cleaning has been reported. Therefore, the objective of the present study was to compare the bacterial counts on conveyor belts that were or were not continuously cleaned with hot water under pressure. Surface samples from conveyor belts (cleaned or not cleaned) were collected at three different times during operation (T1, after the preoperational cleaning [5 a.m.]; T2, after the first work shift [4 p.m.]; and T3, after the second work shift [1:30 a.m.]) in a poultry meat processing facility, and the samples were subjected to mesophilic and enterobacterial counts. For Enterobacteriaceae, no significant differences were observed between the conveyor belts, independent of the time of sampling or the cleaning process. No significant differences were observed between the counts of mesophilic bacteria at the distinct times of sampling on the conveyor belt that had not been subjected to continuous cleaning with water at 45°C. When comparing similar periods of sampling, no significant differences were observed between the mesophilic counts obtained from the conveyor belts that were or were not subjected to continuous cleaning with water at 45°C. Continuous cleaning with water did not significantly reduce microorganism counts, suggesting the possibility of discarding this procedure in chicken processing.

  19. Combined Effect of Long Processing Time and Na2SiF6 on the Properties of PEO Coatings Formed on AZ91D

    NASA Astrophysics Data System (ADS)

    Rehman, Zeeshan Ur; Koo, Bon Heun

    2016-08-01

    In this study, protective ceramic coatings were prepared on AZ91D magnesium alloy by plasma electrolytic oxidation (PEO) to improve its corrosion and mechanical properties. The process was conducted in a silicate-fluoride-based electrolyte solution. It was found that the average micro-hardness of the coating increased significantly with PEO processing time; the highest average micro-hardness, ~1271.2 HV, was recorded for the 60-min processing time. Phase analysis indicated that the coatings were mainly composed of Mg2SiO4, MgO, and MgF2 phases. The surface and cross-sectional study demonstrated that porosity was largely reduced with processing time, together with a change in pore geometry from irregular to spherical. The results of the polarization test in 3.5 wt.% NaCl solution revealed that aggressive corrosion took place for the 5-min sample; however, the corrosion current decreased noticeably to 0.43 × 10⁻⁷ A/cm² for the 60-min-coated sample. The superior nobility and hardness at long processing times are suggested to be due to the dense, thick coating, coupled with the presence of the MgF2 phase.

  20. Design and research of built-in sample cell with multiple optical reflections

    NASA Astrophysics Data System (ADS)

    Liu, Jianhui; Wang, Shuyao; Lv, Jinwei; Liu, Shuyang; Zhou, Tao; Jia, Xiaodong

    2017-10-01

    In the field of trace gas measurement, tunable diode laser absorption spectroscopy (TDLAS), with its high sensitivity, high selectivity and rapid detection, is widely used in industrial process monitoring and trace gas pollution monitoring. The Herriott cell is a common multiple-reflection sample cell; its structure is relatively simple, and it is widely used in trace gas absorption spectroscopy. In practical situations the gas composition is complicated, and long-term continuous testing can contaminate and corrode the reflectors in the sample cell to varying degrees. If the mirrors are not cleaned in time, detection accuracy suffers greatly. To solve this problem in harsh detection environments, this paper presents a built-in sample cell design that avoids contact between the gas and the mirrors, thereby effectively reducing corrosion and contamination. If the optics do become contaminated, the built-in sample cell can easily be removed, disassembled, and cleaned. The advantages of this design include a long optical path, high precision, and cost savings.

  1. Non-Contact Conductivity Measurement for Automated Sample Processing Systems

    NASA Technical Reports Server (NTRS)

    Beegle, Luther W.; Kirby, James P.

    2012-01-01

    A new method has been developed for monitoring and control of automated sample processing and preparation, especially focusing on desalting of samples before analytical analysis (described in more detail in Automated Desalting Apparatus, (NPO-45428), NASA Tech Briefs, Vol. 34, No. 8 (August 2010), page 44). The use of non-contact conductivity probes, one at the inlet and one at the outlet of the solid phase sample preparation media, allows monitoring of the process, and acts as a trigger for the start of the next step in the sequence (see figure). At each step of the multi-step process, the system is flushed with low-conductivity water, which sets the system back to an overall low-conductivity state. This measurement then triggers the next stage of sample processing protocols, and greatly minimizes use of consumables. In the case of amino acid sample preparation for desalting, the conductivity measurement will define three key conditions for the sample preparation process: first, when the system is neutralized (low conductivity, by washing with excess de-ionized water); second, when the system is acidified, by washing with a strong acid (high conductivity); and third, when the system is at a basic condition of high pH (high conductivity). Taken together, this non-contact conductivity measurement for monitoring sample preparation will not only facilitate automation of the sample preparation and processing, but will also act as a way to optimize the operational time and use of consumables.
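
    A hypothetical Python sketch of the trigger logic described above follows. The probe-reading and valve-switching callables, the thresholds, and the step names are all illustrative assumptions, not part of the NASA system.

      import time

      # Each step waits for the outlet conductivity (uS/cm, illustrative units)
      # to cross the threshold that defines the current target state.
      STEPS = [
          ("neutralize", "below", 10.0),   # flush with DI water: low conductivity
          ("acidify",    "above", 500.0),  # strong acid wash: high conductivity
          ("basify",     "above", 500.0),  # high-pH wash: high conductivity
      ]

      def wait_for(read_conductivity, mode, threshold, poll_s=1.0):
          """Block until the outlet conductivity crosses the threshold."""
          while True:
              value = read_conductivity()
              if (mode == "below" and value < threshold) or \
                 (mode == "above" and value > threshold):
                  return value
              time.sleep(poll_s)

      def run_protocol(read_conductivity, start_step):
          for name, mode, threshold in STEPS:
              start_step(name)  # e.g., switch valves to the next wash solution
              value = wait_for(read_conductivity, mode, threshold)
              print(f"{name} complete at {value:.1f} uS/cm")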

  2. A containerless levitation setup for liquid processing in a superconducting magnet.

    PubMed

    Lu, Hui-Meng; Yin, Da-Chuan; Li, Hai-Sheng; Geng, Li-Qiang; Zhang, Chen-Yan; Lu, Qin-Qin; Guo, Yun-Zhu; Guo, Wei-Hong; Shang, Peng; Wakayama, Nobuko I

    2008-09-01

    Containerless processing of materials is considered beneficial for obtaining high quality products due to the elimination of the detrimental effects of contact with container walls. Many containerless processing methods are realized by levitation techniques. This paper describes a containerless levitation setup that utilizes the magnetization force generated in a gradient magnetic field. It comprises a levitation unit, a temperature control unit, and a real-time observation unit. A known volume of a liquid diamagnetic sample can be levitated in the levitation chamber, the temperature of which is controlled using the temperature control unit. The evolution of the levitated sample is observed in real time using the observation unit. With this setup, containerless processing of liquids, such as crystal growth from solution, can be realized in a well-controlled manner. Since the levitation is achieved using a superconducting magnet, experiments requiring long durations, such as protein crystallization and simulation of the space environment for living systems, can readily be carried out.

  3. Magnetocaloric effect and slow magnetic relaxation in CsGd(MoO4)2 induced by crystal-field anisotropy

    NASA Astrophysics Data System (ADS)

    Tkáč, V.; Tarasenko, R.; Orendáčová, A.; Orendáč, M.; Sechovský, V.; Feher, A.

    2018-05-01

    An experimental and theoretical study of the magnetocaloric effect and magnetic relaxation of a powder sample of CsGd(MoO4)2 was performed. A large conventional magnetocaloric effect was found around 2 K, with −ΔSmax ≈ 26.5 J/(kg K) for B = 7 T. AC susceptibility measurements revealed magnetic relaxation on multiple time scales. The slowest relaxation was attributed to the direct process with a bottleneck effect, while two faster relaxation processes are effectively temperature independent, probably as a result of averaging in the powder sample.

  4. Process observation in fiber laser-based selective laser melting

    NASA Astrophysics Data System (ADS)

    Thombansen, Ulrich; Gatej, Alexander; Pereira, Milton

    2015-01-01

    The process observation in selective laser melting (SLM) focuses on observing the interaction point where the powder is processed. To provide process relevant information, signals have to be acquired that are resolved in both time and space. Especially in high-power SLM, where more than 1 kW of laser power is used, processing speeds of several meters per second are required for high-quality processing results. Therefore, an implementation of a suitable process observation system has to acquire a large amount of spatially resolved data at low sampling speeds or it has to restrict the acquisition to a predefined area at a high sampling speed. In any case, it is vitally important to synchronously record the laser beam position and the acquired signal. This is a prerequisite that allows the recorded data to become information. Today, most SLM systems employ f-theta lenses to focus the processing laser beam onto the powder bed. This report describes the drawbacks that result for process observation and suggests a variable retro-focus system which solves these issues. The beam quality of fiber lasers delivers the processing laser beam to the powder bed at relevant focus diameters, which is a key prerequisite for this solution to be viable. The optical train we present here couples the processing laser beam and the process observation coaxially, ensuring consistent alignment of interaction zone and observed area. With respect to signal processing, we have developed a solution that synchronously acquires signals from a pyrometer and the position of the laser beam by sampling the data with a field programmable gate array. The relevance of the acquired signals has been validated by the scanning of a sample filament. Experiments with grooved samples show a correlation between different powder thicknesses and the acquired signals at relevant processing parameters. This basic work takes a first step toward self-optimization of the manufacturing process in SLM. It enables the addition of cognitive functions to the manufacturing system to the extent that the system could track its own process. The results are based on analyzing and redesigning the optical train, in combination with a real-time signal acquisition system which provides a solution to certain technological barriers.

  5. Cutaway line drawing of STS-34 middeck experiment Polymer Morphology (PM)

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Cutaway line drawing shows components of STS-34 middeck experiment Polymer Morphology (PM). Components include the EAC, heat exchanger, sample cell control (SCC), sample cells, source, interferometer, electronics, carousel drive, infrared (IR) beam, and carousel. PM, a 3M-developed organic materials processing experiment, is designed to explore the effects of microgravity on polymeric materials as they are processed in space. The samples of polymeric materials being studied in the PM experiment are thin films (25 microns or less) approximately 25mm in diameter. The samples are mounted between two infrared transparent windows in a specially designed infrared cell that provides the capability of thermally processing the samples to 200 degrees Celsius with a high degree of thermal control. The samples are mounted on a carousel that allows them to be positioned, one at a time, in the infrared beam where spectra may be acquired. The Generic Electronics Module (GEM) provides all carousel and

  6. DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR BENCH-SCALE REFORMER TREATABILITY STUDIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BANNING DL

    2011-02-11

    This document describes the data quality objectives used to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm that the samples meet the shipping requirements and for comparison to the bench scale reformer (BSR) test sample selection requirements.

  7. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.

  8. Influence of atmospheric processes on the solubility and composition of iron in Saharan dust

    DOE PAGES

    Longo, Amelia F.; Feng, Yan; Lai, Barry; ...

    2016-06-10

    Aerosol iron was examined in Saharan dust plumes using a combination of iron near-edge X-ray absorption spectroscopy and wet-chemical techniques. Aerosol samples were collected at three sites located in the Mediterranean, the Atlantic, and Bermuda to characterize iron at different atmospheric transport lengths and time scales. Iron(III) oxides were a component of aerosols at all sampling sites and dominated the aerosol iron in Mediterranean samples. In Atlantic samples, iron(II and III) sulfate, iron(III) phosphate, and iron(II) silicates were also contributors to aerosol composition. With increased atmospheric transport time, iron(II) sulfates are found to become more abundant, aerosol iron oxidation state became more reduced, and aerosol acidity increased. As a result, atmospheric processing including acidic reactions and photoreduction likely influence the form of iron minerals and oxidation state in Saharan dust aerosols and contribute to increases in aerosol-iron solubility.

  9. Influence of Atmospheric Processes on the Solubility and Composition of Iron in Saharan Dust.

    PubMed

    Longo, Amelia F; Feng, Yan; Lai, Barry; Landing, William M; Shelley, Rachel U; Nenes, Athanasios; Mihalopoulos, Nikolaos; Violaki, Kalliopi; Ingall, Ellery D

    2016-07-05

    Aerosol iron was examined in Saharan dust plumes using a combination of iron near-edge X-ray absorption spectroscopy and wet-chemical techniques. Aerosol samples were collected at three sites located in the Mediterranean, the Atlantic, and Bermuda to characterize iron at different atmospheric transport lengths and time scales. Iron(III) oxides were a component of aerosols at all sampling sites and dominated the aerosol iron in Mediterranean samples. In Atlantic samples, iron(II and III) sulfate, iron(III) phosphate, and iron(II) silicates were also contributors to aerosol composition. With increased atmospheric transport time, iron(II) sulfates are found to become more abundant, aerosol iron oxidation state became more reduced, and aerosol acidity increased. Atmospheric processing including acidic reactions and photoreduction likely influence the form of iron minerals and oxidation state in Saharan dust aerosols and contribute to increases in aerosol-iron solubility.

  10. A new device to estimate abundance of moist-soil plant seeds

    USGS Publications Warehouse

    Penny, E.J.; Kaminski, R.M.; Reinecke, K.J.

    2006-01-01

    Methods to sample the abundance of moist-soil seeds efficiently and accurately are critical for evaluating management practices and determining food availability. We adapted a portable, gasoline-powered vacuum to estimate abundance of seeds on the surface of a moist-soil wetland in east-central Mississippi and evaluated the sampler by simulating conditions that researchers and managers may experience when sampling moist-soil areas for seeds. We measured the percent recovery of known masses of seeds by the vacuum sampler in relation to 4 experimentally controlled factors (i.e., seed-size class, sample mass, soil moisture class, and vacuum time) with 2-4 levels per factor. We also measured processing time of samples in the laboratory. Across all experimental factors, seed recovery averaged 88.4% and varied little (CV = 0.68%, n = 474). Overall, mean time to process a sample was 30.3 ± 2.5 min (SE, n = 417). Our estimate of seed recovery rate (88%) may be used to adjust estimates for incomplete seed recovery, or project-specific correction factors may be developed by investigators. Our device was effective for estimating surface abundance of moist-soil plant seeds after dehiscence and before habitats were flooded.
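
    A one-line worked example of the suggested correction: dividing a raw sampler estimate by the mean recovery rate adjusts for incomplete seed recovery (Python; the 50 g input mass is illustrative).

      RECOVERY = 0.884  # mean recovery rate reported above

      def adjusted_seed_mass(raw_estimate_g, recovery=RECOVERY):
          """Adjust a raw vacuum-sampled seed mass for incomplete recovery."""
          return raw_estimate_g / recovery

      print(adjusted_seed_mass(50.0))  # 50 g recovered -> ~56.6 g estimated present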

  11. A generic template for automated bioanalytical ligand-binding assays using modular robotic scripts in support of discovery biotherapeutic programs.

    PubMed

    Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J

    2013-07-01

    Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on Tecan Freedom EVO liquid handling software to facilitate the automated sample preparation and LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of the automation scripts allowed users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs supporting discovery biotherapeutic programs. The results demonstrated that the modular scripts provided flexibility in adapting to various LBA formats and significant time savings in script writing and scientist training. Data generated by the automated process were comparable to those generated by the manual process, while bioanalytical productivity was significantly improved using the modular robotic scripts.

  12. Tackling sampling challenges in biomolecular simulations.

    PubMed

    Barducci, Alessandro; Pfaendtner, Jim; Bonomi, Massimiliano

    2015-01-01

    Molecular dynamics (MD) simulations are a powerful tool to give an atomistic insight into the structure and dynamics of proteins. However, the time scales accessible in standard simulations, which often do not match those in which interesting biological processes occur, limit their predictive capabilities. Many advanced sampling techniques have been proposed over the years to overcome this limitation. This chapter focuses on metadynamics, a method based on the introduction of a time-dependent bias potential to accelerate sampling and recover equilibrium properties of a few descriptors that are able to capture the complexity of a process at a coarse-grained level. The theory of metadynamics and its combination with other popular sampling techniques such as the replica exchange method is briefly presented. Practical applications of these techniques to the study of the Trp-Cage miniprotein folding are also illustrated. The examples contain a guide for performing these calculations with PLUMED, a plugin to perform enhanced sampling simulations in combination with many popular MD codes.

  13. Age-Related Differences in Reaction Time Task Performance in Young Children

    ERIC Educational Resources Information Center

    Kiselev, Sergey; Espy, Kimberlay Andrews; Sheffield, Tiffany

    2009-01-01

    Performance of reaction time (RT) tasks was investigated in young children and adults to test the hypothesis that age-related differences in processing speed supersede a "global" mechanism and are a function of specific differences in task demands and processing requirements. The sample consisted of 54 4-year-olds, 53 5-year-olds, 59…

  14. Academic Motivation, Self-Concept, Engagement, and Performance in High School: Key Processes from a Longitudinal Perspective

    ERIC Educational Resources Information Center

    Green, Jasmine; Liem, Gregory Arief D.; Martin, Andrew J.; Colmar, Susan; Marsh, Herbert W.; McInerney, Dennis

    2012-01-01

    The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation…

  15. Circulating tumoral cells lack circadian rhythm in hospitalized metastatic breast cancer patients.

    PubMed

    García-Sáenz, José Angel; Martín, Miguel; Maestro, Marisa; Vidaurreta, Marta; Veganzones, Silvia; Villalobos, Laura; Rodríguez-Lajusticia, Laura; Rafael, Sara; Sanz-Casla, María Teresa; Casado, Antonio; Sastre, Javier; Arroyo, Manuel; Díaz-Rubio, Eduardo

    2006-11-01

    The relationship between breast cancer and circadian rhythm variation has been extensively studied. Increased breast tumorigenesis has been reported in melatonin-suppressed experimental models and in observational studies. Knowledge of the circadian rhythm of circulating tumor cells (CTC) might help optimize the timing of therapies. This is a prospective experimental study to ascertain the day-time and night-time CTC levels in hospitalized metastatic breast cancer (MBC) patients. CTC were isolated and enumerated from 08:00 AM and 08:00 PM blood collections. Twenty-three MBC patients and 23 healthy volunteers entered the study, and 69 samples were collected (23 samples at 08:00 AM and 23 samples at 08:00 PM from MBC patients; 23 samples from healthy volunteers). Results from two patients were rejected due to sample processing errors. No CTC were isolated from healthy volunteers, and no differences between day-time and night-time CTC were observed. Therefore, we could not ascertain a CTC circadian rhythm in hospitalized metastatic breast cancer patients.

  16. Adventures in Modern Time Series Analysis: From the Sun to the Crab Nebula and Beyond

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey

    2014-01-01

    With the generation of long, precise, and finely sampled time series, the Age of Digital Astronomy is uncovering and elucidating energetic dynamical processes throughout the Universe. Fulfilling these opportunities requires effective data analysis techniques that rapidly and automatically implement advanced concepts. The Time Series Explorer, under development in collaboration with Tom Loredo, provides tools ranging from simple but optimal histograms to time and frequency domain analysis for arbitrary data modes with any time sampling. Much of this development owes its existence to Joe Bredekamp and the encouragement he provided over several decades. Sample results for solar chromospheric activity, gamma-ray activity in the Crab Nebula, active galactic nuclei, and gamma-ray bursts will be displayed.

  17. Development of an automated data processing method for sample to sample comparison of seized methamphetamines.

    PubMed

    Choe, Sanggil; Lee, Jaesin; Choi, Hyeyoung; Park, Yujin; Lee, Heesang; Pyo, Jaesung; Jo, Jiyeong; Park, Yonghoon; Choi, Hwakyung; Kim, Suncheun

    2012-11-30

    Information about sources of supply, trafficking routes, distribution patterns and conspiracy links can be obtained from methamphetamine profiling. The precursor and synthetic method used in clandestine manufacture can be estimated from the analysis of minor impurities contained in methamphetamine, and the similarity between samples can be evaluated using the peaks that appear in chromatograms. In South Korea, methamphetamine is the most popular drug, but the total amount seized throughout the country is very small; therefore, finding links between samples is more important than the other uses of methamphetamine profiling. Many Asian countries including Japan and South Korea have been using the method developed by the National Research Institute of Police Science of Japan. The method uses a gas chromatography-flame ionization detector (GC-FID), a DB-5 column and four internal standards, and was developed to increase the amount of impurities and minimize the amount of methamphetamine. After GC-FID analysis, the raw data have to be processed, and the data processing steps are very complex and require a lot of time and effort. In this study, Microsoft Visual Basic for Applications (VBA) modules were developed to handle these data processing steps. The modules collected the results into an Excel file and then corrected the retention time shift and response deviation generated by sample preparation and instrumental analysis. The developed modules were tested for their performance using 10 samples from 5 different cases. The processed results were analyzed with the Pearson correlation coefficient for similarity assessment, and the correlation coefficient of two samples from the same case was more than 0.99. When the modules were applied to 131 seized methamphetamine samples, four samples from two different cases were found to have a common origin, and the chromatograms of the four samples appeared visually identical. The developed VBA modules could process raw GC-FID data very quickly and easily. They could also assess the similarity between samples by peak pattern recognition using whole peaks, without spectral identification of each peak that appeared in the chromatogram. The results collectively suggest that the modules would be useful tools to augment similarity assessment between seized methamphetamine samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
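
    The similarity step described above reduces, in essence, to a Pearson correlation between two samples' aligned peak profiles. The Python sketch below shows that calculation; the peak-area vectors are illustrative, and alignment (retention time and response correction) is assumed to have been done already.

      import numpy as np

      def similarity(peaks_a, peaks_b):
          """Pearson correlation coefficient between aligned peak-area vectors."""
          return float(np.corrcoef(peaks_a, peaks_b)[0, 1])

      a = np.array([0.12, 0.80, 0.05, 0.33, 0.41])
      b = np.array([0.10, 0.78, 0.07, 0.35, 0.40])
      print(f"r = {similarity(a, b):.3f}")  # r near 0.99 suggests a common origin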

  18. Time to stabilization in single leg drop jump landings: an examination of calculation methods and assessment of differences in sample rate, filter settings and trial length on outcome values.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2015-01-01

    Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety exists in methods to calculate the TTS. These methods can be described based on four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force), (2) signal processing (smoothed by sequential averaging, a moving root-mean-square window, or fitting an unbounded third order polynomial), (3) the stable state (threshold), and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to the sample rate, filter settings and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Differences in TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values. Copyright © 2014 Elsevier B.V. All rights reserved.
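
    As a concrete example of one TTS variant named above, the Python sketch below smooths the vertical ground reaction force with a moving root-mean-square window and reports the first time from which the smoothed deviation from body weight stays under a threshold. The window length, threshold, and stability definition are illustrative assumptions; the paper compares 18 such method combinations.

      import numpy as np

      def time_to_stabilization(vgrf, fs, body_weight, window_s=0.25, thresh=0.05):
          """TTS (s) from vertical GRF using a moving-RMS stability criterion."""
          w = int(window_s * fs)
          dev = np.abs(vgrf - body_weight) / body_weight   # relative deviation
          rms = np.sqrt(np.convolve(dev**2, np.ones(w) / w, mode="valid"))
          stable = rms < thresh
          # first index from which the smoothed signal stays stable to the end
          for i in range(len(stable)):
              if stable[i:].all():
                  return i / fs
          return None  # never stabilized within the trial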

  19. The Maia Spectroscopy Detector System: Engineering for Integrated Pulse Capture, Low-Latency Scanning and Real-Time Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkham, R.; Siddons, D.; Dunn, P.A.

    2010-06-23

    The Maia detector system is engineered for energy dispersive x-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10⁷/s, integrated scanning of samples for pixel transit times as small as 50 μs, high definition images of 10⁸ pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system developed by CSIRO and BNL combines a planar silicon 384-detector array, application-specific integrated circuits for pulse shaping, peak detection and sampling, and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.

  20. The Earth Microbiome Project and modeling the planet's microbial potential (Invited)

    NASA Astrophysics Data System (ADS)

    Gilbert, J. A.

    2013-12-01

    The understanding of Earth's climate and ecology requires multiscale observations of the biosphere, of which microbial life is a major component. However, to acquire and process physical samples of soil, water and air with the spatial and temporal resolution needed to capture the immense variation in microbial dynamics would require a herculean effort and financial resources dwarfing even the most ambitious projects to date. To overcome this hurdle we created the Earth Microbiome Project (EMP), a crowd-sourced effort to acquire physical samples from researchers around the world that are, importantly, contextualized with physical, chemical and biological data detailing the environmental properties of each sample at the location and time it was acquired. The EMP leverages these existing efforts to target a systematic analysis of microbial taxonomic and functional dynamics across a vast array of environmental parameter gradients. The EMP captures the environmental gradients, location, time and sampling protocol information for every sample donated by our valued collaborators. Physical samples are then processed using a standardized DNA extraction, PCR, and shotgun sequencing protocol to generate comparable data on the microbial community structure and function in each sample. To date we have processed >17,000 samples from 40 different biomes. One of the key goals of the EMP is to map the spatiotemporal variability of microbial communities and capture the changes in important functional processes that need to be appropriately expressed in models to provide reliable forecasts of ecosystem phenotype across our changing planet. This is essential if we are to develop economically sound strategies to be good stewards of our Earth. The EMP recognizes that environments comprise complex sets of interdependent parameters and that the development of useful predictive computational models of both terrestrial and atmospheric systems requires recognition and accommodation of sources of uncertainty.

  1. Following the dynamics of matter with femtosecond precision using the X-ray streaking method

    DOE PAGES

    David, C.; Karvinen, P.; Sikorski, M.; ...

    2015-01-06

    X-ray Free Electron Lasers (FELs) can produce extremely intense and very short pulses, down to below 10 femtoseconds (fs). Among the key applications are ultrafast time-resolved studies of the dynamics of matter by observing responses to fast excitation pulses in a pump-probe manner. Detectors with sufficient time resolution for observing these processes are not available. Therefore, such experiments typically measure a sample's full dynamics by repeating multiple pump-probe cycles at different delay times. This conventional method assumes that the sample returns to an identical or very similar state after each cycle. Here we describe a novel approach that can provide a time trace of responses following a single excitation pulse, jitter-free, with fs timing precision. We demonstrate, in an X-ray diffraction experiment, how it can be applied to the investigation of ultrafast irreversible processes.

  2. Artifacts in time-resolved NUS: A case study of NOE build-up curves from 2D NOESY.

    PubMed

    Dass, Rupashree; Kasprzak, Paweł; Koźmiński, Wiktor; Kazimierczuk, Krzysztof

    2016-04-01

    Multidimensional NMR spectroscopy requires time-consuming sampling of indirect dimensions and so is usually used to study stable samples. However, dynamically changing compounds or their mixtures commonly occur in problems of natural science. Monitoring them requires the use of multidimensional NMR in a time-resolved manner - in other words, a series of quick spectra must be acquired at different points in time. Among the many solutions that have been proposed to achieve this goal, time-resolved non-uniform sampling (TR-NUS) is one of the simplest. In a TR-NUS experiment, the signal is sampled using a shuffled random schedule and then divided into overlapping subsets. These subsets are then processed using one of the NUS reconstruction methods, for example compressed sensing (CS). The resulting stack of spectra forms a temporal "pseudo-dimension" that shows the changes caused by the process occurring in the sample. CS enables the use of small subsets of data, which minimizes the averaging of the effects studied. Yet, even within these limited timeframes, the sample undergoes certain changes. In this paper we discuss the effect of varying signal amplitude in a TR-NUS experiment. Our theoretical calculations show that the variations within the subsets lead to t1-noise, which is dependent on the rate of change of the signal amplitude. We verify these predictions experimentally. As a model case we chose a novel 2D TR-NOESY experiment in which the mixing time is varied in parallel with shuffled NUS in the indirect dimension. The experiment, performed on a sample of strychnine, provides a near-continuous NOE build-up curve, whose shape closely reflects the t1-noise level. 2D TR-NOESY reduces the measurement time compared to the conventional approach and makes it possible to verify the theoretical predictions about signal variations during TR-NUS. Copyright © 2016 Elsevier Inc. All rights reserved.
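
    The TR-NUS bookkeeping described above can be sketched in a few lines of Python: draw a random NUS schedule, shuffle it into an acquisition order, and slice that order into overlapping subsets, each of which would be reconstructed separately (e.g., by compressed sensing, omitted here). The sizes and step values are illustrative assumptions.

      import random

      def shuffled_schedule(n_points, n_sampled, seed=0):
          """Random NUS schedule over the indirect dimension, in shuffled order."""
          rng = random.Random(seed)
          points = rng.sample(range(n_points), n_sampled)  # which increments to sample
          rng.shuffle(points)                              # acquisition order
          return points

      def overlapping_subsets(schedule, size, step):
          """Consecutive overlapping windows over the acquisition order."""
          return [schedule[i:i + size]
                  for i in range(0, len(schedule) - size + 1, step)]

      sched = shuffled_schedule(n_points=256, n_sampled=64)
      frames = overlapping_subsets(sched, size=32, step=8)  # one frame per spectrum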

  3. Real-Time Electrical Impedimetric Monitoring of Blood Coagulation Process under Temperature and Hematocrit Variations Conducted in a Microfluidic Chip

    PubMed Central

    Lei, Kin Fong; Chen, Kuan-Hao; Tsui, Po-Hsiang; Tsang, Ngan-Ming

    2013-01-01

    Blood coagulation is an extremely complicated and dynamic physiological process. Monitoring of blood coagulation is essential to predict the risk of hemorrhage and thrombosis during cardiac surgical procedures. In this study, a high throughput microfluidic chip has been developed for the investigation of the blood coagulation process under temperature and hematocrit variations. The electrical impedance of whole blood was continuously recorded during coagulation by on-chip electrodes in contact with the blood sample. Analysis of the impedance change of the blood was conducted to investigate the characteristics of the coagulation process, and the starting time of blood coagulation was defined. Blood coagulation times measured under temperature and hematocrit variations showed good agreement with previous clinical reports. Electrical impedance measurement for characterizing the blood coagulation process provides a fast and easy measurement technique. The microfluidic chip was shown to be a sensitive and promising device for monitoring the blood coagulation process under a variety of conditions. It is valuable for the development of point-of-care coagulation testing devices that utilize whole blood samples in microliter quantities. PMID:24116099
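
    The paper defines the starting time of coagulation from its own impedance analysis; as one plausible illustration (an assumption, not the authors' criterion), the Python sketch below flags onset as the time of maximum rate of change of a sampled impedance trace.

      import numpy as np

      def onset_time(times_s, impedance_ohm):
          """Illustrative onset estimate: time of maximum |dZ/dt|.
          The chip's actual onset definition is not specified in the abstract."""
          dzdt = np.gradient(impedance_ohm, times_s)
          return float(times_s[np.argmax(np.abs(dzdt))])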

  4. Robust high-throughput batch screening method in 384-well format with optical in-line resin quantification.

    PubMed

    Kittelmann, Jörg; Ottens, Marcel; Hubbuch, Jürgen

    2015-04-15

    High-throughput batch screening technologies have become an important tool in downstream process development. Although continued miniaturization saves time and reduces sample consumption, no screening process has yet been described in the 384-well microplate format. Several processes are established in the 96-well format to investigate protein-adsorbent interactions, utilizing between 6.8 and 50 μL of resin per well. However, because sample consumption scales with resin volume and throughput with the number of experiments per microplate, these processes are limited in the costs and time they can save. In this work, a new method for in-well resin quantification by optical means, applicable in the 384-well format and to resin volumes as small as 0.1 μL, is introduced. An HTS batch isotherm process is described that uses this new method in combination with optical sample volume quantification for the screening of isotherm parameters in 384-well microplates. Results are qualified by confidence bounds determined by bootstrap analysis and a comprehensive Monte Carlo study of error propagation. This new approach opens the door to a variety of screening processes in the 384-well format on HTS stations, higher quality screening data, and an increase in throughput. Copyright © 2015 Elsevier B.V. All rights reserved.
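
    A minimal sketch of the bootstrap idea, assuming a Langmuir isotherm and residual resampling; the paper's actual isotherm model and bootstrap details may differ.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(c, qmax, K):
            # Langmuir isotherm: bound protein q as a function of concentration c.
            return qmax * c / (K + c)

        rng = np.random.default_rng(1)
        c = np.linspace(0.1, 10, 12)                     # mg/mL, hypothetical
        q_obs = langmuir(c, 50.0, 1.5) + rng.normal(0, 1.0, c.size)

        # Fit once, then refit on residual-resampled data for confidence bounds.
        popt, _ = curve_fit(langmuir, c, q_obs, p0=(40.0, 1.0))
        resid = q_obs - langmuir(c, *popt)
        boot = []
        for _ in range(1000):
            q_star = langmuir(c, *popt) + rng.choice(resid, c.size, replace=True)
            p_star, _ = curve_fit(langmuir, c, q_star, p0=popt)
            boot.append(p_star)
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print(f"qmax 95% CI: [{lo[0]:.1f}, {hi[0]:.1f}]")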

  5. The application of a novel optical SPM in biomedicine

    NASA Astrophysics Data System (ADS)

    Li, Yinli; Chen, Haibo; Wu, Shifa; Song, Linfeng; Zhang, Jian

    2005-01-01

    As an analysis tool, SPM has been broadly used in biomedicine in recent years, for example AFM and SNOM, which are effective instruments for detecting the nanostructures of life at the atomic level. The atomic force and photon scanning tunneling microscope (AF/PSTM) is one member of the SPM family; it can obtain a sample's optical and atomic-force images in a single scan, including the transmissivity image, the reflection index image, and the topography image. This report mainly introduces the application of AF/PSTM to red blood cell membranes and the effect of different sample-preparation processes on the experimental results. The materials for preparing red cell membrane samples are anticoagulated blood, isotonic phosphate buffer solution (PBS), and freshly double-distilled water. The AF/PSTM images faithfully represent the biological samples regardless of the preparation process, which shows that AF/PSTM is well suited to imaging biological samples. At the same time, the optical images and the topography image of the same sample are complementary to each other, which makes AF/PSTM a facile tool for analyzing the nanostructure of biological samples. As a further example, this paper presents the application of AF/PSTM to immunoassay; the results show that AF/PSTM is suited to the analysis of biological samples and will become a new tool for biomedical testing.

  6. Learning process mapping heuristics under stochastic sampling overheads

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.

  7. A microfluidic device integrating dual CMOS polysilicon nanowire sensors for on-chip whole blood processing and simultaneous detection of multiple analytes.

    PubMed

    Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu

    2016-08-02

    The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.

  8. Microwave-assisted extraction and mild saponification for determination of organochlorine pesticides in oyster samples.

    PubMed

    Carro, N; García, I; Ignacio, M-C; Llompart, M; Yebra, M-C; Mouteira, A

    2002-10-01

    A sample-preparation procedure (extraction and saponification) using microwave energy is proposed for the determination of organochlorine pesticides in oyster samples. A Plackett-Burman factorial design has been used to optimize the microwave-assisted extraction and mild saponification on a freeze-dried sample spiked with a mixture of aldrin, endrin, dieldrin, heptachlor, heptachlor epoxide, isodrin, trans-nonachlor, p,p'-DDE, and p,p'-DDD. Six variables (solvent volume, extraction time, extraction temperature, percentage of acetone in the extraction solvent, amount of sample, and volume of NaOH solution) were considered in the optimization process. The results show that the amount of sample is statistically significant for dieldrin, aldrin, p,p'-DDE, heptachlor, and trans-nonachlor, and the solvent volume for dieldrin, aldrin, and p,p'-DDE. The volume of NaOH solution is statistically significant for aldrin and p,p'-DDE only. Extraction temperature and extraction time seem to be the main factors determining the efficiency of the extraction process for isodrin and p,p'-DDE, respectively. The optimized procedure was compared with conventional Soxhlet extraction.
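
    A Plackett-Burman screening design assigns each variable a column of an orthogonal two-level matrix, so main effects can be estimated from very few runs. The sketch below builds an 8-run design for the six variables plus one dummy column from a Sylvester Hadamard matrix; the construction is standard, but the factor assignment is illustrative, not the design actually used in the paper.

        import numpy as np

        # Sylvester construction of an 8x8 Hadamard matrix.
        h2 = np.array([[1, 1], [1, -1]])
        h8 = np.kron(h2, np.kron(h2, h2))

        # Dropping the all-ones first column leaves an 8-run, 7-factor
        # two-level screening design (all columns mutually orthogonal).
        design = h8[:, 1:]

        factors = ["solvent volume", "extraction time", "temperature",
                   "% acetone", "sample amount", "NaOH volume", "dummy"]
        for run in design:
            print({f: ("+" if lvl == 1 else "-")
                   for f, lvl in zip(factors, run)})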

  9. Porous calcium polyphosphate bone substitutes: additive manufacturing versus conventional gravity sinter processing-effect on structure and mechanical properties.

    PubMed

    Hu, Youxin; Shanjani, Yaser; Toyserkani, Ehsan; Grynpas, Marc; Wang, Rizhi; Pilliar, Robert

    2014-02-01

    Porous calcium polyphosphate (CPP) structures have been proposed as bone-substitute implants. Bending-test samples of approximately 35 vol % porosity were machined from sintered CPP preformed blocks made either by additive manufacturing (AM) or by conventional gravity sintering (CS), and the structure and mechanical characteristics of the two sample types were compared. AM-made samples displayed higher bending strengths (≈1.2-1.4 times greater than CS-made samples), whereas the elastic constant (i.e., the effective elastic modulus of the porous structure, determined by the material elastic modulus and the structural geometry of the sample) was ≈1.9-2.3 times greater for AM-made samples. X-ray diffraction analysis showed that samples made by either method displayed the same crystal structure, forming β-CPP after sinter annealing. The material elastic modulus, E, determined using nanoindentation tests, also showed the same value for both sample types (E ≈ 64 GPa). Examination of the porous structures indicated that significantly larger sinter necks formed in the AM-made samples, which presumably accounts for their higher mechanical properties. This difference was attributed to the different sinter anneal procedures required to make 35 vol % porous samples by the two methods. A primary objective of the present study, in addition to reporting on bending strength and sample stiffness (elastic constant) characteristics, was to determine why the two processes resulted in the observed mechanical property differences for samples of equivalent volume percentage of porosity. An understanding of the fundamental reason(s) for the observed effect is considered important for developing improved processes for the preparation of porous CPP implants as bone substitutes for use in high load-bearing skeletal sites. Copyright © 2013 Wiley Periodicals, Inc.

  10. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline, and a 5.25% average improvement when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
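
    The Gaussian-process building block of such a framework can be illustrated on an irregular grid. The sketch below, with made-up lab values and scikit-learn (assumed available), interpolates an irregularly sampled series with uncertainty; it is not the authors' full hierarchical model.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)

        # Irregularly sampled lab values (e.g. one CBC analyte) over 30 days.
        t_obs = np.sort(rng.uniform(0, 30, 15))[:, None]        # days
        y_obs = 10 + np.sin(t_obs.ravel() / 4) + rng.normal(0, 0.2, 15)

        # RBF kernel for smooth trends plus a white-noise term for lab error.
        kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05)
        gp = GaussianProcessRegressor(kernel=kernel).fit(t_obs, y_obs)

        # Predict on a regular grid with pointwise uncertainty.
        t_grid = np.linspace(0, 30, 200)[:, None]
        mean, std = gp.predict(t_grid, return_std=True)
        print(f"predicted value at day 30: {mean[-1]:.2f} +/- {std[-1]:.2f}")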

  11. Sample processing approach for detection of ricin in surface samples.

    PubMed

    Kane, Staci; Shah, Sanjiv; Erler, Anne Marie; Alfaro, Teneile

    2017-12-01

    With several ricin contamination incidents reported over the past decade, rapid and accurate methods are needed for environmental sample analysis, especially after decontamination. A sample processing method was developed for common surface sampling devices to improve the limit of detection and avoid false negative/positive results in ricin analysis. Potential assay interferents from the sample matrix (bleach residue, sample material, wetting buffer), including reference dust, were tested using a Time-Resolved Fluorescence (TRF) immunoassay. Test results suggested that the sample matrix did not cause the elevated background fluorescence sometimes observed when analyzing post-bleach decontamination samples from ricin incidents. Furthermore, sample particulates (80 mg/mL Arizona Test Dust) did not enhance background fluorescence or interfere with ricin detection by TRF. These results suggested that high background fluorescence in this immunoassay could be due to labeled antibody quality and/or quantity issues. Centrifugal ultrafiltration devices were evaluated for ricin concentration as part of sample processing. Up to 30-fold concentration of ricin was achieved by the devices, which serve to remove soluble interferents and could function as the front-end sample processing step for other ricin analytical methods. The procedure has the potential to be used with a broader range of environmental sample types and other potential interferents and to be followed by other ricin analytical methods, although additional verification studies would be required. Published by Elsevier B.V.

  12. Methods for producing silicon carbide architectural preforms

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A. (Inventor); Yun, Hee (Inventor)

    2010-01-01

    Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.

  13. Oversampling of digitized images. [effects on interpolation in signal processing

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
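
    The interpolation the author recommends in place of oversampling follows directly from the Sampling Theorem: a band-limited signal sampled above its Nyquist rate can be reconstructed exactly at any time point by a sinc-weighted sum. A minimal sketch (finite record length causes small edge errors):

        import numpy as np

        def sinc_interp(samples, fs, t):
            # Whittaker-Shannon reconstruction of a band-limited signal
            # from uniform samples taken at rate fs, evaluated at times t.
            n = np.arange(len(samples))
            # Each sample contributes a shifted sinc; sum the contributions.
            return np.dot(samples, np.sinc(fs * t[None, :] - n[:, None]))

        fs = 10.0                                  # Hz, above twice 3 Hz
        ts = np.arange(0, 2, 1 / fs)
        x = np.cos(2 * np.pi * 3.0 * ts)           # band-limited test signal

        t_fine = np.linspace(0, 1.5, 500)
        x_rec = sinc_interp(x, fs, t_fine)
        print("max reconstruction error:",
              np.max(np.abs(x_rec - np.cos(2 * np.pi * 3.0 * t_fine))))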

  14. Demonstration of the efficiency and robustness of an acid leaching process to remove metals from various CCA-treated wood samples.

    PubMed

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Janin, Amélie; Gastonguay, Louis

    2014-01-01

    In recent years, an efficient and economically attractive leaching process has been developed to remove metals from copper-based treated wood wastes. This study explored the applicability of this leaching process to chromated copper arsenate (CCA) treated wood samples with different initial metal loadings and different elapsed times between wood preservation treatment and remediation. The sulfuric acid leaching process solubilized more than 87% of the As, 70% of the Cr, and 76% of the Cu from CCA chips, and more than 96% of the As, 78% of the Cr, and 91% of the Cu from CCA sawdust. The results showed that the performance of this leaching process can be influenced by the initial metal loading of the treated wood wastes and by the elapsed time between preservation treatment and remediation. The effluents generated during the leaching steps were treated by precipitation-coagulation to satisfy the regulations for effluent discharge into municipal sewers. Precipitation using ferric chloride and sodium hydroxide was highly efficient, removing more than 99% of the As, Cr, and Cu. It appears that this leaching process can be successfully applied to remove metals from different CCA-treated wood samples and that the resulting effluents can then be treated effectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. A novel PMT test system based on waveform sampling

    NASA Astrophysics Data System (ADS)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, a TDC, and a scaler, a test system based on waveform sampling was constructed to sample signals from the 8" R5912 and the 20" R12860 Hamamatsu PMTs at illumination levels ranging from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width, and rise time are analyzed offline. The results from the Charge-to-Digital Converter, the Time-to-Digital Converter, and waveform sampling are compared in detail.
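
    Offline waveform analysis of the kind described reduces to a few array operations per pulse. The sketch below computes charge, amplitude, width, and rise time from a sampled PMT pulse; the 10%/90% rise-time points, the 50% width threshold, and the 50 Ω load are conventional assumptions, not specifics from the paper.

        import numpy as np

        def pulse_features(wf, dt, r_load=50.0):
            # Charge, amplitude, width and rise time of a negative-going
            # PMT pulse sampled every dt seconds.
            base = wf[:50].mean()                 # pre-trigger baseline
            v = base - wf                         # invert: pulse now positive
            amp = v.max()
            charge = v.sum() * dt / r_load        # integral of I = V/R
            above = np.where(v > 0.5 * amp)[0]    # FWHM-style signal width
            width = (above[-1] - above[0]) * dt
            i_pk = v.argmax()
            t10 = np.argmax(v[:i_pk + 1] > 0.1 * amp)   # 10% crossing
            t90 = np.argmax(v[:i_pk + 1] > 0.9 * amp)   # 90% crossing
            rise = (t90 - t10) * dt
            return charge, amp, width, rise

        # Synthetic single-photoelectron-like pulse on a 1 GSa/s record.
        dt = 1e-9
        t = np.arange(400) * dt
        wf = -5e-3 * np.exp(-((t - 200e-9) / 8e-9) ** 2)   # volts
        print(pulse_features(wf, dt))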

  16. Influence of experimental conditions on data variability in the liver comet assay.

    PubMed

    Guérard, M; Marchand, C; Plappert-Helbig, U

    2014-03-01

    The in vivo comet assay has increasingly been used for regulatory genotoxicity testing in recent years. While it has been demonstrated that the experimental execution of the assay (for example, electrophoresis or scoring) can have a strong impact on the results, little is known about how the initial steps, from tissue sampling during necropsy up to slide preparation, can influence the comet assay results. Therefore, we investigated which of the many steps in processing the liver for the comet assay are most critical. Altogether, eight parameters were assessed using liver samples from untreated animals. In addition, two of these parameters (temperature and storage time of the liver before embedding into agarose) were further investigated in animals given a single oral dose of ethyl methanesulfonate at dose levels of 50, 100, and 200 mg/kg, 3 hr prior to necropsy. The results showed that sample cooling emerged as the predominant influencing factor, whereas variations in other elements of the procedure (e.g., size of the liver piece sampled, time needed to process the liver tissue post-mortem, agarose temperature, or time of lysis) seem to be of little relevance. Storing liver samples for up to 6 hr under cooled conditions did not cause an increase in tail intensity. In contrast, storing the tissue at room temperature resulted in a considerable time-dependent increase in comet parameters. Copyright © 2013 Wiley Periodicals, Inc.

  17. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhiyong; Smith, Pieter E. S.

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.

  18. Role of intensive milling in the processing of barium ferrite/magnetite/iron hybrid magnetic nano-composites via partial reduction of barium ferrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molaei, M.J.; Ataie, A.

    2015-03-15

    In this research, a mixture of barium ferrite and graphite was milled for different periods of time and then heat treated at different temperatures. The effects of milling time and heat treatment temperature on the phase composition, thermal behavior, morphology, and magnetic properties of the samples were investigated using X-ray diffraction, differential thermal analysis, high resolution transmission electron microscopy, and vibrating sample magnetometer techniques, respectively. X-ray diffraction results revealed that BaFe12O19/Fe3O4 nanocomposites form after 20 h of milling due to the partial reduction of BaFe12O19. High resolution transmission electron microscope images of a 40 h milled sample showed an agglomerated structure consisting of nanoparticles with a mean particle size of 30 nm. Differential thermal analysis indicated that for un-milled samples, heat treatment up to 900 °C did not result in α-Fe formation, while for a 20 h milled sample heat treatment at 700 °C carried the reduction process through to the formation of α-Fe. Wustite disappeared from the X-ray diffraction pattern of a sample heat treated at 850 °C when the milling time was increased from 20 to 40 h. With increasing milling time, the structure of heat treated samples becomes magnetically softer due to an increase in saturation magnetization and a decrease in coercivity. The saturation magnetization and coercivity of a sample milled for 20 h and heat treated at 850 °C were 126.3 emu/g and 149.5 Oe; increasing the milling time to 40 h changed these values to 169.1 emu/g and 24.3 Oe, respectively. The high coercivity values of milled and heat treated samples were attributed to the nano-scale iron particles formed. Highlights:
    • Barium ferrite and graphite were treated mechano-thermally.
    • Increasing milling time increases reduction progress after heat treatment.
    • Composites including iron nano-crystals form by milling and heat treatment.
    • Shorter milling time results in higher coercivity (HC) of the milled and heat treated samples.

  19. How Do I Sample the Environment and Equipment?

    NASA Astrophysics Data System (ADS)

    Kornacki, Jeffrey L.

    Food product contamination from the post-processing environment is likely the most frequent cause of recalls of contaminated processed foods in North America, and a significant source of poisoning outbreaks and shelf-life problems with processed ready-to-eat foods. Conditions exist for the growth of microorganisms in most food processing factories. Failure to clean and effectively sanitize a microbial growth niche can lead to biofilm formation. Biofilms may be orders of magnitude more resistant to destruction by sanitizers; cells in some biofilms have been shown to be 1,000 times more resistant to destruction than those which are freely suspended. This has implications for cleaning, sanitizing, sampling, and training. Sampling the factory environment is one means of monitoring the efficacy of microbiological control, as well as a powerful tool for in-factory contamination investigation. Many sampling techniques exist and are discussed. It is important to recognize the difference between cleaning (removal of soil) and sanitization (reduction of microbial populations). Knowing where, when, and how to sample, how many samples to take, what to test for, and how to interpret test information is critical in finding and preventing contamination.

  20. Is aceticlastic methanogen composition in full-scale anaerobic processes related to acetate utilization capacity?

    PubMed

    Yilmaz, Vedat; Ince-Yilmaz, Ebru; Yilmazel, Yasemin Dilsad; Duran, Metin

    2014-06-01

    In this study, biomass samples were obtained from six municipal and nine industrial full-scale anaerobic processes to investigate whether the aceticlastic methanogen population composition is related to acetate utilization capacity and the nature of the wastewater treated, i.e., municipal sludge or industrial wastewater. Batch serum bottle tests were used to determine the specific acetate utilization rate (AUR), and a quantitative real-time polymerase chain reaction protocol was used to enumerate the acetate-utilizing Methanosaeta and Methanosarcina populations in the biomass samples. Methanosaeta was the dominant aceticlastic methanogen in all samples, except for one industrial wastewater-treating anaerobic process. However, the Methanosarcina density in industrial biomass samples was higher than that in the municipal samples. The average AUR values of municipal and industrial wastewater treatment plant biomass samples were 10.49 and 10.65 mg CH3COO(-)/log(aceticlastic methanogen gene copy).d, respectively. A one-way ANOVA test and principal component analysis showed that the acetate utilization capacities and aceticlastic methanogen community composition did not show statistically significant correlation among the municipal digesters and industrial wastewater-treating processes investigated.

  1. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s). The technique is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and for machine vision in manufacturing quality control. PMID:28303936
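
    The core idea, interleaving frames whose sample grids are offset by sub-pixel amounts, can be sketched in a few lines. Here the fractional offsets are known by construction; in practice they would be estimated by pixel registration, and the signal and sizes below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(3)

        def line(t):
            # Two Gaussian "features" along a normalized line scan.
            return (np.exp(-((t - 0.5) / 0.03) ** 2)
                    + 0.5 * np.exp(-((t - 0.7) / 0.02) ** 2))

        # Coarsely sampled repetitive line scans, each with a different
        # fractional offset - the shifts asynchronous digitization gives free.
        n_frames, n_coarse = 4, 64
        offsets = rng.uniform(0, 1 / n_coarse, n_frames)
        frames_t = [np.arange(n_coarse) / n_coarse + off for off in offsets]
        frames = [line(t) for t in frames_t]

        # Pixel-SR: place all frames' samples on a common axis and sort.
        t_all = np.concatenate(frames_t)
        y_all = np.concatenate(frames)
        order = np.argsort(t_all)
        t_sr, y_sr = t_all[order], y_all[order]   # ~4x denser reconstruction
        print("coarse step:", 1 / n_coarse,
              "effective step:", np.diff(t_sr).mean())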

  2. Automated high-throughput flow-through real-time diagnostic system

    DOEpatents

    Regan, John Frederick

    2012-10-30

    An automated real-time flow-through system capable of processing multiple samples in an asynchronous, simultaneous, and parallel fashion for nucleic acid extraction and purification, followed by assay assembly, genetic amplification, multiplex detection, analysis, and decontamination. The system is able to hold and access an unlimited number of fluorescent reagents that may be used to screen samples for the presence of specific sequences. The apparatus works by associating extracted and purified sample with a series of reagent plugs that have been formed in a flow channel and delivered to a flow-through real-time amplification detector that has a multiplicity of optical windows, to which the sample-reagent plugs are placed in an operative position. The diagnostic apparatus includes sample multi-position valves, a master sample multi-position valve, a master reagent multi-position valve, reagent multi-position valves, and an optical amplification/detection system.

  3. Space Weathering of Lunar Rocks

    NASA Technical Reports Server (NTRS)

    Noble, S. K.; Keller, L. P.; Christoffersen, R.; Rahman, Z.

    2012-01-01

    All materials exposed at the lunar surface undergo space weathering processes. On the Moon, boulders make up only a small percentage of the exposed surface, and areas where such rocks are exposed, like central peaks, are often among the least space weathered regions identified from remote sensing data. Yet space weathered surfaces (patina) are relatively common on returned rock samples, some of which directly sample the surface of larger boulders. Because, as witness plates to lunar space weathering, rocks and boulders experience longer exposure times compared to lunar soil grains, they allow us to develop a deeper perspective on the relative importance of various weathering processes as a function of time.

  4. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world to chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  5. Growth of carbon nanofibers using resol-type phenolic resin and cobalt(II) catalyst.

    PubMed

    Kim, Taeyun; Mees, Karina; Park, Ho-Seon; Willert-Porada, Monika; Lee, Chang-Seop

    2013-11-01

    This study investigated carbon nanofibers (CNFs) grown on reticulated vitreous carbon (RVC) foam through the catalytic deposition of ethylene. Before the CNFs were grown, Co(II) was deposited on the RVC foam to act as the catalyst. The preparation of the CNFs was a two-step process. The first step was preparing the RVC from polyurethane (PU) foam. Changes in weight over time were evaluated using two kinds of resol, and the changes in the mass and state of the sample with temperature were studied during the carbonization process. The second step was to prepare the CNFs. OH groups were attached by oxidation of the RVC foam, and changes in the shape and mass of the sample were observed as the nitric acid concentration and oxidation time were varied. Then, cobalt was deposited to grow CNFs on the RVC foam; hydrolysis helped to deposit the Co(II) on the RVC foam. The appropriate time and temperature for the reduction process were investigated. In the last step, CNFs were prepared by introducing ethylene gas. The resulting samples were analyzed using scanning electron microscopy, energy dispersive spectroscopy, N2-sorption, and X-ray photoelectron spectroscopy.

  6. A Systems View of Mother-Infant Face-to-Face Communication

    ERIC Educational Resources Information Center

    Beebe, Beatrice; Messinger, Daniel; Bahrick, Lorraine E.; Margolis, Amy; Buck, Karen A.; Chen, Henian

    2016-01-01

    Principles of a dynamic, dyadic systems view of mother-infant face-to-face communication, which considers self- and interactive processes in relation to one another, were tested. The process of interaction across time in a large low-risk community sample at infant age 4 months was examined. Split-screen videotape was coded on a 1-s time base for…

  7. Processing of baby food using pressure-assisted thermal sterilization (PATS) and comparison with thermal treatment

    NASA Astrophysics Data System (ADS)

    Wang, Yubin; Ismail, Marliya; Farid, Mohammed

    2017-10-01

    Currently, baby food is sterilized using retort processing, which gives an extended shelf life. However, this type of heat processing reduces organoleptic quality and nutritional value. Alternatively, the combination of pressure and heat can be used to achieve sterilization at reduced temperatures. This study investigates the potential of pressure-assisted thermal sterilization (PATS) technology for baby food sterilization. Here, baby food (apple puree) inoculated with Bacillus subtilis spores was treated using PATS at different operating temperatures, pressures, and times, and the results were compared with thermal-only treatment. The results revealed that the decimal reduction time of B. subtilis under PATS treatment was lower than that under thermal-only treatment. At a similar level of spore inactivation, the retention of ascorbic acid in the PATS-treated sample was higher than in the thermally treated sample. The results indicate that PATS could be a potential technology for baby food processing while minimizing quality deterioration.
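
    The decimal reduction time D is the treatment time producing a one-log10 drop in survivors; under first-order inactivation it is the negative reciprocal of the slope of log10 N versus time. A minimal sketch with hypothetical survivor counts:

        import numpy as np

        # Survivor counts of B. subtilis spores vs treatment time (made up).
        t_min = np.array([0, 2, 4, 6, 8])                  # minutes
        log_n = np.array([6.0, 5.1, 4.2, 3.2, 2.3])        # log10 CFU/g

        # First-order inactivation: log10 N = log10 N0 - t/D, so D = -1/slope.
        slope, intercept = np.polyfit(t_min, log_n, 1)
        d_value = -1.0 / slope
        print(f"decimal reduction time D = {d_value:.2f} min")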

  8. Effect of Temperature, Time, and Material Thickness on the Dehydration Process of Tomato

    PubMed Central

    Correia, A. F. K.; Loro, A. C.; Zanatta, S.; Spoto, M. H. F.; Vieira, T. M. F. S.

    2015-01-01

    This study aimed to evaluate the effects of temperature, time, and material thickness on tomato fruits during an adiabatic drying process. Dehydration, a simple and inexpensive process compared to other conservation methods, is widely used in the food industry to ensure a long shelf life for the product owing to the low water activity. The goal was to obtain the best processing conditions to avoid losses and maintain product quality. Factorial design and response surface methodology were applied to fit predictive mathematical models. In the dehydration of tomatoes through the adiabatic process, temperature, time, and sample thickness, which greatly contribute to the physicochemical and sensory characteristics of the final product, were evaluated. The optimum drying conditions were 60°C with the lowest thickness level and the shortest time. PMID:26904666

  9. BPSK Demodulation Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Garcia, Thomas R.

    1996-01-01

    A digital communications signal is a sinusoidal waveform that is modified by a binary (digital) information signal. The sinusoidal waveform is called the carrier. The carrier may be modified in amplitude, frequency, phase, or a combination of these. In this project a binary phase shift keyed (BPSK) signal is the communication signal. In a BPSK signal the phase of the carrier is set to one of two states, 180 degrees apart, by a binary (i.e., 1 or 0) information signal. A digital signal is a sampled version of a "real world" time-continuous signal, generated by sampling the continuous signal at discrete points in time. The rate at which the signal is sampled is called the sampling rate (fs). The device that performs this operation is called an analog-to-digital (A/D) converter or a digitizer. The digital signal is composed of the sequence of individual values of the sampled BPSK signal. Digital signal processing (DSP) is the modification of the digital signal by mathematical operations; a device that performs this processing is called a digital signal processor. After processing, the digital signal may be converted back to an analog signal using a digital-to-analog (D/A) converter. The goal of this project is to develop a system that will recover the digital information from a BPSK signal using DSP techniques. The project is broken down into the following steps: (1) development of the algorithms required to demodulate the BPSK signal; (2) simulation of the system; and (3) implementation of a BPSK receiver using digital signal processing hardware.
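
    A minimal end-to-end sketch of the demodulation step, coherent mixing followed by integrate-and-dump detection; the rates and bit count are arbitrary illustration values, and carrier/timing recovery is assumed perfect.

        import numpy as np

        rng = np.random.default_rng(4)

        fs, fc, rb = 80_000, 10_000, 1_000       # sample, carrier, bit rates (Hz)
        spb = fs // rb                           # samples per bit
        bits = rng.integers(0, 2, 32)

        # BPSK: carrier phase 0 or 180 degrees depending on the bit.
        t = np.arange(bits.size * spb) / fs
        bpsk = np.cos(2 * np.pi * fc * t + np.pi * np.repeat(bits, spb))
        bpsk += rng.normal(0, 0.3, bpsk.size)    # channel noise

        # Coherent demodulation: mix with the recovered carrier, then
        # integrate-and-dump over each bit period and threshold at zero.
        mixed = bpsk * np.cos(2 * np.pi * fc * t)
        energy = mixed.reshape(-1, spb).sum(axis=1)
        rx_bits = (energy < 0).astype(int)       # negative sum -> 180 deg -> 1
        print("bit errors:", np.count_nonzero(rx_bits != bits))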

  10. Large-Scale Variability of Inpatient Tacrolimus Therapeutic Drug Monitoring at an Academic Transplant Center: a Retrospective Study.

    PubMed

    Strohbehn, Garth W; Pan, Warren W; Petrilli, Christopher M; Heidemann, Lauren; Larson, Sophia; Aaronson, Keith D; Johnson, Matt; Ellies, Tammy; Heung, Michael

    2018-04-30

    Inpatient tacrolimus therapeutic drug monitoring (TDM) lacks standardized guidelines. In this study, the authors analyzed variability in the pre-analytical phase of the inpatient tacrolimus TDM process at their institution. Patients receiving tacrolimus (twice-daily formulation) and tacrolimus laboratory analysis were included in the study. Times of tacrolimus administration and laboratory sample collection were extracted, and time distribution plots for each step in the inpatient TDM process were generated. Trough levels were drawn appropriately in 25.9% of cases. Timing between doses was consistent, with 91.9% of the following dose administrations occurring 12 +/- 2 hours after the previous dose. Only 38.1% of drug administrations occurred within one hour of laboratory sample collection. Tacrolimus-related patient safety events were reported at a rate of 1.9 events per month, while incorrect timing of TDM sample collection occurred approximately 200 times per month. Root cause analysis identified a TDM process marked by a lack of communication and coordination between drug administration and TDM sample collection. Extrapolating the findings nationwide, the authors estimate $22 million in laboratory costs wasted annually. Based on this large single-center study, the authors concluded that the inpatient TDM process is prone to timing errors, and is therefore financially wasteful and, at its worst, harmful to patients when clinical decisions are made on the basis of unreliable data. Further work is needed on systems solutions to better align the laboratory collection and drug administration processes.

  11. Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method

    NASA Astrophysics Data System (ADS)

    Kuai, Ken Z.; Tsai, Christina W.

    2012-02-01

    Sediment transport processes vary over a variety of time scales, from seconds, hours, and days to months and years. Multiple time scales exist in the system of flow, sediment transport, and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired level of detail. Recognizing the importance of these varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform method (HHT) to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data at different time scales and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for the multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method to data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport. The HHT analysis also demonstrates that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. Finally, the HHT results are compared with an available time scale formula from the literature.
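
    A minimal sketch of the HHT pipeline on a toy "sediment flux" series, assuming the third-party PyEMD (EMD-signal) package for the EMD step and SciPy's Hilbert transform for the instantaneous frequencies:

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD   # pip install EMD-signal, assumed available

        fs = 100.0                               # Hz, sampling rate
        t = np.arange(0, 20, 1 / fs)
        # Toy signal: slow flood-wave trend plus turbulent fluctuation.
        signal = (np.sin(2 * np.pi * 0.05 * t)
                  + 0.3 * np.sin(2 * np.pi * 2.0 * t))

        imfs = EMD().emd(signal)                 # intrinsic mode functions

        # Hilbert spectral analysis: instantaneous frequency of each IMF.
        for k, imf in enumerate(imfs):
            phase = np.unwrap(np.angle(hilbert(imf)))
            inst_f = np.diff(phase) * fs / (2 * np.pi)
            print(f"IMF {k}: mean instantaneous frequency "
                  f"{inst_f.mean():.3f} Hz")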

  12. RNA quality in fresh-frozen gastrointestinal tumor specimens-experiences from the tumor and healthy tissue bank TU Dresden.

    PubMed

    Zeugner, Silke; Mayr, Thomas; Zietz, Christian; Aust, Daniela E; Baretton, Gustavo B

    2015-01-01

    The term "pre-analytics" summarizes all procedures concerned with specimen collection or processing as well as logistical aspects like transport or storage of tissue specimens. All or these variables as well as tissue-specific characteristics affect sample quality. While certain parameters like warm ischemia or tissue-specific characteristics cannot be changed, other parameters can be assessed and optimized. The aim of this study was to determine RNA quality by assessing the RIN values of specimens from different organs and to assess the influence of vacuum preservation. Samples from the GI tract, in general, appear to have lower RNA quality when compared to samples from other organ sites. This may be due to the digestive enzymes or bacterial colonization. Processing time in pathology does not significantly influence RNA quality. Tissue preservation with a vacuum sealer leads to preserved RNA quality over an extended period of time and offers a feasible alternative to minimize the influence of transport time into pathology.

  13. Assessment of Normal Variability in Peripheral Blood Gene Expression

    DOE PAGES

    Campbell, Catherine; Vernon, Suzanne D.; Karem, Kevin L.; ...

    2002-01-01

    Peripheral blood is representative of many systemic processes and is an ideal sample for expression profiling of diseases that have no known or accessible lesion. Peripheral blood is a complex mixture of cell types and some differences in peripheral blood gene expression may reflect the timing of sample collection rather than an underlying disease process. For this reason, it is important to assess study design factors that may cause variability in gene expression not related to what is being analyzed. Variation in the gene expression of circulating peripheral blood mononuclear cells (PBMCs) from three healthy volunteers sampled three times one day each week for one month was examined for 1,176 genes printed on filter arrays. Less than 1% of the genes showed any variation in expression that was related to the time of collection, and none of the changes were noted in more than one individual. These results suggest that observed variation was due to experimental variability.

  14. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    PubMed

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.

  15. Studies in astronomical time series analysis: Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
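
    The AR-fit-then-convert workflow can be sketched with plain least squares: regress the series on its lagged values, then read off the MA (random-pulse) representation as the impulse response of the fitted recursion. A toy AR(2) example, not the paper's FORTRAN algorithm:

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulate an AR(2) process driven by random pulses (white noise).
        a_true = [0.75, -0.5]
        x = np.zeros(500)
        for i in range(2, x.size):
            x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + rng.normal()

        # Fit the AR model by least squares on lagged values.
        X = np.column_stack([x[1:-1], x[:-2]])         # lags 1 and 2
        a_hat, *_ = np.linalg.lstsq(X, x[2:], rcond=None)

        # AR -> MA: the impulse response of the fitted recursion gives the
        # pulse weights of the equivalent moving-average representation.
        ma = np.zeros(20)
        ma[0] = 1.0
        for i in range(1, ma.size):
            ma[i] = a_hat[0] * ma[i - 1]
            if i >= 2:
                ma[i] += a_hat[1] * ma[i - 2]
        print("fitted AR:", a_hat.round(3),
              "first MA weights:", ma[:5].round(3))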

  16. QCScreen: a software tool for data quality control in LC-HRMS based metabolomics.

    PubMed

    Simader, Alexandra Maria; Kluger, Bernhard; Neumann, Nora Katharina Nicole; Bueschl, Christoph; Lemmens, Marc; Lirk, Gerald; Krska, Rudolf; Schuhmacher, Rainer

    2015-10-24

    Metabolomics experiments often comprise large numbers of biological samples resulting in huge amounts of data. These data need to be inspected for plausibility before evaluation in order to detect putative sources of error, e.g., retention time or mass accuracy shifts. Especially in liquid chromatography-high resolution mass spectrometry (LC-HRMS) based metabolomics research, proper quality control checks (e.g., for precision, signal drifts, or offsets) are crucial prerequisites to achieve reliable and comparable results within and across experimental measurement sequences. Software tools can support this process. The software tool QCScreen was developed to offer a quick and easy data quality check of LC-HRMS derived data. It allows a flexible investigation and comparison of basic quality-related parameters within user-defined target features, and can automatically evaluate multiple sample types within or across different measurement sequences in a short time. It offers a user-friendly interface that allows an easy selection of processing steps and parameter settings. The generated results include a coloured overview plot of data quality across all analysed samples and targets and, in addition, detailed illustrations of the stability and precision of the chromatographic separation, the mass accuracy, and the detector sensitivity. The use of QCScreen is demonstrated with experimental data from metabolomics experiments using selected standard compounds in pure solvent. The application of the software identified problematic features, samples, and analytical parameters, and suggested which data files or compounds required closer manual inspection. QCScreen is an open source software tool which provides a useful basis for assessing the suitability of LC-HRMS data prior to time-consuming, detailed data processing and subsequent statistical analysis. It accepts the generic mzXML format and thus can be used with many different LC-HRMS platforms to process both multiple quality control sample types and experimental samples in one or more measurement sequences.

  17. A further test of sequential-sampling models that account for payoff effects on response bias in perceptual decision tasks.

    PubMed

    Diederich, Adele

    2008-02-01

    Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions: (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? A task was conducted in two experimental settings--either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits--in order to answer this question. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.
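
    The contrast between the bound-change and drift-rate-change hypotheses is easy to simulate with a discrete random walk between two bounds: both implementations of a payoff bias shift choice probability toward the favoured response, but they produce different response-time signatures. All parameters below are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        def trial(drift, start, bound=15.0, max_steps=5000):
            # One sequential-sampling trial: accumulate noisy evidence from
            # `start` until the upper (+bound) or lower (-bound) bound is hit.
            x = start
            for n in range(1, max_steps + 1):
                x += drift + rng.normal()
                if abs(x) >= bound:
                    return (x > 0), n      # choice (True = upper), RT in steps
            return (x > 0), max_steps

        def summarize(drift, start, n_trials=1000):
            res = [trial(drift, start) for _ in range(n_trials)]
            p_upper = np.mean([c for c, _ in res])
            mean_rt = np.mean([n for _, n in res])
            return round(p_upper, 3), round(mean_rt, 1)

        # A payoff favouring the upper response, expressed two ways:
        print("bound-change (biased start):", summarize(drift=0.0, start=5.0))
        print("drift-change (biased drift):", summarize(drift=0.1, start=0.0))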

  18. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
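
    The preferential-sampling model treats sampling times as an inhomogeneous Poisson process with intensity proportional to effective population size. A minimal simulation by thinning, with a hypothetical exponentially growing Ne(t) and an arbitrary proportionality constant beta:

        import numpy as np

        rng = np.random.default_rng(7)

        def ne(t):
            # Effective population size trajectory (hypothetical growth).
            return 100.0 * np.exp(0.5 * t)

        def sample_times(t_max, beta):
            # Inhomogeneous Poisson process with rate beta * Ne(t),
            # simulated by thinning a homogeneous process.
            lam_max = beta * ne(t_max)               # intensity upper bound
            n = rng.poisson(lam_max * t_max)
            cand = rng.uniform(0, t_max, n)
            keep = rng.uniform(0, lam_max, n) < beta * ne(cand)
            return np.sort(cand[keep])

        times = sample_times(t_max=10.0, beta=0.001)
        print(f"{len(times)} samples, fraction after t=5: "
              f"{np.mean(times > 5.0):.2f}")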

  19. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

    This report presents the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 30 Sep. 1995. It deals with the development and investigation of software for processing data from the Robot Operated Material Processing System (ROMPS), and reports on the progress of data processing for calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. Then the data retrieval and processing were automated using a program written in C which is able to read the telemetry data and produce plots of the time responses of sample temperatures and other desired variables. LabVIEW was also employed to automatically retrieve and process the telemetry data.

  20. [Determination of benzo(alpha)pyrene in food with microwave-assisted extraction].

    PubMed

    Zhou, Na; Luo, He-Dong; Li, Na; Li, Yao-Qun

    2014-03-01

    Coupling the derivative technique with constant-energy synchronous fluorescence scanning, a method for determining benzo(a)pyrene in foods by second-derivative constant-energy synchronous spectrofluorimetry after microwave-assisted treatment of samples was established using a domestic microwave oven. The main factors influencing the efficiency of microwave extraction were examined, including the type and amount of extraction solvent, the microwave extraction time, the microwave radiation power, and the cooling time, and a comparison with ultrasonic extraction was made. Low-fat food samples, microwave-extracted with mixed solvents, could be analyzed immediately by the spectrofluorimetric technique. For high-fat food samples, microwave-assisted saponification and extraction were performed simultaneously, simplifying the operation and reducing sample analysis time, so that the whole sample analysis process could be completed within one hour. This method is simple, rapid, and inexpensive. It was applied to the determination of benzo(a)pyrene in food with good reproducibility; recoveries of benzo(a)pyrene ranged from 90.0% to 105.0% for low-fat samples and from 83.3% to 94.6% for high-fat samples.

  1. Partnering with Engineers to Identify and Empirically Evaluate Delays in Magnetic Resonance Imaging Laying the Foundations for Quality Improvement and System-based Practice in Radiology.

    PubMed

    Brandon, Catherine J; Holody, Michael; Inch, Geoffrey; Kabcenell, Michael; Schowalter, Diane; Mullan, Patricia B

    2012-01-01

    The aim of this study was to evaluate the feasibility of partnering with engineering students and to critically examine the merit of the problem identification and analyses the students generated in identifying sources impeding effective turnaround in a large university department of diagnostic radiology. Turnaround comprises the time and activities beginning when a patient enters the magnetic resonance scanner room until the patient leaves, minus the time the scanner is conducting the protocol. A prospective observational study was conducted, in which four senior undergraduate industrial and operations engineering students interviewed magnetic resonance staff members and observed all shifts. On the basis of 150 hours of observation, the engineering students identified 11 process steps (e.g., changing coils). They charted machine use for all shifts, providing a breakdown of turnaround time between appropriate process time and non-value-added time. To evaluate the processes occurring in the scanning room, the students used a work-sampling schedule in which a beeper sounded 2.5 times per hour, signaling the technologist to identify which of the 11 process steps was occurring. This generated 2147 random observations over a 3-week period. The breakdown of machine use over 105 individual studies showed that non-value-added time accounted for 62% of turnaround time. Analysis of the 2147 random samples of work showed that scanners were empty and waiting for patients 15% of the total time. Analyses showed that poor communication delayed the arrival of patients and that no one had responsibility for communicating when scanning was done. The engineering students used rigorous study design and sampling methods to conduct interviews and observations, leading to data-driven definition of problems and potential solutions to guide systems-based improvement. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.

  2. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, sheet metal forming process design is not a trivial task, due to the complex issues that must be taken into account (conflicting design goals, forming of complex shapes, and so on). Optimization methods have been widely applied in sheet metal forming, so proper design methods, mostly based on computer-aided procedures, have to be developed to reduce time and costs. At the same time, variations during manufacturing may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. Finite element software (LS_DYNA) is used to simulate the complex sheet metal stamping process. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of applying the proposed method to multi-response robust design.
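
    Kriging metamodeling with adaptive sample addition can be sketched briefly. The snippet below is a hedged illustration, not the paper's implementation: it fits a Gaussian process surrogate to a toy one-parameter response (a stand-in for LS-DYNA outputs) and picks the next training sample where the surrogate's predictive uncertainty is largest, which is a simpler criterion than the paper's improved one:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy stand-in for the forming response: part quality as a function
        # of one process parameter (the real study maps several inputs).
        def quality(x):
            return np.sin(3 * x) + 0.5 * x

        X_train = np.linspace(0.0, 2.0, 6).reshape(-1, 1)   # small initial DOE
        y_train = quality(X_train).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
        gp.fit(X_train, y_train)

        # Adaptive step: add the candidate point where the Kriging model is
        # least certain (largest predictive standard deviation).
        X_cand = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
        _, std = gp.predict(X_cand, return_std=True)
        print("next training sample at x =", X_cand[np.argmax(std), 0])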

  3. Bayesian Total-Evidence Dating Reveals the Recent Crown Radiation of Penguins

    PubMed Central

    Heath, Tracy A.; Ksepka, Daniel T.; Stadler, Tanja; Welch, David; Drummond, Alexei J.

    2017-01-01

    The total-evidence approach to divergence time dating uses molecular and morphological data from extant and fossil species to infer phylogenetic relationships, species divergence times, and macroevolutionary parameters in a single coherent framework. Current model-based implementations of this approach lack an appropriate model for the tree describing the diversification and fossilization process and can produce estimates that lead to erroneous conclusions. We address this shortcoming by providing a total-evidence method implemented in a Bayesian framework. This approach uses a mechanistic tree prior to describe the underlying diversification process that generated the tree of extant and fossil taxa. Previous attempts to apply the total-evidence approach have used tree priors that do not account for the possibility that fossil samples may be direct ancestors of other samples, that is, ancestors of fossil or extant species or of clades. The fossilized birth–death (FBD) process explicitly models the diversification, fossilization, and sampling processes and naturally allows for sampled ancestors. This model was recently applied to estimate divergence times based on molecular data and fossil occurrence dates. We incorporate the FBD model and a model of morphological trait evolution into a Bayesian total-evidence approach to dating species phylogenies. We apply this method to extant and fossil penguins and show that the modern penguins radiated much more recently than has been previously estimated, with the basal divergence in the crown clade occurring at ~12.7 Ma and most splits leading to extant species occurring in the last 2 myr. Our results demonstrate that including stem-fossil diversity can greatly improve the estimates of the divergence times of crown taxa. The method is available in BEAST2 (version 2.4) software www.beast2.org with packages SA (version at least 1.1.4) and morph-models (version at least 1.0.4) installed. [Birth–death process; calibration; divergence times; MCMC; phylogenetics.] PMID:28173531

  4. Rapid quality assessment of Radix Aconiti Preparata using direct analysis in real time mass spectrometry.

    PubMed

    Zhu, Hongbin; Wang, Chunyan; Qi, Yao; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying

    2012-11-08

    This study presents a novel and rapid method to identify chemical markers for the quality control of Radix Aconiti Preparata, a traditional herbal medicine used worldwide. In the method, samples prepared with a fast extraction procedure were analyzed using direct analysis in real time mass spectrometry (DART MS) combined with multivariate data analysis. At present, the quality assessment of Radix Aconiti Preparata is based on the two processing methods recorded in the Chinese Pharmacopoeia, whose purpose is to reduce the toxicity of Radix Aconiti and ensure its clinical therapeutic efficacy. To ensure safety and effectiveness in clinical use, the degree of processing of Radix Aconiti should be well controlled and assessed. In this paper, hierarchical cluster analysis and principal component analysis were performed to evaluate the DART MS data of Radix Aconiti Preparata samples with different processing times. The results showed that well-processed Radix Aconiti Preparata, inadequately processed samples and raw Radix Aconiti could be clustered reasonably according to their constituents. The loading plot shows that the chemical markers with the most influence on the discrimination between qualified and unqualified samples were mainly monoester and diester diterpenoid aconitines, i.e. benzoylmesaconine, hypaconitine, mesaconitine, neoline, benzoylhypaconine, benzoylaconine, fuziline, aconitine and 10-OH-mesaconitine. The established DART MS approach combined with multivariate data analysis provides a flexible and reliable method for the quality assessment of toxic herbal medicines. Copyright © 2012 Elsevier B.V. All rights reserved.
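
    The chemometric core of such a workflow, projecting spectra with principal component analysis and grouping them by hierarchical clustering, can be sketched generically. This illustration runs on synthetic "spectra", not the paper's DART MS data or settings:

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)

        # Synthetic spectra: three processing groups with slightly shifted
        # intensity patterns (stand-ins for DART MS profiles, not real data).
        raw       = rng.normal(1.0, 0.05, (5, 100)) + np.linspace(0, 1, 100)
        qualified = rng.normal(1.0, 0.05, (5, 100)) + np.linspace(1, 0, 100)
        overdone  = rng.normal(1.0, 0.05, (5, 100))
        X = np.vstack([raw, qualified, overdone])

        scores = PCA(n_components=2).fit_transform(X)   # PCA score coordinates
        labels = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
        print(labels)  # samples of the same processing degree should co-cluster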

  5. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with large processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions, whose processing time has grown critically with the size of the data sample. Beyond parallel implementations to overcome the processing-time barrier, space-partitioning algorithms are necessary to reduce the computational load: they delimit the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof of concept show a significant reduction in processing time when the galaxy positions are preprocessed with Voronoi Tessellation.
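
    The computational point is that a space-partitioning structure restricts pair counting to separations that can actually contribute to the estimator. A hedged sketch of that general idea follows, using a KD-tree as a stand-in for the paper's Voronoi Tessellation (the positions and separation bins are invented):

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        pts = rng.uniform(0.0, 1.0, (5000, 2))   # mock galaxy positions (flat sky)

        # Space partitioning: the KD-tree limits pair counting to points
        # within the maximum separation of interest, instead of all N^2 pairs.
        tree = cKDTree(pts)
        r_edges = np.linspace(0.01, 0.1, 10)
        cumulative = np.array([tree.count_neighbors(tree, r) for r in r_edges])
        dd = np.diff(cumulative)                  # DD pair counts per separation bin
        print(dd)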

  6. Off-line real-time FTIR analysis of a process step in imipenem production

    NASA Astrophysics Data System (ADS)

    Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.

    1992-08-01

    We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor the completion of a reaction off-line in real time. The reaction is moisture-sensitive, and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.

  7. Rapid Waterborne Pathogen Detection with Mobile Electronics.

    PubMed

    Wu, Tsung-Feng; Chen, Yu-Chen; Wang, Wei-Chung; Kucknoor, Ashwini S; Lin, Che-Jen; Lo, Yu-Hwa; Yao, Chun-Wei; Lian, Ian

    2017-06-09

    Pathogen detection in water samples, without complex and time-consuming procedures such as fluorescent labeling or culture-based incubation, is essential to public safety. We propose an immunoagglutination-based protocol, together with a microfluidic device, to quantify pathogen levels directly from water samples. Utilizing the ubiquitous complementary metal-oxide-semiconductor (CMOS) imagers of mobile electronics, a low-cost, one-step detection protocol is developed to enable field detection of waterborne pathogens. 10 mL pathogen-containing water samples were processed using the developed protocol, comprising filtration enrichment, immunoreaction detection and image processing. A limit of detection of 10 E. coli O157:H7 cells/10 mL has been demonstrated within a 10 min turnaround time. The protocol can readily be integrated into mobile electronics such as smartphones for rapid and reproducible field detection of waterborne pathogens.

  8. The application of charge-coupled device processors in automatic-control systems

    NASA Technical Reports Server (NTRS)

    Mcvey, E. S.; Parrish, E. A., Jr.

    1977-01-01

    The application of charge-coupled device (CCD) processors to automatic-control systems is suggested. CCD processors are a new form of semiconductor component with the unique ability to process sampled signals on an analog basis. Specific implementations of controllers are suggested for linear time-invariant, time-varying, and nonlinear systems. Typical processing time should be only a few microseconds. This form of technology may become competitive with microprocessors and minicomputers in addition to supplementing them.

  9. Integrated Optical Information Processing

    DTIC Science & Technology

    1988-08-01

    [Fragmentary indexing excerpt] ...applications in optical disk memory systems [9]. This device is constructed in a glass/SiO2/Si waveguide; the choice of a Si substrate allows for the... (contact mask) were formed in the photoresist deposited on all of the samples; we covered the unwanted gratings on each sample with cover glass slides... processing; TeO2 (v_s = 620 m/s) is considered as a potential substrate for applications requiring large time delays...

  10. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to aggressive environmental factors, variation of the dynamic load, degeneration of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model that can effectively deal with time-variant uncertainties under limited information. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be calculated efficiently, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified with two numerical examples: a spring-mass-damper system and a shell structure.
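
    The surrogate idea behind MCM-CPE, replacing the expensive response with a Chebyshev polynomial approximation and then running cheap Monte Carlo over the uncertain parameter to bound the response, can be sketched for a single parameter. The response function below is an invented stand-in for a dynamics solver, and the affine-arithmetic treatment of interval dependency is omitted:

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Stand-in for an expensive dynamic-response solver: response as a
        # function of one uncertain interval parameter u in [-1, 1].
        def response(u):
            return np.exp(-0.5 * u) * np.cos(4.0 * u)

        # Fit a Chebyshev surrogate on a few solver evaluations.
        u_nodes = np.linspace(-1.0, 1.0, 9)
        coeffs = C.chebfit(u_nodes, response(u_nodes), deg=8)

        # Monte Carlo over the interval parameter using the cheap surrogate,
        # then report the variational range of the dynamic response.
        samples = np.random.uniform(-1.0, 1.0, 100_000)
        approx = C.chebval(samples, coeffs)
        print("response range:", approx.min(), approx.max())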

  11. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, which has its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighboring data exists. This approach preserves complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
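
    The core loop of Direct Sampling admits a compact one-dimensional sketch: to extend a simulated series, scan the training series for a window sufficiently similar to the most recent simulated values and copy the datum that follows it. This is a minimal illustration of the principle under invented settings, not the published algorithm (which handles space, multivariate data and conditioning):

        import numpy as np

        def direct_sampling_1d(train, n_sim, k=3, tol=0.05, rng=None):
            """Extend a series by scanning the training data for a window
            similar to the last k simulated values (mean absolute distance
            below tol) and copying the value that follows the match."""
            if rng is None:
                rng = np.random.default_rng()
            sim = list(train[:k])                     # seed with the first k values
            for _ in range(n_sim - k):
                pattern = np.array(sim[-k:])
                # visit candidate positions in random order, accept first match
                for i in rng.permutation(len(train) - k):
                    window = train[i:i + k]
                    if np.mean(np.abs(window - pattern)) < tol:
                        sim.append(train[i + k])
                        break
                else:                                 # no match: draw a random value
                    sim.append(train[rng.integers(len(train))])
            return np.array(sim)

        train = np.sin(np.linspace(0, 20, 500)) \
                + 0.1 * np.random.default_rng(2).normal(size=500)
        print(direct_sampling_1d(train, 50)[:10])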

  12. Rapid Protein Global Fold Determination Using Ultrasparse Sampling, High-Dynamic Range Artifact Suppression, and Time-Shared NOESY

    PubMed Central

    Coggins, Brian E.; Werner-Allen, Jonathan W.; Yan, Anthony; Zhou, Pei

    2012-01-01

    In structural studies of large proteins by NMR, global fold determination plays an increasingly important role in providing a first look at a target’s topology and reducing assignment ambiguity in NOESY spectra of fully-protonated samples. In this work, we demonstrate the use of ultrasparse sampling, a new data processing algorithm, and a 4-D time-shared NOESY experiment (1) to collect all NOEs in 2H/13C/15N-labeled protein samples with selectively-protonated amide and ILV methyl groups at high resolution in only four days, and (2) to calculate global folds from this data using fully automated resonance assignment. The new algorithm, SCRUB, incorporates the CLEAN method for iterative artifact removal, but applies an additional level of iteration, permitting real signals to be distinguished from noise and allowing nearly all artifacts generated by real signals to be eliminated. In simulations with 1.2% of the data required by Nyquist sampling, SCRUB achieves a dynamic range over 10000:1 (250× better artifact suppression than CLEAN) and completely quantitative reproduction of signal intensities, volumes, and lineshapes. Applied to 4-D time-shared NOESY data, SCRUB processing dramatically reduces aliasing noise from strong diagonal signals, enabling the identification of weak NOE crosspeaks with intensities 100× less than diagonal signals. Nearly all of the expected peaks for interproton distances under 5 Å were observed. The practical benefit of this method is demonstrated with structure calculations for 23 kDa and 29 kDa test proteins using the automated assignment protocol of CYANA, in which unassigned 4-D time-shared NOESY peak lists produce accurate and well-converged global fold ensembles, whereas 3-D peak lists either fail to converge or produce significantly less accurate folds. The approach presented here succeeds with an order of magnitude less sampling than required by alternative methods for processing sparse 4-D data. PMID:22946863

  13. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomeroy, Marc D

    Biomass pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass, which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated, as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating, or partial condensation or plugging of lines from condensed products. Residence times must be kept at a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters, resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines, and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach to sampling-system design considerations, as well as lessons learned, are integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  14. Statistical Methods in Ai: Rare Event Learning Using Associative Rules and Higher-Order Statistics

    NASA Astrophysics Data System (ADS)

    Iyer, V.; Shetty, S.; Iyengar, S. S.

    2015-07-01

    Rare event learning has not been actively researched until recently, due to the unavailability of algorithms that deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether streams are labelled or label-less. A sampling-overlap algorithm such as Brooks-Iyengar is used to deal with noisy sensor streams. We extend existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that temporal-window-based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node-splitting technique, which can be learned iteratively and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show, using empirical datasets, that the explicit rule-learning computation is linear in time and depends only on the number of leaves in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leaves, keeping the pre-processing computation to n × t log m, compared to N² for the Gram matrix. We also show that task-based feature induction yields a higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
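
    The Hoeffding-bound convergence claim has a simple quantitative form: for a stream bounded in [0, 1], the empirical mean of n samples deviates from the true mean by more than ε with probability at most δ once n ≥ ln(2/δ)/(2ε²). A small sketch (the parameter values are illustrative):

        import math

        def hoeffding_n(epsilon, delta):
            """Samples needed so the empirical mean of a [0, 1]-bounded stream
            is within epsilon of the true mean with probability >= 1 - delta
            (two-sided Hoeffding bound)."""
            return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

        # e.g., a temporal window converging to within 0.05 at 99% confidence
        print(hoeffding_n(0.05, 0.01))   # -> 1060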

  15. Polonium-210 in the environment around a radioactive waste disposal area and phosphate ore processing plant.

    PubMed

    Arthur, W J; Markham, O D

    1984-04-01

    Polonium-210 concentrations were determined for soil, vegetation and small mammal tissues collected at a solid radioactive waste disposal area, near a phosphate ore processing plant and at two rural areas in southeastern Idaho. Polonium concentrations in media sampled near the radioactive waste disposal facility were equal to or less than values from rural area samples, indicating that disposal of solid radioactive waste at the Idaho National Engineering Laboratory Site has not resulted in increased environmental levels of polonium. Concentrations of 210Po in soils, deer mice hide and carcass samples collected near the phosphate processing plant were statistically greater (P ≤ 0.05) than at the other sampling locations; however, the mean 210Po concentrations in soils and small mammal tissues from sampling areas near the phosphate plant were only four and three times greater, respectively, than control values. No statistical difference (P > 0.05) was observed for 210Po concentrations in vegetation among any of the sampling locations.

  16. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to probabilistically predict the range of ionospheric parameters. This problem is solved in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that during disturbances the distributions differ from the Gaussian law. At sufficiently small probability levels, there are arbitrarily large deviations from the normal-process model. Therefore, an attempt is made to describe the statistical samples {δfoF2} with a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of a deterministic and a random process. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.

  17. Instance-based learning: integrating sampling and repeated decisions from experience.

    PubMed

    Gonzalez, Cleotilde; Dutt, Varun

    2011-10-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association

  18. Adaptive Sampling of Time Series During Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models are stationary, e.g., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
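
    The variance-maximization rule at the heart of such an approach is easy to sketch. The snippet below is a hedged illustration, not the described system: it fits a Gaussian process to past observations and proposes the future time with the largest predictive standard deviation. With the stationary kernel used here this tends toward even spacing, which is exactly why the work argues for nonstationary covariances:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(3)

        # Observations collected so far by the explorer agent (times, values).
        t_obs = np.array([0.0, 1.0, 2.5, 4.0]).reshape(-1, 1)
        y_obs = np.sin(t_obs).ravel() + 0.05 * rng.normal(size=4)

        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3))
        gp.fit(t_obs, y_obs)

        # Next-measurement rule: among admissible future times, sample where
        # the predictive variance (a proxy for information gain) is largest.
        t_future = np.linspace(4.0, 10.0, 200).reshape(-1, 1)
        _, std = gp.predict(t_future, return_std=True)
        print("next sample at t =", t_future[np.argmax(std), 0])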

  19. Industrial Raman gas sensing for real-time system control

    NASA Astrophysics Data System (ADS)

    Buric, M.; Mullen, J.; Chorpening, B.; Woodruff, S.

    2014-06-01

    Opportunities exist to improve on-line process control in energy applications with a fast, non-destructive measurement of gas composition. Here, we demonstrate a Raman sensing system which is capable of reporting the concentrations of numerous species simultaneously, with sub-percent accuracy and sampling times below one second, for process control applications in energy or chemical production. The sensor is based upon a hollow-core capillary waveguide with a 300 micron bore and reflective thin-film metal and dielectric linings. The effect of using such a waveguide in a Raman process is to integrate Raman photons along the length of the sample-filled waveguide, thus permitting the acquisition of very large Raman signals for low-density gases in a short time. The resultant integrated Raman signals can then be used for quick and accurate analysis of a gaseous mixture. The sensor is currently being tested for energy applications such as coal gasification, turbine control, well-head monitoring for exploration or production, and non-conventional gas utilization. In conjunction with an ongoing commercialization effort, the researchers have recently completed two prototype instruments suitable for hazardous area operation and testing. Here, we report pre-commercialization testing of those field prototypes for control applications in gasification or similar processes. Results will be discussed with respect to accuracy, calibration requirements, gas sampling techniques, and possible control strategies of industrial significance.

  20. Tracing Crop Nitrogen Dynamics on the Field-Scale by Combining Multisensoral EO Data with an Integrated Process Model- A Validation Experiment for Cereals in Southern Germany

    NASA Astrophysics Data System (ADS)

    Hank, Tobias B.; Bach, Heike; Danner, Martin; Hodrius, Martina; Mauser, Wolfram

    2016-08-01

    Nitrogen, being the basic element for the construction of plant proteins and pigments, is one of the most important production factors in agricultural cultivation. High-resolution and near real-time information on the nitrogen status of the soil is therefore of the highest interest for economically and ecologically optimized fertilizer planning and application. Unfortunately, nitrogen storage in the soil column cannot be directly observed with Earth Observation (EO) instruments. Advanced EO-supported process modelling approaches therefore must be applied that allow tracing the spatiotemporal dynamics of nitrogen transformation, translocation and transport in the soil and in the canopy. Before these models can be applied as decision support tools for smart farming, they must be carefully parameterized and validated. This study applies an advanced land surface process model (PROMET) to selected winter cereal fields in Southern Germany and correlates the model outputs with destructively sampled nitrogen data from the growing season of 2015 (17 sampling dates, 8 sample locations). The spatial parametrization of the process model is thereby supported by assimilating eight satellite images (5 Landsat 8 OLI and 3 RapidEye acquisitions). It was found that the model is capable of realistically tracing the temporal and spatial dynamics of aboveground nitrogen uptake and allocation (R² = 0.84, RMSE = 31.3 kg ha⁻¹).

  1. The relationship between time perspective and self-regulatory processes, abilities and outcomes: a protocol for a meta-analytical review.

    PubMed

    Baird, Harriet M; Webb, Thomas L; Martin, Jilly; Sirois, Fuschia M

    2017-07-05

    Both theoretical and empirical evidence suggests that time perspective is likely to influence self-regulatory processes and outcomes. Despite the theoretical and practical significance of such relations, the relationship between time perspective and self-regulatory processes and outcomes across different measures, samples and life domains, including health, has yet to be explored. The proposed review will develop a taxonomy for classifying measures according to the self-regulatory process, ability or outcome that they are likely to reflect. Electronic scientific databases will be searched, along with relevant conference abstract booklets and citation lists. Additionally, a call for unpublished data will be submitted to relevant bodies. To be eligible for inclusion, studies must include a measure of time perspective and a measure of at least one self-regulatory process, ability and/or outcome. Eligibility will not be restricted by publication date, language, type of sample or setting. The bivariate correlations will be extracted (or calculated) and submitted to a random-effects meta-analysis. The sample-weighted average effect size, heterogeneity, risk of bias and publication bias will be calculated, and the effects of categorical and continuous moderator variables on the effect sizes will be determined. The proposed meta-analysis will synthesise previously conducted research; thus, ethical approval is not required. The findings will be submitted for publication in an international peer-reviewed journal and reported as part of the first author’s PhD thesis. The findings will also be disseminated to the research community and, where appropriate, to other interested parties through presentations at relevant academic and non-academic conferences. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. The relationship between time perspective and self-regulatory processes, abilities and outcomes: a protocol for a meta-analytical review

    PubMed Central

    Baird, Harriet M; Webb, Thomas L; Martin, Jilly; Sirois, Fuschia M

    2017-01-01

    Introduction Both theoretical and empirical evidence suggests that time perspective is likely to influence self-regulatory processes and outcomes. Despite the theoretical and practical significance of such relations, the relationship between time perspective and self-regulatory processes and outcomes across different measures, samples and life domains, including health, has yet to be explored. Methods and analysis The proposed review will develop a taxonomy for classifying measures according to the self-regulatory process, ability or outcome that they are likely to reflect. Electronic scientific databases will be searched, along with relevant conference abstract booklets and citation lists. Additionally, a call for unpublished data will be submitted to relevant bodies. To be eligible for inclusion, studies must include a measure of time perspective and a measure of at least one self-regulatory process, ability and/or outcome. Eligibility will not be restricted by publication date, language, type of sample or setting. The bivariate correlations will be extracted (or calculated) and submitted to a random-effects meta-analysis. The sample-weighted average effect size, heterogeneity, risk of bias and publication bias will be calculated, and the effects of categorical and continuous moderator variables on the effect sizes will be determined. Ethics and dissemination The proposed meta-analysis will synthesise previously conducted research; thus, ethical approval is not required. The findings will be submitted for publication in an international peer-reviewed journal and reported as part of the first author’s PhD thesis. The findings will also be disseminated to the research community and, where appropriate, to other interested parties through presentations at relevant academic and non-academic conferences. PMID:28679677

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fong, Erika J.; Huang, Chao; Hamilton, Julie

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  4. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
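
    The noise claim follows the familiar averaging law: combining N conversions of the same pixel reduces uncorrelated noise by √N. A quick numerical check of that scaling (the 848.3 μV figure is taken from the abstract; the measured 270.4 μV is somewhat better than pure averaging predicts, presumably owing to the correlated double sampling):

        import numpy as np

        rng = np.random.default_rng(4)
        signal, noise_sd, n = 0.5, 848.3e-6, 8   # volts; sd of a single read

        reads = signal + rng.normal(0.0, noise_sd, (100_000, n))
        single = reads[:, 0].std()
        averaged = reads.mean(axis=1).std()
        print(f"single read: {single*1e6:.0f} uV, "
              f"{n}-sample average: {averaged*1e6:.0f} uV")
        # the averaged noise approaches 848.3 / sqrt(8) ~ 300 uV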

  5. Mutual information estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view onto sparse datasets. The irregular sampling of many of these time series, however, makes it necessary either to perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian-distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series, and contrast our results with the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
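
    The kernel idea can be sketched compactly: rather than interpolating, let every pair of observations contribute to the correlation at a given lag, weighted by how close its actual time difference is to that lag. A minimal illustration under invented test settings (Gamma-distributed sampling intervals, as in the paper's synthetic experiments; this is the correlation variant, not the mutual information estimator):

        import numpy as np

        def gaussian_kernel_corr(t, x, lag, h):
            """Lagged autocorrelation of an irregularly sampled series: each
            pair (i, j) contributes with a Gaussian weight in how far its
            time difference t[j] - t[i] lies from the requested lag."""
            x = (x - x.mean()) / x.std()
            dt = t[None, :] - t[:, None]          # all pairwise time differences
            w = np.exp(-0.5 * ((dt - lag) / h) ** 2)
            np.fill_diagonal(w, 0.0)              # exclude zero-lag self pairs
            return np.sum(w * x[:, None] * x[None, :]) / np.sum(w)

        rng = np.random.default_rng(5)
        t = np.cumsum(rng.gamma(2.0, 0.5, 400))   # Gamma inter-sample intervals
        x = np.sin(0.8 * t) + 0.3 * rng.normal(size=400)
        print(gaussian_kernel_corr(t, x, lag=1.0, h=0.25))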

  6. On the use of ultracentrifugal devices for routine sample preparation in biomolecular magic-angle-spinning NMR

    PubMed Central

    Mandal, Abhishek; Boatz, Jennifer C.; Wheeler, Travis; van der Wel, Patrick C. A.

    2017-01-01

    A number of recent advances in the field of magic-angle-spinning (MAS) solid-state NMR have enabled its application to a range of biological systems of ever increasing complexity. To retain biological relevance, these samples are increasingly studied in a hydrated state. At the same time, experimental feasibility requires the sample preparation process to attain a high sample concentration within the final MAS rotor. We discuss these considerations, and how they have led to a number of different approaches to MAS NMR sample preparation. We describe our experience of how custom-made (or commercially available) ultracentrifugal devices can facilitate a simple, fast and reliable sample preparation process. A number of groups have since adopted such tools, in some cases to prepare samples for sedimentation-style MAS NMR experiments. Here we argue for a more widespread adoption of their use for routine MAS NMR sample preparation. PMID:28229262

  7. Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.

    PubMed

    Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen

    2018-01-01

    Timely and correct sample collection is highly related to patient safety. The sample error rate was 11.1% during January to April 2016, owing to mislabeled patient information and wrong sample containers. We developed a barcode-based "Specimen Identification System" through process reengineering of TRM, using barcode scanners, added sample container instructions, and a mobile app. In conclusion, the barcode system improved patient safety and created a green environment.

  8. Integrated stationary Ornstein-Uhlenbeck process, and double integral processes

    NASA Astrophysics Data System (ADS)

    Abundo, Mario; Pirozzi, Enrica

    2018-03-01

    We find a representation of the integral of the stationary Ornstein-Uhlenbeck (ISOU) process in terms of Brownian motion B_t; moreover, we show that, under certain conditions on the functions f and g, the double integral process (DIP) D(t) = ∫_β^t g(s) (∫_α^s f(u) dB_u) ds can be thought of as the integral of a suitable Gauss-Markov process. Some theoretical and application details are given; among them, we provide a simulation formula based on that representation by which sample paths, probability densities and first-passage times of the ISOU process are obtained; the first-passage times of the DIP are also studied.
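
    A simulation of the ISOU process consistent with this description can be sketched directly: generate a stationary Ornstein-Uhlenbeck path and accumulate its time integral. The parameter values are illustrative, and this generic Euler-Maruyama scheme is a stand-in for, not a reproduction of, the paper's representation-based simulation formula:

        import numpy as np

        rng = np.random.default_rng(6)
        theta, mu, sigma = 1.0, 0.0, 0.5     # OU mean reversion, mean, noise
        dt, n_steps = 1e-3, 10_000

        x = np.empty(n_steps)
        x[0] = rng.normal(mu, sigma / np.sqrt(2 * theta))   # stationary start
        for k in range(n_steps - 1):         # Euler-Maruyama OU path
            x[k + 1] = x[k] + theta * (mu - x[k]) * dt \
                       + sigma * np.sqrt(dt) * rng.normal()

        integrated = np.cumsum(x) * dt       # sample path of the ISOU process
        print("ISOU value at T =", integrated[-1])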

  9. Application of Raman spectroscopy for on-line monitoring of low dose blend uniformity.

    PubMed

    Hausman, Debra S; Cambron, R Thomas; Sakr, Adel

    2005-07-14

    On-line Raman spectroscopy was used to evaluate the effect of blending time on the blend uniformity of low-dose (1%) azimilide dihydrochloride. An 8 qt blender was used for the experiments and instrumented with a Raman probe through the I-bar port. The blender was slowed to 6.75 rpm to better illustrate the blending process (normal speed is 25 rpm). Uniformity was reached after 20 min of blending at 6.75 rpm (135 revolutions, or 5.4 min at 25 rpm). On-line Raman analysis of blend uniformity provided more benefits than traditional thief sampling and off-line analysis. On-line Raman spectroscopy enabled the generation of data-rich blend profiles, owing to the ability to collect a large number of samples during the blending process (sampling every 20 s). In addition, the Raman blend profile was generated rapidly, compared with the lengthy time needed to complete a blend profile with thief sampling and off-line analysis. The on-line Raman blend uniformity results were also significantly correlated (p-value < 0.05) with the HPLC uniformity results of thief samples.

  10. In-Source Fragmentation and the Sources of Partially Tryptic Peptides in Shotgun Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jong-Seo; Monroe, Matthew E.; Camp, David G.

    2013-02-01

    Partially tryptic peptides are often identified in shotgun proteomics using trypsin as the proteolytic enzyme; however, the sources of such partially tryptic peptides have been controversial. Herein we investigate the impact of in-source fragmentation on shotgun proteomics using three biological samples: a standard protein mixture, a mouse brain tissue homogenate, and a mouse plasma sample. Since the in-source fragments of a peptide retain the same elution time as their parent fully tryptic peptide, the partially tryptic peptides arising from in-source fragmentation can be distinguished from other partially tryptic peptides by plotting the observed retention times against the computationally predicted retention times. Most partially tryptic in-source fragmentation artifacts were misaligned from the linear distribution of fully tryptic peptides. The impact of in-source fragmentation on peptide identifications was clearly significant in a less complex sample such as a standard protein digest, where ~60% of unique peptides were observed as partially tryptic peptides from in-source fragmentation. In mouse brain or mouse plasma samples, in-source fragmentation contributed 1-3% of all identified peptides. The other major source of partially tryptic peptides in complex biological samples is presumably proteolytic processing by endogenous proteases in the samples. By filtering out the in-source fragmentation artifacts from the identified partially tryptic or non-tryptic peptides, it is possible to directly survey in vivo proteolytic processing in biological samples such as blood plasma.

  11. Investigation of Mercury Wet Deposition Physicochemistry in the Ohio River Valley through Automated Sequential Sampling

    EPA Science Inventory

    Intra-storm variability and soluble fractionation were explored for summer-time rain events in Steubenville, Ohio, to evaluate the physical processes controlling mercury (Hg) in wet deposition in this industrialized region. Comprehensive precipitation sample collection was conducted...

  12. Devices and process for high-pressure magic angle spinning nuclear magnetic resonance

    DOEpatents

    Hoyt, David W; Sears, Jr., Jesse A; Turcu, Romulus V.F.; Rosso, Kevin M; Hu, Jian Zhi

    2014-04-08

    A high-pressure magic angle spinning (MAS) rotor is detailed that includes a high-pressure sample cell that maintains high pressures exceeding 150 bar. The sample cell design minimizes pressure losses due to penetration over an extended period of time.

  13. Devices and process for high-pressure magic angle spinning nuclear magnetic resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, David W.; Sears, Jesse A.; Turcu, Romulus V. F.

    A high-pressure magic angle spinning (MAS) rotor is detailed that includes a high-pressure sample cell that maintains high pressures exceeding 150 bar. The sample cell design minimizes pressure losses due to penetration over an extended period of time.

  14. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    NASA Astrophysics Data System (ADS)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transduction events, including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI), play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and a long processing time. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the process time and sample volume, because the system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we present an automated mMAPS platform including an integrated microfluidic device, an automated stage, and electrical relays for high-throughput clinical screening. Based on this result, we estimate that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application for analyzing tissue samples in a clinical setting.

  15. Removal of Non-metallic Inclusions from Nickel Base Superalloys by Electromagnetic Levitation Melting in a Slag

    NASA Astrophysics Data System (ADS)

    Manjili, Mohsen Hajipour; Halali, Mohammad

    2018-02-01

    Samples of INCONEL 718 were levitated and melted in a slag by the application of an electromagnetic field. The effects of temperature, time, and slag composition on the inclusion content of the samples were studied thoroughly. Samples were compared with the original alloy to study the effect of the process on inclusions. The size, shape, and chemical composition of the remaining non-metallic inclusions were investigated. The samples were prepared by the Standard Guide for Preparing and Evaluating Specimens for Automatic Inclusion Assessment of Steel (ASTM E 768-99) method, and the results were reported by means of the Standard Test Methods for Determining the Inclusion Content of Steel (ASTM E 45-97). Results indicated that by increasing temperature and processing time, a greater level of cleanliness could be achieved, and the number and size of the remaining inclusions decreased significantly. It was also observed that increasing the calcium fluoride content of the slag helped reduce the inclusion content.

  16. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration, because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied, as the results here do not definitively prove applicability in all sampling situations.
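
    The comparison strategy is easy to reproduce in outline: compute a metric (here the coefficient of variation) on the observed count series, then locate it within the distribution of the same metric over 1,000 simulated Poisson series of equal mean. A sketch with synthetic counts standing in for the DMA-CPC data:

        import numpy as np

        rng = np.random.default_rng(7)
        observed = rng.poisson(50, 300)          # stand-in for measured counts

        def cv(counts):
            return counts.std() / counts.mean()  # coefficient of variation

        # Reference distribution: the same metric on 1,000 simulated Poisson
        # series sharing the observed mean, as in the paper's comparison.
        sims = np.array([cv(rng.poisson(observed.mean(), observed.size))
                         for _ in range(1000)])
        lo, hi = np.percentile(sims, [10, 90])   # central 80% Poisson band
        print(f"observed CV = {cv(observed):.4f}, "
              f"Poisson 80% band = [{lo:.4f}, {hi:.4f}]")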

  17. Sample-Clock Phase-Control Feedback

    NASA Technical Reports Server (NTRS)

    Quirk, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy

    2012-01-01

    To demodulate a communication signal, a receiver must recover and synchronize to the symbol timing of a received waveform. In a system that utilizes digital sampling, the fidelity of synchronization is limited by the time between the symbol boundary and the closest sample time location. To reduce this error, one typically uses a sample clock in excess of the symbol rate in order to provide multiple samples per symbol, thereby lowering the error limit to a fraction of a symbol time. For systems with a large modulation bandwidth, the required sample clock rate is prohibitive due to current technological barriers and processing complexity. With precise control of the phase of the sample clock, one can sample the received signal at times arbitrarily close to the symbol boundary, thus obviating the need, from a synchronization perspective, for multiple samples per symbol. Sample-clock phase-control feedback was developed for use in the demodulation of an optical communication signal, where multi-GHz modulation bandwidths would require prohibitively large sample clock frequencies for rates in excess of the symbol rate. A custom mixed-signal (RF/digital) offset phase-locked loop circuit was developed to control the phase of the 6.4-GHz clock that samples the photon-counting detector output. The offset phase-locked loop is driven by a feedback mechanism that continuously corrects for variation in the symbol time due to motion between the transmitter and receiver as well as oscillator instability. This innovation will allow significant improvements in receiver throughput; for example, the throughput of a pulse-position modulation (PPM) scheme with 16 slots can increase from 188 Mb/s to 1.5 Gb/s.

  18. Automated process for solvent separation of organic/inorganic substance

    DOEpatents

    Schweighardt, F.K.

    1986-07-29

    There is described an automated process for the solvent separation of organic/inorganic substances that operates continuously and unattended and eliminates potential errors resulting from subjectivity and the aging of the sample during analysis. In the process, metered amounts of one or more solvents are passed sequentially through a filter containing the sample under the direction of a microprocessor control apparatus. The mixture in the filter is agitated by ultrasonic cavitation for a timed period and the filtrate is collected. The filtrate of each solvent extraction is collected individually and the residue on the filter element is collected to complete the extraction process. 4 figs.

  19. Automated process for solvent separation of organic/inorganic substance

    DOEpatents

    Schweighardt, Frank K.

    1986-01-01

    There is described an automated process for the solvent separation of organic/inorganic substances that operates continuously and unattended and eliminates potential errors resulting from subjectivity and the aging of the sample during analysis. In the process, metered amounts of one or more solvents are passed sequentially through a filter containing the sample under the direction of a microprocessor control apparatus. The mixture in the filter is agitated by ultrasonic cavitation for a timed period and the filtrate is collected. The filtrate of each solvent extraction is collected individually and the residue on the filter element is collected to complete the extraction process.

  20. Studies in astronomical time series analysis. III - Fourier transforms, autocorrelation functions, and cross-correlation functions of unevenly spaced data

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1989-01-01

    This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrival). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed even for two time series that are not sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, determining the value of a constant component to the data, and treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
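
    The record describes its own DFT-based estimator; a closely related tool for unevenly sampled series, the Lomb-Scargle periodogram (also due to Scargle), is available in SciPy and illustrates frequency recovery from uneven sampling. A minimal sketch on synthetic data, not the paper's FORTRAN implementation:

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 100, 400))   # uneven sample times
        y = np.sin(2*np.pi*0.2*t) + 0.5*rng.standard_normal(t.size)

        freqs = 2*np.pi*np.linspace(0.01, 1.0, 1000)  # angular frequencies
        power = lombscargle(t, y - y.mean(), freqs)
        print(freqs[np.argmax(power)] / (2*np.pi))    # ~0.2, the true frequency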

  1. Increased instrument intelligence--can it reduce laboratory error?

    PubMed

    Jekelis, Albert W

    2005-01-01

    Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer-generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time-consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do it without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reductions in misreported results, fewer repeats, less operator intervention, and less reagent waste.

  2. Estimating time-dependent ROC curves using data under prevalent sampling.

    PubMed

    Li, Shanshan

    2017-04-15

    Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd.
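
    A simplified numerical sketch of the bias-correction idea, not the paper's estimator: under stationarity, prevalent (length-biased) sampling selects subjects with probability proportional to survival time, so weighting each subject by the inverse of its survival time approximately undoes the bias. Censoring is ignored here and all data are simulated:

        import numpy as np

        def ipw_tdroc(marker, surv_time, t, thresholds):
            # Time-dependent ROC at horizon t with inverse-probability
            # weights w = 1/surv_time for length-biased selection.
            w = 1.0 / surv_time
            case = surv_time <= t
            ctrl = ~case
            fpr, tpr = [], []
            for c in thresholds:
                pos = marker > c
                tpr.append(np.sum(w[case & pos]) / np.sum(w[case]))
                fpr.append(np.sum(w[ctrl & pos]) / np.sum(w[ctrl]))
            return np.array(fpr), np.array(tpr)

        rng = np.random.default_rng(8)
        T0 = rng.exponential(5.0, 20000)                  # population survival
        keep = rng.uniform(size=T0.size) < T0 / T0.max()  # length-biased draw
        Ts = T0[keep][:500]
        marker = -0.5*np.log(Ts) + 0.5*rng.standard_normal(Ts.size)
        fpr, tpr = ipw_tdroc(marker, Ts, 3.0, np.linspace(-3, 3, 25))
        print("weighted AUC ~", np.trapz(tpr[::-1], fpr[::-1]))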

  3. Unattended reaction monitoring using an automated microfluidic sampler and on-line liquid chromatography.

    PubMed

    Patel, Darshan C; Lyu, Yaqi Fara; Gandarilla, Jorge; Doherty, Steve

    2018-04-03

    In-process sampling and analysis is an important aspect of monitoring kinetic profiles and impurity formation or rejection, both in development and during commercial manufacturing. In pharmaceutical process development, the technology of choice for a substantial portion of this analysis is high-performance liquid chromatography (HPLC). Traditionally, the sample extraction and preparation for reaction characterization have been performed manually. This can be time-consuming, laborious, and impractical for long processes. Depending on the complexity of the sample preparation, there can be variability introduced by different analysts, and in some cases, the integrity of the sample can be compromised during handling. While there are commercial instruments available for on-line monitoring with HPLC, they lack capabilities in many key areas. Some do not provide integration of the sampling and analysis, while others afford limited flexibility in sample preparation. The current offerings provide a limited number of unit operations available for sample processing and no option for workflow customizability. This work describes the development of a microfluidic automated program (MAP) which fully automates the sample extraction, manipulation, and on-line LC analysis. The flexible system is controlled using an intuitive Microsoft Excel-based user interface. The autonomous system is capable of unattended reaction monitoring that allows flexible unit operations and workflow customization to enable complex operations and on-line sample preparation. The automated system is shown to offer advantages over manual approaches in key areas while providing consistent and reproducible in-process data. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Occurrence analysis of daily rainfalls by using non-homogeneous Poissonian processes

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Ferrari, E.; de Luca, D. L.

    2009-09-01

    In recent years several temporally homogeneous stochastic models have been applied to describe the rainfall process. In particular, stochastic analysis of daily rainfall time series may help to explain the statistical features of the temporal variability of the phenomenon. Due to the evident periodicity of the physical process, these models should be applied only over short temporal intervals in which the occurrences and intensities of rainfall can reliably be considered homogeneous. To this aim, occurrences of daily rainfall can be treated as a stationary stochastic process within monthly periods. In this context point process models are widely used for at-site analysis of daily rainfall occurrence; they are continuous-time series models, able to capture the intermittent nature of rainfall and to simulate interstorm periods. With a different approach, the periodic features of daily rainfall can be interpreted by using a temporally non-homogeneous stochastic model whose parameters are expressed as continuous functions of time. In this case, great attention has to be paid to the parsimony of the model, as regards the number of parameters and the bias introduced into the generation of synthetic series, and to the influence of threshold values in extracting the peak-storm database from recorded daily rainfall heights. In this work, a stochastic model based on a non-homogeneous Poisson process, characterized by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects of daily rainfalls exceeding prefixed threshold values. In particular, the variation of the rainfall occurrence intensity λ(t) is modelled using Fourier series analysis, in which the non-homogeneous process is transformed into a homogeneous unit-rate one through a proper transformation of the time domain, and the minimum number of harmonics is chosen by applying available statistical tests. The procedure is applied to a dataset of rain gauges located in different geographical zones of the Mediterranean area. Time series have been selected on the basis of the availability of at least 50 years in the period 1921-1985, chosen as the calibration period, and of all years of observation in the subsequent validation period 1986-2005, in which the variability of the daily rainfall occurrence process is under examination. Firstly, for each time series and for each fixed threshold value, parameter estimation of the non-homogeneous Poisson model is carried out for the calibration period. As a second step, in order to test the hypothesis that the daily rainfall occurrence process preserves the same behaviour in more recent time periods, the intensity distribution evaluated for the calibration period is also adopted for the validation period. Starting from this, and using a Monte Carlo approach, 1000 synthetic generations of daily rainfall occurrences, of length equal to the validation period, have been carried out, and the sample λ(t) has been evaluated for each simulation. This procedure is adopted because of the complexity of determining analytical statistical confidence limits for the sample intensity λ(t).
Finally, the sample intensity, the theoretical function of the calibration period, and the 95% statistical band evaluated by the Monte Carlo approach are compared; in addition, for each threshold value, the mean square error (MSE) between the theoretical λ(t) and the sample intensity of the recorded data is considered, together with its corresponding 95% one-tailed statistical band, estimated from the MSE values between the sample λ(t) of each synthetic series and the theoretical one. The results obtained may be very useful in the context of the identification and calibration of stochastic rainfall models based on historical precipitation data. Further applications of the non-homogeneous Poisson model will concern joint analyses of the storm occurrence process and the rainfall height marks, the latter interpreted by using a temporally homogeneous model in proper sub-year intervals.
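
    A minimal sketch of the modelling idea in Python: simulate a non-homogeneous Poisson process whose intensity λ(t) is a one-harmonic Fourier series, using Lewis-Shedler thinning. The coefficients and period are illustrative, not values fitted in the study:

        import numpy as np

        def simulate_nhpp(lam, lam_max, T, rng):
            # Thinning: generate candidates at constant rate lam_max and
            # accept each candidate at time t with probability lam(t)/lam_max.
            t, events = 0.0, []
            while True:
                t += rng.exponential(1.0 / lam_max)
                if t > T:
                    return np.array(events)
                if rng.uniform() < lam(t) / lam_max:
                    events.append(t)

        lam = lambda t: 0.20 + 0.15*np.cos(2*np.pi*t/365.0)  # events/day
        rng = np.random.default_rng(1)
        events = simulate_nhpp(lam, 0.35, 365.0*50, rng)     # 50 synthetic years
        print(len(events), "synthetic daily-rainfall occurrences")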

  5. Micro-CT scouting for transmission electron microscopy of human tissue specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, A. G.; Stempinski, E. S.; Xiao, X.

    Transmission electron microscopy (TEM) provides sub-nanometre-scale details in volumetric samples. Samples such as pathology tissue specimens are often stained with a metal element to enhance contrast, which makes them opaque to optical microscopes. As a result, it can be a lengthy procedure to find the region of interest inside a sample through sectioning. Here, we describe micro-CT scouting for TEM that allows noninvasive identification of regions of interest within a block sample to guide the sectioning step. In a tissue pathology study, a bench-top micro-CT scanner with 10 µm resolution was used to determine the location of patches of the mucous membrane in osmium-stained human nasal scraping samples. Furthermore, once the regions of interest were located, the sample block was sectioned to expose that location, followed by ultra-thin sectioning and TEM to inspect the internal structure of the cilia of the membrane epithelial cells with nanometre resolution. This method substantially reduced the time and labour of the search process from typically 20 sections for light microscopy to three sections with no added sample preparation. Lay description: Electron microscopy provides very high levels of detail in a small area, and thus the question of where to look in an opaque sample, such as a stained tissue specimen, needs to be answered by sectioning the sample in small steps and examining the sections under a light microscope, until the region of interest is found. The search process can be lengthy and labor intensive, especially for a study involving a large number of samples. Small areas of interest can be missed in the process if not enough regions are examined. We describe a method to directly locate the region of interest within a whole sample using micro-CT imaging, bypassing the need for blind sectioning. Micro-CT enables locating the region within 3D space; this information provides a guide for sectioning the sample to expose that precise location for high-resolution electron microscopy imaging. In a human tissue specimen study, this method considerably reduced the time and labor of the search process.

  6. Micro-CT scouting for transmission electron microscopy of human tissue specimens

    DOE PAGES

    Morales, A. G.; Stempinski, E. S.; Xiao, X.; ...

    2016-02-08

    Transmission electron microscopy (TEM) provides sub-nanometre-scale details in volumetric samples. Samples such as pathology tissue specimens are often stained with a metal element to enhance contrast, which makes them opaque to optical microscopes. As a result, it can be a lengthy procedure to find the region of interest inside a sample through sectioning. Here, we describe micro-CT scouting for TEM that allows noninvasive identification of regions of interest within a block sample to guide the sectioning step. In a tissue pathology study, a bench-top micro-CT scanner with 10 µm resolution was used to determine the location of patches of the mucous membrane in osmium-stained human nasal scraping samples. Furthermore, once the regions of interest were located, the sample block was sectioned to expose that location, followed by ultra-thin sectioning and TEM to inspect the internal structure of the cilia of the membrane epithelial cells with nanometre resolution. This method substantially reduced the time and labour of the search process from typically 20 sections for light microscopy to three sections with no added sample preparation. Lay description: Electron microscopy provides very high levels of detail in a small area, and thus the question of where to look in an opaque sample, such as a stained tissue specimen, needs to be answered by sectioning the sample in small steps and examining the sections under a light microscope, until the region of interest is found. The search process can be lengthy and labor intensive, especially for a study involving a large number of samples. Small areas of interest can be missed in the process if not enough regions are examined. We describe a method to directly locate the region of interest within a whole sample using micro-CT imaging, bypassing the need for blind sectioning. Micro-CT enables locating the region within 3D space; this information provides a guide for sectioning the sample to expose that precise location for high-resolution electron microscopy imaging. In a human tissue specimen study, this method considerably reduced the time and labor of the search process.

  7. Standard operating procedures for pre-analytical handling of blood and urine for metabolomic studies and biobanks.

    PubMed

    Bernini, Patrizia; Bertini, Ivano; Luchinat, Claudio; Nincheri, Paola; Staderini, Samuele; Turano, Paola

    2011-04-01

    (1)H NMR metabolic profiling of urine, serum and plasma has been used to monitor the impact of the pre-analytical steps on the sample quality and stability in order to propose standard operating procedures (SOPs) for deposition in biobanks. We analyzed the quality of serum and plasma samples as a function of the elapsed time (t = 0-4 h) between blood collection and processing and of the time from processing to freezing (up to 24 h). The stability of the urine metabolic profile over time (up to 24 h) at various storage temperatures was monitored as a function of the different pre-analytical treatments like pre-storage centrifugation, filtration, and addition of the bacteriostatic preservative sodium azide. Appreciable changes in the profiles, reflecting changes in the concentration of a number of metabolites, were detected and discussed in terms of chemical and enzymatic reactions for both blood and urine samples. Appropriate procedures for blood derivative collection and for urine preservation/storage that maintain the original metabolic profile of the fresh samples as far as possible emerge from these results, and are proposed as SOPs for biobanking.

  8. Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.

    PubMed

    Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun

    2018-01-01

    Organizing work flow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease the turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) is used to identify and solve problems. The laboratory turnaround time for individual tests, total delay time in the sample reception area, and percentage of steps involving risks of medical errors and biological hazards in the overall process are measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time also improved for stat samples from 68 to 59 min after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.

  9. Impact of collection container material and holding times on sample integrity for mercury and methylmercury in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riscassi, Ami L; Miller, Carrie L; Brooks, Scott C

    Mercury (Hg) and methylmercury (MeHg) concentrations in streamwater can vary on short timescales (hourly or less) during storm flow and on a diel cycle; the frequency and timing of sampling required to accurately characterize these dynamics may be difficult to accomplish manually. Automated sampling can assist in sample collection; however, use has been limited for Hg and MeHg analysis due to stability concerns for trace concentrations during extended storage times. We examined the viability of using automated samplers with disposable low-density polyethylene (LDPE) sample bags to collect industrially contaminated streamwater for unfiltered and filtered Hg and MeHg analysis. Specifically, we investigated the effect of holding times ranging from hours to days on streamwater collected during baseflow and storm flow. Unfiltered and filtered Hg and MeHg concentrations decreased with increases in time prior to sample processing; holding times of 24 hours or less resulted in concentration changes (mean 11 ± 7% different) similar to the variability in duplicates collected manually during analogous field conditions (mean 7 ± 10% different). Comparisons of samples collected with manual and automated techniques throughout a year for a wide range of stream conditions were also found to be similar to differences observed between duplicate grab samples. These results demonstrate that automated sampling into LDPE bags with holding times of 24 hours or less can be effectively used to collect streamwater for Hg and MeHg analysis, and they encourage the testing of these materials and methods for implementation in other aqueous systems where high-frequency sampling is warranted.

  10. Monitoring endemic livestock diseases using laboratory diagnostic data: A simulation study to evaluate the performance of univariate process monitoring control algorithms.

    PubMed

    Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils

    2016-05-01

    Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real time. The results showed that EWMA and PSHEW had higher CumSe (compared with CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in the seroprevalence were detected later with CUSUM than with EWMA and PSHEW for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe = 1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster detection. However, the practical implications of increasing the sample size (such as the costs associated with the disease) should also be taken into account. Copyright © 2016 Elsevier B.V. All rights reserved.
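
    A minimal sketch of two of the compared detectors on simulated sentinel-herd data; the chart constants (lambda, L, k, h), herd count and step-change week are illustrative, not the study's settings:

        import numpy as np

        def ewma_alarm(x, target, sigma, lam=0.2, L=2.7):
            # EWMA chart: first index whose statistic leaves the control limits.
            z = target
            for i, xi in enumerate(x):
                z = lam*xi + (1 - lam)*z
                lim = L*sigma*np.sqrt(lam/(2 - lam)*(1 - (1 - lam)**(2*(i + 1))))
                if abs(z - target) > lim:
                    return i
            return -1

        def cusum_alarm(x, target, sigma, k=0.5, h=5.0):
            # One-sided (upward) tabular CUSUM.
            s = 0.0
            for i, xi in enumerate(x):
                s = max(0.0, s + (xi - target)/sigma - k)
                if s > h:
                    return i
            return -1

        rng = np.random.default_rng(2)
        n = 100                                          # sentinel herds per week
        p = np.r_[np.full(30, 0.10), np.full(30, 0.20)]  # step change at week 30
        x = rng.binomial(n, p) / n
        sigma = np.sqrt(0.10*0.90/n)
        print("EWMA alarm at week", ewma_alarm(x, 0.10, sigma))
        print("CUSUM alarm at week", cusum_alarm(x, 0.10, sigma))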

  11. Factors associated with Listeria monocytogenes contamination of cold-smoked pork products produced in Latvia and Lithuania.

    PubMed

    Bērziņs, Aivars; Hörman, Ari; Lundén, Janne; Korkeala, Hannu

    2007-04-10

    A total of 312 samples of sliced, vacuum-packaged, cold-smoked pork from 15 meat processing plants in Latvia and Lithuania, obtained over a 15-month period from 2003 until 2004, were analyzed for the presence of Listeria monocytogenes at the end of their shelf-life. Overall, 120 samples (38%) tested positive for L. monocytogenes. Despite the long storage period, the levels of L. monocytogenes in cold-smoked pork products were low. Manufacturing processes were studied at seven meat processing plants. A new approach with a logistic multivariable regression model was applied to identify the main factors associated with L. monocytogenes contamination during the manufacturing of cold-smoked pork products. Brining by injection was a significant factor (odds ratio 10.66; P<0.05) for contamination of product with L. monocytogenes. Moreover, long cold-smoking times (> or = 12 h) had a significant predictive value (odds ratio 24.38; P<0.014) for a sample to test positive for L. monocytogenes. Pulsed-field gel electrophoresis results indicated that various sources of L. monocytogenes contamination existed over periods of time in several meat processing plants. In two meat processing plants, persistent L. monocytogenes strains belonging to serotypes 1/2a and 1/2c were found.
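
    The analysis rests on logistic regression with odds ratios for candidate processing factors. A minimal sketch with statsmodels on simulated plant data (the predictors and coefficients are hypothetical stand-ins, not the study's dataset):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 312
        brine_inject = rng.integers(0, 2, n)      # brining by injection (0/1)
        long_smoke = rng.integers(0, 2, n)        # cold-smoking >= 12 h (0/1)
        logit = -2.0 + 2.4*brine_inject + 3.2*long_smoke
        y = rng.binomial(1, 1/(1 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([brine_inject, long_smoke]))
        fit = sm.Logit(y, X).fit(disp=0)
        print(np.exp(fit.params[1:]))   # odds ratios for the two factors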

  12. Characterization of compounds by time-of-flight measurement utilizing random fast ions

    DOEpatents

    Conzemius, R.J.

    1989-04-04

    An apparatus is described for characterizing the mass of sample and daughter particles, comprising a source for providing sample ions; a fragmentation region wherein a fraction of the sample ions may fragment to produce daughter ion particles; an electrostatic field region held at a voltage level sufficient to effect ion-neutral separation and ion-ion separation of fragments from the same sample ion and to separate ions of different kinetic energy; a detector system for measuring the relative arrival times of particles; and processing means operatively connected to the detector system to receive and store the relative arrival times and operable to compare the arrival times with times detected at the detector when the electrostatic field region is held at a different voltage level and to thereafter characterize the particles. Sample and daughter particles are characterized with respect to mass and other characteristics by detecting at a particle detector the relative time of arrival for fragments of a sample ion at two different electrostatic voltage levels. The two sets of particle arrival times are used in conjunction with the known altered voltage levels to mathematically characterize the sample and daughter fragments. In an alternative embodiment the present invention may be used as a detector for a conventional mass spectrometer. In this embodiment, conventional mass spectrometry analysis is enhanced due to further mass resolving of the detected ions. 8 figs.

  13. Characterization of compounds by time-of-flight measurement utilizing random fast ions

    DOEpatents

    Conzemius, Robert J.

    1989-01-01

    An apparatus for characterizing the mass of sample and daughter particles, comprising a source for providing sample ions; a fragmentation region wherein a fraction of the sample ions may fragment to produce daughter ion particles; an electrostatic field region held at a voltage level sufficient to effect ion-neutral separation and ion-ion separation of fragments from the same sample ion and to separate ions of different kinetic energy; a detector system for measuring the relative arrival times of particles; and processing means operatively connected to the detector system to receive and store the relative arrival times and operable to compare the arrival times with times detected at the detector when the electrostatic field region is held at a different voltage level and to thereafter characterize the particles. Sample and daughter particles are characterized with respect to mass and other characteristics by detecting at a particle detector the relative time of arrival for fragments of a sample ion at two different electrostatic voltage levels. The two sets of particle arrival times are used in conjunction with the known altered voltage levels to mathematically characterize the sample and daughter fragments. In an alternative embodiment the present invention may be used as a detector for a conventional mass spectrometer. In this embodiment, conventional mass spectrometry analysis is enhanced due to further mass resolving of the detected ions.

  14. Design and development of a highly sensitive, field portable plasma source instrument for on-line liquid stream monitoring and real-time sample analysis

    NASA Astrophysics Data System (ADS)

    Duan, Yixiang; Su, Yongxuan; Jin, Zhe; Abeln, Stephen P.

    2000-03-01

    The development of a highly sensitive, field portable, low-powered instrument for on-site, real-time liquid waste stream monitoring is described in this article. A series of factors such as system sensitivity and portability, plasma source, sample introduction, desolvation system, power supply, and the instrument configuration, were carefully considered in the design of the portable instrument. A newly designed, miniature, modified microwave plasma source was selected as the emission source for spectroscopy measurement, and an integrated small spectrometer with a charge-coupled device detector was installed for signal processing and detection. An innovative beam collection system with optical fibers was designed and used for emission signal collection. Microwave plasma can be sustained with various gases at relatively low power, and it possesses high detection capabilities for both metal and nonmetal pollutants, making it desirable to use for on-site, real-time, liquid waste stream monitoring. An effective in situ sampling system was coupled with a high efficiency desolvation device for direct-sampling liquid samples into the plasma. A portable computer control system is used for data processing. The new, integrated instrument can be easily used for on-site, real-time monitoring in the field. The system possesses a series of advantages, including high sensitivity for metal and nonmetal elements; in situ sampling; compact structure; low cost; and ease of operation and handling. These advantages will significantly overcome the limitations of previous monitoring techniques and make great contributions to environmental restoration and monitoring.

  15. Evolution of Residual Stress and Distortion of Cold-Rolled Bearing Ring from Annealing to Quenched-Tempered Heat Treatment

    NASA Astrophysics Data System (ADS)

    Lu, Bohan; Lu, Xiaohui

    2018-02-01

    This study investigates the correlation between the residual stress and distortion behavior of a cold-rolled ring from the annealing to the quenching-tempering (QT) process. Due to the cold-rolling process, the external periphery of the bearing ring experiences a compressive residual stress. To relieve the residual stress, cold-rolled rings are annealed at 700 °C, which is higher than the starting temperature of recrystallization. When cold-rolled rings are annealed at 700 °C for 15 min, the compressive residual stress is reduced to zero and the outer diameter of the annealed ring becomes larger than that of a non-annealed sample, irrespective of annealing time. At the same time, the roundness and taper deviation do not change appreciably compared with those of the non-annealed sample. The stress relaxation during annealing is attributed to the recovery and recrystallization of ferrite. Annealing has a genetic influence on the subsequent QT heat treatment, wherein the lowest residual stress occurs in the non-annealed cold-rolled ring. From the annealing to the QT process, the deviations of the outer diameter, roundness, and taper increased with annealing time, to a larger extent than those of non-annealed samples.

  16. Improving preanalytic processes using the principles of lean production (Toyota Production System).

    PubMed

    Persoon, Thomas J; Zaleski, Sue; Frerichs, Janice

    2006-01-01

    The basic technologies used in preanalytic processes for chemistry tests have been mature for a long time, and improvements in preanalytic processes have lagged behind improvements in analytic and postanalytic processes. We describe our successful efforts to improve chemistry test turnaround time from a central laboratory by improving preanalytic processes, using existing resources and the principles of lean production. Our goal is to report 80% of chemistry tests in less than 1 hour and to no longer recognize a distinction between expedited and routine testing. We used principles of lean production (the Toyota Production System) to redesign preanalytic processes. The redesigned preanalytic process has fewer steps and uses 1-piece flow to move blood samples through the accessioning, centrifugation, and aliquoting processes. Median preanalytic processing time was reduced from 29 to 19 minutes, and the laboratory met the goal of reporting 80% of chemistry results in less than 1 hour for 11 consecutive months.

  17. Sample and data processing considerations for the NIST quantitative infrared database

    NASA Astrophysics Data System (ADS)

    Chu, Pamela M.; Guenther, Franklin R.; Rhoderick, George C.; Lafferty, Walter J.; Phillips, William

    1999-02-01

    Fourier-transform infrared (FT-IR) spectrometry has become a useful real-time in situ analytical technique for quantitative gas phase measurements. In fact, the U.S. Environmental Protection Agency (EPA) has recently approved open-path FT-IR monitoring for the determination of hazardous air pollutants (HAP) identified in EPA's Clean Air Act of 1990. To support infrared based sensing technologies, the National Institute of Standards and Technology (NIST) is currently developing a standard quantitative spectral database of the HAPs based on gravimetrically prepared standard samples. The procedures developed to ensure the quantitative accuracy of the reference data are discussed, including sample preparation, residual sample contaminants, data processing considerations, and estimates of error.

  18. Sampled-data H∞ filtering for Markovian jump singularly perturbed systems with time-varying delay and missing measurements

    NASA Astrophysics Data System (ADS)

    Yan, Yifang; Yang, Chunyu; Ma, Xiaoping; Zhou, Linna

    2018-02-01

    In this paper, sampled-data H∞ filtering problem is considered for Markovian jump singularly perturbed systems with time-varying delay and missing measurements. The sampled-data system is represented by a time-delay system, and the missing measurement phenomenon is described by an independent Bernoulli random process. By constructing an ɛ-dependent stochastic Lyapunov-Krasovskii functional, delay-dependent sufficient conditions are derived such that the filter error system satisfies the prescribed H∞ performance for all possible missing measurements. Then, an H∞ filter design method is proposed in terms of linear matrix inequalities. Finally, numerical examples are given to illustrate the feasibility and advantages of the obtained results.

  19. Towards Real Time Diagnostics of Hybrid Welding Laser/GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy Mcjunkin; Dennis C. Kunerth; Corrie Nichol

    2013-07-01

    Methods are currently being developed towards a more robust system for real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of proposed real-time techniques on weld samples are presented, along with the concepts to apply the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.

  20. Towards real time diagnostics of Hybrid Welding Laser/GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, T. R.; Kunerth, D. C.; Nichol, C. I.

    2014-02-18

    Methods are currently being developed towards a more robust system for real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of proposed real-time techniques on weld samples are presented, along with the concepts to apply the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.

  1. Towards real time diagnostics of Hybrid Welding Laser/GMAW

    NASA Astrophysics Data System (ADS)

    McJunkin, T. R.; Kunerth, D. C.; Nichol, C. I.; Todorov, E.; Levesque, S.

    2014-02-01

    Methods are currently being developed towards a more robust system for real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of proposed real-time techniques on weld samples are presented, along with the concepts to apply the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.

  2. Corrosion resistance and biological activity of TiO2 implant coatings produced in oxygen-rich environments.

    PubMed

    Zhang, Rui; Wan, Yi; Ai, Xing; Liu, Zhanqiang; Zhang, Dong

    2017-01-01

    The physical and chemical properties of bio-titanium alloy implant surfaces play an important role in their corrosion resistance and biological activity. New turning and turning-rolling processes are presented, employing an oxygen-rich environment in order to obtain titanium dioxide layers that can both protect implants from corrosion and also promote cell adhesion. The surface topographies, surface roughnesses and chemical compositions of the sample surfaces were obtained using scanning electron microscopy, a white light interferometer, and Auger electron spectroscopy, respectively. The corrosion resistance of the samples in a simulated body fluid was determined using electrochemical testing. Biological activity on the samples was also analyzed, using an in vitro cell culture system. The results show that compared with titanium oxide layers formed using a turning process in air, the thickness of the titanium oxide layers formed using turning and turning-rolling processes in an oxygen-rich environment increased by 4.6 and 7.3 times, respectively. Using an oxygen-rich atmosphere in the rolling process greatly improves the corrosion resistance of the resulting samples in a simulated body fluid. On samples produced using the turning-rolling process, cells spread quickly and exhibited the best adhesion characteristics.

  3. Acoustic Sample Deposition MALDI-MS (ASD-MALDI-MS): A Novel Process Flow for Quality Control Screening of Compound Libraries.

    PubMed

    Chin, Jefferson; Wood, Elizabeth; Peters, Grace S; Drexler, Dieter M

    2016-02-01

    In the early stages of drug discovery, high-throughput screening (HTS) of compound libraries against pharmaceutical targets is a common method to identify potential lead molecules. For these HTS campaigns to be efficient and successful, continuous quality control of the compound collection is necessary and crucial. However, the large number of compound samples and the limited sample amount pose unique challenges. Presented here is a proof-of-concept study for a novel process flow for the quality control screening of small-molecule compound libraries that consumes only minimal amounts of samples and affords compound-specific molecular data. This process employs an acoustic sample deposition (ASD) technique for the offline sample preparation by depositing nanoliter volumes in an array format onto microscope glass slides followed by matrix-assisted laser desorption/ionization mass spectrometric (MALDI-MS) analysis. An initial study of a 384-compound array employing the ASD-MALDI-MS workflow resulted in a 75% first-pass positive identification rate with an analysis time of <1 s per sample. © 2015 Society for Laboratory Automation and Screening.

  4. VAPOR PRESSURE ISOTOPE EFFECTS IN THE MEASUREMENT OF ENVIRONMENTAL TRITIUM SAMPLES.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhne, W.

    2012-12-03

    Standard procedures for the measurement of tritium in water samples often require distillation of an appropriate sample aliquot. This distillation process may result in a fractionation of tritiated water and regular light water due to the vapor pressure isotope effect, introducing either a bias or an additional contribution to the total tritium measurement uncertainty. The magnitude of the vapor pressure isotope effect is characterized as a function of the amount of water distilled from the sample aliquot and the heat settings for the distillation process. The tritium concentration in the distillate is higher than the tritium concentration in the sample early in the distillation process; it then sharply decreases due to the vapor pressure isotope effect and becomes lower than the tritium concentration in the sample, until the high tritium concentration retained in the boiling flask is evaporated at the end of the process. At that time, the tritium concentration in the distillate again overestimates the sample tritium concentration. The vapor pressure isotope effect is more pronounced the slower the evaporation and distillation process is conducted; a lower heat setting during the evaporation of the sample results in a larger bias in the tritium measurement. The experimental setup used, and the fact that the current study allowed for an investigation of the relative change in the vapor pressure isotope effect over the course of the distillation process, distinguish it from and extend previously published measurements. The separation factor, as a quantitative measure of the vapor pressure isotope effect, is found to assume values of 1.034 ± 0.033, 1.052 ± 0.025, and 1.066 ± 0.037, depending on the vigor of the boiling process during distillation of the sample. A lower heat setting in the experimental setup, and therefore a less vigorous boiling process, results in a larger value for the separation factor. For a tritium measurement in water samples, this implies that the tritium concentration could be underestimated by 3-6%.
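
    The direction and rough size of the bias can be illustrated with a simple Rayleigh-distillation model, assuming a well-mixed liquid and a constant separation factor alpha = (T/H)_liquid / (T/H)_vapor; a sketch for illustration, not the study's analysis:

        import numpy as np

        def distillate_ratio(frac_distilled, alpha):
            # Cumulative distillate tritium concentration relative to the
            # original sample under the Rayleigh model.
            f = 1.0 - frac_distilled        # fraction of liquid remaining
            return (1.0 - f**(1.0/alpha)) / (1.0 - f)

        for alpha in (1.034, 1.052, 1.066):
            # Stopping after distilling half the aliquot underestimates the
            # sample concentration by a few percent:
            print(alpha, distillate_ratio(0.5, alpha))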

  5. Physical, Cognitive, and Psychosocial Variables from the Disablement Process Model Predict Patterns of Independence and the Transition into Disability for the Oldest-Old

    ERIC Educational Resources Information Center

    Fauth, Elizabeth Braungart; Zarit, Steven H.; Malmberg, Bo; Johansson, Boo

    2007-01-01

    Purpose: This study used the Disablement Process Model to predict whether a sample of the oldest-old maintained their disability or disability-free status over a 2- and 4-year follow-up, or whether they transitioned into a state of disability during this time. Design and Methods: We followed a sample of 149 Swedish adults who were 86 years of age…

  6. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times compared to single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
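
    The quoted scatter reduction is consistent with simple ensemble statistics: averaging N independent runs shrinks random scatter roughly as 1/sqrt(N), and sqrt(10) is about 3.2. A toy numerical check (illustrative noise levels, not DSMC data):

        import numpy as np

        rng = np.random.default_rng(4)
        true_profile = np.sin(np.linspace(0, np.pi, 50))
        runs = true_profile + 0.2*rng.standard_normal((10, 50))  # 10 noisy runs

        single_rms = np.sqrt(np.mean((runs[0] - true_profile)**2))
        ensemble_rms = np.sqrt(np.mean((runs.mean(axis=0) - true_profile)**2))
        print(single_rms / ensemble_rms)   # ~3, close to sqrt(10)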

  7. Event-driven processing for hardware-efficient neural spike sorting

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can here provide a new, efficient means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
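
    A minimal sketch of the encoding step, assuming a simple level-crossing (delta) sampler that emits a signed event whenever the signal moves one full level away from the last emitted level; the threshold and signal are illustrative, not the paper's datasets:

        import numpy as np

        def level_crossing_encode(x, delta):
            # Returns (sample index, +1/-1) events; output is activity-dependent.
            events, ref = [], x[0]
            for i, v in enumerate(x):
                while v - ref >= delta:
                    ref += delta
                    events.append((i, +1))
                while ref - v >= delta:
                    ref -= delta
                    events.append((i, -1))
            return events

        t = np.linspace(0, 1, 2000)
        spike = np.exp(-((t - 0.5)/0.01)**2)        # toy spike waveform
        ev = level_crossing_encode(spike, 0.125)    # 3-bit-equivalent levels
        print(len(ev), "events instead of", len(t), "uniform samples")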

  8. A new virus of soybean confirmed in Wisconsin

    USDA-ARS?s Scientific Manuscript database

    This week our laboratory confirmed the presence of Soybean vein necrosis-associated virus (SVNaV) in soybeans sampled in Wisconsin. Samples were taken at several times during September and processed in our laboratory. Symptoms of the disease caused by the virus include yellowing (chlorosis) of the ...

  9. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy

    PubMed Central

    Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples. PMID:29509786

  10. Assessing heat treatment of chicken breast cuts by impedance spectroscopy.

    PubMed

    Schmidt, Franciny C; Fuentes, Ana; Masot, Rafael; Alcañiz, Miguel; Laurindo, João B; Barat, José M

    2017-03-01

    The aim of this work was to develop a new system based on impedance spectroscopy to assess the heat treatment of previously cooked chicken meat by means of two experiments; in the first, samples were cooked at different temperatures (from 60 to 90 ℃) until the core temperature of the meat reached the water bath temperature. In the second, the temperature was 80 ℃ and the samples were cooked for different times (from 5 to 55 min). Impedance was measured once the samples had cooled. The examined processing parameters were the maximum temperature reached in the thermal centre of the samples, weight loss, moisture and the integral of the temperature profile during the cooking-cooling process. The correlation between the processing parameters and impedance was studied by partial least squares regression. The models were able to predict the studied parameters. Our results are essential for developing a new system to control the technological, sensory and safety aspects of cooked meat products on the whole meat processing line.
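
    A minimal sketch of the chemometric step, fitting a partial least squares (PLS) regression from spectrum-like predictors to a processing parameter with scikit-learn; the data are synthetic stand-ins, not the study's impedance spectra:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(5)
        X = rng.standard_normal((60, 20))    # impedance features per sample
        y = X[:, :3] @ np.array([1.5, -0.8, 0.5]) + 0.1*rng.standard_normal(60)

        pls = PLSRegression(n_components=3)
        y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
        r2 = 1 - np.sum((y - y_cv)**2) / np.sum((y - y.mean())**2)
        print("cross-validated R^2:", r2)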

  11. Methods for Producing High-Performance Silicon Carbide Fibers, Architectural Preforms, and High-Temperature Composite Structures

    NASA Technical Reports Server (NTRS)

    Yun, Hee-Mann (Inventor); DiCarlo, James A. (Inventor)

    2014-01-01

    Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.

  12. Estimating replicate time shifts using Gaussian process regression

    PubMed Central

    Liu, Qiang; Andersen, Bogi; Smyth, Padhraic; Ihler, Alexander

    2010-01-01

    Motivation: Time-course gene expression datasets provide important insights into dynamic aspects of biological processes, such as circadian rhythms, cell cycle and organ development. In a typical microarray time-course experiment, measurements are obtained at each time point from multiple replicate samples. Accurately recovering the gene expression patterns from experimental observations is made challenging by both measurement noise and variation among replicates' rates of development. Prior work on this topic has focused on inference of expression patterns assuming that the replicate times are synchronized. We develop a statistical approach that simultaneously infers both (i) the underlying (hidden) expression profile for each gene, as well as (ii) the biological time for each individual replicate. Our approach is based on Gaussian process regression (GPR) combined with a probabilistic model that accounts for uncertainty about the biological development time of each replicate. Results: We apply GPR with uncertain measurement times to a microarray dataset of mRNA expression for the hair-growth cycle in mouse back skin, predicting both profile shapes and biological times for each replicate. The predicted time shifts show high consistency with independently obtained morphological estimates of relative development. We also show that the method systematically reduces prediction error on out-of-sample data, significantly reducing the mean squared error in a cross-validation study. Availability: Matlab code for GPR with uncertain time shifts is available at http://sli.ics.uci.edu/Code/GPRTimeshift/ Contact: ihler@ics.uci.edu PMID:20147305
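
    A minimal sketch of the alignment idea: choose per-replicate time shifts that maximize the joint Gaussian-process marginal likelihood of all replicates. This grid search stands in for the paper's probabilistic treatment of shift uncertainty; the data and shifts are synthetic:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(6)
        t = np.linspace(0, 10, 12)
        true_shifts = [0.0, 1.0, -0.5]
        reps = [np.sin(0.8*(t - s)) + 0.1*rng.standard_normal(t.size)
                for s in true_shifts]

        kernel = RBF(length_scale=2.0) + WhiteKernel(0.01)

        def joint_lml(shifts):
            # GP log marginal likelihood of all replicates on shifted times.
            T = np.concatenate([t - s for s in shifts]).reshape(-1, 1)
            Y = np.concatenate(reps)
            gp = GaussianProcessRegressor(kernel=kernel).fit(T, Y)
            return gp.log_marginal_likelihood_value_

        grid = np.linspace(-2, 2, 41)
        est = [0.0]                    # replicate 0 anchors the time axis
        for r in (1, 2):
            best = max(grid, key=lambda s: joint_lml(est + [s] + [0.0]*(2 - r)))
            est.append(float(best))
        print("estimated shifts:", est)   # ~[0.0, 1.0, -0.5]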

  13. Reward and punishment learning in daily life: A replication study.

    PubMed

    Heininga, Vera E; van Roekel, Eeske; Wichers, Marieke; Oldehinkel, Albertine J

    2017-01-01

    Day-to-day experiences are accompanied by feelings of Positive Affect (PA) and Negative Affect (NA). Implicitly, without conscious processing, individuals learn about the reward and punishment value of each context and activity. These associative learning processes, in turn, affect the probability that individuals will re-engage in such activities or seek out that context. So far, implicit learning processes have been almost exclusively investigated in controlled laboratory settings and not in daily life. Here we aimed to replicate the first study that investigated implicit learning processes in real life, by means of the Experience Sampling Method (ESM). That is, using an experience-sampling study with 90 time points (three measurements over 30 days), we prospectively measured time spent in social company and amount of physical activity as well as PA and NA in the daily lives of 18-24-year-old young adults (n = 69 with anhedonia, n = 69 without anhedonia). Multilevel analyses showed a punishment learning effect with regard to time spent in company of friends, but not a reward learning effect. Neither reward nor punishment learning effects were found with regard to physical activity. Our study shows promising results for future research on implicit learning processes in daily life, with the proviso of careful consideration of the timescale used. A short-term retrospective ESM design with beeps approximately six hours apart may suffer from mismatch noise that hampers accurate detection of associative learning effects over time.

  14. MontePython 3: Parameter inference code for cosmology

    NASA Astrophysics Data System (ADS)

    Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon

    2018-05-01

    MontePython 3 provides numerous ways to explore parameter space using Markov chain Monte Carlo (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and the addition of derived parameters.
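
    A minimal sketch of the core Metropolis-Hastings step on a toy two-parameter posterior; MontePython layers proposal adaptation, cosmological likelihoods and convergence diagnostics on top of this, none of which is reproduced here:

        import numpy as np

        def metropolis_hastings(log_post, x0, cov, n, rng):
            # Random-walk Metropolis-Hastings with a fixed Gaussian proposal.
            x, lp = np.array(x0, float), log_post(x0)
            chain = np.empty((n, len(x0)))
            for i in range(n):
                prop = rng.multivariate_normal(x, cov)
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                    x, lp = prop, lp_prop
                chain[i] = x
            return chain

        log_post = lambda p: -0.5*(p[0]**2 + (p[1] - 1.0)**2/0.25)
        rng = np.random.default_rng(7)
        chain = metropolis_hastings(log_post, [0.0, 0.0], 0.3*np.eye(2),
                                    20000, rng)
        print(chain[5000:].mean(axis=0))   # ~[0, 1] after burn-in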

  15. [Effects of post-harvest processing and extraction methods on polysaccharides content of Dendrobium officinale].

    PubMed

    Li, Cong; Ning, Li-Dan; Si, Jin-Ping; Wu, Ling-Shang; Liu, Jing-Jing; Song, Xian-Shui; Yu, Qiao-Xian

    2013-02-01

    To reveal the variation in polysaccharide quality in Dendrobium officinale under different post-harvest processing and extraction methods, and to provide a basis for post-harvest processing and for clinical and hygienic applications of Tiepifengdou (Dendrobii Officinalis Caulis). The polysaccharide content was studied for 4 post-harvest processing methods, i.e., drying in a drying closet, drying after scalding in boiling water, drying while twisting, and drying while twisting after scalding in boiling water. A series of temperatures was set in each processing procedure. An orthogonal L9(3^4) test with degree of crushing, solid-to-liquid ratio, extraction time, and number of extractions as factors was designed to analyze the dissolution rate of polysaccharides in Tiepifengdou processed by drying while twisting at 80 degrees C. The polysaccharide content ranged from 26.59% to 32.70% among samples processed by the different methods, of which drying while twisting at 80 degrees C and at 100 degrees C were the best. The degree of crushing had the greatest influence on the dissolution rate of polysaccharides. The dissolution rate of polysaccharides was extremely low when the sample was boiled directly without crushing and sieving. Drying while twisting at 80 degrees C was the best post-harvest processing method, which helps to dry the fresh herbs and improve the accumulation of polysaccharides. Boiling uncrushed Tiepifengdou for a long time, as in the traditional method, could not fully extract the polysaccharides, whereas boiling crushed Tiepifengdou extracts them efficiently.
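
    For reference, the L9(3^4) orthogonal array assigns four three-level factors to nine runs. A short sketch listing the design; the level settings attached to each factor are hypothetical, not the study's values:

        import numpy as np

        # Standard Taguchi L9(3^4) array: 9 runs x 4 three-level factors.
        L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
                       [2,1,2,3],[2,2,3,1],[2,3,1,2],
                       [3,1,3,2],[3,2,1,3],[3,3,2,1]])

        factors = {                        # hypothetical level settings
            "crushing (mesh)":      [10, 40, 80],
            "solid:liquid (1:x)":   [20, 40, 60],
            "extraction time (h)":  [1, 2, 3],
            "extractions (count)":  [1, 2, 3],
        }
        for run in L9:
            print({k: v[l - 1] for (k, v), l in zip(factors.items(), run)})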

  16. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
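
    As an illustration of the idea (a sketch under stated assumptions, not the NASA implementation), block-averaging rebinning can be written as two vectorized reshape/mean steps in Python/NumPy, assuming an integer factor that divides both image dimensions:

      import numpy as np

      def rebin(image, factor):
          # Down-sample a 2-D array by an integer factor: one reshape/mean
          # pass handles all rows at once, a second handles all columns.
          h, w = image.shape
          if h % factor or w % factor:
              raise ValueError("factor must divide both image dimensions")
          return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

      # Example: reduce a 1024x1024 image to 256x256
      small = rebin(np.random.rand(1024, 1024), 4)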

  17. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
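
    For orientation, the quantity being evaluated is the finite Fourier transform X(f) of the record, at arbitrary frequencies f. A naive rectangle-rule evaluation in Python is sketched below; the paper's method replaces this with cubic interpolation of the sampled data and the chirp z-transform for accuracy and speed:

      import numpy as np

      def finite_fourier(t, x, freqs):
          # Rectangle-rule approximation of X(f) = integral x(t) e^{-2 pi i f t} dt
          # at arbitrary frequencies; accurate methods interpolate x(t) instead.
          dt = t[1] - t[0]
          freqs = np.asarray(freqs, dtype=float)
          kernel = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])
          return (kernel * x[None, :]).sum(axis=1) * dt

      # Example: transform of a 2 Hz sine sampled at 100 Hz for 1 s,
      # evaluated on a finer frequency grid than a plain FFT would give
      t = np.arange(0.0, 1.0, 0.01)
      X = finite_fourier(t, np.sin(2 * np.pi * 2.0 * t), np.linspace(0.0, 5.0, 501))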

  18. Cost assessment of the automated VERSANT 440 Molecular System versus the semi-automated System 340 bDNA Analyzer platforms.

    PubMed

    Elbeik, Tarek; Loftus, Richard A; Beringer, Scott

    2007-11-01

    Labor, supply and waste were evaluated for HIV-1 and HCV bDNA on the semi-automated System 340 bDNA Analyzer and the automated VERSANT 440 Molecular System (V440). HIV-1 sample processing was evaluated using a 24- and 48-position centrifuge rotor. Vigilance time (hands-on manipulations plus incubation time except initial target hybridization) and disposables were approximately 37 and 12% lower for HIV-1, and 64 and 31% lower for HCV bDNA, respectively, with V440. Biohazardous solid waste was approximately twofold lower for both assays and other waste types were the same for either assay on both platforms. HIV-1 sample processing vigilance time for the 48-position rotor was reduced by 2 h. V440 provides cost savings and improved workflow.

  19. Development of Novel Method for Rapid Extract of Radionuclides from Solution Using Polymer Ligand Film

    NASA Astrophysics Data System (ADS)

    Rim, Jung H.

    Accurate and fast determination of the activity of radionuclides in a sample is critical for nuclear forensics and emergency response. Radioanalytical techniques are well established for radionuclide measurement; however, they are slow and labor intensive, requiring extensive radiochemical separations and purification prior to analysis. Given these limitations of current methods, there is great interest in a new technique to rapidly process samples. This dissertation describes a new analyte extraction medium, called Polymer Ligand Film (PLF), developed to rapidly extract radionuclides. A Polymer Ligand Film is a polymer medium with ligands incorporated in its matrix that selectively and rapidly extract analytes from a solution. The main focus of the new technique is to shorten and simplify the procedure necessary to chemically isolate radionuclides for determination by alpha spectrometry or beta counting. Five different ligands were tested for plutonium extraction: bis(2-ethylhexyl) methanediphosphonic acid (H2DEH[MDP]), di(2-ethylhexyl) phosphoric acid (HDEHP), trialkyl methylammonium chloride (Aliquat-336), 4,4'(5')-di-t-butylcyclohexano 18-crown-6 (DtBuCH18C6), and 2-ethylhexyl 2-ethylhexylphosphonic acid (HEH[EHP]). The ligands that were effective for plutonium extraction were further studied for uranium extraction. Plutonium recovery by PLFs showed a dependence on nitric acid concentration and on the ligand-to-total-mass ratio. H2DEH[MDP] PLFs performed best at 1:10 and 1:20 ratios: 50.44% and 47.61% of plutonium was extracted on the surface of the PLFs with 1M nitric acid for the 1:10 and 1:20 PLF, respectively. HDEHP PLF provided the best combination of alpha spectroscopy resolution and plutonium recovery at a 1:5 ratio when used with 0.1M nitric acid. The overall analyte recovery was lower than for electrodeposited samples, which typically have recoveries above 80%. However, PLF is designed to be a rapid, field-deployable screening technique, and consistency is more important than recovery. PLFs were also tested using blind quality control samples, and the activities were accurately measured. It is important to point out that PLFs were consistently susceptible to analytes penetrating and depositing below the surface. The internal radiation within the body of the PLF is mostly contained and did not cause excessive self-attenuation or peak broadening in alpha spectroscopy. The analyte penetration issue was beneficial in destructive analysis. H2DEH[MDP] PLF was tested with environmental samples to fully understand the capabilities and limitations of the PLF in relevant environments. The extraction system was very effective in extracting plutonium from environmental water collected from Mortandad Canyon at Los Alamos National Laboratory with minimal sample processing. Soil samples were more difficult to process than the water samples. Analytes were first leached from the soil matrices using nitric acid before processing with PLF; this approach limited plutonium extraction by the PLF. The soil samples from Mortandad Canyon, which are about 1% iron by weight, were effectively processed with the PLF system. Even with certain limitations of the PLF extraction system, this technique was able to considerably decrease the sample analysis time. The entire environmental sample was analyzed within one to two days. The decrease in time can be attributed to the fact that PLF replaces column chromatography and electrodeposition with a single step for preparing alpha spectrometry samples.
The two-step process of column chromatography and electrodeposition takes a couple of days to a week to complete, depending on the sample. The decrease in time and the simplified procedure make this technique a unique solution for application to nuclear forensics and emergency response. A large number of samples can be quickly analyzed, and selected samples can be further analyzed with more sensitive techniques based on the initial data. The deployment of a PLF system as a screening method will greatly reduce the total analysis time required to gain meaningful isotopic data for nuclear forensics applications. (Abstract shortened by UMI.)

  20. Retained Austenite in SAE 52100 Steel Post Magnetic Processing and Heat Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pappas, Nathaniel R; Watkins, Thomas R; Cavin, Odis Burl

    2007-01-01

    Steel is an iron-carbon alloy that contains up to 2% carbon by weight. Understanding which phases of iron and carbon form as a function of temperature and carbon content is important in order to process/manufacture steel with desired properties. Austenite is the face-centered cubic (fcc) phase of iron that exists between 912 and 1394 °C. When hot steel is rapidly quenched in a medium (typically oil or water), austenite transforms into martensite. The goal of the study is to determine the effect of applying a magnetic field on the amount of retained austenite present at room temperature after quenching. Samples of SAE 52100 steel were heat treated and then subjected to a magnetic field of varying strength and duration, while samples of SAE 1045 steel were heat treated and then subjected to a magnetic field of varying strength for a fixed time while being tempered. X-ray diffraction was used to collect quantitative data corresponding to the amount of each phase present post processing. The percentage of retained austenite was then calculated using the American Society for Testing and Materials standard for determining the amount of retained austenite in randomly oriented samples, and was plotted as a function of magnetic field intensity, magnetic field apply time, and wait time after quenching to determine what relationships exist with the amount of retained austenite present. In the SAE 52100 steel samples, stronger field strengths resulted in lower percentages of retained austenite for fixed apply times. The results were inconclusive when applying a fixed magnetic field strength for varying amounts of time. When applying a magnetic field after waiting a specific amount of time after quenching, the analyses indicate that shorter wait times result in less retained austenite. The SAE 1045 results were inconclusive: the samples showed no retained austenite regardless of magnetic field strength, indicating that tempering removed the retained austenite. It is apparent that applying a magnetic field after quenching results in a lower amount of retained austenite, but the exact relationship, linear or otherwise, remains inconclusive. This project is part of a larger, ongoing project investigating the application of a magnetic field during heat treatment and its influence on the iron-carbon phase equilibria.
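
    For reference, ASTM-type retained-austenite determinations derive the austenite volume fraction from integrated diffraction intensities; in the simplest form, with one measured peak per phase (the standard averages over several peak pairs), the relation is

      V_\gamma = \frac{I_\gamma / R_\gamma}{I_\alpha / R_\alpha + I_\gamma / R_\gamma}

    where I is the integrated intensity of a phase's diffraction peak and R is the corresponding theoretical intensity factor for randomly oriented material. This is the standard textbook form, not a formula quoted from the report above.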

  1. [Comparative studies of personal and steady-state sampling for determining dust exposure in different job groups].

    PubMed

    Cherneva, P; Lukanova, R

    1994-01-01

    The variability of dust concentration in time and space, as well as the movement of workers during the working process, makes it necessary to introduce personal sampling into hygiene control practice. However, the laboratory equipment with personal devices is still not sufficient. The aim of this work was to assess the dust exposure of the basic professional groups in ore and coal production in Bulgaria by personal sampling, in comparative studies with the static ambient sampling used up to now. Sixty-three full-shift investigations of the dust factor were performed on professional groups of miners in polymetal and coal pits, using both static ambient devices (Hygitest) and personal devices (from the firms "Casella", "Strolein" and "Gilian"), following standardized methods. The results were processed by fitting a logarithmic normal distribution to the ratio of the respirable dust concentrations determined by personal and by static ambient sampling. This ratio varied from 0.5 to 4.1, with a geometric mean of 0.95 and a geometric standard deviation of 1.8; i.e., the two types of sampling are interchangeable for the examined groups and sites, although in underground mines the occupational risk from respirable dust is underestimated by up to 4 times under static ambient sampling.
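
    The log-normal summary statistics reported here (geometric mean and geometric standard deviation of the personal/static concentration ratio) are computed on the log scale; a small Python sketch with hypothetical ratio values illustrates the arithmetic:

      import numpy as np

      # Hypothetical personal/static respirable-dust concentration ratios
      ratios = np.array([0.5, 0.8, 0.9, 1.0, 1.2, 2.0, 4.1])

      log_r = np.log(ratios)
      gm = np.exp(log_r.mean())         # geometric mean of the ratio
      gsd = np.exp(log_r.std(ddof=1))   # geometric standard deviation
      print(f"GM = {gm:.2f}, GSD = {gsd:.2f}")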

  2. Space charge dynamics Of CF4 fluorinated LDPE samples from different fluorination conditions and their DC conductivities

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Li, Ziyun; Chen, George; Chen, Qiang; Li, Shengtao

    2017-07-01

    Taking advantage of plasma technology using the mixed gas CF4/H2, a fluorination process was performed on LDPE samples in the present paper. Different exposure times and discharge voltage levels were applied to produce four different types of samples. It was found that after fluorination, space charge injection is clearly suppressed, and with longer fluorination times and higher discharge voltage, injected homocharges are reduced. By employing X-ray photoelectron spectroscopy, new C-F chemical groups are confirmed to be introduced by the fluorination process of the plasma treatment. The charge suppression effect can be explained as follows: surface traps introduced by fluorination reduce the interface field at both electrodes. Moreover, for fluorinated samples, heterocharge emerges clearly under 30 kV/mm; these are considered to be charges ionized from degradation products of etching and/or lower-molecular-weight species. Through conductivity measurements, also performed at 30 kV/mm, it is found that for the fluorinated samples with the better charge-blocking effect, the conductivity is lowered. However, the conductivity of the fluorinated sample with the lightest degree of fluorination is found to be higher than that of normal samples.

  3. Using Six Sigma to improve once daily gentamicin dosing and therapeutic drug monitoring performance.

    PubMed

    Egan, Sean; Murphy, Philip G; Fennell, Jerome P; Kelly, Sinead; Hickey, Mary; McLean, Carolyn; Pate, Muriel; Kirke, Ciara; Whiriskey, Annette; Wall, Niall; McCullagh, Eddie; Murphy, Joan; Delaney, Tim

    2012-12-01

    Safe, effective therapy with the antimicrobial gentamicin requires good practice in dose selection and monitoring of serum levels. Suboptimal therapy occurs with breakdown in the process of drug dosing, serum blood sampling, laboratory processing and level interpretation. Unintentional underdosing may result. This improvement effort aimed to optimise this process in an academic teaching hospital using Six Sigma process improvement methodology. A multidisciplinary project team was formed. Process measures considered critical to quality were defined, and baseline practice was examined through process mapping and audit. Root cause analysis informed improvement measures. These included a new dosing and monitoring schedule, and standardised assay sampling and drug administration timing, which maximised local capabilities. Three iterations of the improvement cycle were conducted over a 24-month period. The attainment of serum level sampling in the required time window improved by 85% (p≤0.0001). A 66% improvement in accuracy of dosing was observed (p≤0.0001). Unnecessary dose omission while awaiting level results, and inadvertent disruption to therapy due to dosing and monitoring process breakdown, were eliminated. The average daily dose administered increased from 3.39 to 4.78 mg/kg/day. Using Six Sigma methodology enhanced gentamicin usage process performance. Local process-related factors may adversely affect adherence to practice guidelines for gentamicin, a drug which is complex to use. It is vital to adapt dosing guidance and monitoring requirements so that they are capable of being implemented in the clinical environment as a matter of routine. Improvement may be achieved through a structured, localised approach with multidisciplinary stakeholder involvement.

  4. Where Do I Start (Beginning the Investigation)?

    NASA Astrophysics Data System (ADS)

    Kornacki, Jeffrey L.

    No doubt some will open directly to this chapter because your product is contaminated with an undesirable microbe, or perhaps you have been asked to conduct such an investigation for another company's facility not previously observed by you, and naturally you want tips on how to find where the contaminant is getting into the product stream. This chapter takes the reader through the process of beginning the investigation, including understanding the process and its production schedule and critically reviewing previously generated laboratory data. Understanding the critical control points and the validity of their critical limits is also important. Scoping the extent of the problem comes next. It is always a good idea for the factory to have a rigorously validated cleaning and sanitation procedure that provides a documented "sanitation breakpoint," which can be useful in the scoping process, although some contamination events may extend past these breakpoints. Touring the facility follows, wherein preliminary pre-selection of areas for future sampling can be done; operational samples and observations in non-food-contact areas can be taken at this time. Then the operations personnel need to be consulted and plans made for an appropriate amount of time to observe equipment breakdown for "post-operational" sampling and "pre-operational" investigational sampling. Hence the chapter further discusses preparing operations personnel for the disruptions that accompany these investigations and assembling the sampling team. The chapter concludes with a discussion of post-startup observations after an investigation and sampling.

  5. Demonstrating the Value of Fine-resolution Optical Data for Minimising Aliasing Impacts on Biogeochemical Models of Surface Waters

    NASA Astrophysics Data System (ADS)

    Chappell, N. A.; Jones, T.; Young, P.; Krishnaswamy, J.

    2015-12-01

    There is increasing awareness that under-sampling may have resulted in the omission of important physicochemical information present in water quality signatures of surface waters, thereby affecting interpretation of biogeochemical processes. For dissolved organic carbon (DOC) and nitrogen this under-sampling can now be avoided using UV-visible spectroscopy measured in situ and continuously at fine resolution, e.g., 15 minutes ("real time"). Few methods are available to extract biogeochemical process information directly from such high-frequency data. Jones, Chappell & Tych (2014 Environ Sci Technol: 13289-97) developed one such method using optically-derived DOC data based upon a sophisticated time-series modelling tool. Within this presentation we extend the methodology to quantify the minimum sampling interval required to avoid distortion of model structures and parameters that describe fundamental biogeochemical processes. This shifting of parameters, which results from under-sampling, is called "aliasing". We demonstrate that storm dynamics at a variety of sites dominate over diurnal and seasonal changes, and that these must be characterised by sampling that may be sub-hourly to avoid aliasing. This is considerably shorter than that used by other water quality studies examining aliasing (e.g. Kirchner 2005 Phys Rev: 069902). The modelling approach presented is being developed into a generic tool to calculate the minimum sampling for water quality monitoring in systems driven primarily by hydrology. This is illustrated with fine-resolution optical data from watersheds in temperate Europe through to the humid tropics.
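
    To make the aliasing point concrete, a toy Python example with synthetic data: a diurnal cycle plus a roughly three-hour storm pulse. Daily sampling can miss the storm entirely, while sub-hourly sampling resolves it (the signal values and time constants here are illustrative, not taken from the study):

      import numpy as np

      t = np.arange(0.0, 72.0, 0.25)                  # 3 days at 15-min resolution (hours)
      diurnal = np.sin(2 * np.pi * t / 24.0)          # slow daily cycle
      storm = 2.0 * np.exp(-((t - 30.0) / 1.5) ** 2)  # ~3 h storm event
      signal = diurnal + storm

      daily = signal[::96]    # one sample every 24 h: misses the storm
      hourly = signal[::4]    # one sample every hour: resolves it
      print(f"peak seen daily: {daily.max():.2f}, hourly: {hourly.max():.2f}, "
            f"true: {signal.max():.2f}")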

  6. Hydrogeochemical processes and isotopes analysis. Study case: "La Línea Tunnel", Colombia

    NASA Astrophysics Data System (ADS)

    Piña, Adriana; Donado, Leonardo; Cramer, Thomas

    2017-04-01

    Hydrogeochemical and stable isotope analyses have been widely used to identify recharge and discharge zones, flowpaths, the type, origin and age of water, and chemical processes between minerals and groundwater, as well as effects caused by anthropogenic or natural pollution. In this paper we analyze the interactions between groundwater and surface water using as a laboratory the tunnels located in the La Línea Massif in the Cordillera Central of the Colombian Andes. The massif is formed by two igneous-metamorphic fractured complexes (Cajamarca and Quebradagrande group) plus andesitic porphyry rocks from the Tertiary period. There, eight main fault zones related to surface creeks were identified, and the main inflows inside the tunnels were reported. Sixty water samples were collected at the surface and inside the tunnel in fault zones, in two different years, 2010 and 2015. To classify the water samples, a multivariate statistical analysis combining Factor Analysis (FA) with Hierarchical Cluster Analysis (HCA) was performed. Then, analyses of the major chemical elements and water isotopes (18O, 2H and 3H) were used to define the origin of the dissolved components and to analyse their evolution in time. Most samples were classified as calcium bicarbonate or magnesium bicarbonate water types. Isotopic analyses show a characteristic behavior for the east and west watersheds and for each geologic group. According to the FA and HCA, the obtained factors and clusters relate first to the location of the samples (surface or tunnel) and then to the geology. Surface samples and inflows related to permeable faults behave according to the Colombian meteoric line, while less permeable faults show hydrothermal processes. Finally, water evolution in time shows a decrease of pH, conductivity and Mg2+ related to silicate weathering or precipitation/dissolution processes that affect the spacing of fractures and, consequently, the hydraulic properties.
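
    A minimal sketch of the FA-plus-HCA workflow described above, assuming a samples-by-ions concentration matrix; the library choices (scikit-learn, SciPy) and all parameter values are illustrative, not those of the study:

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler
      from scipy.cluster.hierarchy import linkage, fcluster

      # Placeholder data: 60 water samples x 8 major-ion concentrations
      X = np.abs(np.random.default_rng(0).normal(size=(60, 8)))

      Z = StandardScaler().fit_transform(X)                 # standardize ions
      scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(Z)
      clusters = fcluster(linkage(scores, method="ward"),   # Ward HCA on factor scores
                          t=4, criterion="maxclust")        # cut into 4 clusters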

  7. Analyte stability during the total testing process: studies of vitamins A, D and E by LC-MS/MS.

    PubMed

    Albahrani, Ali A; Rotarou, Victor; Roche, Peter J; Greaves, Ronda F

    2016-10-01

    There are limited evidence-based studies demonstrating the stability of fat-soluble vitamins (FSV) measured in blood. This study aimed to examine the effects of light, temperature and time on vitamins A, D and E throughout the total testing process. Four experiments were conducted: three investigated the sample matrix (whole blood, serum and the extracted sample) against the variables of temperature and light, and the fourth investigated the sample during the extraction process against the variable of light. All samples were analysed via our simultaneous FSV method using liquid chromatography-tandem mass spectrometry technology. The allowable clinical percentage change was calculated based on biological variation and desirable method imprecision for each analyte. The total change limit was ±7.3% for 25-OH-vitamin D3, ±11.8% for retinol and ±10.8% for α-tocopherol. Vitamins D and E were stable under the investigated conditions (concentration changes <4%) in the pre-analytical and analytical stages. Vitamin A showed photosensitivity at times >48 h, with concentration changes of -6.8% (blood) and -6.5% (serum), both within the allowable clinical percentage change. By contrast, the extracted retinol sample demonstrated a concentration change of -18.4% after 48 h of light exposure. However, vitamin A in the serum and extracted solution was stable for one month when stored at -20°C. Blood samples for vitamin D and E analyses can be processed under normal laboratory conditions of lighting and temperature. The required conditions for vitamin A analysis are similar when analysis is performed within 48 h. For longer-term storage, serum and vitamin A extracts should be stored at -20°C.

  8. Whole blood flow cytometry measurements of in vivo platelet activation in critically-Ill patients are influenced by variability in blood sampling techniques.

    PubMed

    Rondina, Matthew T; Grissom, Colin K; Men, Shaohua; Harris, Estelle S; Schwertz, Hansjorg; Zimmerman, Guy A; Weyrich, Andrew S

    2012-06-01

    Flow cytometry is often used to measure in vivo platelet activation in critically-ill patients. Variability in blood sampling techniques, which may confound these measurements, remains poorly characterized. Platelet activation was measured by flow cytometry performed on arterial and venous blood from 116 critically-ill patients. We determined how variability in vascular sampling site, processing times, and platelet counts influenced levels of platelet-monocyte aggregates (PMA), PAC-1 binding (for glycoprotein (GP) IIbIIIa), and P-selectin (P-SEL) expression. Levels of PMA, but not PAC-1 binding or P-SEL expression, were significantly affected by variability in vascular sampling site. Average PMA levels were approximately 60% higher in whole blood drawn from an arterial vessel compared to venous blood (16.2±1.8% vs. 10.7±1.2%, p<0.05). Levels of PMA in both arterial and venous blood increased significantly during ex vivo processing delays (1.7% increase for every 10 minute delay, p<0.05). In contrast, PAC-1 binding and P-SEL expression were unaffected by processing delays. Levels of PMA, but not PAC-1 binding or P-SEL expression, were correlated with platelet count quartiles (9.4±1.6% for the lowest quartile versus 15.4±1.6% for the highest quartile, p<0.05). In critically-ill patients, variability in vascular sampling site, processing times, and platelet counts influence levels of PMA, but not PAC-1 binding or P-SEL expression. These data demonstrate the need for rigorous adherence to blood sampling protocols, particularly when levels of PMA, which are most sensitive to variations in blood collection, are measured for detection of in vivo platelet activation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    Apples are among the most highly consumed fruits in daily life. However, because of the fruit's high damage potential and the massive influence of bruising on taste and export value, apple quality has to be assessed before the fruit reaches the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system was developed to acquire images of 3 samples from each camera and display the image processing results on a real-time basis. An image processing algorithm was developed on the OpenCV and C++ platform. The software is able to control the hardware system and classify apples into two grades based on the presence or absence of surface bruises of 5 mm size. The experimental results are promising, and with further modification the system can be applicable to industrial production in the near future.
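
    The paper does not publish its algorithm; as a hedged illustration of the general approach such systems take (smooth, threshold, filter blobs by size), a toy OpenCV/Python sketch might look like the following, where the minimum-area parameter stands in for the 5 mm specification after camera calibration:

      import cv2

      def detect_bruises(gray, min_area_px=50):
          # Bruised tissue typically images darker than sound tissue: smooth,
          # adaptively threshold for dark regions, keep blobs above a size cutoff.
          blur = cv2.GaussianBlur(gray, (9, 9), 0)
          mask = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY_INV, 51, 10)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          return [c for c in contours if cv2.contourArea(c) >= min_area_px]

      # Usage (hypothetical file name):
      # bruises = detect_bruises(cv2.imread("apple.png", cv2.IMREAD_GRAYSCALE))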

  10. FPGA design for constrained energy minimization

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Chang, Chein-I.; Cao, Mang

    2004-02-01

    The Constrained Energy Minimization (CEM) filter has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in hardware architecture lies in the computation of the inverse of the data correlation matrix performed in the CEM, which requires a complete set of data samples. In order to cope with this problem, the data correlation matrix must be calculated in a causal manner, using only the data samples up to the sample at the time it is processed. This paper presents a Field Programmable Gate Array (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is the use of the COordinate Rotation DIgital Computer (CORDIC) algorithm, which can convert a Givens rotation of a vector to a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in hardware architecture, and therefore in an FPGA. Since the computation of the inverse of the data correlation involves a series of Givens rotations, the use of the CORDIC algorithm allows the causal CEM to perform real-time processing in an FPGA. In this paper, an FPGA implementation of the causal CEM is studied and its detailed architecture is described.
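
    For reference, the CEM filter itself is simple in batch form: the weight vector is w = R^{-1}d / (d^T R^{-1}d), where R is the sample correlation matrix and d the target signature; the causal variant described above updates R using only samples up to the current time. A NumPy sketch of the batch form (not the FPGA design):

      import numpy as np

      def cem(X, d):
          # X: (N, B) hyperspectral data, N pixels with B bands; d: (B,) target.
          # Weight vector w = R^{-1} d / (d^T R^{-1} d), with R the sample
          # correlation matrix; output is the per-pixel filter response w^T x.
          R = X.T @ X / X.shape[0]
          Rinv_d = np.linalg.solve(R, d)
          w = Rinv_d / (d @ Rinv_d)
          return X @ w

      # Usage with random placeholder data:
      # scores = cem(np.random.rand(10_000, 200), np.random.rand(200))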

  11. Microwave plasma monitoring system for the elemental composition analysis of high temperature process streams

    DOEpatents

    Woskov, Paul P.; Cohn, Daniel R.; Titus, Charles H.; Surma, Jeffrey E.

    1997-01-01

    Microwave-induced plasma for continuous, real time trace element monitoring under harsh and variable conditions. The sensor includes a source of high power microwave energy and a shorted waveguide made of a microwave conductive, high temperature capability refractory material communicating with the source of the microwave energy to generate a plasma. The high power waveguide is constructed to be robust in a hot, hostile environment. It includes an aperture for the passage of gases to be analyzed and a spectrometer is connected to receive light from the plasma. Provision is made for real time in situ calibration. The spectrometer disperses the light, which is then analyzed by a computer. The sensor is capable of making continuous, real time quantitative measurements of desired elements, such as the heavy metals lead and mercury. The invention may be incorporated into a high temperature process device and implemented in situ for example, such as with a DC graphite electrode plasma arc furnace. The invention further provides a system for the elemental analysis of process streams by removing particulate and/or droplet samples therefrom and entraining such samples in the gas flow which passes through the plasma flame. Introduction of and entraining samples in the gas flow may be facilitated by a suction pump, regulating gas flow, gravity or combinations thereof.

  12. Opto-electrochemical spectroscopy of metals in aqueous solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, K., E-mail: khaledhabib@usa.net

    In the present investigation, holographic interferometry was utilized for the first time to determine the rate of change of the electrical resistance of aluminium samples during the initial stage of anodisation processes in aqueous solution. In fact, because the resistance values in this investigation were obtained by holographic interferometry, an electromagnetic rather than an electronic method, the abrupt rate change of the resistance was called electrical resistance-emission spectroscopy. The anodisation process of the aluminium samples was carried out by electrochemical impedance spectroscopy (EIS) in different sulphuric acid concentrations (1.0%-2.5% H2SO4) at room temperature. In the meantime, real-time holographic interferometry was used to determine the difference between two subsequent values of the electrical resistance, dR, as a function of the elapsed time of the EIS experiment for the aluminium samples in 1.0%, 1.5%, 2.0%, and 2.5% H2SO4 solutions. The electrical resistance-emission spectra of the present investigation represent a detailed picture not only of the rate of change of the electrical resistance throughout the anodisation processes but also of the rate of growth of the oxide films on the aluminium samples in the different solutions. As a result, a new spectrometer was developed, based on the combination of holographic interferometry and electrochemical impedance spectroscopy, for studying in situ the electrochemical behavior of metals in aqueous solutions.

  13. Development of a fast framing detector for electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Ian J.; Bustillo, Karen C.; Ciston, Jim

    2016-10-01

    A high frame rate detector system is described that enables fast real-time data analysis of scanning diffraction experiments in scanning transmission electron microscopy (STEM). This is an end-to-end development that encompasses the data-producing detector, data transportation, and real-time processing of the data. The detector will consist of a central pixel sensor surrounded by annular silicon diodes. Both components of the detector system will synchronously capture data at almost 100 kHz frame rate, which produces an approximately 400 Gb/s data stream. Low-level preprocessing will be implemented in firmware before the data is streamed from the National Center for Electron Microscopy (NCEM) to the National Energy Research Scientific Computing Center (NERSC). Live data processing, before it lands on disk, will happen on the Cori supercomputer and aims to present scientists with prompt experimental feedback. This online analysis will provide rough information about the sample that can be utilized for sample alignment, sample monitoring, and verification that the experiment is set up correctly. Only a compressed version of the relevant data is then selected for more in-depth processing.

  14. Spectral Interferometry with Electron Microscopes

    PubMed Central

    Talebi, Nahid

    2016-01-01

    Interference patterns are not only a defining characteristic of waves, but also have several applications: characterization of coherent processes and holography. Spatial holography with electron waves has paved the way towards space-resolved characterization of magnetic domains and electrostatic potentials with angstrom spatial resolution. Another impetus in electron microscopy has been introduced by ultrafast electron microscopy, which uses pulses of sub-picosecond duration for probing a laser-induced excitation of the sample. However, attosecond temporal resolution has not yet been reported, mainly due to the statistical distribution of arrival times of electrons at the sample with respect to the laser time reference. This is, however, the very time resolution that will be needed for performing time-frequency analysis. These difficulties are addressed here by proposing a new methodology to improve the synchronization between electron and optical excitations through the introduction of an efficient electron-driven photon source. We use focused transition radiation of the electron as a pump for the sample. Due to the nature of transition radiation, the process is coherent. This technique allows us to perform spectral interferometry with electron microscopes, with applications in retrieving the phase of electron-induced polarizations and reconstructing the dynamics of the induced vector potential.

  15. 19 CFR 163.11 - Audit procedures.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... how the results of the sampling will be projected over the universe of transactions for purposes of... results over the universe of transactions is the process by which the results obtained from the sample entries actually examined are applied to the universe of entries set within the time period and scope of...

  16. 19 CFR 163.11 - Audit procedures.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... how the results of the sampling will be projected over the universe of transactions for purposes of... results over the universe of transactions is the process by which the results obtained from the sample entries actually examined are applied to the universe of entries set within the time period and scope of...

  17. 19 CFR 163.11 - Audit procedures.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... how the results of the sampling will be projected over the universe of transactions for purposes of... results over the universe of transactions is the process by which the results obtained from the sample entries actually examined are applied to the universe of entries set within the time period and scope of...

  18. 77 FR 26292 - Risk Evaluation and Mitigation Strategy Assessments: Social Science Methodologies to Assess Goals...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ... determine endpoints; questionnaire design and analyses; and presentation of survey results. To date, FDA has..., the workshop will invest considerable time in identifying best methodological practices for conducting... sample, sample size, question design, process, and endpoints. Panel 2 will focus on alternatives to...

  19. Predicting Stored Grain Insect Population Densities Using an Electronic Probe Trap

    USDA-ARS?s Scientific Manuscript database

    Manual sampling of insects in stored grain is a laborious and time consuming process. Automation of grain sampling should help to increase the adoption of stored-grain integrated pest management. A new commercial electronic grain probe trap (OPI Insector™) has recently been marketed. We field tested...

  20. Modification of a Microwave Oven for Laboratory Use.

    ERIC Educational Resources Information Center

    Andrews, Judith; Atkinson, George F.

    1984-01-01

    Discusses use of a domestic microwave oven for drying analytical samples with time savings compared to conventional ovens, providing a solution to the problem of loss of load as samples dry. Presents a system for examining emitted gases from drying process and reports results of several test dryings. (JM)

  1. REAL TIME MONITORING OF PCDD/PCDF FOR TRANSIENT CHARACTERIZATION AND PROCESS CONTROL

    EPA Science Inventory

    Current sampling methods for PCDD/F emission compliance make use of a sample taken during steady state conditions which is assumed to be representative of facility performance. This is often less than satisfactory. The rapid variation of PCDDs, PCDF, and other co-pollutants due ...

  2. Ultra-accelerated natural sunlight exposure testing

    DOEpatents

    Jorgensen, Gary J.; Bingham, Carl; Goggin, Rita; Lewandowski, Allan A.; Netter, Judy C.

    2000-06-13

    Process and apparatus for providing ultra-accelerated natural sunlight exposure testing of samples under controlled weathering, without introducing unrealistic failure mechanisms in exposed materials and without breaking reciprocity relationships between flux exposure levels and cumulative dose, that includes multiple concurrent levels of temperature and relative humidity at high levels of natural sunlight, comprising: a) concentrating solar flux uniformly; b) directing the controlled uniform sunlight onto sample materials in a chamber enclosing multiple concurrent levels of temperature and relative humidity to allow the sample materials to be subjected to accelerated irradiance exposure factors for a sufficient period of time, in days, to provide about at least a year's worth of representative weathering of the sample materials.

  3. Work overload, burnout, and psychological ill-health symptoms: a three-wave mediation model of the employee health impairment process.

    PubMed

    de Beer, Leon T; Pienaar, Jaco; Rothmann, Sebastiaan

    2016-07-01

    The study reported here investigated the causal relationships in the health impairment process of employee well-being, and the mediating role of burnout in the relationship between work overload and psychological ill-health symptoms, over time. The research is deemed important due to the need for longitudinal evidence of the health impairment process of employee well-being over three waves of data. A quantitative survey design was followed. Participants constituted a longitudinal sample of 370 participants, at three time points, after attrition. Descriptive statistics and structural equation modeling methods were implemented. Work overload at time one predicted burnout at time two, and burnout at time two predicted psychological ill-health symptoms at time three. Indirect effects were found between work overload time one and psychological ill-health symptoms time three via burnout time two, and also between burnout time one and psychological ill-health symptoms time three, via burnout time two. The results provided supportive evidence for an "indirect-only" mediation effect, for burnout's causal mediation mechanism in the health impairment process between work overload and psychological ill-health symptoms.

  4. Soft Real-Time PID Control on a VME Computer

    NASA Technical Reports Server (NTRS)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating in a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
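
    The PID update inside such a loop is compact; a hedged Python sketch of one discrete iteration is shown below (the actual flight code runs on VME hardware, and the position-read call and gain values here are hypothetical placeholders):

      def pid_step(setpoint, measured, state, kp, ki, kd, dt):
          # One discrete PID iteration: P on the current error, I on the
          # accumulated error, D on the error's change since the last sample.
          error = setpoint - measured
          state["integral"] += error * dt
          derivative = (error - state["prev_error"]) / dt
          state["prev_error"] = error
          return kp * error + ki * state["integral"] + kd * derivative

      state = {"integral": 0.0, "prev_error": 0.0}
      # Hypothetical use at the 8 kHz rate (dt = 125 microseconds):
      # command = pid_step(target, read_position(), state,
      #                    kp=2.0, ki=0.5, kd=0.01, dt=125e-6)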

  5. Mechanical Properties and Microstructure of AZ31B Magnesium Alloy Processed by I-ECAP

    NASA Astrophysics Data System (ADS)

    Gzyl, Michal; Rosochowski, Andrzej; Pesci, Raphael; Olejnik, Lech; Yakushina, Evgenia; Wood, Paul

    2014-03-01

    Incremental equal channel angular pressing (I-ECAP) is a severe plastic deformation process used to refine the grain size of metals, which allows processing of very long billets. As described in the current article, an AZ31B magnesium alloy was processed for the first time by three different routes of I-ECAP, namely A, BC, and C, at 523 K (250 °C). The structure of the material was homogenized and refined to an average grain size of ~5 microns, irrespective of the route used. The mechanical properties of the I-ECAPed samples in tension and compression were investigated, and a strong influence of the processing route on the yield and fracture behavior of the material was established. It was found that texture controls the mechanical properties of AZ31B magnesium alloy subjected to I-ECAP. SEM and OM techniques were used to obtain microstructural images of the I-ECAPed samples subjected to tension and compression. Increased ductility after I-ECAP was attributed to twinning suppression and facilitation of slip on the basal plane. Shear bands were revealed in the samples processed by I-ECAP and subjected to tension. The tension-compression yield stress asymmetry in samples tested along the extrusion direction was suppressed in the material processed by routes BC and C; this effect was attributed to textural development and microstructural homogenization. Twinning activities in fine- and coarse-grained samples have also been studied.

  6. Clinical time series prediction: towards a hierarchical dynamical system framework

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patient in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671

  7. Cutaway line drawing of STS-34 middeck experiment Polymer Morphology (PM)

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Cutaway line drawing shows components of STS-34 middeck experiment Polymer Morphology (PM). Generic Electronics Module (GEM) components include the control housing, circulating fans, hard disk, tape drives, computer boards, and heat exchanger. PM, a 3M-developed organic materials processing experiment, is designed to explore the effects of microgravity on polymeric materials as they are processed in space. The samples of polymeric materials being studied in the PM experiment are thin films (25 microns or less) approximately 25mm in diameter. The samples are mounted between two infrared transparent windows in a specially designed infrared cell that provides the capability of thermally processing the samples to 200 degrees Celsius with a high degree of thermal control. The samples are mounted on a carousel that allows them to be positioned, one at a time, in the infrared beam where spectra may be acquired. The GEM provides all carousel and sample cell control (SCC). The first flight of P

  8. Significant increase in cultivation of Gardnerella vaginalis, Alloscardovia omnicolens, Actinotignum schaalii, and Actinomyces spp. in urine samples with total laboratory automation.

    PubMed

    Klein, Sabrina; Nurjadi, Dennis; Horner, Susanne; Heeg, Klaus; Zimmermann, Stefan; Burckhardt, Irene

    2018-04-13

    While total laboratory automation (TLA) is well established in laboratory medicine, only a few microbiological laboratories are using TLA systems. Especially in terms of speed and accuracy, working with TLA is expected to be superior to conventional microbiology. We retrospectively compared a total of 35,564 microbiological urine cultures incubated and processed with and without the BD Kiestra TLA, over a 6-month period each. Sixteen thousand three hundred thirty-eight urine samples were analyzed in the pre-TLA period and 19,226 with TLA. Sixty-two percent (n = 10,101/16,338) of the cultures processed without TLA and 68% (n = 13,102/19,226) of the cultures processed with TLA showed growth. There were significantly more samples with two or more species per sample and with low numbers of colony-forming units (CFU) after incubation with TLA. Regarding the type of bacteria, there were comparable numbers of Enterobacteriaceae in the samples, slightly fewer non-fermenting Gram-negative bacteria, but significantly more Gram-positive cocci and Gram-positive rods. In particular, Alloscardovia omnicolens, Gardnerella vaginalis, Actinomyces spp., and Actinotignum schaalii were significantly more abundant in the samples incubated and processed with TLA. The time to report was significantly shorter, by 1.5 h, in the TLA-processed samples. We provide the first report in Europe of a large number of urine samples processed with TLA. TLA showed enhanced growth of non-classical and rarely cultured bacteria from urine samples. Our findings suggest that previously underestimated bacteria may be relevant pathogens for urinary tract infections. Further studies are needed to confirm our findings.

  9. Creating Sub-50 nm Nanofluidic Junctions in PDMS Microchip via Self-Assembly Process of Colloidal Silica Beads for Electrokinetic Concentration of Biomolecules

    PubMed Central

    Syed, A.; Mangano, L.; Mao, P.; Han, J.

    2014-01-01

    In this work we describe a novel and simple self-assembly of colloidal silica beads to create a nanofluidic junction between two microchannels. The nanoporous membrane was used to induce ion concentration polarization inside the microchannel, and this electrokinetic preconcentration system allowed rapid concentration of DNA samples by ∼1700 times and protein samples by ∼100 times within 5 minutes.

  10. Renewal processes based on generalized Mittag-Leffler waiting times

    NASA Astrophysics Data System (ADS)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process made the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. Note that the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived. Note also that these procedures are necessary to make these models more usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
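
    For readers unfamiliar with the notation, the two-parameter Mittag-Leffler function and the fractional Poisson waiting-time law it generates (standard definitions, not the paper's further generalizations) are

      E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)},
      \qquad
      \Pr(T > t) = E_{\alpha,1}\left(-\lambda t^{\alpha}\right), \quad 0 < \alpha \le 1,

    which reduces to the exponential survival function e^{-\lambda t} of the ordinary Poisson process when \alpha = 1.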

  11. DGGE and multivariate analysis of a yeast community in spontaneous cocoa fermentation process.

    PubMed

    Ferreira, A C R; Marques, E L S; Dias, J C T; Rezende, R P

    2015-12-28

    Cocoa bean is the main raw material used in the production of chocolate. In southern Bahia, Brazil, cocoa farming and processing is an important economic activity. The fermentation of cocoa is the processing stage that yields important chocolate flavor precursors, and complex microbial involvement is essential for this process. In this study, PCR-denaturing gradient gel electrophoresis (DGGE) was used to investigate the diversity of yeasts present during the spontaneous fermentation of cocoa in southern Bahia. The DGGE analysis revealed a richness of 8 to 13 distinct bands of varied intensities among the samples, and samples taken at 24, 36, and 48 h into the fermentation process were found to group with 70% similarity and showed the greatest diversity of bands. Hierarchical clustering showed that all samples had common operational taxonomic units (OTUs), and the highest number of OTUs was found in the 48 h sample. Variations in pH and temperature observed within the fermenting mass over time possibly had direct effects on the composition of the existing microbial community. The findings reported here indicate that a heterogeneous yeast community is involved in the complex cocoa fermentation process, which is known to involve a succession of specialized microorganisms.

  12. Microwave sintering of Ag-nanoparticle thin films on a polyimide substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujii, S., E-mail: fujii.s.ap@m.titech.ac.jp; Department of Information and Communication System Engineering, National Institute of Technology, Okinawa College, Nago, Okinawa 905-2192; Kawamura, S.

    2015-12-15

    Ag-nanoparticle thin films on a polyimide substrate were subjected to microwave sintering by use of a single-mode waveguide applicator. A two-step sintering process was employed. First, at low conductivities of the film, the film sample was placed at the site of the maximum electric field and subjected to microwave irradiation. Second, when the conductivity of the film had increased, the film sample was placed at the site of the maximum magnetic field and again subjected to microwave irradiation. The microwave sintering process was completed within 1.5 min, which is significantly shorter than the time required for the oven heating process. The resulting conductivity of the film, albeit only 30% of that of the bulk material, was seven times that of a film annealed at the same temperature in a furnace. Scanning electron microscopy images revealed that the nanoparticles underwent both grain necking and grain growth during microwave sintering. In addition, this sintering process was equivalent to an oven heating process performed at a 50 °C higher annealing temperature. An electromagnetic wave simulation and a heat transfer simulation of the microwave sintering process were performed to gain a thorough understanding of the process.

  13. Enhancement of MS Signal Processing For Improved Cancer Biomarker Discovery

    NASA Astrophysics Data System (ADS)

    Si, Qian

    Technological advances in proteomics have shown great potential for detecting cancer at the earliest stages. One approach is to use time-of-flight mass spectrometry to identify biomarkers, or early disease indicators, related to the cancer. Pattern analysis of time-of-flight mass spectra from blood and tissue samples gives great hope for the identification of potential biomarkers among the complex mixture of biological and chemical samples for early cancer detection. One of the key issues is the pre-processing of raw mass spectra. Several challenges need to be addressed: the unknown noise character associated with the large volume of data, high variability in the mass spectrometry measurements, and a poorly understood signal background. This dissertation focuses on developing statistical algorithms and creating data mining tools for computationally improved signal processing of mass spectrometry data. I introduce an advanced, accurate estimate of the noise model and a semi-supervised method of mass spectrum data processing that requires little prior knowledge about the data.

  14. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require a comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified, but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 Cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for the human plasma, and it entailed a single method for sample preparation, enabling quick processing of the samples followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single and simple method for the sample preparation followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful method for both clinical and research purposes.

  15. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    NASA Astrophysics Data System (ADS)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells or self-assembling soft matter, span many orders of magnitude in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale, where particles are far apart, with microscopic Molecular (or Brownian) Dynamics for simulating the system at the microscopic scale, where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model (MSM) avoids explicit simulation of the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility of large-scale simulations of, e.g., protein signaling networks.

  16. Parental Time Pressures and Depression among Married Dual-Earner Parents

    ERIC Educational Resources Information Center

    Roxburgh, Susan

    2012-01-01

    This article examines whether there is an association between depression and parental time pressure among employed parents. Using a sample of 248 full-time employed parents and using the stress process framework, I also examine the extent to which gender, socioeconomic status, social support, and job conditions account for variation in the…

  17. Diffraction and microscopy with attosecond electron pulse trains

    NASA Astrophysics Data System (ADS)

    Morimoto, Yuya; Baum, Peter

    2018-03-01

    Attosecond spectroscopy [1-7] can resolve electronic processes directly in time, but a movie-like space-time recording is impeded by the too-long wavelength (roughly 100 times larger than atomic distances) or the source-sample entanglement in re-collision techniques [8-11]. Here we advance attosecond metrology to picometre wavelength and sub-atomic resolution by using free-space electrons instead of higher-harmonic photons [1-7] or re-colliding wavepackets [8-11]. A beam of 70-keV electrons at 4.5-pm de Broglie wavelength is modulated by the electric field of laser cycles into a sequence of electron pulses with sub-optical-cycle duration. Time-resolved diffraction from crystalline silicon reveals a <10-as delay of Bragg emission and demonstrates the possibility of analytic attosecond-ångström diffraction. Real-space electron microscopy visualizes with sub-light-cycle resolution how an optical wave propagates in space and time. This unification of attosecond science with electron microscopy and diffraction enables space-time imaging of light-driven processes in the entire range of sample morphologies that electron microscopy can access.

  18. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    PubMed

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to sampled data of microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we applied CS to experimental MT filament length time series modeled as Dichotomous Markov Noise (DMN). The results show that using CS together with the wavelet transform significantly reduces the recovery errors compared with reconstruction in the absence of the wavelet transform, especially at low and medium sampling rates. For sampling rates between 0.2 and 0.5, the Root-Mean-Squared Error (RMSE) decreases by a factor of approximately 3, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs in order to compute the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressed sensing along with the peak detection technique and the wavelet transform reduces the recovery errors for these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
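
    The abstract models the filament length as Dichotomous Markov Noise, i.e., a velocity that switches at random between a growth state and a shrinkage state. As a point of reference, here is a minimal sketch of such a trace; the rates and velocities are hypothetical placeholders for illustration, not values from the paper.

        import numpy as np

        def simulate_mt_length(t_max=600.0, dt=0.1, v_grow=1.0, v_shrink=-2.0,
                               f_cat=0.05, f_res=0.08, seed=0):
            """Microtubule length as Dichotomous Markov Noise: the velocity
            switches between growth and shrinkage at exponentially distributed
            times (catastrophe rate f_cat, rescue rate f_res)."""
            rng = np.random.default_rng(seed)
            n = int(t_max / dt)
            length = np.empty(n)
            L, growing = 0.0, True
            for i in range(n):
                # switch state with the appropriate transition probability
                rate = f_cat if growing else f_res
                if rng.random() < rate * dt:
                    growing = not growing
                L += (v_grow if growing else v_shrink) * dt
                L = max(L, 0.0)          # length cannot be negative
                length[i] = L
            return np.arange(n) * dt, length

        t, L = simulate_mt_length()
        print(f"mean length {L.mean():.2f}, final length {L[-1]:.2f}")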

  19. Nondestructive nanostraw intracellular sampling for longitudinal cell monitoring

    PubMed Central

    Cao, Yuhong; Chen, Haodong; Birey, Fikri; Leal-Ortiz, Sergio A.; Han, Crystal M.; Santiago, Juan G.; Paşca, Sergiu P.; Wu, Joseph C.; Melosh, Nicholas A.

    2017-01-01

    Here, we report a method for time-resolved, longitudinal extraction and quantitative measurement of intracellular proteins and mRNA from a variety of cell types. Cytosolic contents were repeatedly sampled from the same cell or population of cells for more than 5 d through a cell-culture substrate, incorporating hollow 150-nm-diameter nanostraws (NS) within a defined sampling region. Once extracted, the cellular contents were analyzed with conventional methods, including fluorescence, enzymatic assays (ELISA), and quantitative real-time PCR. This process was nondestructive with >95% cell viability after sampling, enabling long-term analysis. It is important to note that the measured quantities from the cell extract were found to constitute a statistically significant representation of the actual contents within the cells. Of 48 mRNA sequences analyzed from a population of cardiomyocytes derived from human induced pluripotent stem cells (hiPSC-CMs), 41 were accurately quantified. The NS platform samples from a select subpopulation of cells within a larger culture, allowing native cell-to-cell contact and communication even during vigorous activity such as cardiomyocyte beating. This platform was applied both to cell lines and to primary cells, including CHO cells, hiPSC-CMs, and human astrocytes derived in 3D cortical spheroids. By tracking the same cell or group of cells over time, this method offers an avenue to understand dynamic cell behavior, including processes such as induced pluripotency and differentiation. PMID:28223521

  20. The effect of orientation difference in fused deposition modeling of ABS polymer on the processing time, dimension accuracy, and strength

    NASA Astrophysics Data System (ADS)

    Tanoto, Yopi Y.; Anggono, Juliana; Siahaan, Ian H.; Budiman, Wesley

    2017-01-01

    There are several parameters that must be set before manufacturing a product using 3D printing. These parameters include the deposition orientation of the product, the type of material, fill form, fill density, and others. The finished product of 3D printing has several responses that can be observed, measured, and tested. Some of those responses are the processing time, the dimensions of the end product, its surface roughness, and its mechanical properties, i.e., yield strength, ultimate tensile strength, and impact resistance. This research was conducted to study the relationship between the process parameters of a 3D printing machine using fused deposition modeling (FDM) technology and the resulting responses. The material used was ABS plastic, which is commonly used in industry. Understanding the relationship between the parameters and the responses allows the resulting product to be manufactured to meet user needs. Three different orientations for depositing the ABS polymer, named XY (first orientation), YX (second orientation), and ZX (third orientation), were studied. Processing time, dimensional accuracy, and product strength were the responses measured and tested. The study found that printing in the third orientation was fastest, with a processing time of 2432 seconds, followed by the first and second orientations at 2688 and 2780 seconds, respectively. Dimensional accuracy was measured on the width and length of the gauge area of printed tensile test specimens in comparison with the dimensions required by ASTM 638-02. The smallest deviation in thickness was 0.1 mm (the sample printed in the second orientation was thicker than the standard requires); the smallest deviation in width, 0.13 mm, was measured on a sample printed in the first orientation; and for the length dimension, the product printed in the third orientation came closest to the standard, at 0.2 mm. Tensile tests on specimens produced with all three orientations show that the highest tensile strength, 7.66 MPa, was obtained in the sample deposited in the second orientation, followed by the first and third orientation products at 6.8 MPa and 3.31 MPa, respectively.

  1. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    PubMed Central

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-01-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
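
    The quoted noise reduction (848.3 μV down to 270.4 μV, a factor of about 3.1) is consistent with averaging on the order of ten conversions, since noise falls as one over the square root of the number of samplings. A toy sketch of that averaging effect, with a hypothetical pixel voltage:

        import numpy as np

        rng = np.random.default_rng(1)
        true_pixel = 0.350                      # volts, hypothetical pixel output
        sigma = 848.3e-6                        # single-sample noise from the paper

        for n_samples in (1, 4, 8, 16):
            # average n_samples independent conversions of the same pixel value
            reads = true_pixel + sigma * rng.normal(size=(100000, n_samples))
            noise = reads.mean(axis=1).std()
            print(f"N={n_samples:2d}: noise {noise*1e6:7.1f} uV "
                  f"(expected {sigma/np.sqrt(n_samples)*1e6:7.1f} uV)")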

  2. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood makes full use of the measured data and can provide the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods and a Poisson process with time-dependent parameters to simulate a reservoir's time-varying design flood. Considering the relationship between the model parameters and the model's hypotheses, this paper takes the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters as the basis for selecting the unit period and threshold of the time-varying design flood, and derives the Longyangxia reservoir's time-varying design flood process at nine design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions, and it can be used to adjust the operating water level in the flood season and to make plans for utilizing flood resources in the basin.
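
    As a rough illustration of the flood-sampling step, the sketch below extracts declustered peaks over a threshold from a synthetic inflow series; the threshold, declustering gap, and gamma inflow model are assumptions for the demo, not values from the study.

        import numpy as np

        def peaks_over_threshold(q, threshold, min_gap=5):
            """Extract independent flood peaks exceeding `threshold` from a
            discharge series `q`; exceedances closer than `min_gap` steps are
            merged so a clustered event counts once (declustering)."""
            idx = np.where(q > threshold)[0]
            peaks = []
            if idx.size:
                start = idx[0]
                for i, j in zip(idx[:-1], idx[1:]):
                    if j - i > min_gap:                 # gap closes the cluster
                        seg = q[start:i + 1]
                        peaks.append(start + int(np.argmax(seg)))
                        start = j
                seg = q[start:idx[-1] + 1]
                peaks.append(start + int(np.argmax(seg)))
            return np.asarray(peaks)

        rng = np.random.default_rng(0)
        q = rng.gamma(2.0, 500.0, size=365)             # synthetic daily inflow, m3/s
        pk = peaks_over_threshold(q, threshold=2500.0)
        print(f"{pk.size} exceedances; mean rate {pk.size/365:.4f} events/day")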

  3. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
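
    The convolution relationship described above can be reproduced in a few lines. The sketch below assumes an exponential travel time distribution with a 60-day mean residence time and recovers it by a time-domain grid search, mirroring (in much simplified form) the synthetic-data experiment the authors describe; all numbers are invented.

        import numpy as np

        def convolve_ttd(c_in, ttd, dt):
            """Stream tracer output = convolution of the precipitation tracer
            input with the travel time distribution (discretized, causal)."""
            return np.convolve(c_in, ttd)[:len(c_in)] * dt

        dt = 1.0                                   # days
        t = np.arange(0, 2000) * dt
        tau = 60.0                                 # assumed mean residence time
        ttd = np.exp(-t / tau) / tau               # exponential TTD

        rng = np.random.default_rng(2)
        c_in = rng.normal(0.0, 1.0, size=t.size)   # synthetic tracer anomaly in rain
        c_out = convolve_ttd(c_in, ttd, dt)

        # damping: output variance is much smaller than input variance
        print(f"var(in)={c_in.var():.2f}, var(out)={c_out.var():.3f}")

        # time-domain estimate of tau by grid search over candidate TTDs
        taus = np.arange(20, 121, 5.0)
        rmse = [np.sqrt(np.mean((convolve_ttd(c_in, np.exp(-t/x)/x, dt) - c_out)**2))
                for x in taus]
        print(f"best-fit tau = {taus[int(np.argmin(rmse))]:.0f} d (true 60 d)")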

  4. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    PubMed

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
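
    Of the samplers discussed, the rejection approach is the simplest to sketch: simulate the chain forward from the known start state and keep only paths that land in the required end state at time T. The sketch below is the plain (unmodified) variant and uses a toy generator matrix, not an example from the paper.

        import numpy as np

        def reject_sample_path(Q, a, b, T, rng, max_tries=100000):
            """Draw a CTMC path on states {0..n-1} with generator Q, conditioned
            on X(0)=a and X(T)=b, by forward simulation with rejection."""
            n = Q.shape[0]
            for _ in range(max_tries):
                t, x, path = 0.0, a, [(0.0, a)]
                while True:
                    rate = -Q[x, x]                     # total exit rate of state x
                    t += rng.exponential(1.0 / rate)
                    if t >= T:
                        break
                    probs = Q[x].copy(); probs[x] = 0.0; probs /= probs.sum()
                    x = rng.choice(n, p=probs)          # jump to the next state
                    path.append((t, x))
                if x == b:                              # accept only matching endpoints
                    return path
            raise RuntimeError("no accepted path; endpoint too unlikely")

        # toy 3-state chain (e.g., a simple nucleotide-like model)
        Q = np.array([[-0.3, 0.2, 0.1],
                      [ 0.1, -0.2, 0.1],
                      [ 0.2, 0.2, -0.4]])
        path = reject_sample_path(Q, a=0, b=2, T=5.0, rng=np.random.default_rng(3))
        print(path)

    As the paper shows, this sampler degrades badly when the endpoint is unlikely, which is exactly the regime where uniformization or direct sampling prevails.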

  5. A Feedforward Adaptive Controller to Reduce the Imaging Time of Large-Sized Biological Samples with a SPM-Based Multiprobe Station

    PubMed Central

    Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel

    2012-01-01

    The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls are based on the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that scanning faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging process of large samples by up to a factor of 4. PMID:22368491

  6. MRI and unilateral NMR study of reindeer skin tanning processes.

    PubMed

    Zhu, Lizheng; Del Federico, Eleonora; Ilott, Andrew J; Klokkernes, Torunn; Kehlet, Cindie; Jerschow, Alexej

    2015-04-07

    The study of arctic or subarctic indigenous skin clothing material, known for its design and ability to keep the body warm, provides information about the tanning materials and techniques. The study also provides clues about the culture that created it, since tanning processes are often specific to certain indigenous groups. Untreated skin samples and samples treated with willow (Salix sp) bark extract and cod liver oil are compared in this study using both MRI and unilateral NMR techniques. The two types of samples show different proton spatial distributions and different relaxation times, which may also provide information about the tanning technique and aging behavior.

  7. User's manual SIG: a general-purpose signal processing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lager, D.; Azevedo, S.

    1983-10-25

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Many of the basic operations one would perform on digitized data are contained in the core SIG package. Out of these core commands, more powerful signal processing algorithms may be built. Many different operations on time- and frequency-domain signals can be performed by SIG. They include operations on the samples of a signal, such as adding a scalar to each sample, operations on the entire signal such as digital filtering, and operations on two or more signals such as adding two signals. Signals may be simulated, such as a pulse train or a random waveform. Graphics operations display signals and spectra.

  8. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
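
    The sign-ambiguity argument can be checked numerically. In the sketch below, a fixed single-phase detector (all phases 0) cannot distinguish +f from -f by a least-squares fit, while randomly mixing 0° and 90° detector phases resolves the sign; the frequency, amplitudes, and noise level are arbitrary demo values, not an NMR acquisition model from the paper.

        import numpy as np

        rng = np.random.default_rng(9)
        n, f0 = 64, 7.0                        # true frequency +f0, arbitrary units
        t = np.arange(n) / n

        def fit_residual(y, theta, f):
            # fit y_k = a*cos(2*pi*f*t_k - theta_k) + b*sin(2*pi*f*t_k - theta_k)
            # for a candidate frequency f and the known detector phases theta_k
            X = np.column_stack([np.cos(2 * np.pi * f * t - theta),
                                 np.sin(2 * np.pi * f * t - theta)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.linalg.norm(y - X @ coef)

        for label, theta in [("fixed phase ", np.zeros(n)),
                             ("random phase", rng.choice([0.0, np.pi / 2], size=n))]:
            y = (1.0 * np.cos(2 * np.pi * f0 * t - theta)
                 + 0.3 * np.sin(2 * np.pi * f0 * t - theta)
                 + 0.05 * rng.normal(size=n))
            print(f"{label}: residual(+f)={fit_residual(y, theta, +f0):.3f} "
                  f"residual(-f)={fit_residual(y, theta, -f0):.3f}")

    With fixed phase the two residuals are identical (the ±f models span the same subspace); with random phases the +f fit is clearly better, which is the sign discrimination the paper exploits.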

  9. Effects of freezing, freeze drying and convective drying on in vitro gastric digestion of apples.

    PubMed

    Dalmau, Maria Esperanza; Bornhorst, Gail M; Eim, Valeria; Rosselló, Carmen; Simal, Susana

    2017-01-15

    The influence of processing (freezing at -196 °C in liquid N2, FN sample; freeze-drying at -50 °C and 30 Pa, FD sample; and convective drying at 60 °C and 2 m/s, CD sample) on apple (var. Granny Smith) behavior during in vitro gastric digestion was investigated. Dried apples (FD and CD samples) were rehydrated prior to digestion. Changes in carbohydrate composition, moisture, soluble solids, acidity, total polyphenol content (TPC), and antioxidant activity (AA) of apple samples were measured at different times during digestion. Processing resulted in disruption of the cellular structure during digestion, as observed by scanning electron microscopy, light microscopy, and changes in carbohydrate composition. Moisture content increased (6-11% dmo), while soluble solids (55-78% dmo), acidity (44-72% dmo), total polyphenol content (30-61% dmo), and antioxidant activity (41-87%) decreased in all samples after digestion. Mathematical models (Weibull and exponential models) were used to better evaluate the influence of processing on apple behavior during gastric digestion. Copyright © 2016 Elsevier Ltd. All rights reserved.
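
    The abstract names Weibull and exponential models but does not give the parameterization. The sketch below fits one common Weibull form, y(t) = y_inf + (y0 - y_inf)·exp(-(t/alpha)^beta), to invented soluble-solids readings; both the form and the data are assumptions for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull(t, y_inf, y0, alpha, beta):
            """Weibull-type kinetic model: y decays from y0 toward the plateau
            y_inf with time scale alpha and shape parameter beta."""
            return y_inf + (y0 - y_inf) * np.exp(-(t / alpha) ** beta)

        # hypothetical soluble-solids readings (% dmo) over gastric digestion time
        t = np.array([0, 10, 20, 40, 60, 90, 120, 180], float)     # minutes
        y = np.array([100, 82, 70, 55, 46, 39, 35, 31], float)

        popt, _ = curve_fit(weibull, t, y, p0=(30.0, 100.0, 40.0, 1.0))
        print("y_inf=%.1f y0=%.1f alpha=%.1f beta=%.2f" % tuple(popt))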

  10. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display.

    PubMed

    Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho

    2018-06-01

    Here, we present dual-dimensional microscopy, which captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image, and visualizes it with a computational light-field display system, all in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at that plane. The whole process from capture to display runs in real time with a parallel computation algorithm, which enables observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  11. The relationship between processing style, trauma memory processes, and the development of posttraumatic stress symptoms in children and adolescents.

    PubMed

    McKinnon, Anna; Brewer, Neil; Cameron, Kate; Nixon, Reginald D V

    2017-12-01

    Data-driven processing, peri-event fear, and trauma memory characteristics are hypothesised to play a core role in the development of Posttraumatic Stress Disorder. We assessed the relationships between these characteristics and Posttraumatic Stress (PTS) symptoms in a sample of youth. Study 1 (N = 36, 7-16 years) involved a sample of children who had undergone a stressful orthopaedic procedure. One week later they answered a series of probed recall questions about the trauma (assessed for accuracy by comparison to a video) and reported on their PTS symptoms. They also rated confidence in their probed recall answers to assess meta-cognitive monitoring of their memory for the trauma. In Study 2, a sample of injured children (N = 57, 7-16 years) was assessed within 1 month of a visit to an Emergency Department, and then at 3-month follow-up. They answered probed recall questions, made confidence ratings, and completed measures of data-driven processing, peri-event fear, PTS and associated psychopathology. Memories were verified using witness accounts. Studies 1 and 2 did not find an association between PTS symptoms and trauma memory accuracy or confidence. In Studies 1 and 2, data-driven processing predicted PTS symptoms. The studies had modest sample sizes and there were ceiling effects for some accuracy and confidence items. Data-driven processing at the time of a trauma was associated with PTS symptoms after accounting for fear at the time of the trauma. Accuracy of recall for trauma memories was not significantly related to PTS symptoms. No decisive conclusion could be drawn regarding the relation between confidence in trauma memories and PTS symptoms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Impact of hydrothermal alteration on time-dependent tunnel deformation in Neogene volcanic rock sequence in Japan: Petrology, Geochemistry and Geophysical investigation

    NASA Astrophysics Data System (ADS)

    Yamazaki, S.; Okazaki, K.; Niwa, H.; Arai, T.; Murayama, H.; Kurahashi, T.; Ito, Y.

    2017-12-01

    Time-dependent tunnel deformation is one of the remaining geological problems for mountain tunneling. As a case study of time-dependent tunnel deformation, we investigated the petrographic, mineral, and chemical compositions of boring core samples, together with a seismic exploration, along a tunnel constructed in a Neogene volcanic rock sequence of andesite to dacite pyroclastic rocks and massive lavas with mafic enclaves. The tunnel has two zones of floor heaving that deformed time-dependently about 2 months after the tunnel excavation. The core samples around the deformed zones are characterized by secondary mineral assemblages of smectite, cristobalite, tridymite, sulfides (pyrite and marcasite), and partially or completely reacted carbonates (calcite and siderite), which were formed by hydrothermal alteration under neutral to acidic conditions below about 100 °C. The core samples also showed localized deterioration, such as crack formation and expansion, which occurred from a few days to months after the drilling. The deterioration can be explained as the result of a cyclic physical and chemical weathering process involving the oxidation of sulfide minerals, dissolution of carbonate mineral cementation, and volumetric expansion of smectite. This weathering process is considered a key factor for time-dependent tunnel deformation in hydrothermally altered volcanic rocks. The zones of time-dependent deformation along a tunnel route can be predicted from the variations of whole-rock chemical compositions, such as Na, Ca, Sr, Ba and S.

  13. A novel time-domain signal processing algorithm for real time ventricular fibrillation detection

    NASA Astrophysics Data System (ADS)

    Monte, G. E.; Scarone, N. C.; Liscovsky, P. O.; Rotter S/N, P.

    2011-12-01

    This paper presents an application of a novel algorithm for real-time detection of ECG pathologies, especially ventricular fibrillation. It is based on a segmentation and labeling process applied to an oversampled signal. After this treatment, global signal behaviours are obtained by analyzing the sequence of segments, in much the way a human observer would. The entire process can be seen as a morphological filtering after a smart data sampling. The algorithm does not require any pre-processing of the digital ECG signal, and its computational cost is low, so it can be embedded into sensors for wearable and permanent applications. The proposed algorithm could also provide the input signal description for expert systems or artificial intelligence software in order to detect other pathologies.

  14. Fuji apple storage time rapid determination method using Vis/NIR spectroscopy.

    PubMed

    Liu, Fuqi; Tang, Xuxiang

    2015-01-01

    Fuji apple storage time rapid determination method using visible/near-infrared (Vis/NIR) spectroscopy was studied in this paper. Vis/NIR diffuse reflection spectroscopy responses of samples were measured for 6 days. Spectroscopy data were processed by stochastic resonance (SR). Principal component analysis (PCA) was utilized to analyze the original spectroscopy data and the SNR eigenvalue. Results demonstrated that PCA could not fully discriminate Fuji apples using the original spectroscopy data. The signal-to-noise ratio (SNR) spectrum clearly classified all apple samples, and PCA using the SNR spectrum successfully discriminated the apple samples. Therefore, Vis/NIR spectroscopy was effective for rapid discrimination of Fuji apple storage time. The proposed method is also promising for condition safety control and management in food and environmental laboratories.

  15. Fuji apple storage time rapid determination method using Vis/NIR spectroscopy

    PubMed Central

    Liu, Fuqi; Tang, Xuxiang

    2015-01-01

    Fuji apple storage time rapid determination method using visible/near-infrared (Vis/NIR) spectroscopy was studied in this paper. Vis/NIR diffuse reflection spectroscopy responses of samples were measured for 6 days. Spectroscopy data were processed by stochastic resonance (SR). Principal component analysis (PCA) was utilized to analyze the original spectroscopy data and the SNR eigenvalue. Results demonstrated that PCA could not fully discriminate Fuji apples using the original spectroscopy data. The signal-to-noise ratio (SNR) spectrum clearly classified all apple samples, and PCA using the SNR spectrum successfully discriminated the apple samples. Therefore, Vis/NIR spectroscopy was effective for rapid discrimination of Fuji apple storage time. The proposed method is also promising for condition safety control and management in food and environmental laboratories. PMID:25874818
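
    Setting the stochastic-resonance step aside, the PCA step is easy to reproduce. The sketch below runs an SVD-based PCA on synthetic spectra whose baseline drifts slightly with storage day; all data are invented for the demo and do not reproduce the paper's measurements.

        import numpy as np

        def pca_scores(X, n_components=2):
            """PCA via SVD on mean-centered spectra: rows are samples,
            columns are wavelengths; returns the sample scores."""
            Xc = X - X.mean(axis=0)
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            return U[:, :n_components] * S[:n_components]

        rng = np.random.default_rng(4)
        days = np.repeat(np.arange(6), 10)            # 6 storage days, 10 apples each
        wl = np.linspace(400, 1000, 256)              # nm, Vis/NIR range
        # synthetic spectra: one absorption band whose strength drifts with day
        X = (np.exp(-((wl - 680) / 60.0) ** 2)[None, :] * (1 + 0.02 * days[:, None])
             + 0.05 * rng.normal(size=(60, wl.size)))
        scores = pca_scores(X)
        for d in range(6):
            m = scores[days == d].mean(axis=0)
            print(f"day {d}: PC1={m[0]:+.3f} PC2={m[1]:+.3f}")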

  16. Concept for facilitating analyst-mediated interpretation of qualitative chromatographic-mass spectral data: an alternative to manual examination of extracted ion chromatograms.

    PubMed

    Borges, Chad R

    2007-07-01

    A chemometrics-based data analysis concept has been developed as a substitute for manual inspection of extracted ion chromatograms (XICs); it facilitates rapid, analyst-mediated interpretation of GC- and LC/MS(n) data sets from samples undergoing qualitative batchwise screening for prespecified sets of analytes. Automatic preparation of data into two-dimensional row space-derived scatter plots (row space plots) eliminates the need to manually interpret hundreds to thousands of XICs per batch of samples while keeping all interpretation of raw data directly in the hands of the analyst, saving great quantities of human time without loss of integrity in the data analysis process. For a given analyte, two analyte-specific variables are automatically collected by a computer algorithm and placed into a data matrix (i.e., placed into row space): the first variable is the ion abundance corresponding to scan number x and analyte-specific m/z value y, and the second variable is the ion abundance corresponding to scan number x and analyte-specific m/z value z (a second ion). These two variables serve as the two axes of the aforementioned row space plots. In order to collect appropriate scan number (retention time) information, it is necessary to analyze, as part of every batch, a sample containing a mixture of all analytes to be tested. When pure standard materials of tested analytes are unavailable, but representative ion m/z values are known and retention time can be approximated, data are evaluated based on two-dimensional scores plots from principal component analysis of small time range(s) of mass spectral data. The time-saving efficiency of this concept is directly proportional to the percentage of negative samples and to the total number of samples processed simultaneously.

  17. [Optimization of blood gas analysis in intensive care units : Reduction of preanalytical errors and improvement of workflow].

    PubMed

    Kieninger, M; Zech, N; Mulzer, Y; Bele, S; Seemann, M; Künzig, H; Schneiker, A; Gruber, M

    2015-05-01

    Point-of-care testing with blood gas analysis (BGA) is an important factor in intensive care medicine. Continuous efforts to optimize workflow, improve safety for the staff and avoid preanalytical mistakes are important and should reflect quality management standards. In a prospective observational study it was investigated whether the implementation of a new system for BGA using labeled syringes and automated processing of the specimens leads to improvements compared to the previously used procedure. In a 4-week test period the time until receiving the final results of the BGA with the standard method used in the clinical routine (control group) was compared to the results in a second 4-week test period using the new labeled syringes and automated processing of the specimens (intervention group). In addition, preanalytical mistakes with both systems were checked during routine daily use. Finally, it was investigated whether a delay of 10 min between taking and analyzing the blood samples alters the results of the BGA. Preanalytical errors were frequently observed in the control group, where non-deaerated samples were recorded in 87.3 % of cases, whereas in the intervention group almost all samples (98.9 %) were correctly deaerated. Insufficient homogenization due to omission of manual pivoting was seen in 83.2 % of cases in the control group and in 89.9 % in the intervention group; however, in the intervention group the samples were homogenized automatically during the further analytical process. Although a survey among the staff revealed a high acceptance of the new system and a subjective improvement of workflow, a measurable gain in time after conversion to the new procedure could not be seen. The mean time needed for a complete analysis process until receiving the final results was 244 s in the intervention group and 201 s in the control group. A 10-min delay between taking and analyzing the blood samples led to a significant and clinically relevant elevation of the values for the partial pressure of oxygen (pO2) in both groups compared to the results when analyzing the samples immediately (118.4 vs. 148.6 mmHg in the control group and 115.3 vs. 123.7 mmHg in the intervention group). When using standard syringes the partial pressure of carbon dioxide (pCO2) was significantly lower (40.5 vs. 38.3 mmHg), whereas no alterations were seen when using the labeled syringes. The implementation of a new BGA system with labeled syringes and automated processing of the specimens was possible without any difficulties under daily clinical routine conditions in this 10-bed intensive care unit (ICU). A gain of time could not be measured, but a reduction in preanalytical errors using the labeled syringes with automated processing was found. Delayed analysis of blood samples can lead to significant changes in pO2 and pCO2 depending on the type of syringe used.

  18. Multi-locus analysis of genomic time series data from experimental evolution.

    PubMed

    Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S

    2015-04-01

    Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
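
    A minimal forward model of the data such a method consumes: a Wright-Fisher trajectory with selection, observed serially through finite sequencing depth, as in an E&R experiment. All parameter values below are illustrative, and this is the generative model only, not the authors' Gaussian process inference.

        import numpy as np

        def wright_fisher_series(N=1000, s=0.05, p0=0.05, n_gen=60,
                                 sample_every=10, depth=100, seed=5):
            """Discrete-time Wright-Fisher trajectory for one selected allele,
            sampled serially: every `sample_every` generations we observe
            binomial sequencing counts at `depth`x coverage."""
            rng = np.random.default_rng(seed)
            p, obs = p0, []
            for g in range(n_gen + 1):
                if g % sample_every == 0:
                    obs.append((g, rng.binomial(depth, p) / depth))
                # deterministic selection followed by binomial drift
                p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
                p = rng.binomial(N, p_sel) / N
            return obs

        for g, freq in wright_fisher_series():
            print(f"gen {g:3d}: observed frequency {freq:.2f}")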

  19. Automated collection and processing of environmental samples

    DOEpatents

    Troyer, Gary L.; McNeece, Susan G.; Brayton, Darryl D.; Panesar, Amardip K.

    1997-01-01

    For monitoring an environmental parameter such as the level of nuclear radiation, at distributed sites, bar coded sample collectors are deployed and their codes are read using a portable data entry unit that also records the time of deployment. The time and collector identity are cross referenced in memory in the portable unit. Similarly, when later recovering the collector for testing, the code is again read and the time of collection is stored as indexed to the sample collector, or to a further bar code, for example as provided on a container for the sample. The identity of the operator can also be encoded and stored. After deploying and/or recovering the sample collectors, the data is transmitted to a base processor. The samples are tested, preferably using a test unit coupled to the base processor, and again the time is recorded. The base processor computes the level of radiation at the site during exposure of the sample collector, using the detected radiation level of the sample, the delay between recovery and testing, the duration of exposure and the half life of the isotopes collected. In one embodiment, an identity code and a site code are optically read by an image grabber coupled to the portable data entry unit.
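
    The decay correction described in the patent abstract can be written compactly. The model below, with a collector accumulating activity at a rate proportional to the field during deployment and the collected isotope decaying exponentially throughout, is one reading of the passage, not the patent's exact formula; the constant k and all example numbers are hypothetical.

        import math

        def field_level(a_measured, half_life_h, exposure_h, delay_h, k=1.0):
            """Back-calculate the average radiation field R during exposure from
            the activity measured after a delay, assuming
                A_measured = k*R*(1 - exp(-lam*T_exp))/lam * exp(-lam*delay)
            with lam = ln(2) / half-life."""
            lam = math.log(2.0) / half_life_h
            buildup = (1.0 - math.exp(-lam * exposure_h)) / lam
            return a_measured * math.exp(lam * delay_h) / (k * buildup)

        # example: 500 counts/min measured 12 h after recovering a collector
        # deployed for 72 h, for an isotope with a 36 h half-life
        print(f"{field_level(500.0, 36.0, 72.0, 12.0):.1f} (field units per k)")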

  20. Risk of infection due to medical interventions via central venous catheters or implantable venous access port systems at the middle port of a three-way cock: luer lock cap vs. luer access split septum system (Q-Syte).

    PubMed

    Pohl, Fabian; Hartmann, Werner; Holzmann, Thomas; Gensicke, Sandra; Kölbl, Oliver; Hautmann, Matthias G

    2014-01-25

    Many cancer patients receive a central venous catheter or port system prior to therapy to assure correct drug administration. Even with appropriate hygienic maintenance, there is a risk of contaminating the middle port (C-port) of a three-way cock (TWC), a risk that increases with the number of medical interventions. Because of the complexity of the cleaning procedure, with disconnection and reconnection of the standard luer lock cap (referred to as an "intervention"), we compared luer lock caps with a "closed access system" consisting of a luer access split septum system with regard to process optimization (work simplification, process time), efficiency (costs) and hygiene (patient safety). To assess process optimization, the workflow of an intervention according to the usual practice and risks was depicted in a process diagram. For determining the actual process costs, we analyzed the use of material and time parameters per intervention and used the process parameters for programming the process into a simulation run (n = 1000) to determine the process costs as well as their differences (ACTUAL vs. NOMINAL) within the framework of a discrete event simulation. Additionally, cultures were carried out at the TWC C-ports to evaluate possible contamination. With the closed access system, the mean working time of 5.5 minutes could be reduced to 2.97 minutes. The average process costs (labour and material costs per use) were 3.92 € for luer lock caps and 2.55 € for the closed access system. The hypothesis test (2-sample t-test, CI 0.95, p-value < 0.05) confirmed the significance of the result. In 50 reviewed samples (TWCs), the contamination rate for the luer lock cap was 8% (4 out of 50 samples were positive), while the contamination rate of the 50 samples with the closed access system was 0%. Possible hygienic risks (related to material, surroundings, staff handling) could be reduced by 65.38%. In the present study, the closed access system with a divided split septum was superior to conventional luer lock caps. The advantage of the closed access system lies in the simplified handling for staff, which results in a reduced risk of patient infection due to improved clinical hygiene.

  1. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy, an action-intensive movie, Aliens, a luminance-intensive (black and white) movie, Road To Utopia, and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly-correlated Gaussian process is used to model the individual frame sizes. Scene-change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell loss penalty. This scheme is patent pending.
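
    A much-reduced sketch of such a traffic generator: a deterministic 15-frame GOP cycle stands in for the paper's 15-stage Markov chain, an AR(1) Gaussian supplies correlated frame sizes, and gamma inter-arrivals trigger scene changes that inflate the next I-frame. The frame-size statistics and all other numbers are invented, not fitted to the movies above.

        import numpy as np

        GOP = "IBBPBBPBBPBBPBB"                          # 15-frame GOP pattern
        MEAN = {"I": 120000, "P": 60000, "B": 25000}     # bits, hypothetical
        STD = {"I": 15000, "P": 10000, "B": 5000}

        def mpeg2_frame_sizes(n_frames, rho=0.6, scene_shape=2.0,
                              scene_scale=300.0, seed=6):
            rng = np.random.default_rng(seed)
            sizes, z = [], 0.0
            next_scene = rng.gamma(scene_shape, scene_scale)
            for i in range(n_frames):
                f = GOP[i % len(GOP)]
                # AR(1) noise keeps successive frame sizes correlated
                z = rho * z + np.sqrt(1 - rho ** 2) * rng.normal()
                size = MEAN[f] + STD[f] * z
                if i >= next_scene and f == "I":         # scene change hits next I-frame
                    size *= 2.0
                    next_scene = i + rng.gamma(scene_shape, scene_scale)
                sizes.append(max(size, 1000.0))
            return np.array(sizes)

        sizes = mpeg2_frame_sizes(450)                   # about 15 s at 30 frames/s
        print(f"mean rate {sizes.mean() * 30 / 1e6:.2f} Mbps")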

  2. Photonic Breast Tomography and Tumor Aggressiveness Assessment

    DTIC Science & Technology

    2010-07-01

    removal of breast tumours (Specific Aim 4). While the TROT approach [7] has been introduced in other areas, such as array processing for acoustic and... to the time-reversal matrix used in the general area of array processing for acoustic and radar time-reversal imaging [15]. The eigenvalue equation... spectrum [Eq. (1) in Ref. 8] is calculated directly for all voxels in the sample using the vector subspace method, Multiple Signal Classification (MUSIC)...

  3. Analysis of peptides using an integrated microchip HPLC-MS/MS system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Brian J.; Chirica, Gabriela S.; Reichmuth, David S.

    Hyphenated LC-MS techniques are quickly becoming the standard tool for proteomic analyses. For large homogeneous samples, bulk processing methods and capillary injection and separation techniques are suitable. However, analysis of small or heterogeneous samples requires techniques that can manipulate picoliter samples without dilution, or samples will be lost or corrupted; further, static nanospray-type flow rates are required to maximize SNR. Microchip-level integration of sample injection with separation and mass spectrometry allows small-volume analytes to be processed on chip and immediately injected without dilution for analysis. An on-chip HPLC was fabricated using in situ polymerization of both fixed and mobile polymer monoliths. Integration of the chip with a nanospray MS emitter enables identification of peptides by the use of tandem MS. The chip is capable of analyzing very small sample volumes (< 200 pl) in short times (< 3 min).

  4. Microwave Processing of Crowns from Winter Cereals for Light Microscopy.

    USDA-ARS?s Scientific Manuscript database

    Microwave processing of tissue considerably shortens the time it takes to prepare samples for light and electron microscopy. However, plant tissues from different species and different regions of the plant respond differently, making it impossible to use a single protocol for all plant tissue. The ...

  5. Materials Science Research Rack Onboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Frazier, Natalie C.; Johnson, Jimmie; Aicher, Winfried

    2011-01-01

    The Materials Science Research Rack (MSRR) allows for the study of a variety of materials including metals, ceramics, semiconductor crystals, and glasses onboard the International Space Station (ISS). MSRR was launched on STS-128 in August 2009, and is currently installed in the U.S. Destiny Laboratory Module. Since that time, MSRR has performed virtually flawlessly, logging more than 550 hours of operating time. Materials science is an integral part of the development of new materials for everyday life here on Earth. The goal of studying materials processing in space is to develop a better understanding of the chemical and physical mechanisms involved. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility containing two furnace inserts in which Sample Cartridge Assemblies (SCAs), each containing one material sample, can be processed at temperatures up to 1400 °C. Once an SCA is installed by a crew member, the experiment can be run by automatic command, or science can be conducted via telemetry commands from the ground. Initially, 12 SCAs were processed in the first furnace insert for a team of European and US investigators. The processed samples have been returned to Earth for evaluation and comparison of their properties to samples similarly processed on the ground. A preliminary examination of the samples indicates that the majority of the desired science objectives have been successfully met, leading to significant improvements in the understanding of alloy solidification processes. The second furnace insert will be installed in the facility in January 2011 for processing the remaining SCA currently on orbit. Six SCAs are planned for launch in summer 2011, and additional batches are planned for future processing. This facility is available to support additional materials science investigations through programs such as the US National Laboratory, Technology Development, NASA Research Announcements, ESA application-oriented research programs, and others. The development of the research rack was a cooperative effort between NASA's Marshall Space Flight Center and the European Space Agency (ESA).

  6. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
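
    The paper's exact speed-to-sample-rate relation is not reproduced in the abstract, so the sketch below uses a plausible first-order argument instead: the strain pulse lasts roughly (sensor length + tire contact patch) / speed, and the rate is chosen to place a desired number of samples across that pulse. The sensor and tire-patch lengths and the samples-per-pulse target are all assumptions.

        def min_sampling_rate(speed_kmh, sensor_len_m=0.07, tire_patch_m=0.15,
                              samples_per_pulse=20):
            """First-order required sampling rate (Hz) for one axle pulse."""
            v = speed_kmh / 3.6                       # m/s
            pulse_s = (sensor_len_m + tire_patch_m) / v
            return samples_per_pulse / pulse_s

        for kmh in (30, 60, 100, 130):
            print(f"{kmh:3d} km/h -> {min_sampling_rate(kmh):7.0f} Hz")

    The point of the exercise matches the abstract's argument: the required rate grows linearly with speed, so a fixed very high rate wastes storage at low speeds and may still undersample at mainline speeds.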

  7. Scalable Performance Measurement and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
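
    For a flavor of the wavelet-compression idea, the sketch below applies a multi-level Haar transform to a synthetic load-balance series, keeps only the largest 10% of coefficients, and reports the reconstruction error. It is a generic illustration of thresholded wavelet compression, not Libra's actual algorithm.

        import numpy as np

        def haar_compress(x, keep=0.1):
            """Haar-transform x, zero all but the largest `keep` fraction of
            coefficients, and invert; returns the lossy reconstruction."""
            n = 1 << int(np.ceil(np.log2(len(x))))      # pad to a power of two
            c = np.zeros(n); c[:len(x)] = x
            coeffs, length = c.copy(), n
            while length > 1:                           # forward Haar pyramid
                h = length // 2
                a = (coeffs[0:length:2] + coeffs[1:length:2]) / np.sqrt(2)
                d = (coeffs[0:length:2] - coeffs[1:length:2]) / np.sqrt(2)
                coeffs[:h], coeffs[h:length] = a, d
                length = h
            k = max(1, int(keep * n))
            thresh = np.sort(np.abs(coeffs))[-k]        # k-th largest magnitude
            sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
            length = 1
            while length < n:                           # inverse transform
                a, d = sparse[:length].copy(), sparse[length:2 * length].copy()
                out = np.empty(2 * length)
                out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
                sparse[:2 * length] = out
                length *= 2
            return sparse[:len(x)]

        rng = np.random.default_rng(7)
        load = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.normal(size=1000)
        rec = haar_compress(load, keep=0.1)
        print(f"RMSE at 10x compression: {np.sqrt(np.mean((load - rec)**2)):.4f}")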

  8. An investigation of phase transformation and crystallinity in laser surface modified H13 steel

    NASA Astrophysics Data System (ADS)

    Aqida, S. N.; Brabazon, D.; Naher, S.

    2013-03-01

    This paper presents a laser surface modification process for AISI H13 tool steel using laser spot sizes of 0.09, 0.2 and 0.4 mm, with the aim of increasing hardness. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process the AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage and pulse repetition frequency (PRF). X-ray diffraction (XRD) analysis was conducted to measure the crystallinity of the laser-modified surface. X-ray diffraction patterns of the samples were recorded using a Bruker D8 XRD system with Cu Kα (λ = 1.5405 Å) radiation. The diffraction patterns were recorded in the 2θ range of 20 to 80°. The hardness was tested at 981 mN force. The laser-modified surface exhibited reduced crystallinity compared to the un-processed samples. The presence of a martensitic phase was detected in the samples processed using the 0.4 mm spot size. Despite the reduced crystallinity, high hardness was measured in the laser-modified surface: hardness increased more than 2.5 times compared to the as-received samples. These findings reveal the phase source of the hardening mechanism and the grain composition in the laser-modified surface.

  9. Detecting spatial patterns of rivermouth processes using a geostatistical framework for near-real-time analysis

    USGS Publications Warehouse

    Xu, Wenzhao; Collingsworth, Paris D.; Bailey, Barbara; Carlson Mazur, Martha L.; Schaeffer, Jeff; Minsker, Barbara

    2017-01-01

    This paper proposes a geospatial analysis framework and software to interpret water-quality sampling data from towed undulating vehicles in near-real time. The framework includes data quality assurance and quality control processes, automated kriging interpolation along undulating paths, and local hotspot and cluster analyses. These methods are implemented in an interactive Web application developed using the Shiny package in the R programming environment to support near-real time analysis along with 2- and 3-D visualizations. The approach is demonstrated using historical sampling data from an undulating vehicle deployed at three rivermouth sites in Lake Michigan during 2011. The normalized root-mean-square error (NRMSE) of the interpolation averages approximately 10% in 3-fold cross validation. The results show that the framework can be used to track river plume dynamics and provide insights on mixing, which could be related to wind and seiche events.

  10. Complexity multiscale asynchrony measure and behavior for interacting financial dynamics

    NASA Astrophysics Data System (ADS)

    Yang, Ge; Wang, Jun; Niu, Hongli

    2016-08-01

    A stochastic financial price process is proposed and investigated using a finite-range multitype contact dynamical system, in an attempt to study the nonlinear behaviors of real asset markets. The virus-spreading process in a finite-range multitype system is used to imitate the interacting behaviors of diverse investment attitudes in a financial market, and empirical research on descriptive statistics and autocorrelation behaviors of the return time series is performed for different values of the propagation rates. Then multiscale entropy analysis is adopted to study several different shuffled return series, including the original return series, the corresponding reversal series, the randomly shuffled series, the volatility shuffled series and the Zipf-type shuffled series. Furthermore, we propose and compare the multiscale cross-sample entropy and its modified algorithm, called composite multiscale cross-sample entropy. We apply them to study the asynchrony of pairs of time series under different time scales.
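
    A compact version of the coarse-graining plus cross-sample entropy computation, using the standard definitions rather than the authors' composite variant; the two synthetic series share a common component so their asynchrony is low by construction.

        import numpy as np

        def cross_sample_entropy(x, y, m=2, r_frac=0.2):
            """-log of the ratio of (m+1)-point to m-point template matches
            between the two series (Chebyshev distance, tolerance r)."""
            r = r_frac * np.std(np.concatenate([x, y]))
            N = min(len(x), len(y))
            def matches(mm):
                xa = np.array([x[i:i + mm] for i in range(N - m)])
                ya = np.array([y[i:i + mm] for i in range(N - m)])
                d = np.max(np.abs(xa[:, None, :] - ya[None, :, :]), axis=2)
                return np.sum(d <= r)
            return -np.log(matches(m + 1) / matches(m))

        def coarse_grain(x, scale):
            """Non-overlapping window averages: step one of any multiscale method."""
            n = len(x) // scale
            return x[:n * scale].reshape(n, scale).mean(axis=1)

        rng = np.random.default_rng(8)
        common = rng.normal(size=1000)
        x = common + 0.5 * rng.normal(size=1000)   # two series sharing a component
        y = common + 0.5 * rng.normal(size=1000)
        for scale in (1, 2, 5, 10):
            e = cross_sample_entropy(coarse_grain(x, scale), coarse_grain(y, scale))
            print(f"scale {scale:2d}: cross-SampEn {e:.3f}")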

  11. Order parameter aided efficient phase space exploration under extreme conditions

    NASA Astrophysics Data System (ADS)

    Samanta, Amit

    Physical processes in nature exhibit disparate time scales: for example, the time scales associated with processes like phase transitions, various manifestations of creep, and the sintering of particles are often much longer than the time the system spends in the metastable states. The transition times associated with such events are also orders of magnitude longer than the time scales associated with atomic vibrations. Thus, an atomistic simulation of such transition events is a challenging task, and efficient exploration of configuration space and identification of metastable structures in condensed-phase systems is correspondingly difficult. In this talk I will illustrate how we can define a set of coarse-grained variables, or order parameters, and use these to systematically and efficiently steer a system containing thousands or millions of atoms over different parts of the configuration space. This order-parameter-aided sampling can be used to identify metastable states and transition pathways and to understand the mechanistic details of complex transition processes. I will illustrate how this sampling scheme can be used to study phase transition pathways and phase boundaries in prototypical materials, like SiO2 and Cu, under high-pressure conditions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  12. Materials Science Research Rack Onboard the International Space Station Hardware and Operations

    NASA Technical Reports Server (NTRS)

    Lehman, John R.; Frazier, Natalie C.; Johnson, Jimmie

    2012-01-01

    The Materials Science Research Rack (MSRR) is a research facility developed under a cooperative research agreement between NASA and ESA for materials science investigations on the International Space Station (ISS). MSRR was launched on STS-128 in August 2009 and is currently installed in the U.S. Destiny Laboratory Module. Since that time, MSRR has performed virtually flawlessly, logging more than 620 hours of operating time. The MSRR accommodates advanced investigations in the microgravity environment on the ISS for basic materials science research in areas such as the solidification of metals and alloys. The purpose is to advance the scientific understanding of materials processing as affected by microgravity and to gain insight into the physical behavior of materials during processing. MSRR allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate the chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility with a modular design capable of supporting multiple types of investigations. Currently the NASA-provided Rack Support Subsystem provides services (power, thermal control, vacuum access, and command and data handling) to the ESA-developed Materials Science Laboratory (MSL), which accommodates interchangeable Furnace Inserts (FIs). Two ESA-developed FIs are presently available on the ISS: the Low Gradient Furnace (LGF) and the Solidification and Quenching Furnace (SQF). Sample-Cartridge Assemblies (SCAs), each containing one or more material samples, are installed in the FI by the crew and can be processed at temperatures up to 1400 C. Once an SCA is installed, the experiment can be run by automatic command or conducted via telemetry commands from the ground. Initially, 12 SCAs were processed in the first furnace insert for a team of European and US investigators. After these samples were processed, the Furnace Inserts were exchanged and an additional single sample was processed. The processed samples have been returned to Earth for evaluation and comparison of their properties to samples similarly processed on the ground. A preliminary examination of the samples indicates that the majority of the desired science objectives have been successfully met, leading to significant improvements in the understanding of alloy solidification processes. Six SCAs were launched on Space Shuttle Mission STS-135 in July 2011 for processing during the fall of 2011. Additional batches are planned for future processing. This facility is available to support additional materials science investigations through programs such as the US National Laboratory, Technology Development, NASA Research Announcements, and others.

  13. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and the self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inversion of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced, as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion, which is beneficial for real-time applications such as motion-based gaming and sport training.
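
    A minimal sketch of the core idea, fitting small exact GPs on local regions and answering queries from the nearest local models, is given below. The kernel, the partitioning, and the distance weighting are illustrative assumptions; the paper's full method also includes reliability terms, temporal consistency, and incremental updates, which are omitted here.

      import numpy as np

      def rbf(a, b, ls=1.0):
          # Squared-exponential kernel between two point sets.
          d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-0.5 * d2 / ls ** 2)

      class LocalGP:
          def __init__(self, X, y, noise=1e-2):
              self.X, self.center = X, X.mean(axis=0)
              K = rbf(X, X) + noise * np.eye(len(X))
              self.alpha = np.linalg.solve(K, y)  # cubic cost, but only locally
          def predict(self, Xq):
              return rbf(Xq, self.X) @ self.alpha

      def mixture_predict(models, Xq, top=3):
          # Weighted mean of the `top` local models nearest to each query.
          d = np.array([np.linalg.norm(Xq - m.center, axis=1) for m in models])
          idx = np.argsort(d, axis=0)[:top]
          w = 1.0 / (np.take_along_axis(d, idx, axis=0) + 1e-9)
          preds = np.array([m.predict(Xq) for m in models])
          return (np.take_along_axis(preds, idx, axis=0) * w).sum(axis=0) / w.sum(axis=0)

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(300, 2))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
      regions = np.digitize(X[:, 0], np.linspace(-3, 3, 6))  # crude 1-D partition
      models = [LocalGP(X[regions == r], y[regions == r]) for r in np.unique(regions)]
      print(mixture_predict(models, rng.uniform(-3, 3, size=(5, 2))))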

  14. Proteolytic Digestion and TiO2 Phosphopeptide Enrichment Microreactor for Fast MS Identification of Proteins.

    PubMed

    Deng, Jingren; Lazar, Iulia M

    2016-04-01

    The characterization of phosphorylation state(s) of a protein is best accomplished by using isolated or enriched phosphoprotein samples or their corresponding phosphopeptides. The process is typically time-consuming as, often, a combination of analytical approaches must be used. To facilitate throughput in the study of phosphoproteins, a microreactor that enables a novel strategy for performing fast proteolytic digestion and selective phosphopeptide enrichment was developed. The microreactor was fabricated using 100 μm i.d. fused-silica capillaries packed with 1-2 mm beds of C18 and/or TiO2 particles. Proteolytic digestion-only, phosphopeptide enrichment-only, and sequential proteolytic digestion/phosphopeptide enrichment microreactors were developed and tested with standard protein mixtures. The protein samples were adsorbed on the C18 particles, quickly digested with a proteolytic enzyme infused over the adsorbed proteins, and further eluted onto the TiO2 microreactor for enrichment in phosphopeptides. A number of parameters were optimized to speed up the digestion and enrichment processes, including microreactor dimensions, sample concentrations, digestion time, flow rates, buffer compositions, and pH. The effective time for the steps of proteolytic digestion and enrichment was less than 5 min. For simple samples, such as standard protein mixtures, this approach provided equivalent or better results than conventional bench-top methods, in terms of both enzymatic digestion and selectivity. Analysis times and reagent costs were reduced ~10- to 15-fold. Preliminary analysis of cell extracts and recombinant proteins indicated the feasibility of integration of these microreactors in more advanced workflows amenable for handling real-world biological samples.

  15. Proteolytic Digestion and TiO2 Phosphopeptide Enrichment Microreactor for Fast MS Identification of Proteins

    NASA Astrophysics Data System (ADS)

    Deng, Jingren; Lazar, Iulia M.

    2016-04-01

    The characterization of phosphorylation state(s) of a protein is best accomplished by using isolated or enriched phosphoprotein samples or their corresponding phosphopeptides. The process is typically time-consuming as, often, a combination of analytical approaches must be used. To facilitate throughput in the study of phosphoproteins, a microreactor that enables a novel strategy for performing fast proteolytic digestion and selective phosphopeptide enrichment was developed. The microreactor was fabricated using 100 μm i.d. fused-silica capillaries packed with 1-2 mm beds of C18 and/or TiO2 particles. Proteolytic digestion-only, phosphopeptide enrichment-only, and sequential proteolytic digestion/phosphopeptide enrichment microreactors were developed and tested with standard protein mixtures. The protein samples were adsorbed on the C18 particles, quickly digested with a proteolytic enzyme infused over the adsorbed proteins, and further eluted onto the TiO2 microreactor for enrichment in phosphopeptides. A number of parameters were optimized to speed up the digestion and enrichment processes, including microreactor dimensions, sample concentrations, digestion time, flow rates, buffer compositions, and pH. The effective time for the steps of proteolytic digestion and enrichment was less than 5 min. For simple samples, such as standard protein mixtures, this approach provided equivalent or better results than conventional bench-top methods, in terms of both enzymatic digestion and selectivity. Analysis times and reagent costs were reduced ~10- to 15-fold. Preliminary analysis of cell extracts and recombinant proteins indicated the feasibility of integration of these microreactors in more advanced workflows amenable for handling real-world biological samples.

  16. Problem gambling symptomatology and alcohol misuse among adolescents: A parallel-process latent growth curve model.

    PubMed

    Mutti-Packer, Seema; Hodgins, David C; El-Guebaly, Nady; Casey, David M; Currie, Shawn R; Williams, Robert J; Smith, Garry J; Schopflocher, Don P

    2017-06-01

    The objective of the current study was to examine the possible temporal associations between alcohol misuse and problem gambling symptomatology from adolescence through to young adulthood. Parallel-process latent growth curve modeling was used to examine the trajectories of alcohol misuse and symptoms of problem gambling over time. Data were from a sample of adolescents recruited for the Leisure, Lifestyle, and Lifecycle Project in Alberta, Canada (n = 436), which included 4 assessments over 5 years. There was an average decline in problem gambling symptoms followed by an accelerating upward trend as the sample reached the legal age to gamble. There was significant variation in the rate of change in problem gambling symptoms over time; not all respondents followed the same trajectory. There was an average increase in alcohol misuse over time, with significant variability in baseline levels of use and the rate of change over time. The unconditional parallel process model indicated that higher baseline levels of alcohol misuse were associated with higher baseline levels of problem gambling symptoms. In addition, higher baseline levels of alcohol misuse were associated with steeper declines in problem gambling symptoms over time. However, these between-process correlations did not retain significance when covariates were added to the model, indicating that one behavior was not a risk factor for the other. The lack of mutual influence between the problem gambling symptomatology and alcohol misuse processes suggests that there are common risk factors underlying these two behaviors, supporting the notion of a syndrome model of addiction.

  17. Performance of a segmented HPGe detector at KRISS.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Lee, S H; Park, Tae Soon; Oh, J S

    2018-04-01

    A 24-fold segmented HPGe coaxial detector was set up with a digitized data acquisition system (DAQ). The DAQ was composed of a digitizer (5 × 10⁷ samples/s), a Field-Programmable Gate Array (FPGA), and a real-time operating system. The Full Width at Half Maximum (FWHM), rise time, signal characteristics, and spectra of a ¹³⁷Cs source were evaluated. The data were processed using an in-house developed gamma-ray tracking system.

  18. Interleukin-6 Detection with a Plasmonic Chip

    NASA Astrophysics Data System (ADS)

    Tawa, Keiko; Sumiya, Masashi; Toma, Mana; Sasakawa, Chisato; Sujino, Takuma; Miyaki, Tatsuki; Nakazawa, Hikaru; Umetsu, Mitsuo

    Interleukin-6, a cytokine involved in inflammatory and autoimmune activity, was detected with three fluorescence assays using a plasmonic chip. In these assays, the method of surface modification, the sample volume, the incubation time and the mixing of the solution were found to influence the detection sensitivity. When the assay was revised for a rapid and easy process, the detection sensitivity was not compromised compared with assays using larger sample volumes and longer assay times. The assay conditions should therefore be chosen to suit the purpose of the immunosensing.

  19. Detection of motile micro-organisms in biological samples by means of a fully automated image processing system

    NASA Astrophysics Data System (ADS)

    Alanis, Elvio; Romero, Graciela; Alvarez, Liliana; Martinez, Carlos C.; Hoyos, Daniel; Basombrio, Miguel A.

    2001-08-01

    A fully automated image processing system for the detection of motile microorganisms in biological samples is presented. The system is specifically calibrated for determining the concentration of Trypanosoma cruzi parasites in blood samples of mice infected with Chagas disease; the method can be adapted for use with other biological samples. A thin layer of blood infected by T. cruzi parasites is examined under a common microscope, in which images of the field of view are taken by a CCD camera and temporarily stored in computer memory. In a typical field, a few motile parasites are observable surrounded by red blood cells. The parasites have low contrast and are therefore difficult to detect visually, but their great motility betrays their presence through the movement of the neighboring red cells. Several consecutive images of the same field are taken, which decorrelate with each other where parasites are present, and are digitally processed in order to measure the number of parasites in the field. Several fields are sequentially processed in the same fashion, displacing the sample by means of stepper motors driven by the computer. A direct advantage of this system is that its results are more reliable and the process is less time-consuming than the current subjective evaluations made visually by technicians.
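
    The detection principle, that motile parasites decorrelate successive frames while the background stays static, can be sketched with a temporal-variance map. The thresholds, blob sizes, and synthetic frames below are illustrative assumptions, not the calibrated system described above.

      import numpy as np
      from scipy import ndimage

      def count_motile(frames, var_thresh=4.0, min_area=5):
          # frames: (T, H, W) grayscale stack of one microscope field.
          motion = frames.astype(float).var(axis=0)     # temporal variance map
          mask = motion > var_thresh * np.median(motion)
          labels, n = ndimage.label(mask)               # connected motion blobs
          sizes = ndimage.sum(mask, labels, range(1, n + 1))
          return int((np.asarray(sizes) >= min_area).sum())

      # Synthetic field: per-frame sensor noise plus one small moving bright spot.
      rng = np.random.default_rng(0)
      frames = rng.normal(100.0, 5.0, size=(8, 64, 64))
      for t in range(8):
          frames[t, 30:33, 20 + t:23 + t] += 80.0       # 3x3 "parasite" drifting right
      print(count_motile(frames))                       # expected: 1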

  20. Green Aspects of Techniques for the Determination of Currently Used Pesticides in Environmental Samples

    PubMed Central

    Stocka, Jolanta; Tankiewicz, Maciej; Biziuk, Marek; Namieśnik, Jacek

    2011-01-01

    Pesticides are among the most dangerous environmental pollutants because of their stability, mobility and long-term effects on living organisms. Their presence in the environment is a particular danger. It is therefore crucial to monitor pesticide residues using all available analytical methods. The analysis of environmental samples for the presence of pesticides is very difficult: the processes involved in sample preparation are labor-intensive and time-consuming. To date, it has been standard practice to use large quantities of organic solvents in the sample preparation process; but as these solvents are themselves hazardous, solvent-less and solvent-minimized techniques are becoming popular. The application of Green Chemistry principles to sample preparation is primarily leading to the miniaturization of procedures and the use of solvent-less techniques, and these are discussed in the paper. PMID:22174632

  1. An efficient, reliable and inexpensive device for the rapid homogenization of multiple tissue samples by centrifugation.

    PubMed

    Ilyin, S E; Plata-Salamán, C R

    2000-02-15

    Homogenization of tissue samples is a common first step in the majority of current protocols for RNA, DNA, and protein isolation. This report describes a simple device for centrifugation-mediated homogenization of tissue samples. The method presented is applicable to RNA, DNA, and protein isolation, and we show examples where high quality total cell RNA, DNA, and protein were obtained from brain and other tissue samples. The advantages of the approach presented include: (1) a significant reduction in time investment relative to hand-driven or individual motorized-driven pestle homogenization; (2) easy construction of the device from inexpensive parts available in any laboratory; (3) high replicability in the processing; and (4) the capacity for the parallel processing of multiple tissue samples, thus allowing higher efficiency, reliability, and standardization.

  2. Performance analysis of gamma ray spectrometric parameters on digital signal and analog signal processing based MCA systems using NaI(Tl) detector.

    PubMed

    Kukreti, B M; Sharma, G K

    2012-05-01

    Accurate and speedy estimation of ppm-range uranium and thorium in geological and rock samples is most useful for ongoing uranium investigations and the identification of favorable radioactive zones in exploration field areas. In this study, with the existing 5 in. × 4 in. NaI(Tl) detector setup and the prevailing background and time constraints, an enhanced geometrical setup was worked out to improve the minimum detection limits for the primordial radioelements K(40), U(238) and Th(232). This geometrical setup was integrated with the newly introduced digital signal processing based MCA system for the routine spectrometric analysis of low-concentration rock samples. The stability of the digital signal processing MCA system and of its predecessor, the NIM-bin-based MCA system, was monitored during long counting runs using the concept of statistical process control. The monitored results, over a time span of a few months, were quantified in terms of spectrometer parameters, such as the Compton stripping constants and channel sensitivities, used for evaluating primordial radioelement concentrations (K(40), U(238) and Th(232)) in geological samples. The results indicate stable dMCA performance, with a tendency toward higher relative variance about the mean, particularly for the Compton stripping constants.

  3. Biochemical process of low level radioactive liquid simulation waste containing detergent

    NASA Astrophysics Data System (ADS)

    Kundari, Noor Anis; Putra, Sugili; Mukaromah, Umi

    2015-12-01

    Research on the biochemical processing of low-level radioactive liquid waste containing detergent has been carried out. These organic liquid wastes are generated in nuclear facilities, for example from laundries. The wastes, which are categorized as hazardous and toxic materials, are also radioactive; they must be treated properly, by detoxification of the hazardous components and decontamination of the radionuclides, to ensure that disposal of the waste meets water-quality standards. This research was intended to determine the decontamination factor and separation efficiencies and their kinetics, and to produce a supernatant that meets environmental quality standards. The radioactive element in the waste was thorium, with an activity of 5×10⁻⁵ Ci/m³. The radioactive liquid waste generated in the simulation plant contained detergents and was processed by an aerobic biochemical process using SGB 103 bacteria in a batch reactor equipped with aerators. Two samples of different concentration were processed and analyzed for 212 hours and 183 hours, respectively, at room temperature. The products of this process are a liquid phase, called supernatant, and a solid phase, called sludge. The chemical oxygen demand (COD), biological oxygen demand (BOD), suspended solids (SS), and alpha activity were analyzed. The results show that the decontamination factor and the separation efficiency of the lower-concentration sample are higher than those of the higher-concentration sample. The decontamination factor after 212 hours of processing the waste with a detergent concentration of 1.496 g/L was 3.496, whereas at a detergent concentration of 0.748 g/L it was 15.305 after 183 hours of processing. The corresponding separation efficiencies were 71.396% and 93.465%, respectively. Bacterial growth kinetics followed Monod's model, and the decrease of COD and BOD was first order with a rate constant of 0.01 hour⁻¹.
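
    A quick worked example of the quoted first-order rate law (k = 0.01 hour⁻¹) over the two reported processing times is given below. Note that it illustrates the COD/BOD decay law only; the reported decontamination factors refer to alpha activity, which this rate law does not model.

      import math

      k = 0.01  # first-order rate constant from the abstract, 1/hour
      for t in (183, 212):  # the two reported processing times, hours
          frac = math.exp(-k * t)  # C(t)/C0 under first-order kinetics
          print(f"t = {t} h: residual fraction {frac:.2f} ({(1 - frac) * 100:.0f}% removed)")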

  4. Biochemical process of low level radioactive liquid simulation waste containing detergent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kundari, Noor Anis, E-mail: nooranis@batan.go.id; Putra, Sugili; Mukaromah, Umi

    Research on the biochemical processing of low-level radioactive liquid waste containing detergent has been carried out. These organic liquid wastes are generated in nuclear facilities, for example from laundries. The wastes, which are categorized as hazardous and toxic materials, are also radioactive; they must be treated properly, by detoxification of the hazardous components and decontamination of the radionuclides, to ensure that disposal of the waste meets water-quality standards. This research was intended to determine the decontamination factor and separation efficiencies and their kinetics, and to produce a supernatant that meets environmental quality standards. The radioactive element in the waste was thorium, with an activity of 5×10⁻⁵ Ci/m³. The radioactive liquid waste generated in the simulation plant contained detergents and was processed by an aerobic biochemical process using SGB 103 bacteria in a batch reactor equipped with aerators. Two samples of different concentration were processed and analyzed for 212 hours and 183 hours, respectively, at room temperature. The products of this process are a liquid phase, called supernatant, and a solid phase, called sludge. The chemical oxygen demand (COD), biological oxygen demand (BOD), suspended solids (SS), and alpha activity were analyzed. The results show that the decontamination factor and the separation efficiency of the lower-concentration sample are higher than those of the higher-concentration sample. The decontamination factor after 212 hours of processing the waste with a detergent concentration of 1.496 g/L was 3.496, whereas at a detergent concentration of 0.748 g/L it was 15.305 after 183 hours of processing. The corresponding separation efficiencies were 71.396% and 93.465%, respectively. Bacterial growth kinetics followed Monod's model, and the decrease of COD and BOD was first order with a rate constant of 0.01 hour⁻¹.

  5. Superhydrophobic surfaces: From nature to biomimetic through VOF simulation.

    PubMed

    Liu, Chunbao; Zhu, Ling; Bu, Weiyang; Liang, Yunhong

    2018-04-01

    The contact angle, surface structure and chemical composition of Canna leaves were investigated. Based on the surface structure of Canna leaves observed by scanning electron microscopy (SEM), a CFD (computational fluid dynamics) model was established and the volume-of-fluid (VOF) method was used to simulate a droplet impacting the surface; a smooth surface was also simulated for comparison, verifying that the surface structure is an important factor in the superhydrophobic properties. Based on the study of the Canna leaf and the VOF simulation of its surface structure, superhydrophobic samples were processed successfully and showed good superhydrophobic properties, with a contact angle of 156 ± 1 degrees. A high-speed camera (5000 frames per second) was used to assess droplet movement and determine the contact time on the samples; the contact time for the sample was 13.1 ms. The results show that the artificial superhydrophobic surface performs as intended. The VOF simulation method is efficient, accurate and low-cost as a screening step before machining artificial superhydrophobic samples.

  6. A high-throughput semi-automated preparation for filtered synaptoneurosomes.

    PubMed

    Murphy, Kathryn M; Balsor, Justin; Beshara, Simon; Siu, Caitlin; Pinto, Joshua G A

    2014-09-30

    Synaptoneurosomes have become an important tool for studying synaptic proteins. The filtered synaptoneurosomes preparation originally developed by Hollingsworth et al. (1985) is widely used and is an easy method to prepare synaptoneurosomes. The hand processing steps in that preparation, however, are labor intensive and have become a bottleneck for current proteomic studies using synaptoneurosomes. For this reason, we developed new steps for tissue homogenization and filtration that transform the preparation of synaptoneurosomes to a high-throughput, semi-automated process. We implemented a standardized protocol with easy to follow steps for homogenizing multiple samples simultaneously using a FastPrep tissue homogenizer (MP Biomedicals, LLC) and then filtering all of the samples in centrifugal filter units (EMD Millipore, Corp). The new steps dramatically reduce the time to prepare synaptoneurosomes from hours to minutes, increase sample recovery, and nearly double enrichment for synaptic proteins. These steps are also compatible with biosafety requirements for working with pathogen infected brain tissue. The new high-throughput semi-automated steps to prepare synaptoneurosomes are timely technical advances for studies of low abundance synaptic proteins in valuable tissue samples.

  7. Imaging synthetic aperture radar

    DOEpatents

    Burns, Bryan L.; Cordaro, J. Thomas

    1997-01-01

    A linear-FM SAR imaging radar method and apparatus to produce a real-time image by first arranging the returned signals into a plurality of subaperture arrays, the columns of each subaperture array having samples of dechirped baseband pulses, and further including processing of each subaperture array to obtain coarse resolution in azimuth, then fine resolution in range, and lastly, combining the processed subapertures to obtain the final fine resolution in azimuth. Greater efficiency is achieved because both the transmitted signal and a local oscillator signal mixed with the returned signal can be varied on a pulse-to-pulse basis as a function of radar motion. Moreover, a novel circuit can adjust the sampling location and the A/D sample rate of the combined dechirped baseband signal, which greatly reduces processing time and hardware. The processing steps include implementing a window function, stabilizing either a central reference point and/or all other points of a subaperture with respect to doppler frequency and/or range as a function of radar motion, and sorting and compressing the signals using standard Fourier transforms. The stabilization of each processing part is accomplished with vector multiplication using waveforms generated as a function of radar motion, wherein these waveforms may be synthesized in integrated circuits. Stabilization of range migration as a function of doppler frequency by simple vector multiplication is a particularly useful feature of the invention, as is stabilization of azimuth migration by correcting for spatially varying phase errors prior to the application of an autofocus process.

  8. A target sample of adolescents and reward processing: same neural and behavioral correlates engaged in common paradigms?

    PubMed

    Nees, Frauke; Vollstädt-Klein, Sabine; Fauth-Bühler, Mira; Steiner, Sabina; Mann, Karl; Poustka, Luise; Banaschewski, Tobias; Büchel, Christian; Conrod, Patricia J; Garavan, Hugh; Heinz, Andreas; Ittermann, Bernd; Artiges, Eric; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Smolka, Michael N; Struve, Maren; Loth, Eva; Schumann, Gunter; Flor, Herta

    2012-11-01

    Adolescence is a transition period that is assumed to be characterized by increased sensitivity to reward. While there is growing research on reward processing in adolescents, investigations into the engagement of brain regions under different reward-related conditions in one sample of healthy adolescents, especially in a target age group, are missing. We aimed to identify brain regions preferentially activated in a reaction time task (monetary incentive delay (MID) task) and a simple guessing task (SGT) in a sample of 14-year-old adolescents (N = 54) using two commonly used reward paradigms. Functional magnetic resonance imaging was employed during the MID with big versus small versus no win conditions and the SGT with big versus small win and big versus small loss conditions. Analyses focused on changes in blood oxygen level-dependent contrasts during reward and punishment processing in anticipation and feedback phases. We found clear magnitude-sensitive response in reward-related brain regions such as the ventral striatum during anticipation in the MID task, but not in the SGT. This was also true for reaction times. The feedback phase showed clear reward-related, but magnitude-independent, response patterns, for example in the anterior cingulate cortex, in both tasks. Our findings highlight neural and behavioral response patterns engaged in two different reward paradigms in one sample of 14-year-old healthy adolescents and might be important for reference in future studies investigating reward and punishment processing in a target age group.

  9. Storage Time and Urine Biomarker Levels in the ASSESS-AKI Study

    PubMed Central

    Liu, Kathleen D.; Siew, Edward D.; Reeves, W. Brian; Himmelfarb, Jonathan; Go, Alan S.; Hsu, Chi-yuan; Bennett, Michael R.; Devarajan, Prasad; Ikizler, T. Alp; Kaufman, James S.; Kimmel, Paul L.; Chinchilli, Vernon M.; Parikh, Chirag R.

    2016-01-01

    Background: Although stored urine samples are often used in biomarker studies focused on acute and chronic kidney disease, how storage time impacts biomarker levels is not well understood. Methods: 866 subjects enrolled in the NIDDK-sponsored ASsessment, Serial Evaluation, and Subsequent Sequelae in Acute Kidney Injury (ASSESS-AKI) Study were included. Samples were processed under standard conditions and stored at -70°C until analyzed. Kidney injury molecule-1 (KIM-1), neutrophil gelatinase-associated lipocalin (NGAL), interleukin-18 (IL-18), and liver fatty acid binding protein (L-FABP) were measured in urine samples collected during the index hospitalization or an outpatient visit 3 months later. Mixed effects models were used to determine the effect of storage time on biomarker levels and stratified by visit. Results: Median storage was 17.8 months (25-75% IQR 10.6-23.7) for samples from the index hospitalization and 14.6 months (IQR 7.3-20.4) for outpatient samples. In the mixed effects models, the only significant association between storage time and biomarker concentration was for KIM-1 in outpatient samples, where each month of storage was associated with a 1.7% decrease (95% CI -3% to -0.3%). There was no relationship between storage time and KIM-1 levels in samples from the index hospitalization. Conclusion: There was no significant impact of storage time over a median of 18 months on urine KIM-1, NGAL, IL-18 or L-FABP in hospitalized samples; a statistically significant effect towards a decrease over time was noted for KIM-1 in outpatient samples. Additional studies are needed to determine whether longer periods of storage at -70°C systematically impact levels of these analytes. PMID:27788160

  10. Storage Time and Urine Biomarker Levels in the ASSESS-AKI Study.

    PubMed

    Liu, Kathleen D; Siew, Edward D; Reeves, W Brian; Himmelfarb, Jonathan; Go, Alan S; Hsu, Chi-Yuan; Bennett, Michael R; Devarajan, Prasad; Ikizler, T Alp; Kaufman, James S; Kimmel, Paul L; Chinchilli, Vernon M; Parikh, Chirag R

    2016-01-01

    Although stored urine samples are often used in biomarker studies focused on acute and chronic kidney disease, how storage time impacts biomarker levels is not well understood. 866 subjects enrolled in the NIDDK-sponsored ASsessment, Serial Evaluation, and Subsequent Sequelae in Acute Kidney Injury (ASSESS-AKI) Study were included. Samples were processed under standard conditions and stored at -70°C until analyzed. Kidney injury molecule-1 (KIM-1), neutrophil gelatinase-associated lipocalin (NGAL), interleukin-18 (IL-18), and liver fatty acid binding protein (L-FABP) were measured in urine samples collected during the index hospitalization or an outpatient visit 3 months later. Mixed effects models were used to determine the effect of storage time on biomarker levels and stratified by visit. Median storage was 17.8 months (25-75% IQR 10.6-23.7) for samples from the index hospitalization and 14.6 months (IQR 7.3-20.4) for outpatient samples. In the mixed effects models, the only significant association between storage time and biomarker concentration was for KIM-1 in outpatient samples, where each month of storage was associated with a 1.7% decrease (95% CI -3% to -0.3%). There was no relationship between storage time and KIM-1 levels in samples from the index hospitalization. There was no significant impact of storage time over a median of 18 months on urine KIM-1, NGAL, IL-18 or L-FABP in hospitalized samples; a statistically significant effect towards a decrease over time was noted for KIM-1 in outpatient samples. Additional studies are needed to determine whether longer periods of storage at -70°C systematically impact levels of these analytes.

  11. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in resize and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations; this process includes minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images, and is additionally implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared with a reference method, validating the suitability of the proposed approaches.
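
    For context, a simplified classical re-sampling detector is sketched below: interpolation introduces periodic linear dependencies among neighboring pixels, which appear as a peak in the spectrum of a prediction-error map. This shows the generic idea only, not the paper's five-parameter kernel model or its gradient-based fit; the test image is synthetic.

      import numpy as np

      def periodicity_score(img):
          # Peak-to-background ratio in the spectrum of a prediction-error map.
          pred = 0.5 * (img[:, :-2] + img[:, 2:])   # predict each pixel from row neighbors
          resid = np.abs(img[:, 1:-1] - pred)       # prediction-error magnitude
          resid = resid - resid.mean(axis=1, keepdims=True)
          spec = np.abs(np.fft.rfft(resid, axis=1)).mean(axis=0)
          return spec[1:].max() / np.median(spec[1:])

      rng = np.random.default_rng(0)
      img = rng.normal(size=(256, 257)).cumsum(axis=1)  # smooth-ish "natural" rows
      up = np.repeat(img, 2, axis=1)
      up = 0.5 * (up[:, :-1] + up[:, 1:])               # 2x linear re-sampling
      print(f"original score:   {periodicity_score(img):.1f}")  # near background
      print(f"re-sampled score: {periodicity_score(up):.1f}")   # strong periodic peak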

  12. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Subnet sampling is an important topic in complex network research, since the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.
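
    A generic multiple-seed snowball sampler is sketched below for context. The abstract does not spell out the Cohen step of RMSC, so only the random-seeding and snowball-expansion portions are shown; the graph model and parameters are illustrative, and networkx is assumed to be available.

      import random
      import networkx as nx

      def multiple_snowball(G, n_seeds=5, waves=2, seed=0):
          rng = random.Random(seed)
          sampled = set(rng.sample(list(G.nodes), n_seeds))  # random seeding
          frontier = set(sampled)
          for _ in range(waves):                             # snowball expansion
              frontier = {v for u in frontier for v in G[u]} - sampled
              sampled |= frontier
          return G.subgraph(sampled)

      G = nx.barabasi_albert_graph(2000, 3, seed=1)
      S = multiple_snowball(G)
      print(f"subnet: {S.number_of_nodes()} nodes, "
            f"mean degree {2 * S.number_of_edges() / S.number_of_nodes():.1f} "
            f"(original {2 * G.number_of_edges() / G.number_of_nodes():.1f})")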

  13. Daily time management in children with spina bifida.

    PubMed

    Persson, Marika; Janeslätt, Gunnel; Peny-Dahlstrand, Marie

    2017-12-11

    Spina bifida (SB) often results in a complex disability and can also cause cognitive dysfunction. No previous study has investigated the ability to adapt to time in children with SB, although this ability is crucial for an individual's possibility to develop autonomy in life. The purpose of this study was to investigate whether children aged 10-17 with SB have lower time-processing abilities than typically developing children, and to describe the profile of time processing in children with SB. Participants comprised a consecutive sample of 21 children (drawn from a geographical cohort of 45) aged 10-17 years (mean: 14 years, SD: 2 years); 13 were boys. The instruments used were KaTid-Y, Time-S, and Time-P. The children with SB had lower time-processing abilities than typically developing children (52.4% under -2 SD), with particular difficulties in orienting to and estimating objective time, understanding time perspectives, and planning time. They also self-rated low use of strategies to adapt to time, and their parents rated them as having extensive difficulties in daily time management. The low time-processing ability found in children with SB is likely to be an important contributing factor to their low autonomy and independence.

  14. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    PubMed

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from the alternative models (e.g., the birth-death model is recovered over a pure-birth model if the extinction rate is large). Finally, I applied six different diversification rate models--ranging from a constant-rate pure-birth process to a birth-death process with decreasing speciation rate, but excluding any rate-shift models--to three large-scale empirical phylogenies (ants, mammals and snakes, with 149, 164 and 41 sampled species, respectively). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors, yet only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.

  15. Study on processing parameters of glass cutting by nanosecond 532 nm fiber laser

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Gao, Fan; Xiong, Baoxing; Zhang, Xiang; Yuan, Xiao

    2018-03-01

    The processing parameters for cutting soda-lime glass with a nanosecond 532 nm pulsed fiber laser are studied in order to obtain a sufficiently large ablation rate and better processing quality. The influences of the laser processing parameters on the effective cutting speed and cutting quality of 1-2 mm thick soda-lime glass are examined. The experimental results show that larger laser pulse energy leads to a higher effective cutting speed but also to larger maximum edge collapse on the front side of the glass samples. Compared with the 1.1 mm thick glass samples, the 2.0 mm thick glass samples are more difficult to cut: at a pulse energy of 51.2 μJ, the maximum edge collapse exceeds 200 μm for the 2.0 mm thick samples. In order to achieve a high effective cutting speed and good cutting quality at the same time, a dual-energy overlapping method is used to obtain better cutting performance for the 2.0 mm thick glass samples, realizing a cutting speed of 194 mm/s with a maximum edge collapse of less than 132 μm.

  16. Femtosecond laser machining for characterization of local mechanical properties of biomaterials: a case study on wood

    PubMed Central

    Jakob, Severin; Pfeifenberger, Manuel J.; Hohenwarter, Anton; Pippan, Reinhard

    2017-01-01

    The standard preparation technique for micro-sized samples is focused ion beam milling, most frequently using Ga+ ions. The main drawbacks are the required processing time and the possibility and risks of ion implantation. In contrast, ultrashort pulsed laser ablation can process any type of material with ideally negligible damage to the surrounding volume and provides 4 to 6 orders of magnitude higher ablation rates than the ion beam technique. In this work, a femtosecond laser was used to prepare wood samples from spruce for mechanical testing at the micrometre level. After optimization of the different laser parameters, tensile and compressive specimens were produced from microtomed radial-tangential and longitudinal-tangential sections. Additionally, laser-processed samples were exposed to an electron beam prior to testing to study possible beam damage. The specimens originating from these different preparation conditions were mechanically tested. Advantages and limitations of the femtosecond laser preparation technique and the deformation and fracture behaviour of the samples are discussed. The results prove that femtosecond laser processing is a fast and precise preparation technique, which enables the fabrication of pristine biological samples with dimensions at the microscale. PMID:28970867

  17. Femtosecond laser machining for characterization of local mechanical properties of biomaterials: a case study on wood

    NASA Astrophysics Data System (ADS)

    Jakob, Severin; Pfeifenberger, Manuel J.; Hohenwarter, Anton; Pippan, Reinhard

    2017-12-01

    The standard preparation technique for micro-sized samples is focused ion beam milling, most frequently using Ga+ ions. The main drawbacks are the required processing time and the possibility and risks of ion implantation. In contrast, ultrashort pulsed laser ablation can process any type of material with ideally negligible damage to the surrounding volume and provides 4 to 6 orders of magnitude higher ablation rates than the ion beam technique. In this work, a femtosecond laser was used to prepare wood samples from spruce for mechanical testing at the micrometre level. After optimization of the different laser parameters, tensile and compressive specimens were produced from microtomed radial-tangential and longitudinal-tangential sections. Additionally, laser-processed samples were exposed to an electron beam prior to testing to study possible beam damage. The specimens originating from these different preparation conditions were mechanically tested. Advantages and limitations of the femtosecond laser preparation technique and the deformation and fracture behaviour of the samples are discussed. The results prove that femtosecond laser processing is a fast and precise preparation technique, which enables the fabrication of pristine biological samples with dimensions at the microscale.

  18. Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2007-09-01

    transmits a 32,767-bit pseudo-random “short” code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two “sample windows,” both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the... technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1). Figure
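
    The windowed comparison described in this excerpt is a standard delay-and-correlate operation: two windows of width ν spaced N samples apart are correlated at each candidate offset, and the correlation peak marks the repeated structure (e.g. a symbol boundary). A hedged sketch using the excerpt's ν = 16 and N = 64 follows; the signal model is synthetic, not the cited system.

      import numpy as np

      def window_correlation(r, v=16, N=64):
          # Normalized |P(d)| between two v-wide windows spaced N samples apart.
          out = np.empty(len(r) - N - v)
          for d in range(len(out)):
              a, b = r[d:d + v], r[d + N:d + N + v]
              out[d] = abs(np.vdot(b, a)) / (np.linalg.norm(a) * np.linalg.norm(b))
          return out

      rng = np.random.default_rng(0)
      noise = lambda n: rng.normal(size=n) + 1j * rng.normal(size=n)
      sym = noise(64)
      r = np.concatenate([noise(40), sym, sym, noise(40)])  # repeated 64-sample symbol
      m = window_correlation(r)
      print("correlation peak at offset", int(m.argmax()))  # within the repeated span (40-88)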

  19. Mössbauer characterization of joints of steel pieces in transient liquid phase bonding experiences

    NASA Astrophysics Data System (ADS)

    di Luozzo, N.; Martínez Stenger, P. F.; Canal, J. P.; Fontana, M. R.; Arcondo, B.

    2011-11-01

    Joining of seamless low-carbon steel tubes was performed by means of the Transient Liquid Phase Bonding process, employing a foil of Fe-Si-B metallic glass as the filler material. The influence of the main parameters of the process was evaluated: temperature, holding time, pressure and post-weld heat treatment. Powder samples were obtained from the joints of the tubes and characterized by Mössbauer Spectroscopy in transmission geometry. Sampling was performed both on tubes successfully welded and on those showing joint defects. The results obtained are correlated with the resulting microstructure and the diffusion of Si and B during the process.

  20. Lunar Processing Cabinet 2.0: Retrofitting Gloveboxes into the 21st Century

    NASA Technical Reports Server (NTRS)

    Calaway, M. J.

    2015-01-01

    In 2014, the Apollo 16 Lunar Processing Glovebox (cabinet 38) in the Lunar Curation Laboratory at NASA JSC received an upgrade including new technology interfaces. A Jacobs Technology Innovation Project provided the primary resources to retrofit this glovebox into the 21st century. The NASA Astromaterials Acquisition & Curation Office continues the more than 40-year heritage of preserving lunar materials for future scientific studies in state-of-the-art facilities. This enhancement has not only modernized the contamination controls, but also provides new innovative tools for processing and characterizing lunar samples, and supports real-time exchange of sample images and information with the scientific community throughout the world.

  1. Synthesis and characterization of nanocrystalline graphite from coconut shell with heating process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachid, Frischa M., E-mail: frischamw@yahoo.com; Perkasa, Adhi Y.; Prasetya, Fandi A.

    Graphite was synthesized from coconut shell by a heating process with varying temperature (400, 800 and 1000°C) and holding time (3 and 5 hours). After the heating process, the samples were characterized by X-ray diffraction (XRD), analyzed with the X'pert HighScore Plus software, and examined by Scanning Electron Microscopy with Energy Dispersive X-ray analysis (SEM-EDX) and Transmission Electron Microscopy with Energy Dispersive X-ray analysis (TEM-EDX). Graphite and lonsdaleite phases were identified by XRD. According to the EDX analysis, the sample heated at 1000°C had the highest carbon content. Amorphous carbon and nanocrystalline graphite were observed by SEM-EDX and TEM-EDX.

  2. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the EMWF model variables are recommended.
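
    The subsampling experiment can be miniaturized as follows: build an hourly series with a diurnal cycle plus red noise, subsample it at coarser intervals, and compare the resulting mean and variance against the hourly estimates. The series below is synthetic and the AR(1) coefficient is an illustrative choice; note how intervals that divide 24 hours alias the diurnal cycle, inflating the variance error.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 24 * 400                       # ~400 days of hourly values
      red = np.zeros(n)
      for i in range(1, n):              # AR(1) "weather" noise
          red[i] = 0.95 * red[i - 1] + rng.normal()
      t = np.arange(n)
      x = 10.0 + 5.0 * np.sin(2 * np.pi * t / 24) + red   # diurnal cycle + noise

      best_mean, best_var = x.mean(), x.var()              # hourly "best" estimates
      for dt in (2, 6, 7, 11, 12, 24):   # sampling interval, hours
          sub = x[::dt]
          em = 100 * abs(sub.mean() - best_mean) / abs(best_mean)
          ev = 100 * abs(sub.var() - best_var) / best_var
          print(f"every {dt:2d} h: mean error {em:5.2f}%, variance error {ev:5.2f}%")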

  3. Friction Stir Processing of Stainless Steel for Ascertaining Its Superlative Performance in Bioimplant Applications.

    PubMed

    Perumal, G; Ayyagari, A; Chakrabarti, A; Kannan, D; Pati, S; Grewal, H S; Mukherjee, S; Singh, S; Arora, H S

    2017-10-25

    Substrate-cell interactions for a bioimplant are driven by the substrate's surface characteristics. In addition, the performance of an implant and its resistance to degradation are primarily governed by its surface properties. A bioimplant typically degrades by wear and corrosion in the physiological environment, resulting in metallosis. Surface engineering strategies for limiting the degradation of implants and enhancing their performance may reduce or eliminate the need for implant removal surgeries and the associated cost. In the current study, we tailored the surface properties of stainless steel using submerged friction stir processing (FSP), a severe plastic deformation technique. FSP resulted in significant microstructural refinement, from a 22 μm grain size for the as-received alloy to a 0.8 μm grain size for the processed sample, with an increase in hardness of nearly 1.5 times. The wear and corrosion behavior of the processed alloy was evaluated in simulated body fluid. The processed sample demonstrated remarkable improvement in both wear and corrosion resistance, which is explained by surface strengthening and the formation of a highly stable passive layer. The methylthiazol tetrazolium assay demonstrated that the processed sample better supports cell attachment and proliferation, with minimal toxicity and hemolysis. The athrombogenic character of the as-received and processed samples was evaluated by fibrinogen adsorption and platelet adhesion via the enzyme-linked immunosorbent assay and the lactate dehydrogenase assay, respectively. The processed sample showed less platelet and fibrinogen adhesion than the as-received alloy, signifying its high thromboresistance. The current study suggests friction stir processing to be a versatile toolbox for enhancing the performance and reliability of currently used bioimplant materials.

  4. An improved filter elution and cell culture assay procedure for evaluating public groundwater systems for culturable enteroviruses.

    PubMed

    Dahling, Daniel R

    2002-01-01

    Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.

  5. Experimental analysis and modeling of ultrasound assisted freezing of potato spheres.

    PubMed

    Kiani, Hossein; Zhang, Zhihang; Sun, Da-Wen

    2015-09-01

    In recent years, innovative methods such as ultrasound-assisted freezing have been developed in order to improve the freezing process. During the freezing of foods, accurate prediction of the temperature distribution, phase ratios and process time is very important. In the present study, the ultrasound-assisted immersion freezing process (in a 1:1 ethylene glycol-water solution at 253.15 K) of potato spheres (0.02 m diameter) was evaluated using experimental, numerical and analytical approaches. Ultrasound (25 kHz, 890 W m⁻²) was irradiated at different duty cycles (DC = 0-100%). A finite-volume-based enthalpy method was used in the numerical model, from which temperature and liquid-fraction profiles were simulated by a program developed using OpenFOAM® CFD software. An analytical technique was also employed to calculate freezing times. The results showed that ultrasound irradiation could decrease the characteristic freezing time of potatoes. Since ultrasound irradiation increased the heat transfer coefficient but simultaneously generated heat at the surface of the samples, an optimum DC was needed for the shortest freezing time, which occurred in the range of 30-70% DC. DCs higher than 70% increased the freezing time, while DCs lower than 30% did not significantly affect the freezing time compared to the control sample. The numerical model predicted the characteristic freezing time in accordance with the experimental results, and the analytical calculation of the characteristic freezing time exhibited qualitative agreement with the experiments. As the numerical simulations provide profiles of temperature and water fraction within potatoes frozen with or without ultrasound, the models can be used to study and control different operating conditions and to improve understanding of the freezing process.
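
    A minimal 1-D sketch of the enthalpy method used in such freezing models is given below: temperature and liquid fraction are recovered from the conserved enthalpy, so the moving phase front needs no explicit tracking. The slab geometry (rather than a sphere), the property values, and the boundary treatment are all illustrative assumptions, not the paper's OpenFOAM® setup.

      import numpy as np

      L, Tf = 3.34e5, 273.15            # latent heat (J/kg), freezing point (K)
      rho, c, k = 1000.0, 3600.0, 0.5   # density, specific heat, conductivity
      nx, dx, dt = 50, 2e-4, 0.05       # 10 mm slab; dt kept below dx^2*rho*c/(2k)
      T_cool = 253.15                   # coolant temperature (K)

      T = np.full(nx, 293.15)           # initial temperature field
      H = rho * c * (T - Tf) + rho * L  # volumetric enthalpy; H = 0 at fully frozen Tf

      def from_enthalpy(H):
          # Recover temperature and liquid fraction from the conserved enthalpy.
          fl = np.clip(H / (rho * L), 0.0, 1.0)                        # mushy zone
          T = np.where(H < 0.0, Tf + H / (rho * c), Tf)                # solid branch
          return np.where(H > rho * L, Tf + (H - rho * L) / (rho * c), T), fl

      for step in range(400000):
          T, fl = from_enthalpy(H)
          Tb = np.concatenate([[T_cool], T, [T_cool]])                 # cooled boundaries
          H += dt * k * (Tb[2:] - 2.0 * Tb[1:-1] + Tb[:-2]) / dx**2    # conduction update
          if fl.max() == 0.0:
              print(f"fully frozen after {step * dt:.0f} s")           # characteristic time
              break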

  6. Reward and punishment learning in daily life: A replication study

    PubMed Central

    van Roekel, Eeske; Wichers, Marieke; Oldehinkel, Albertine J.

    2017-01-01

    Day-to-day experiences are accompanied by feelings of Positive Affect (PA) and Negative Affect (NA). Implicitly, without conscious processing, individuals learn about the reward and punishment value of each context and activity. These associative learning processes, in turn, affect the probability that individuals will re-engage in such activities or seek out that context. So far, implicit learning processes have been investigated almost exclusively in controlled laboratory settings and not in daily life. Here we aimed to replicate the first study that investigated implicit learning processes in real life, by means of the Experience Sampling Method (ESM). That is, using an experience-sampling study with 90 time points (three measurements per day over 30 days), we prospectively measured time spent in social company and amount of physical activity as well as PA and NA in the daily lives of 18- to 24-year-old young adults (n = 69 with anhedonia, n = 69 without anhedonia). Multilevel analyses showed a punishment learning effect with regard to time spent in the company of friends, but not a reward learning effect. Neither reward nor punishment learning effects were found with regard to physical activity. Our study shows promising results for future research on implicit learning processes in daily life, with the proviso of careful consideration of the timescale used. Short-term retrospective ESM designs with beeps approximately six hours apart may suffer from mismatch noise that hampers accurate detection of associative learning effects over time. PMID:28976985

  7. Rapid sample classification using an open port sampling interface coupled with liquid introduction atmospheric pressure ionization mass spectrometry

    DOE PAGES

    Van Berkel, Gary J.; Kertesz, Vilmos

    2016-11-15

    An “Open Access”-like mass spectrometric platform to fully utilize the simplicity of the manual open port sampling interface for rapid characterization of unprocessed samples by liquid introduction atmospheric pressure ionization mass spectrometry has been lacking. The in-house developed integrated software with a simple, small and relatively low-cost mass spectrometry system introduced here fills this void. Software was developed to operate the mass spectrometer, to collect and process mass spectrometric data files, to build a database and to classify samples using such a database. These tasks were accomplished via the vendor-provided software libraries. Sample classification based on spectral comparison utilized the spectral contrast angle method. Using the developed software platform, near real-time sample classification is exemplified with a series of commercially available blue ink rollerball pens and vegetable oils. In the case of the inks, full scan positive and negative ion ESI mass spectra were both used for database generation and sample classification. For the vegetable oils, full scan positive ion mode APCI mass spectra were recorded. The overall accuracy of the employed spectral contrast angle statistical model was 95.3% and 98% for the inks and oils, respectively, using leave-one-out cross-validation. In conclusion, this work illustrates that an open port sampling interface/mass spectrometer combination, with appropriate instrument control and data processing software, is a viable direct liquid extraction sampling and analysis system suitable for the non-expert user and near real-time sample classification via database matching.
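
    A minimal sketch of the spectral contrast angle classification described above, assuming spectra binned on a common m/z axis; the function names and the leave-one-out wrapper are ours, not the authors' software:

    ```python
    import numpy as np

    def contrast_angle(a, b):
        """Spectral contrast angle between two spectra binned on a common m/z axis;
        0 rad for identical relative intensity patterns, pi/2 rad for orthogonal ones."""
        cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))

    def classify(spectrum, database):
        """Nearest-database-entry classification: return the key whose reference
        spectrum makes the smallest contrast angle with the query spectrum."""
        return min(database, key=lambda key: contrast_angle(spectrum, database[key]))

    def loo_accuracy(records):
        """Leave-one-out cross-validation over a list of (label, spectrum) records."""
        hits = 0
        for i, (label, spec) in enumerate(records):
            db = {f"{l}#{j}": s for j, (l, s) in enumerate(records) if j != i}
            hits += classify(spec, db).split("#")[0] == label
        return hits / len(records)
    ```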

  9. Optical properties of micro and nano LiNbO3 thin film prepared by spin coating

    NASA Astrophysics Data System (ADS)

    Fakhri, Makram A.; Salim, Evan T.; Abdulwahhab, Ahmed W.; Hashim, U.; Salim, Zaid T.

    2018-07-01

    This paper deals with the preparation of lithium niobate (LiNbO3) thin films on quartz substrates by a sol-gel technique; samples were deposited using three different stirring times. Deposition was carried out by spin coating at 3000 revolutions per minute. The results showed an enhancement in the crystalline structure of the prepared samples with increasing stirring time. AFM measurements confirmed that the prepared samples are regularly distributed, homogeneous and crack-free in their structures. Further, measurements and calculations of the lattice constant, energy band gap, refractive index, and optical dielectric constant are also reported and agree with the experimental data collected from the characterized samples.

  10. Rapid Active Sampling Package

    NASA Technical Reports Server (NTRS)

    Peters, Gregory

    2010-01-01

    A field-deployable, battery-powered Rapid Active Sampling Package (RASP), originally designed for sampling strong materials during lunar and planetary missions, shows strong utility for terrestrial geological use. The technology is proving to be simple and effective for sampling and processing strong materials. Although it was originally intended for planetary and lunar applications, the RASP is very useful as a powered hand tool for geologists and the mining industry to quickly sample and process rocks in the field on Earth. The RASP allows geologists to surgically acquire samples of rock for later laboratory analysis. This tool, roughly the size of a wrench, allows the user to cut away swaths of weathering rinds, revealing pristine rock surfaces for observation and subsequent sampling with the same tool. RASPing deeper (3.5 cm) exposes single rock strata in situ. Where a geologist's hammer can only expose unweathered layers of rock, the RASP can do the same, and then has the added ability to capture and process samples into powder with particle sizes less than 150 microns, making them easier to analyze by XRD/XRF (x-ray diffraction/x-ray fluorescence). The tool uses a rotating rasp bit (or two counter-rotating bits) that resides inside or above the catch container. The container has an open slot to allow the bit to extend outside the container and to allow cuttings to enter and be caught. When the slot and rasp bit are in contact with a substrate, the bit is plunged into it in a matter of seconds to reach pristine rock. A user in the field may sample a rock multiple times at multiple depths in minutes, instead of having to cut out huge, heavy rock samples for transport back to a lab for analysis. Because of the speed and accuracy of the RASP, hundreds of samples can be taken in one day. RASP-acquired samples are small and easily carried. A user can characterize more area in less time than by using conventional methods. The field-deployable RASP used a Ni/Cd rechargeable battery. Power usage was less than 1 Wh/cm3 even when sampling strong basalts, so many samples could be taken on a single battery charge.

  11. Event-Based Processing of Neutron Scattering Data

    DOE PAGES

    Peterson, Peter F.; Campbell, Stuart I.; Reuter, Michael A.; ...

    2015-09-16

    Many of the world's time-of-flight spallation neutron sources are migrating to the recording of individual neutron events. This provides new opportunities in data processing, not the least of which is to filter the events by correlating them with logs of the sample environment and other ancillary equipment. This paper describes techniques for processing neutron scattering data acquired in event mode that preserve event information all the way to a final spectrum, including any necessary corrections or normalizations. This results in smaller final errors, while significantly reducing processing time and memory requirements in typical experiments. Results with traditional histogramming techniques are shown for comparison.
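
    A minimal sketch of the kind of event filtering described above, assuming each event carries an absolute timestamp and the sample-environment log is a step-interpolated time series; the function name and the step-interpolation choice are ours:

    ```python
    import numpy as np

    def filter_events(event_times, log_times, log_values, lo, hi):
        """Keep only neutron events recorded while a sample-environment log value
        (e.g. temperature) was inside [lo, hi]. The log is treated as a step
        function that holds its last value between entries."""
        idx = np.searchsorted(log_times, event_times, side="right") - 1
        idx = np.clip(idx, 0, len(log_values) - 1)
        current = np.asarray(log_values)[idx]
        return np.asarray(event_times)[(current >= lo) & (current <= hi)]
    ```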

  12. Comparison of TiO₂ and ZnO solar cells sensitized with an indoline dye: time-resolved laser spectroscopy studies of partial charge separation processes.

    PubMed

    Sobuś, Jan; Burdziński, Gotard; Karolczak, Jerzy; Idígoras, Jesús; Anta, Juan A; Ziółek, Marcin

    2014-03-11

    Time-resolved laser spectroscopy techniques in the time range from femtoseconds to seconds were applied to investigate the charge separation processes in complete dye-sensitized solar cells (DSC) made with iodide/iodine liquid electrolyte and indoline dye D149 interacting with TiO2 or ZnO nanoparticles. The aim of the studies was to explain the differences in the photocurrents of the cells (3-4 times higher for TiO2 than for ZnO ones). Electrochemical impedance spectroscopy and nanosecond flash photolysis studies revealed that the better performance of TiO2 samples is not due to the charge collection and dye regeneration processes. Femtosecond transient absorption results indicated that after the first 100 ps the number of photoinduced electrons in the semiconductor is 3 times higher for TiO2 than for ZnO solar cells. Picosecond emission studies showed that the lifetime of the D149 excited state is about 3 times longer for ZnO than for TiO2 samples. Therefore, the results indicate that the lower performance of ZnO solar cells is likely due to slower electron injection. The studies show how to correlate the laser spectroscopy methodology with global parameters of the solar cells and should help in better understanding the behavior of alternative materials for porous electrodes for DSC and related devices.

  13. Input-output characterization of an ultrasonic testing system by digital signal analysis

    NASA Technical Reports Server (NTRS)

    Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.

    1986-01-01

    Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
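
    The deconvolution step can be sketched as follows, assuming digitized input/output records sampled at fs; the single Blackman data taper and hard band edges below stand in for the paper's Blackman-window bandpass filter and rectangular time window, so this is an analogy rather than the authors' exact procedure:

    ```python
    import numpy as np

    def frequency_response(x, y, fs, f_lo=0.6e6, f_hi=2.3e6):
        """Estimate H(f) = Y(f)/X(f) from digitized input x and output y (sampled
        at fs), dividing only inside the band where the signal dominates the
        low-amplitude noise; outside the band H is set to zero."""
        n = len(x)
        w = np.blackman(n)                      # data taper to limit spectral leakage
        X, Y = np.fft.rfft(x * w), np.fft.rfft(y * w)
        f = np.fft.rfftfreq(n, d=1.0 / fs)
        band = (f >= f_lo) & (f <= f_hi)
        H = np.zeros_like(X)
        H[band] = Y[band] / X[band]
        h = np.fft.irfft(H, n)                  # unit-sample (impulse) response estimate
        return f, H, h
    ```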

  14. Permeation-solid adsorbent sampling and GC analysis of formaldehyde.

    PubMed

    Muntuta-Kinyanta, C; Hardy, J K

    1991-12-01

    A passive method with membrane permeation sampling for the determination of time-weighted-average (TWA) concentrations of formaldehyde in air is described. The sampling device was constructed by affixing an unbacked dimethyl silicone membrane to the base of a glass tube and by sealing the top with a rubber stopper. Formaldehyde permeates the membrane and reacts with 2-(hydroxymethyl)piperidine (2-HMP) coated on the surface of XAD-2. Sampling times from 15 min to 8 hr have been used. The formaldehyde-oxazolidine produced is thermally desorbed and determined by a packed column gas chromatograph equipped with a flame ionization detector (FID). The response of the monitor is directly proportional to the external concentration of formaldehyde over the concentration range 0.050-100 ppm. The permeation constant (the slope of the permeation curve) of the membrane is 0.333 μg ppm⁻¹ hr⁻¹, and the detection limit of the method is 0.03 ppm for an 8-hr sampling period. Relative humidity (35-94%), temperature (0-82 degrees) and storage period (0-25 days) do not affect the permeation process for sample collection. Moreover, potential chemical interferences (10 ppm acetone or acrolein) have no detectable effect on the process. The method gives the TWA concentration directly from the measurements, and the equipment is economical and convenient for personal or multi-location sample collection.
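
    Since the monitor response is directly proportional to concentration and sampling time, the TWA concentration follows from the collected mass and the permeation constant; a minimal sketch, where the 0.53 μg example mass is hypothetical:

    ```python
    K = 0.333  # permeation constant, ug collected per (ppm * hr), from the abstract

    def twa_ppm(mass_ug, hours):
        """TWA formaldehyde concentration implied by the sorbed mass, assuming the
        linear response mass = K * C * t stated in the abstract."""
        return mass_ug / (K * hours)

    # Hypothetical example: 0.53 ug recovered over an 8-hr shift -> ~0.2 ppm TWA
    print(round(twa_ppm(0.53, 8.0), 3))
    ```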

  15. RAPID DETECTION METHOD FOR E.COLI, ENTEROCOCCI AND BACTEROIDES IN RECREATIONAL WATER

    EPA Science Inventory

    Current methodology for determining fecal contamination of drinking water sources and recreational waters rely on the time-consuming process of bacterial multiplication and require at least 24 hours from the time of sampling to the possible determination that the water is unsafe ...

  16. Dark chocolate acceptability: influence of cocoa origin and processing conditions.

    PubMed

    Torres-Moreno, Miriam; Tarrega, Amparo; Costell, Elvira; Blanch, Consol

    2012-01-30

    Chocolate properties can vary depending on cocoa origin, composition and manufacturing procedure, which affect consumer acceptability. The aim of this work was to study the effect of two cocoa origins (Ghana and Ecuador) and two processing conditions (roasting time and conching time) on dark chocolate acceptability. Overall acceptability and acceptability for different attributes (colour, flavour, odour and texture) were evaluated by 95 consumers. Differences in acceptability among dark chocolates were mainly related to differences in flavour acceptability. The use of a long roasting time lowered chocolate acceptability in Ghanaian samples while it had no effect on acceptability of Ecuadorian chocolates. This response was observed for most consumers (two subgroups with different frequency consumption of dark chocolate). However, for a third group of consumers identified as distinguishers, the most acceptable dark chocolate samples were those produced with specific combinations of roasting time and conching time for each of the cocoa geographical origin considered. To produce dark chocolates from a single origin it is important to know the target market preferences and to select the appropriate roasting and conching conditions. Copyright © 2011 Society of Chemical Industry.

  17. Temporal Variability of Microplastic Concentrations in Freshwater Streams

    NASA Astrophysics Data System (ADS)

    Watkins, L.; Walter, M. T.

    2016-12-01

    Plastic pollution, specifically the size fraction less than 5 mm known as microplastics, is an emerging contaminant in waterways worldwide. The ability of microplastics to adsorb and transport contaminants and microbes, as well as be ingested by organisms, makes them a concern in both freshwater and marine ecosystems. Recent efforts to determine the extent of microplastic pollution are increasingly focused on freshwater systems, but most studies have reported concentrations at a single time-point; few have begun to uncover how plastic concentrations in riverine systems may change through time. We hypothesize that the time of day and season of sampling influence the concentrations of microplastics in water samples and, more specifically, that daytime stormflow samples contain the highest microplastic concentrations due to maximized runoff and wastewater discharge. In order to test this hypothesis, we sampled two similar streams in Ithaca, New York, using a 333 µm mesh net deployed within the thalweg. Repeat samples were collected to identify diurnal patterns as well as monthly variation. Samples were processed in the laboratory following the NOAA wet peroxide oxidation protocol. This work improves our ability to interpret existing single-time-point survey results by providing information on how microplastic concentrations change over time and whether concentrations in existing stream studies are likely representative of their location. Additionally, these results will inform future studies by providing insight into representative sample timing and capturing temporal trends for the purposes of modeling and of developing regulations for microplastic pollution.

  18. Use of virtual reality intervention to improve reaction time in children with cerebral palsy: A randomized controlled trial.

    PubMed

    Pourazar, Morteza; Mirakhori, Fatemeh; Hemayattalab, Rasool; Bagherzadeh, Fazlolah

    2017-09-21

    The purpose of this study was to investigate the training effects of a Virtual Reality (VR) intervention program on reaction time in children with cerebral palsy. Thirty boys ranging from 7 to 12 years of age (mean = 11.20; SD = 0.76) were selected by convenience sampling and randomly divided into experimental and control groups. Simple Reaction Time (SRT) and Discriminative Reaction Time (DRT) were measured at baseline and 1 day after completion of the VR intervention. Multivariate analysis of variance (MANOVA) and paired-sample t-tests were performed to analyze the results. The MANOVA revealed significant effects for group in the posttest phase, with lower reaction times on both measures for the experimental group. Based on the paired-sample t-test results, both RT measures improved significantly in the experimental group following the VR intervention program. This paper proposes VR as a promising tool in the rehabilitation process for improving reaction time in children with cerebral palsy.

  19. Microwave Processing for Sample Preparation to Evaluate Mitochondrial Ultrastructural Damage in Hemorrhagic Shock

    NASA Astrophysics Data System (ADS)

    Josephsen, Gary D.; Josephsen, Kelly A.; Beilman, Greg J.; Taylor, Jodie H.; Muiler, Kristine E.

    2005-12-01

    This is a report of the adaptation of microwave processing in the preparation of liver biopsies for transmission electron microscopy (TEM) to examine ultrastructural damage of mitochondria in the setting of metabolic stress. Hemorrhagic shock was induced in pigs via 35% total blood volume bleed and a 90-min period of shock followed by resuscitation. Hepatic biopsies were collected before shock and after resuscitation. Following collection, biopsies were processed for TEM by a rapid method involving microwave irradiation (Giberson, 2001). Samples pre- and postshock of each of two animals were viewed and scored using the mitochondrial ultrastructure scoring system (Crouser et al., 2002), a system used to quantify the severity of ultrastructural damage during shock. Results showed evidence of increased ultrastructural damage in the postshock samples, which scored 4.00 and 3.42, versus their preshock controls, which scored 1.18 and 1.27. The results of this analysis were similar to those obtained in another model of shock (Crouser et al., 2002). However, the amount of time used to process the samples was significantly shortened with methods involving microwave irradiation.

  20. An MBE growth facility for real-time in situ synchrotron x-ray topography studies of strained-layer III-V epitaxial materials

    NASA Astrophysics Data System (ADS)

    Whitehouse, C. R.; Barnett, S. J.; Soley, D. E. J.; Quarrell, J.; Aldridge, S. J.; Cullis, A. G.; Emeny, M. T.; Johnson, A. D.; Clarke, G. F.; Lamb, W.; Tanner, B. K.; Cottrell, S.; Lunn, B.; Hogg, C.; Hagston, W.

    1992-01-01

    This paper describes a unique combined UHV MBE growth x-ray topography facility designed to allow the first real-time synchrotron radiation x-ray topography study of strained-layer III-V growth processes. This system will enable unambiguous determination of dislocation nucleation and multiplication processes as a function of controlled variations in growth conditions, and also during post-growth thermal processing. The planned experiments have placed very stringent demands upon the engineering design of the system, and design details regarding the growth chamber, sample manipulator, x-ray optics, and real-time imaging systems are described. Results obtained during a feasibility study are also presented.

  2. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

    In medical studies with recurrent event data a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
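
    The paper's algorithm is provided as an R script; the following Python sketch illustrates the same inversion idea for a Weibull baseline hazard on the total time scale with a gamma frailty as the random covariate (risk-free intervals are omitted, and all parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_total_time(shape, scale, frailty_var, t_end, n_subjects):
        """Recurrent events with a Weibull baseline cumulative hazard on the total
        time scale, Lambda(t) = (t/scale)**shape, and a gamma frailty Z per subject.
        Next event via inversion: Lambda(t_next) = Lambda(t_prev) - log(U)/Z."""
        data = []
        for i in range(n_subjects):
            z = rng.gamma(1.0 / frailty_var, frailty_var) if frailty_var > 0 else 1.0
            t = 0.0
            while True:
                u = rng.uniform()
                t = scale * ((t / scale) ** shape - np.log(u) / z) ** (1.0 / shape)
                if t > t_end:
                    break                       # administrative censoring at t_end
                data.append((i, t))             # (subject id, event time)
        return data

    events = simulate_total_time(shape=1.5, scale=2.0, frailty_var=0.25,
                                 t_end=3.0, n_subjects=100)
    ```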

  3. Effect of freezing of sputum samples on flow cytometric analysis of lymphocyte subsets.

    PubMed

    Jaksztat, E; Holz, O; Paasch, K; Kelly, M M; Hargreave, F E; Cox, G; Magnussen, H; Jörres, R A

    2004-08-01

    Sputum samples should be processed shortly after induction to prevent cell degradation. For intermediate storage, freezing of homogenised samples or immediate fixation have been shown to be suitable for cytospins. The aim of this study was to investigate whether freezing or immediate fixation of sputum affects the analysis of lymphocyte subsets by flow cytometry. Selected plugs from 24 sputum samples were homogenised. One aliquot was processed immediately and analysed by flow cytometry. A second aliquot was homogenised, frozen at -20 °C after addition of dimethylsulfoxide and stored for a median time of 6 days. In six samples a third aliquot was fixed in formalin after induction and stored for up to 72 h before further processing. Compared to immediate processing, percentages of total lymphocytes and T-suppressor cells were elevated after freezing, with a minor decrease in the T4/T8 ratio. Proportions of total lymphocytes, T-helper and T-suppressor cells correlated between native and frozen samples, intra-class correlation coefficients being 0.74, 0.85 and 0.70, respectively. The formalin-fixed aliquots could not be analysed with the antibodies used. In conclusion, freezing seems to be a suitable technique for storing sputum samples for flow cytometry of CD3, CD4 and CD8 lymphocyte subsets. Its effects were minor compared to the variation between subjects.

  4. Two and three-dimensional quantitative neutron imaging of the water distribution during ponded infiltration

    NASA Astrophysics Data System (ADS)

    Sacha, Jan; Snehota, Michal; Jelinkova, Vladimira

    2016-04-01

    Information on the spatial and temporal distribution of water and air in a soil sample during hydrological processes is important for evaluating current water transport models and developing new ones. Modern imaging techniques such as neutron imaging (NI) allow relatively short acquisition times and high image resolution. At the same time, appropriate data processing has to be applied to obtain results free of bias and artifacts. In this study, ponded infiltration experiments were conducted on two soil samples packed into quartz glass columns with inner diameters of 29 and 34 mm, respectively. The first sample was prepared by packing fine and coarse fractions of sand; the second sample was packed using coarse sand and disks of fine porous ceramic. Ponded infiltration experiments conducted on both samples were monitored by neutron radiography to produce two-dimensional (2D) projection images during the transient phase of infiltration. During the steady-state flow stage of the experiments, neutron tomography was utilized to obtain three-dimensional (3D) information on gradual water redistribution. The acquired radiographic images were normalized for background noise, spatial inhomogeneity of the detector, fluctuations of the neutron flux in time, and spatial inhomogeneity of the neutron beam. The radiograms of the dry sample were subtracted from all subsequent radiograms to determine water thickness in the 2D projection images. All projections were corrected for beam hardening and neutron scattering by the empirical method of Kang et al. (2013). Parameters of the correction method were identified by two different approaches. The first approach was based on fitting the NI-derived water thickness in the layer of water above the sample surface to the actual water thickness. In the second approach, the NI-derived volume of water in the entire sample at a given time was fitted to the corresponding gravimetrically determined amount of water in the sample. Tomography images were reconstructed from both the corrected and uncorrected water thickness maps to obtain the 3D spatial distribution of water content within the sample. Without the correction, beam hardening and scattering effects overestimated the water content close to the sample perimeter and underestimated it close to the center of the sample; the total water content of the whole sample was, however, the same in both cases.
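
    The normalization and dry-sample referencing steps can be sketched as follows, assuming simple Beer-Lambert attenuation and omitting the Kang et al. beam-hardening/scattering correction; the attenuation coefficient and array names are illustrative:

    ```python
    import numpy as np

    SIGMA_W = 3.5  # effective attenuation coefficient of water, cm^-1 (illustrative)

    def water_thickness(img, dark, flat, dry):
        """Convert a radiograph to a water-thickness map (cm): remove detector dark
        current, flat-field the beam/detector inhomogeneity, reference the dry-sample
        image so that only the added water attenuates, then invert Beer-Lambert.
        Beam hardening and scattering corrections are deliberately omitted here."""
        norm = (img - dark) / (flat - dark)
        norm_dry = (dry - dark) / (flat - dark)
        return -np.log(norm / norm_dry) / SIGMA_W
    ```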

  5. Mixed feed and its ingredients electron beam decontamination

    NASA Astrophysics Data System (ADS)

    Bezuglov, V. V.; Bryazgin, A. A.; Vlasov, A. Yu; Voronin, L. A.; Ites, Yu V.; Korobeynikov, M. V.; Leonov, S. V.; Leonova, M. A.; Tkachenko, V. O.; Shtarklev, E. A.; Yuskov, Yu G.

    2017-01-01

    Electron beam treatment has been used in food processing for decades to prevent or minimize food losses and prolong storage time; this process is also known as cold pasteurization. Mixed feed ingredients supplied in Russia are regularly found to be contaminated. To reduce the contamination level, samples of contaminated mixed feed ingredients were treated with an electron beam at doses from 2 to 12 kGy. The contamination levels were decreased to a level ensuring a storage time of up to 1 year.

  6. Recognition of facial emotions among maltreated children with high rates of post-traumatic stress disorder

    PubMed Central

    Masten, Carrie L.; Guyer, Amanda E.; Hodgdon, Hilary B.; McClure, Erin B.; Charney, Dennis S.; Ernst, Monique; Kaufman, Joan; Pine, Daniel S.; Monk, Christopher S.

    2008-01-01

    Objective The purpose of this study is to examine the processing of facial emotions in a sample of maltreated children showing high rates of post-traumatic stress disorder (PTSD). Maltreatment during childhood has been associated independently with both atypical processing of emotion and the development of PTSD. However, research has provided little evidence indicating how high rates of PTSD might relate to maltreated children’s processing of emotions. Method Participants’ reaction time and labeling of emotions were measured using a morphed facial emotion identification task. Participants included a diverse sample of maltreated children with and without PTSD and controls ranging in age from 8 to 15 years. Maltreated children had been removed from their homes and placed in state custody following experiences of maltreatment. Diagnoses of PTSD and other disorders were determined through a combination of parent, child, and teacher reports. Results Maltreated children displayed faster reaction times than controls when labeling emotional facial expressions, and this result was most pronounced for fearful faces. Relative to children who were not maltreated, maltreated children both with and without PTSD showed enhanced response times when identifying fearful faces. There was no group difference in the labeling of the different facial emotions. Conclusions Maltreated children show a heightened ability to identify fearful faces, evidenced by faster reaction times relative to controls. This association between maltreatment and atypical processing of emotion is independent of PTSD diagnosis. PMID:18155144

  7. On the Application of Different Event-Based Sampling Strategies to the Control of a Simple Industrial Process

    PubMed Central

    Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián

    2009-01-01

    This paper is an experimental study of the utilization of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator of the time-based, the hybrid, and the event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each of the hybrid and pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are independently analyzed. The experimental study provides new evidence concerning the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when an error-free control of the process is not a hard requirement. PMID:22399975
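
    One classic event-based sampling technique studied in this line of work is send-on-delta sampling; a minimal sketch combining it with a proportional-integral controller follows (the tuning values, threshold, and class name are illustrative, not the paper's):

    ```python
    class SendOnDeltaPI:
        """Event-based level control sketch: the sensor sends a sample only when the
        level has changed by more than `delta` since the last transmission; the PI
        controller updates only on received samples and the actuator holds otherwise."""

        def __init__(self, kp=2.0, ki=0.5, setpoint=1.0, delta=0.02, dt=0.01):
            self.kp, self.ki, self.setpoint = kp, ki, setpoint
            self.delta, self.dt = delta, dt
            self.last_sent, self.integral, self.u = None, 0.0, 0.0

        def on_tick(self, level):
            """Called every dt with the measured tank level; returns the actuator command."""
            if self.last_sent is None or abs(level - self.last_sent) > self.delta:
                self.last_sent = level                   # event: transmit the sample
                error = self.setpoint - level
                self.integral += error * self.dt
                self.u = self.kp * error + self.ki * self.integral
            return self.u                                # held between events
    ```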

  8. Development and Applications of a Mobile Ecogenomic Sensor

    NASA Astrophysics Data System (ADS)

    Yamahara, K.; Preston, C. M.; Pargett, D.; Jensen, S.; Roman, B.; Walz, K.; Birch, J. M.; Hobson, B.; Kieft, B.; Zhang, Y.; Ryan, J. P.; Chavez, F.; Scholin, C. A.

    2016-12-01

    Modern molecular biological analytical methods have revolutionized our understanding of organism diversity in the ocean. Such advancements have profound implications for use in environmental research and resource management. However, the application of such technology to comprehensively document biodiversity and understand ecosystem processes in an ocean setting will require repeated observations over vast space and time scales. A fundamental challenge associated with meeting that requirement is acquiring discrete samples over spatial scales and frequencies necessary to document cause-and-effect relationships that link biological processes to variable physical and chemical gradients in rapidly changing water masses. Accomplishing that objective using ships alone is not practical. We are working to overcome this fundamental challenge by developing a new generation of biological instrumentation, the third generation ESP (3G ESP). The 3G ESP is a robotic device that automates sample collection, preservation, and/or in situ processing for real-time target molecule detection. Here we present the development of the 3G ESP and its integration with a Tethys-class Long Range AUV (LRAUV), and demonstrate its ability to collect and preserve material for subsequent metagenomic and quantitative PCR (qPCR) analyses. Further, we elucidate the potential of employing multiple mobile ecogenomic sensors to monitor ocean biodiversity, as well as following ecosystems over time to reveal time/space relationships of biological processes in response to changing environmental conditions.

  9. Analysis of munitions constituents in groundwater using a field-portable GC-MS.

    PubMed

    Bednar, A J; Russell, A L; Hayes, C A; Jones, W T; Tackett, P; Splichal, D E; Georgian, T; Parker, L V; Kirgan, R A; MacMillan, D K

    2012-05-01

    The use of munitions constituents (MCs) at military installations can produce soil and groundwater contamination that requires periodic monitoring even after training or manufacturing activities have ceased. Traditional groundwater monitoring methods require large volumes of aqueous samples (e.g., 2-4 L) to be shipped under chain of custody to fixed laboratories for analysis. The samples must also be packed on ice and shielded from light to minimize degradation that may occur during transport and storage. The laboratory's turn-around time for sample analysis and reporting can be as long as 45 d. This process hinders the reporting of data to customers in a timely manner; yields data that are not necessarily representative of current site conditions owing to the lag time between sample collection and reporting; and incurs significant shipping costs for samples. The current work compares a field-portable Gas Chromatograph-Mass Spectrometer (GC-MS) for on-site analysis of MCs with traditional laboratory-based analysis using High Performance Liquid Chromatography with UV absorption detection. The field method provides near real-time (within ~1 h of sampling) concentrations of MCs in groundwater samples. Mass spectrometry provides reliable confirmation of MCs and a means to identify unknown compounds that are potential false positives for methods with UV and other non-selective detectors. Published by Elsevier Ltd.

  10. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
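
    The univariate case can be checked numerically in a few lines; below is a minimal Euler-Maruyama sketch of a Wright-Fisher-type diffusion whose invariant density is Beta(a, b), with multiplicative noise that vanishes at the boundaries of the bounded sample space [0, 1]. The discretization, parameter values and the clipping guard are ours, not the paper's scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_beta_invariant(a, b, n_samples, dt=1e-3, n_steps=20000):
        """Euler-Maruyama ensemble for dX = 0.5*(a*(1-X) - b*X) dt + sqrt(X*(1-X)) dW,
        whose stationary density is Beta(a, b); cf. the Dirichlet case in the paper."""
        x = rng.uniform(size=n_samples)
        for _ in range(n_steps):
            drift = 0.5 * (a * (1.0 - x) - b * x)
            diff = np.sqrt(np.clip(x * (1.0 - x), 0.0, None))  # vanishes at 0 and 1
            x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(n_samples)
            x = np.clip(x, 0.0, 1.0)   # guard against discretization overshoot
        return x                       # histogram should approach the Beta(a, b) density

    samples = simulate_beta_invariant(a=2.0, b=5.0, n_samples=10000)
    ```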

  11. Metaproteome analysis of endodontic infections in association with different clinical conditions.

    PubMed

    Provenzano, José Claudio; Siqueira, José F; Rôças, Isabela N; Domingues, Romênia R; Paes Leme, Adriana F; Silva, Márcia R S

    2013-01-01

    Analysis of the metaproteome of microbial communities is important to provide insight into community physiology and pathogenicity. This study evaluated the metaproteome of endodontic infections associated with acute apical abscesses and asymptomatic apical periodontitis lesions. Proteins persisting or expressed after root canal treatment were also evaluated. Finally, human proteins associated with these infections were identified. Samples were taken from root canals of teeth with asymptomatic apical periodontitis before and after chemomechanical treatment using either NaOCl or chlorhexidine as the irrigant. Samples from abscesses were taken by aspiration of the purulent exudate. Clinical samples were processed for analysis of the exoproteome by using two complementary mass spectrometry platforms: nanoflow liquid chromatography coupled with a linear ion trap quadrupole Velos Orbitrap, and liquid chromatography-quadrupole time-of-flight. A total of 308 proteins of microbial origin were identified. The number of proteins in abscesses was higher than in asymptomatic cases. In canals irrigated with chlorhexidine, the number of identified proteins decreased substantially, while in the NaOCl group the number of proteins increased. The large majority of microbial proteins found in endodontic samples were related to metabolic and housekeeping processes, including protein synthesis, energy metabolism and DNA processes. Moreover, several other proteins related to pathogenicity and resistance/survival were found, including proteins involved with adhesion, biofilm formation and antibiotic resistance, stress proteins, exotoxins, invasins, proteases and endopeptidases (mostly in abscesses), and an archaeal protein linked to methane production. The majority of human proteins detected were related to cellular processes and metabolism, as well as immune defense. Interrogation of the metaproteome of endodontic microbial communities provides information on the physiology and pathogenicity of the community at the time of sampling. There is a growing need for expanded and more curated protein databases that permit more accurate identifications of proteins in metaproteomic studies.

  12. Residence time as a key for comprehensive assessment of the relationship between changing land use and nitrates in regional groundwater systems.

    PubMed

    Cao, Yingjie; Tang, Changyuan; Song, Xianfang; Liu, Changming; Zhang, Yinghua

    2013-04-01

    In this study, an approach is put forward to study the relationship between changing land use and groundwater nitrate contamination in the Sanjiang Plain. This approach emphasizes the importance of groundwater residence time when relating the nitrates to the changing land use. The principles underlying the approach involve the assessment of groundwater residence time by CFCs and the Vogel age model and the reconstruction of the land use at the groundwater recharge time by interpolation. Nitrate trend analysis shows that nitrates have begun to leach into the aquifers since agricultural activities boomed after the 1950s. Hydrochemical analysis implies that the possible process relating to the nitrate reduction in the groundwater is the oxidation of Fe(II)-silicates. However, the chemical kinetics of the oxidation of Fe(II)-silicates is slow, so this denitrification process contributes little to the nitrate variations. Stepwise regression shows that the nitrate concentrations of samples had no direct relationship with the land use at the groundwater sampling time, but had a relatively strong relationship with the land use at the groundwater recharge time. Dry land is recognized as the dominant factor contributing to the elevated concentration of nitrates. The nitrogen isotope for nitrate (δ(15)N-NO3) gives a more direct result for the identification of nitrate sources: the use of manure in agricultural activities. Principal component (PC) regression shows that the process of dry land exploitation is the major process controlling the nitrate contamination in the Sanjiang Plain.

  13. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    PubMed

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. RoboDiff: combining a sample changer and goniometer for highly automated macromolecular crystallography experiments

    PubMed Central

    Nurizzo, Didier; Bowler, Matthew W.; Caserotto, Hugo; Dobias, Fabien; Giraud, Thierry; Surr, John; Guichard, Nicolas; Papp, Gergely; Guijarro, Matias; Mueller-Dieckmann, Christoph; Flot, David; McSweeney, Sean; Cipriani, Florent; Theveneau, Pascal; Leonard, Gordon A.

    2016-01-01

    Automation of the mounting of cryocooled samples is now a feature of the majority of beamlines dedicated to macromolecular crystallography (MX). Robotic sample changers have been developed over many years, with the latest designs increasing capacity, reliability and speed. Here, the development of a new sample changer deployed at the ESRF beamline MASSIF-1 (ID30A-1), based on an industrial six-axis robot, is described. The device, named RoboDiff, includes a high-capacity dewar, acts as both a sample changer and a high-accuracy goniometer, and has been designed for completely unattended sample mounting and diffraction data collection. This aim has been achieved using a high level of diagnostics at all steps of the process from mounting and characterization to data collection. The RoboDiff has been in service on the fully automated endstation MASSIF-1 at the ESRF since September 2014 and, at the time of writing, has processed more than 20 000 samples completely automatically. PMID:27487827

  15. Stable Lévy motion with inverse Gaussian subordinator

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Gajda, J.

    2017-09-01

    In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with the inverse Gaussian process. The NIG process has found many interesting applications, especially in financial data description. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional order moments and asymptotic tail behavior. We show the connection of the process with a continuous time random walk. Further, the governing fractional partial differential equations for the probability density function are also obtained. Moreover, we discuss the asymptotic distribution of the sample mean square displacement, the main tool in the detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by the inverse Gaussian subordinator, we propose a step-by-step parameter estimation procedure. At the end, we show how the examined process can be used to model financial time series.
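
    A path of the subordinated process can be sampled in two steps: draw inverse Gaussian operational-time increments, then draw stable increments whose scale grows with those increments by self-similarity. A minimal SciPy sketch; the parameterization mapping and all parameter values are illustrative, not the paper's estimation procedure:

    ```python
    import numpy as np
    from scipy.stats import levy_stable, invgauss

    rng = np.random.default_rng(0)

    def subordinated_stable(alpha, mu_ig, lam_ig, n_steps, dt=1.0):
        """Sample a symmetric alpha-stable Levy motion time-changed by an inverse
        Gaussian subordinator. scipy's invgauss(mu, scale=lam) has mean mu*lam,
        so the IG increments below have mean mu_ig*dt and shape parameter lam_ig."""
        tau = invgauss.rvs(mu_ig * dt / lam_ig, scale=lam_ig,
                           size=n_steps, random_state=rng)
        # Stable increments: by self-similarity the scale grows like tau**(1/alpha).
        dx = levy_stable.rvs(alpha, 0.0, scale=tau ** (1.0 / alpha),
                             size=n_steps, random_state=rng)
        return np.cumsum(tau), np.cumsum(dx)   # (operational time, process value)

    t_op, x = subordinated_stable(alpha=1.7, mu_ig=1.0, lam_ig=2.0, n_steps=1000)
    ```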

  16. Effects of sample handling methods on substance P concentrations and immunoreactivity in bovine blood samples.

    PubMed

    Mosher, Ruby A; Coetzee, Johann F; Allen, Portia S; Havel, James A; Griffith, Gary R; Wang, Chong

    2014-02-01

    To determine the effects of protease inhibitors and holding times and temperatures before processing on the stability of substance P in bovine blood samples. Blood samples obtained from a healthy 6-month-old calf. Blood samples were dispensed into tubes containing exogenous substance P and 1 of 6 degradative enzyme inhibitor treatments: heparin, EDTA, EDTA with 1 of 2 concentrations of aprotinin, or EDTA with 1 of 2 concentrations of a commercially available protease inhibitor cocktail. Plasma was harvested immediately following collection or after 1, 3, 6, 12, or 24 hours of holding at ambient (20.3° to 25.4°C) or ice bath temperatures. Total substance P immunoreactivity was determined with an ELISA; concentrations of the substance P parent molecule, a metabolite composed of the 9 terminal amino acids, and a metabolite composed of the 5 terminal amino acids were determined with liquid chromatography-tandem mass spectrometry. Regarding blood samples processed immediately, no significant differences in substance P concentrations or immunoreactivity were detected among enzyme inhibitor treatments. In blood samples processed at 1 hour of holding, substance P parent molecule concentration was significantly lower for ambient temperature versus ice bath temperature holding conditions; aprotinin was the most effective inhibitor of substance P degradation at the ice bath temperature. The ELISA substance P immunoreactivity was typically lower for blood samples with heparin versus samples with other inhibitors processed at 1 hour of holding in either temperature condition. Results suggested that blood samples should be chilled and plasma harvested within 1 hour after collection to prevent substance P degradation.

  17. Evaluation of conditioning time and temperature on gelatinized starch and vitamin retention in a pelleted swine diet.

    PubMed

    Lewis, L L; Stark, C R; Fahrenholz, A C; Bergstrom, J R; Jones, C K

    2015-02-01

    Two key feed processing parameters, conditioning temperature and time, were altered to determine their effects on the concentration of gelatinized starch and vitamin retention in a pelleted finishing swine diet. Diet formulation (corn–soybean meal based with 30% distillers dried grains with solubles) was held constant. Treatments were arranged in a 2 × 3 factorial design plus a control, with 2 conditioning temperatures (77 vs. 88°C) and 3 conditioner retention times (15, 30, and 60 s). In addition, a mash diet not subjected to conditioning served as a control, for a total of 7 treatments. Samples were collected after conditioning but before pelleting (hot mash), after pelleting but before cooling (hot pellet), and after pelleting and cooling (cold pellet) and analyzed for percentage total starch, percentage gelatinized starch, and riboflavin, niacin, and vitamin D3 concentrations. Total percentage starch was increased by the greater conditioning temperature (P = 0.041) but not time (P > 0.10), whereas higher temperature and longer time both increased (P < 0.05) percentage gelatinized starch, with increasing time resulting in a linear increase in percentage starch gelatinization (P = 0.013). The interaction between conditioning temperature and time increased percentage gelatinized starch (P = 0.003) but not percentage total starch (P > 0.10). Sample location also affected both percentage total starch and gelatinized starch (P < 0.05), with the greatest increase in percentage gelatinized starch occurring between the hot mash and hot pellet samples. As expected, the pelleting process increased percentage gelatinized starch (P = 0.035; 7.3 vs. 11.7% gelatinized starch for hot mash vs. hot pellet samples, respectively), but there was no difference in total starch concentrations (P > 0.10). Finally, neither conditioning temperature nor time affected riboflavin, niacin, or vitamin D3 concentrations (P > 0.10). In summary, increasing conditioning temperature and time both affect percentage gelatinized starch, but not to the same extent as forcing the diet through the pelleting die.

  18. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  19. The JCMT nearby galaxies legacy survey - X. Environmental effects on the molecular gas and star formation properties of spiral galaxies

    NASA Astrophysics Data System (ADS)

    Mok, Angus; Wilson, C. D.; Golding, J.; Warren, B. E.; Israel, F. P.; Serjeant, S.; Knapen, J. H.; Sánchez-Gallego, J. R.; Barmby, P.; Bendo, G. J.; Rosolowsky, E.; van der Werf, P.

    2016-03-01

    We present a study of the molecular gas properties in a sample of 98 H I-flux-selected spiral galaxies within ~25 Mpc, using the CO J = 3-2 line observed with the James Clerk Maxwell Telescope. We use the technique of survival analysis to incorporate galaxies with CO upper limits into our results. Comparing the group and Virgo samples, we find a larger mean H2 mass in the Virgo galaxies, despite their lower mean H I mass. This leads to a significantly higher H2 to H I ratio for Virgo galaxies. Combining our data with complementary Hα star formation rate measurements, Virgo galaxies have longer molecular gas depletion times compared to group galaxies, due to their higher H2 masses and lower star formation rates. We suggest that the longer depletion times may be a result of heating processes in the cluster environment or differences in the turbulent pressure. From the full sample, we find that the molecular gas depletion time has a positive correlation with the stellar mass, indicative of differences in the star formation process between low- and high-mass galaxies, and a negative correlation between the molecular gas depletion time and the specific star formation rate.

  20. Looking into individual coffee beans during the roasting process: direct micro-probe sampling on-line photo-ionisation mass spectrometric analysis of coffee roasting gases.

    PubMed

    Hertz-Schünemann, Romy; Streibel, Thorsten; Ehlert, Sven; Zimmermann, Ralf

    2013-09-01

    A micro-probe (μ-probe) gas sampling device for on-line analysis of gases evolving in confined, small objects by single-photon ionisation time-of-flight mass spectrometry (SPI-TOFMS) was developed. The technique is applied for the first time in a feasibility study to record the formation of volatile and flavour compounds during the roasting process within (inside) or in the direct vicinity (outside) of individual coffee beans. A real-time on-line analysis of evolving volatile and semi-volatile organic compounds (VOC and SVOC), as they are formed under the mild pyrolytic conditions of the roasting process, was performed. The soft-ionisation mass spectra exhibit a molecular ion signature that corresponds well with the existing knowledge of coffee roasting and evolving compounds. It is thereby also possible to discriminate between Coffea arabica (Arabica) and Coffea canephora (Robusta): the recognized differences in the roasting gas profiles reflect the differences in the precursor composition of the coffee cultivars very well. Furthermore, a well-known set of marker compounds for Arabica and Robusta, namely the lipids kahweol and cafestol (detected in their dehydrated forms at m/z 296 and m/z 298, respectively), was observed. Different compounds also show distinctly different evolution behaviours over time: phenol (m/z 94), chosen as an example, shows very sharp emission peaks, whereas caffeine (m/z 194) does not exhibit this highly transient behaviour. Finally, the changes of the chemical signature as a function of roasting time and the influences of sampling position (inside, outside) and cultivar (Arabica, Robusta) are investigated by multivariate statistics (PCA). In summary, this pilot study demonstrates the high potential of the measurement technique to enhance fundamental knowledge of the formation processes of volatile and semi-volatile flavour compounds inside the individual coffee bean.

  1. Real-time optically sectioned wide-field microscopy employing structured light illumination and a CMOS detector

    NASA Astrophysics Data System (ADS)

    Mitic, Jelena; Anhut, Tiemo; Serov, Alexandre; Lasser, Theo; Bourquin, Stephane

    2003-07-01

    Real-time optically sectioned microscopy is demonstrated using an AC-sensitive detection concept realized with a smart CMOS image sensor and structured light illumination by a continuously moving periodic pattern. We describe two different detection systems based on CMOS image sensors for the detection and on-chip processing of the sectioned images in real time. A region of interest is sampled at a high frame rate. The demodulated signal delivered by the detector corresponds to the depth-discriminated image of the sample. The measured FWHM of the axial response depends on the spatial frequency of the projected grid illumination and is in the μm range. The effect of using broadband incoherent illumination is discussed. The performance of these systems is demonstrated by imaging technical as well as biological samples.
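    The demodulation that yields the sectioned image can be illustrated with the closely related three-phase, square-law scheme of Neil, Juškaitis and Wilson, in which the grid is imaged at three phase positions; only in-focus structure is modulated by the grid and survives the arithmetic. A sketch with synthetic frames standing in for real acquisitions:

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Square-law demodulation of three frames taken with the illumination
    grid shifted by 0, 2*pi/3 and 4*pi/3; out-of-focus light cancels."""
    return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2)

def widefield_image(i1, i2, i3):
    """The conventional (unsectioned) image is simply the average."""
    return (i1 + i2 + i3) / 3.0

rng = np.random.default_rng(1)
frames = rng.integers(0, 255, size=(3, 256, 256)).astype(float)  # placeholder data
print(sectioned_image(*frames).shape, widefield_image(*frames).shape)
```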

  2. Anaerobic co-digestion of sewage sludge and molasses

    NASA Astrophysics Data System (ADS)

    Kalemba, Katarzyna; Barbusiński, Krzysztof

    2017-11-01

    The efficiency of simultaneous digestion of sewage sludge and a by-product of sugar beet refining (molasses) was investigated. The study was conducted for 28 days under mesophilic conditions, with 0.5%, 1%, 1.5%, 2% and 3% (m/m) of molasses added to the sludge mixture. The results showed that the addition of molasses had a positive effect on biogas production. The highest biogas yield was achieved in the sample with 0.5% molasses (95.69 mL/g VS), in which biogas production increased by 21% compared with the reference sample (without molasses). The highest methane content (73%) was also observed in the sample with 0.5% molasses; for comparison, the reference sample produced biogas with a methane content of 70%. Doses above 0.5% molasses inhibited the fermentation process. The degree of degradation of organic matter was 38.53% in the reference sample and 39.71% in the sample with 0.5% molasses, whereas in the other samples it ranged from 35.61% to 36.76% (3% to 1% molasses, respectively). The digestion process had an adverse effect on the dewatering properties of the sludge: capillary suction time ranged from 31 s to 55 s before co-digestion and increased to between 36 s and 556 s after the process (0% to 3% molasses, respectively).

  3. Sample-based engine noise synthesis using an enhanced pitch-synchronous overlap-and-add method.

    PubMed

    Jagla, Jan; Maillard, Julien; Martin, Nadine

    2012-11-01

    An algorithm for the real-time synthesis of internal combustion engine noise is presented. Through the analysis of a recorded engine noise signal of continuously varying engine speed, a dataset of sound samples is extracted, allowing the real-time synthesis of the noise induced by arbitrary evolutions of engine speed. The sound samples are extracted from a recording spanning the entire engine speed range. Each sample is delimited so as to contain the sound emitted during one cycle of the engine plus the overlap necessary to ensure smooth transitions during synthesis. The proposed approach, an extension of the PSOLA method introduced for speech processing, takes advantage of the specific periodicity of engine noise signals to locate the extraction instants of the sound samples. During the synthesis stage, the sound samples corresponding to the target engine speed evolution are concatenated with an overlap-and-add algorithm. It is shown that this method produces high-quality audio rendering with a low computational load and is therefore well suited for real-time applications.
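    The synthesis stage is, at heart, crossfaded concatenation of the pre-extracted cycle samples. A minimal sketch of that overlap-and-add core follows; the hop lengths, crossfade length and function names are invented, and the published method additionally locates the extraction instants pitch-synchronously:

```python
import numpy as np

def overlap_add(samples, hops, fade=256):
    """Concatenate cycle samples with a linear crossfade in the overlap.
    samples : list of 1-D arrays, one per engine cycle (with overlap margin)
    hops    : onset-to-onset distances in samples; shorter hops = higher RPM."""
    out = np.zeros(sum(hops) + len(samples[-1]))
    ramp = np.linspace(0.0, 1.0, fade)
    pos = 0
    for k, s in enumerate(samples):
        s = s.copy()
        if k > 0:
            s[:fade] *= ramp                  # fade in over the overlap
        if k < len(samples) - 1:
            s[-fade:] *= ramp[::-1]           # fade out into the next cycle
        out[pos:pos + len(s)] += s
        if k < len(hops):
            pos += hops[k]
    return out

cycles = [np.sin(2 * np.pi * 80 * np.arange(4096) / 44100)] * 3   # toy cycles
print(overlap_add(cycles, hops=[3600, 3400]).shape)
```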

  4. Integrated sample-to-detection chip for nucleic acid test assays.

    PubMed

    Prakash, R; Pabbaraju, K; Wong, S; Tellier, R; Kaler, K V I S

    2016-06-01

    Nucleic acid based diagnostic techniques are routinely used for the detection of infectious agents. Most of these assays rely on nucleic acid extraction platforms for the extraction and purification of nucleic acids and a separate real-time PCR platform for quantitative nucleic acid amplification tests (NATs). Several microfluidic lab on chip (LOC) technologies have been developed, where mechanical and chemical methods are used for the extraction and purification of nucleic acids. Microfluidic technologies have also been effectively utilized for chip based real-time PCR assays. However, there are few examples of microfluidic systems which have successfully integrated these two key processes. In this study, we have implemented an electro-actuation based LOC micro-device that leverages multi-frequency actuation of sample and reagent droplets for chip based nucleic acid extraction and real-time, reverse transcription (RT) PCR (qRT-PCR) amplification from clinical samples. Our prototype micro-device combines chemical lysis with electric field assisted isolation of nucleic acid in a four-channel parallel processing scheme. Furthermore, a four-channel parallel qRT-PCR amplification and detection assay is integrated to deliver the sample-to-detection NAT chip. The NAT chip combines dielectrophoresis and electrostatic/electrowetting actuation methods with resistive micro-heaters and temperature sensors to perform chip based integrated NATs. The two chip modules have been validated using different panels of clinical samples, and their performance was compared with standard platforms. This study has established that our integrated NAT chip system has a sensitivity and specificity comparable to those of the standard platforms while providing up to a 10-fold reduction in sample/reagent volumes.

  5. DNA extraction for streamlined metagenomics of diverse environmental samples.

    PubMed

    Marotz, Clarisse; Amir, Amnon; Humphrey, Greg; Gaffney, James; Gogul, Grant; Knight, Rob

    2017-06-01

    A major bottleneck for metagenomic sequencing is rapid and efficient DNA extraction. Here, we compare the extraction efficiencies of three magnetic bead-based platforms (KingFisher, epMotion, and Tecan) to a standardized column-based extraction platform across a variety of sample types, including feces, oral, skin, soil, and water. Replicate sample plates were extracted and prepared for 16S rRNA gene amplicon sequencing in parallel to assess extraction bias and DNA quality. The data demonstrate that any effect of extraction method on sequencing results was small compared with the variability across samples; however, the KingFisher platform produced the largest number of high-quality reads in the shortest amount of time. Based on these results, we have identified an extraction pipeline that dramatically reduces sample processing time without sacrificing bacterial taxonomic or abundance information.

  6. Method and apparatus for maintaining multi-component sample gas constituents in vapor phase during sample extraction and cooling

    DOEpatents

    Farthing, William Earl [Pinson, AL; Felix, Larry Gordon [Pelham, AL; Snyder, Todd Robert [Birmingham, AL

    2008-02-12

    An apparatus and method for diluting and cooling sample gas that is extracted from high temperature and/or high pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed along with real-time estimations of the point at which condensation will occur within the dilution cooler to define a level of dilution and a diluted gas temperature that yield a gas, free of condensed hydrocarbon compounds and condensed moisture, that can be conveyed to standard gas analyzers.

  7. Method and apparatus maintaining multi-component sample gas constituents in vapor phase during sample extraction and cooling

    DOEpatents

    Farthing, William Earl; Felix, Larry Gordon; Snyder, Todd Robert

    2009-12-15

    An apparatus and method for diluting and cooling sample gas that is extracted from high temperature and/or high pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed along with real-time estimations of the point at which condensation will occur within the dilution cooler to define a level of dilution and a diluted gas temperature that yield a gas, free of condensed hydrocarbon compounds and condensed moisture, that can be conveyed to standard gas analyzers.

  8. Cost-effective sampling of (137)Cs-derived net soil redistribution: part 2 - estimating the spatial mean change over time.

    PubMed

    Chappell, A; Li, Y; Yu, H Q; Zhang, Y Z; Li, X Y

    2015-06-01

    The caesium-137 ((137)Cs) technique for estimating net, time-integrated soil redistribution by the processes of wind, water and tillage is increasingly being used with repeated sampling to form a baseline for evaluating change over short (years to decades) timeframes. This interest stems from the knowledge that, since the 1950s, soil redistribution has responded dynamically to different phases of land use change and management. Currently, there is no standard approach for detecting change in (137)Cs-derived net soil redistribution and thereby identifying the driving forces responsible for change. We outline recent advances in space-time sampling in the soil monitoring literature, which provide a rigorous statistical and pragmatic approach to estimating the change over time in the spatial mean of environmental properties. We apply the space-time sampling framework, estimate the minimum detectable change of net soil redistribution, and consider the information content and cost implications of different sampling designs for a study area in the Chinese Loess Plateau. Three phases (1954-1996, 1954-2012 and 1996-2012) of net soil erosion were detectable and were attributed to well-documented historical changes in land use and management practices in the study area and across the region. We recommend that the design for space-time sampling be considered carefully, alongside cost-effective use of the spatial mean, to detect and correctly attribute the cause of change over time, particularly across spatial scales of variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
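    The detectability question reduces to the standard error of the difference between the spatial means of two sampling rounds. A minimal sketch of the minimum detectable change for an unpaired design (the standard deviations, significance level and power are invented for illustration):

```python
import numpy as np
from scipy import stats

def min_detectable_change(s1, s2, n, alpha=0.05, power=0.8):
    """Approximate minimum detectable change in the spatial mean between two
    rounds, each sampling n independent locations."""
    se = np.sqrt(s1 ** 2 / n + s2 ** 2 / n)     # SE of the difference of means
    return (stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)) * se

# Hypothetical 137Cs-derived net soil redistribution SDs (t/ha/yr).
for n in (10, 25, 50, 100):
    print(f"n={n:4d}  MDC={min_detectable_change(8.0, 8.0, n):.2f} t/ha/yr")
```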

  9. Study on extrusion process of SiC ceramic matrix

    NASA Astrophysics Data System (ADS)

    Dai, Xiao-Yuan; Shen, Fan; Ji, Jia-You; Wang, Shu-Ling; Xu, Man

    2017-11-01

    In this work, the extrusion process of a SiC ceramic matrix was systematically studied. The effect of different cellulose contents on the flexural strength and pore size distribution of the SiC matrix was discussed. Results show that the flexural strength decreased with increasing cellulose content. The pore sizes in the samples were distributed between 1 μm and 4 μm, with a concentration in the 1-2 μm range, and the cellulose content was found to have little effect on the pore size distribution. When the cellulose content is 7%, the flexural strength of the sample is 40.9 MPa; at this content, the mechanical properties of the sample are the strongest.

  10. Standardization and application of real-time polymerase chain reaction for rapid detection of bluetongue virus.

    PubMed

    Lakshmi, I Karthika; Putty, Kalyani; Raut, Satya Samparna; Patil, Sunil R; Rao, P P; Bhagyalakshmi, B; Jyothi, Y Krishna; Susmitha, B; Reddy, Y Vishnuvardhan; Kasulanati, Sowmya; Jyothi, J Shiva; Reddy, Y N

    2018-04-01

    The present study was designed to standardize real-time polymerase chain reaction (PCR) for detecting bluetongue virus (BTV) in blood samples of sheep collected during outbreaks of bluetongue disease in 2014 in the Andhra Pradesh and Telangana states of India. A 10-fold serial dilution of plasmid PUC59 with a BTV NS3 insert was used to plot the standard curve. BHK-21 and KC cells were used for in vitro propagation of virus BTV-9 at a titre of 10^5 TCID50/ml, and RNA was isolated by the Trizol method. Both reverse transcription PCR (RT-PCR) and real-time PCR using a TaqMan probe were carried out with RNA extracted from virus-spiked culture medium and blood to compare their sensitivity in terms of the limit of detection (LoD). The results were verified by inoculating the detected and undetected dilutions onto cell cultures with further cytological (cytopathic effect) and molecular confirmation (by BTV-NS1 group-specific PCR). The standardized technique was then applied to field samples (blood) for detecting BTV. The slope of the standard curve obtained was -3.23, and the efficiency was 103%. The LoD with RT-PCR was 8.269 x 10^3 plasmid copies, versus 13 copies with real-time PCR for the plasmid dilutions. Similarly, the LoD determined for virus-spiked culture medium and blood was 10^3 and 10^4 TCID50/ml with RT-PCR, and 10^0 and 10^2 TCID50/ml with real-time PCR, respectively. The standardized technique was applied to blood samples collected from BTV-suspected animals; 10 of 20 samples were found positive, with Cq values ranging from 27 to 39. The Cq-positive samples were further processed in cell cultures and confirmed to be BTV positive; likewise, the Cq-undetected samples turned out to be BTV negative on processing in cell cultures. Real-time PCR was found to be a very sensitive and reliable method for detecting BTV in different types of samples, including blood from BTV-infected sheep, compared to RT-PCR. The LoD for BTV is likely influenced by the sample type, possibly through interference from other components present in the sample.
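    The reported efficiency follows directly from the standard-curve slope via E = 10^(-1/slope) - 1; a perfect doubling per cycle corresponds to a slope of about -3.32. Checking the numbers quoted above:

```python
# Amplification efficiency from the qPCR standard-curve slope.
slope = -3.23
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"efficiency = {efficiency:.1%}")   # ~103.9%, matching the reported 103%
```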

  11. Glass transition dynamics of stacked thin polymer films

    NASA Astrophysics Data System (ADS)

    Fukao, Koji; Terasawa, Takehide; Oda, Yuto; Nakamura, Kenji; Tahara, Daisuke

    2011-10-01

    The glass transition dynamics of stacked thin films of polystyrene and poly(2-chlorostyrene) were investigated using differential scanning calorimetry and dielectric relaxation spectroscopy. The glass transition temperature Tg of as-stacked thin polystyrene films is strongly depressed relative to that of bulk samples. However, after annealing at temperatures above Tg, the stacked thin films exhibit a glass transition at a temperature almost equal to the bulk Tg. The α-process dynamics of stacked thin films of poly(2-chlorostyrene) evolve from single-thin-film-like to bulk-like during isothermal annealing. The relaxation rate of the α process decreases with increasing annealing time, and the time scale for this evolution is very long compared with that of the reptation dynamics. At the same time, the temperature dependence of the α-relaxation time changes from Arrhenius-like to Vogel-Fulcher-Tammann behaviour with increasing annealing time. The fragility index increases and the distribution of α-relaxation times narrows with increasing annealing time for isothermal annealing. The observed change in the α process is discussed with respect to the interfacial interaction between the thin layers of stacked thin polymer films.
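    The reported crossover from Arrhenius-like to Vogel-Fulcher-Tammann (VFT) behaviour concerns the temperature dependence of the α-relaxation time. A sketch evaluating the two laws side by side (all parameter values are invented, chosen only to give comparable time scales near 400 K):

```python
import numpy as np

def tau_arrhenius(T, tau0, Ea_over_k):
    """Arrhenius law: log(tau) is linear in 1/T."""
    return tau0 * np.exp(Ea_over_k / T)

def tau_vft(T, tau0, B, T0):
    """Vogel-Fulcher-Tammann law: tau diverges as T approaches T0 < Tg."""
    return tau0 * np.exp(B / (T - T0))

T = np.linspace(390.0, 420.0, 4)   # K, hypothetical range near Tg
print("Arrhenius:", tau_arrhenius(T, tau0=1e-14, Ea_over_k=1.47e4))
print("VFT      :", tau_vft(T, tau0=1e-12, B=2230.0, T0=330.0))
```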

  12. Fabric phase sorptive extraction followed by UHPLC-MS/MS for the analysis of benzotriazole UV stabilizers in sewage samples.

    PubMed

    Montesdeoca-Esponda, Sarah; Sosa-Ferrera, Zoraida; Kabir, Abuzar; Furton, Kenneth G; Santana-Rodríguez, José Juan

    2015-10-01

    A fast and sensitive sample preparation strategy using fabric phase sorptive extraction followed by ultra-high-performance liquid chromatography and tandem mass spectrometry detection has been developed to analyse benzotriazole UV stabilizer compounds in aqueous samples. Benzotriazole UV stabilizers are a group of compounds added to sunscreens and other personal care products that may have detrimental effects on aquatic ecosystems. Fabric phase sorptive extraction is a novel solvent-minimized sample preparation approach that integrates the advantages of sol-gel derived hybrid inorganic-organic nanocomposite sorbents with the flexible, permeable and hydrophobic surface chemistry of polyester fabric. It is a highly sensitive, fast, efficient and inexpensive device that can be reused and does not suffer from coating damage, unlike SPME fibres or stir bars. In this paper, we optimized the extraction of seven benzotriazole UV filters, evaluating the main parameters involved in the extraction process, such as sorbent chemistry, extraction time, back-extraction solvent, back-extraction time and the influence of ionic strength. Under the optimized conditions, fabric phase sorptive extraction allows enrichment factors of 10, with detection limits ranging from 6.01 to 60.7 ng L(-1) and intra- and inter-day RSDs lower than 11% and 30%, respectively, for all compounds. The optimized sample preparation technique followed by ultra-high-performance liquid chromatography and tandem mass spectrometry detection was applied to determine the target analytes in sewage samples from wastewater treatment plants with different purification processes on Gran Canaria Island (Spain). Two UV stabilizer compounds were measured, in the ranges 17.0-60.5 ng mL(-1) (UV 328) and 69.3-99.2 ng mL(-1) (UV 360), in the three sewage water samples analysed.

  13. Assessment of powder blend uniformity: Comparison of real-time NIR blend monitoring with stratified sampling in combination with HPLC and at-line NIR Chemical Imaging.

    PubMed

    Bakri, Barbara; Weimer, Marco; Hauck, Gerrit; Reich, Gabriele

    2015-11-01

    The scope of the study was (1) to develop a lean quantitative calibration for real-time near-infrared (NIR) blend monitoring that meets the requirements of early pharmaceutical product development, and (2) to compare the prediction performance of this approach with results obtained from stratified sampling with a sample thief combined with off-line high-pressure liquid chromatography (HPLC) and at-line near-infrared chemical imaging (NIRCI). Tablets were manufactured from the powder blends and analyzed with NIRCI and HPLC to verify the real-time results. The model formulation contained 25% w/w naproxen as a cohesive active pharmaceutical ingredient (API), microcrystalline cellulose and croscarmellose sodium as cohesive excipients, and free-flowing mannitol. Five in-line NIR calibration approaches, all using spectra from the end of the blending process as reference for PLS modeling, were compared in terms of selectivity, precision, prediction accuracy and robustness. High selectivity could be achieved with a "reduced", API- and time-saving approach (35% reduction of the API amount) based on six concentration levels of the API, with three levels realized by three independent powder blends and the additional levels obtained by simply increasing the API concentration in these blends. Accuracy and robustness were further improved by combining this calibration set with a second independent data set comprising different excipient concentrations and reflecting different environmental conditions. The combined calibration model was used to monitor the blending process of independent batches. For this model formulation, the target concentration of the API could be achieved within 3 min, indicating a short blending time. The in-line NIR approach was verified by stratified sampling HPLC and NIRCI results. All three methods revealed comparable results regarding blend end-point determination. Differences in both mean API concentration and RSD values could be attributed to differences in effective sample size and thief sampling errors. This conclusion was supported by HPLC and NIRCI analysis of tablets manufactured from powder blends after different blending times. In summary, the study clearly demonstrates the ability to develop efficient and robust quantitative calibrations for real-time NIR powder blend monitoring with a reduced set of powder blends while avoiding any bias caused by physical sampling. Copyright © 2015 Elsevier B.V. All rights reserved.
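    At its core, such a calibration is a PLS regression from NIR spectra to API concentration, assessed by cross-validation. A self-contained sketch with simulated spectra (all data and dimensions are invented; a real calibration would regress measured spectra against reference values):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 60, 400
conc = rng.uniform(15.0, 35.0, n_samples)       # API concentration, % w/w
pure = rng.normal(size=n_wavelengths)           # surrogate API spectral signature
spectra = np.outer(conc, pure) + rng.normal(scale=5.0,
                                            size=(n_samples, n_wavelengths))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, conc, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - conc) ** 2))
print(f"RMSECV = {rmsecv:.2f} % w/w")
```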

  14. Improvement of Electropolishing of 1100 Al Alloy for Solar Thermal Applications

    NASA Astrophysics Data System (ADS)

    Aguilar-Sierra, Sara María; Echeverría E, Félix

    2018-03-01

    Aluminum sheet-based mirrors are finding application in high-temperature solar concentrating technologies because they are cost-effective, lightweight and mechanically robust. Nonetheless, the reflectance obtained by electropolishing does not reach the values of the evaporated films currently in use, so controlling the key factors affecting the electropolishing process is essential for achieving highly reflective aluminum surfaces. This study investigated the effect of both the electropolishing process and a prior heat treatment on the total reflectance of AA 1100 aluminum alloy. An acid electrolyte and a modified Brytal process were evaluated, and total reflectance was measured by means of UV-Vis spectrophotometry. Reflectance values higher than 80% at 600 nm were achieved with both electrolytes. Optical microscopy and scanning electron microscopy images showed uneven dissolution in the acid-electropolished samples, causing a reflectance drop in the 200-450 nm region. The influence of heat treatment prior to electropolishing was tested at two temperatures and various holding times; reflectance increased by around 15% for heat-treated and electropolished samples compared with non-heat-treated ones. A heat treatment at low temperature combined with a short holding time was enough to improve the total reflectance of the samples.

  15. Handling Heavenly Jewels - 35 Years of Antarctic Meteorite Processing at Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Satterwhite, C. E.; McBridge, K. M.; Harrington, R.; Schwarz, C. M.

    2011-01-01

    The ANSMET program began in 1976, and since that time more than 18,000 meteorites have been processed in the Meteorite Processing Lab at Johnson Space Center in Houston, TX [1]. The meteorites are collected and returned to JSC on a freezer truck and remain frozen until they are initially processed. Initial Processing of Meteorites: Initial processing involves drying the meteorites in a nitrogen glove box for 24 to 48 hours, then photographing, measuring, weighing and describing the interior and exterior. The meteorite is broken and a representative sample is sent to the Smithsonian Institution for classification. Newsletter & Requests: Once initial processing is complete and the meteorites have been classified, the information is published in the Antarctic Meteorite Newsletter [2,3]. The newsletter is published twice yearly, is sent electronically to researchers around the world, and is also available online. Researchers are asked to fill out a request form and submit it to the Meteorite Working Group secretary. All sample requests are reviewed by either the meteorite curator or the Meteorite Working Group, depending on the type of meteorite and the research being conducted. Processing for Sample Requests: In the meteorite processing lab, meteorite samples are prepared in several different ways. Most samples are prepared as chips obtained with stainless steel chisels in a chipping bowl or with a rock splitter. In special situations where a researcher needs a slab, the meteorite can be bandsawed in a dry nitrogen glove box with a diamond blade; no liquids are ever introduced into the cabinet. The last type of sample preparation is thin/thick sections. The meteorite thin section lab at JSC can prepare standard 30-micron thin sections, thick sections of variable thickness (100 to 200 microns), or demountable sections using superglue. Information for researchers: It is important that researchers fill in the sample request form completely, in order to make sure the meteorite is processed correctly [4]. Researchers should list any special requirements on the form, i.e., packaging of samples (poly vs. stainless), thick sections and thickness needed, superglue needed, interior chips, exterior chips, fusion crust, contamination issues; all concerns should be listed so processing can be done accurately and any concerns the researcher has can be addressed before the meteorites are broken.

  16. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

    Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has had to be performed off-line because of its cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced considerably, since any defect in a test can be corrected instantaneously.
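    For orientation, the conventional beamformer that the observer-based algorithm accelerates evaluates, for each candidate source position, a quadratic form of the cross-spectral matrix with a steering vector. A minimal frequency-domain sketch (the two-microphone geometry, grid and identity CSM below are placeholders, not the paper's setup):

```python
import numpy as np

def conventional_beamform(csm, mic_xyz, grid_xyz, freq, c=343.0):
    """Delay-and-sum beamforming in the frequency domain.
    csm      : (M, M) cross-spectral matrix at frequency `freq`
    mic_xyz  : (M, 3) microphone positions
    grid_xyz : (G, 3) candidate source positions."""
    k = 2.0 * np.pi * freq / c
    r = np.linalg.norm(grid_xyz[:, None, :] - mic_xyz[None, :, :], axis=-1)
    steer = np.exp(-1j * k * r) / r                      # monopole model
    steer /= np.linalg.norm(steer, axis=1, keepdims=True)
    # beamformer output power v^H C v for each grid point
    return np.real(np.einsum("gm,mn,gn->g", steer.conj(), csm, steer))

mics = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
grid = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0]])
print(conventional_beamform(np.eye(2, dtype=complex), mics, grid, freq=2000.0))
```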

  17. Image processing for identification and quantification of filamentous bacteria in in situ acquired images.

    PubMed

    Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo

    2016-06-11

    In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human error and subjectivity, and are limited by the use of discontinuous microscopy. The in situ microscope appears to be a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination and automated analysis while eliminating sampling, preparation and transport of samples. In this context, an image processing algorithm is proposed for automated recognition and measurement of filamentous objects. This work introduces a method for real-time evaluation of images without any staining, phase-contrast or dilution techniques, in contrast to studies in the literature. Moreover, we introduce an algorithm that estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were collected weekly and imaged without any prior conditioning, replicating real environment conditions. Trends in filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72% of the filament pixels, with a false positive rate of at most 14%. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provides a suitable quantification of filaments when compared with human perception and standard methods, and its average execution time proves its suitability for being mapped onto a computational architecture for real-time monitoring.
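    A crude stand-in for the length measurement is to skeletonize the segmented filaments and count skeleton pixels; the published algorithm refines this idea with geodesic distances, which cope better with branching and curvature. A sketch with scikit-image (the mask and pixel size are invented):

```python
import numpy as np
from skimage.morphology import skeletonize

def filament_length_proxy(mask, pixel_size_um=1.0):
    """Rough total extended filament length: one skeleton pixel is taken
    as one pixel-length of filament."""
    return skeletonize(mask.astype(bool)).sum() * pixel_size_um

mask = np.zeros((256, 256), dtype=bool)
mask[100:103, 20:220] = True          # a 3-px-thick, 200-px-long filament
print(filament_length_proxy(mask, pixel_size_um=0.5), "um")
```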

  18. High-resolution correlation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.

    2007-09-01

    In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time lag for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed, with some success, to estimate signal delay to better accuracy than the sample interval of the digitizer clock. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beamforming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which yields the same correlation accuracy as is obtained for complex-valued signals.
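    The principle of reading a sub-sample delay out of phase information can be illustrated with a simpler relative of the proposed method: for a pure delay, the cross-spectrum phase is linear in frequency, and its slope gives the delay with sub-sample resolution. The sketch below is not the paper's PBDEM, only the underlying idea, demonstrated on synthetic data:

```python
import numpy as np

def phase_delay(x, y, band=0.4):
    """Delay of y relative to x from the cross-spectrum phase slope,
    using bins up to `band` of the usable spectrum."""
    n = len(x)
    cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
    k = np.arange(1, int(band * len(cross)))        # skip DC, avoid top end
    phase = np.unwrap(np.angle(cross[k]))
    slope = np.polyfit(k, phase, 1)[0]              # radians per FFT bin
    return slope * n / (2.0 * np.pi)                # delay in samples

rng = np.random.default_rng(3)
n = 4096
spec = np.fft.rfft(rng.normal(size=n))
x = np.fft.irfft(spec, n)
shift = np.exp(-2j * np.pi * np.arange(len(spec)) * 5.3 / n)  # 5.3-sample delay
y = np.fft.irfft(spec * shift, n)
print(round(phase_delay(x, y), 2))   # ~5.3
```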

  19. Microbial community structure in fermentation process of Shaoxing rice wine by Illumina-based metagenomic sequencing.

    PubMed

    Xie, Guangfa; Wang, Lan; Gao, Qikang; Yu, Wenjing; Hong, Xutao; Zhao, Lingyun; Zou, Huijun

    2013-09-01

    To understand the role of the community structure of microbes in the environment in the fermentation of Shaoxing rice wine, samples collected from a wine factory were subjected to Illumina-based metagenomic sequencing. De novo assembly of the sequencing reads allowed the characterisation of more than 23 thousand microbial genes derived from 1.7 and 1.88 Gbp of sequences from two samples fermented for 5 and 30 days respectively. The microbial community structure at different fermentation times of Shaoxing rice wine was revealed, showing the different roles of the microbiota in the fermentation process of Shaoxing rice wine. The gene function of both samples was also studied in the COG database, with most genes belonging to category S (function unknown), category E (amino acid transport and metabolism) and unclassified group. The results show that both the microbial community structure and gene function composition change greatly at different time points of Shaoxing rice wine fermentation. © 2013 Society of Chemical Industry.

  20. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    NASA Astrophysics Data System (ADS)

    Rahman, Nur Aira Abd; Yussup, Nolida; Salim, Nazaratul Ashifa Bt. Abdullah; Ibrahim, Maslina Bt. Mohd; Mokhtar, Mukhlis B.; Soh@Shaari, Syirrazie Bin Che; Azman, Azraf B.; Ismail, Nadiah Binti

    2015-04-01

    Neutron Activation Analysis (NAA) has been established in Nuclear Malaysia since the 1980s. Most of the established procedures were carried out manually, including sample registration: samples were recorded manually in a logbook and given an ID number, and all samples, standards, SRMs and blanks were then recorded on the irradiation vial and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time-consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project `Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)'. The objective of the project is to create PC-based data entry software for the sample preparation stage, an effective way to replace the redundant manual data entries that had to be completed by laboratory personnel. The software automatically generates a sample code for each sample in a batch, creates printable registration forms for administrative purposes, and stores selected parameters that are passed to the sample analysis program. The software was developed using National Instruments LabVIEW 8.6.
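    The batch sample-code generation is plain bookkeeping, so a few lines suffice to convey the idea. This is a hypothetical Python sketch (the real software is written in LabVIEW, and the code layout below is invented purely for illustration):

```python
from datetime import date

def make_sample_codes(batch_no, items):
    """One registration code per sample in a batch: samples (S),
    standards (STD), SRMs and blanks (BLK) all get sequential codes."""
    year = date.today().strftime("%y")
    return [
        f"NAA{year}-B{batch_no:03d}-{stype}{seq:03d}-{client}"
        for seq, (stype, client) in enumerate(items, start=1)
    ]

batch = [("S", "GEO01"), ("S", "GEO01"), ("STD", "IAEA"), ("BLK", "LAB")]
for code in make_sample_codes(7, batch):
    print(code)
```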

  1. Data Validation Package May 2016 Groundwater Sampling at the Lakeview, Oregon, Processing Site August 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linard, Joshua; Hall, Steve

    2016-08-01

    This biennial event includes sampling five groundwater locations (four monitoring wells and one domestic well) at the Lakeview, Oregon, Processing Site. For this event, the domestic well (location 0543) could not be sampled because no one was in residence during the sampling event (note: notification was provided to the resident prior to the event). Per Appendix A of the Groundwater Compliance Action Plan, sampling is conducted to monitor groundwater quality on a voluntary basis. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). One duplicate sample was collected from location 0505. Water levels were measured at each sampled monitoring well. The constituents monitored at the Lakeview site are manganese and sulfate. Monitoring locations that exceeded the U.S. Environmental Protection Agency (EPA) Secondary Maximum Contaminant Levels for these constituents are listed in Table 1. Review of the time-concentration graphs included in this report indicates that manganese and sulfate concentrations are consistent with historical measurements.

  2. The combined positive impact of Lean methodology and Ventana Symphony autostainer on histology lab workflow

    PubMed Central

    2010-01-01

    Background Histologic samples all funnel through the H&E microtomy staining area. Here manual processes intersect with semi-automated processes creating a bottleneck. We compare alternate work processes in anatomic pathology primarily in the H&E staining work cell. Methods We established a baseline measure of H&E process impact on personnel, information management and sample flow from historical workload and production data and direct observation. We compared this to performance after implementing initial Lean process modifications, including workstation reorganization, equipment relocation and workflow levelling, and the Ventana Symphony stainer to assess the impact on productivity in the H&E staining work cell. Results Average time from gross station to assembled case decreased by 2.9 hours (12%). Total process turnaround time (TAT) exclusive of processor schedule changes decreased 48 minutes/case (4%). Mean quarterly productivity increased 8.5% with the new methods. Process redesign reduced the number of manual steps from 219 to 182, a 17% reduction. Specimen travel distance was reduced from 773 ft/case to 395 ft/case (49%) overall, and from 92 to 53 ft/case in the H&E cell (42% improvement). Conclusions Implementation of Lean methods in the H&E work cell of histology can result in improved productivity, improved through-put and case availability parameters including TAT. PMID:20181123

  3. Ion-induced particle desorption in time-of-flight medium energy ion scattering

    NASA Astrophysics Data System (ADS)

    Lohmann, S.; Primetzhofer, D.

    2018-05-01

    Secondary ions emitted from solids upon ion impact are studied in a time-of-flight medium energy ion scattering (ToF-MEIS) set-up. In order to investigate characteristics of the emission processes and to evaluate the potential for surface and thin film analysis, experiments employing TiN and Al samples were conducted. The ejected ions exhibit a low initial kinetic energy of a few eV, thus requiring a sufficiently high acceleration voltage for detection. Molecular and atomic ions of different charge states, originating both from surface contaminations and from the sample material, are found, and relative yields of several species were determined. Experimental evidence pointing towards a predominantly electronic sputtering process is presented. For emitted Ti target atoms, an additional nuclear sputtering component is suggested.

  4. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm markedly reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence to the global minimum of the problem considered is established.

  5. On the use of secondary capture-recapture samples to estimate temporary emigration and breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.; North, P.M.; Nichols, J.D.

    1995-01-01

    The use of the Cormack-Jolly-Seber model under a standard sampling scheme of one sample per time period, when the Jolly-Seber assumption that all emigration is permanent does not hold, leads to the confounding of temporary emigration probabilities with capture probabilities. This biases the estimates of capture probability when temporary emigration is a completely random process, and of both capture and survival probabilities when there is a temporary trap response in temporary emigration, or it is Markovian. The use of secondary capture samples over a shorter interval within each period, during which the population is assumed to be closed (Pollock's robust design), provides a second source of information on capture probabilities. This solves the confounding problem, and thus temporary emigration probabilities can be estimated. This process can be accomplished in an ad hoc fashion for completely random temporary emigration and to some extent in the temporary trap response case, but modelling the complete sampling process provides more flexibility and permits direct estimation of variances. For the case of Markovian temporary emigration, a full likelihood is required.

  6. Critical time scales for advection-diffusion-reaction processes.

    PubMed

    Ellery, Adam J; Simpson, Matthew J; McCue, Scott W; Baker, Ruth E

    2012-04-01

    The concept of local accumulation time (LAT) was introduced by Berezhkovskii and co-workers to give a finite measure of the time required for the transient solution of a reaction-diffusion equation to approach the steady-state solution [A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Biophys. J. 99, L59 (2010); A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Phys. Rev. E 83, 051906 (2011)]. Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb [A. McNabb and G. C. Wake, IMA J. Appl. Math. 47, 193 (1991)]. Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection-diffusion-reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic time scale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (pde). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing pde directly.

  7. Flexible automated approach for quantitative liquid handling of complex biological samples.

    PubMed

    Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H

    2007-11-01

    A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.

  8. Influenza A Virus Isolation, Culture and Identification

    PubMed Central

    Eisfeld, Amie J.; Neumann, Gabriele; Kawaoka, Yoshihiro

    2017-01-01

    Influenza A viruses (IAV) cause epidemics and pandemics that result in considerable financial burden and loss of human life. To manage annual IAV epidemics and prepare for future pandemics, improved understanding of how IAVs emerge, transmit, cause disease, and acquire pandemic potential is urgently needed. Fundamental techniques essential for procuring such knowledge are IAV isolation and culture from experimental and surveillance samples. Here, we present a detailed protocol for IAV sample collection and processing, amplification in chicken eggs and mammalian cells, and identification from samples containing unknown pathogens. This protocol is robust, and allows for generation of virus cultures that can be used for downstream analyses. Once experimental or surveillance samples are obtained, virus cultures can be generated and the presence of IAV can be verified in 3–5 days. Increased time-frames may be required for less experienced laboratory personnel, or when large numbers of samples will be processed. PMID:25321410

  9. Processing temperature and moisture content effects on the texture and microscopic appearance of cooked fowl meat gels.

    PubMed

    Voller, L M; Dawson, P L; Han, I Y

    1996-12-01

    New aseptic processes are being used and refined to produce convenient, shelf-stable liquid products containing meat particles. These processes use high-temperature, short-time thermal treatments to minimize food quality change; however, little research has been conducted on the effects of this process on the texture of meat from mature hens traditionally used for canning. The objective of this study was to examine textural and structural changes in meat structure due to different high-temperature (HT) heat treatments and meat moisture contents, using electron microscopy and torsion analyses. Cooked gels of different moisture contents (71.2 to 74.8%) were formulated from spent fowl breast meat and exposed to processing temperatures of 120 or 124 C. The HT processing resulted in stronger (tougher) meat gels that were more deformable (chewier) than gels not processed at HT. Water added prior to cooking was not retained in samples that were cooked and then processed at 124 C, but was retained in the samples processed at 120 C. Electron micrographs showed a more organized and open gel structure in the samples with higher moisture content and lower processing temperature (120 C) compared with the lower-moisture, higher-temperature (124 C) treatments.

  10. Challenges in creating an opt-in biobank with a registrar-based consent process and a commercial EHR

    PubMed Central

    Corsmo, Jeremy; Barnes, Michael G; Pollick, Carrie; Chalfin, Jamie; Nix, Jeremy; Smith, Christopher; Ganta, Rajesh

    2012-01-01

    Residual clinical samples represent a very appealing source of biomaterial for translational and clinical research. We describe the implementation of an opt-in biobank, with consent being obtained at the time of registration and the decision stored in our electronic health record, Epic. Information on that decision, along with laboratory data, is transferred to an application that signals to biobank staff whether a given sample can be kept for research. Investigators can search for samples using our i2b2 data warehouse. Patient participation has been overwhelmingly positive and much higher than anticipated. Over 86% of patients provided consent and almost 83% requested to be notified of any incidental research findings. In 6 months, we obtained decisions from over 18 000 patients and processed 8000 blood samples for storage in our research biobank. However, commercial electronic health records like Epic lack key functionality required by a registrar-based consent process, although workarounds exist. PMID:22878682

  11. ALMA Correlator Real-Time Data Processor

    NASA Astrophysics Data System (ADS)

    Pisano, J.; Amestica, R.; Perez, J.

    2005-10-01

    The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
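    The numerical heart of such a correlator is the FX operation: channelise both digitised streams with an FFT, multiply one spectrum by the conjugate of the other, and accumulate. A toy single-baseline sketch follows (the real system wraps this core with the quantization corrections, windowing and phase corrections described above):

```python
import numpy as np

def fx_correlate(x, y, nchan=1024):
    """Accumulated cross-power spectrum of two real streams (one baseline)."""
    nblocks = len(x) // nchan
    acc = np.zeros(nchan // 2 + 1, dtype=complex)
    for b in range(nblocks):
        seg = slice(b * nchan, (b + 1) * nchan)
        acc += np.fft.rfft(x[seg]) * np.conj(np.fft.rfft(y[seg]))
    return acc / nblocks

rng = np.random.default_rng(4)
common = rng.normal(size=1 << 16)                 # shared "sky" signal (toy)
x = common + 0.5 * rng.normal(size=common.size)
y = np.roll(common, 3) + 0.5 * rng.normal(size=common.size)
print(fx_correlate(x, y).shape)                   # (513,) frequency channels
```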

  12. Assessment of microbiological contamination of fresh, minimally processed, and ready-to-eat lettuces (Lactuca sativa), Rio de Janeiro State, Brazil.

    PubMed

    Brandão, Marcelo L L; Almeida, Davi O; Bispo, Fernanda C P; Bricio, Silvia M L; Marin, Victor A; Miagostovich, Marize P

    2014-05-01

    This study aimed to assess the microbiological contamination of lettuces commercialized in Rio de Janeiro, Brazil, investigating the detection of norovirus genogroup II (NoV GII), Salmonella spp., and total and fecal coliforms, including Escherichia coli. For NoV detection, samples were processed using the adsorption-elution concentration method associated with real-time quantitative polymerase chain reaction (qPCR). A total of 90 samples of lettuce, comprising 30 whole fresh lettuces, 30 minimally processed (MP) lettuces and 30 raw ready-to-eat (RTE) lettuce salads, were randomly collected from different supermarkets (fresh and MP lettuce samples), food services and self-service restaurants (RTE lettuce salads), all located in Rio de Janeiro, Brazil, from October 2010 to December 2011. NoV GII was not detected, and the PP7 bacteriophage used as internal control process (ICP) was recovered in 40.0%, 86.7% and 76.7% of those samples, respectively. Salmonella spp. was not detected, although fecal contamination was observed, with fecal coliform concentrations higher than 10^2 most probable number/g. E. coli was detected in 70.0%, 6.7% and 30.0% of fresh, MP and RTE samples, respectively. This study highlights the need to improve hygiene procedures at all stages of vegetable production, and shows that the PP7 bacteriophage can serve as an ICP for methods recovering RNA viruses from MP and RTE lettuce samples, encouraging the evaluation of new protocols that facilitate the establishment of methodologies for NoV detection in a greater number of food microbiology laboratories. © 2014 Institute of Food Technologists®

  13. Cooperative processing in primary somatosensory cortex and posterior parietal cortex during tactile working memory.

    PubMed

    Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di

    2015-08-01

    In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Planetary protection, legal ambiguity and the decision making process for Mars sample return

    NASA Technical Reports Server (NTRS)

    Race, M. S.

    1996-01-01

    As scientists and mission planners develop planetary protection requirements for future Mars sample return missions, they must recognize the socio-political context in which decisions about the mission will be made and pay careful attention to public concerns about potential back contamination of Earth. To the extent that planetary protection questions are unresolved or unaddressed at the time of an actual mission, they offer convenient footholds for public challenges in both the legal and decision-making realms, over which NASA will have little direct control. In this paper, two particular non-scientific areas of special concern are discussed in detail: 1) legal issues and 2) the decision-making process. Understanding these areas is critical for addressing legitimate public concerns as well as for fulfilling procedural requirements, regardless of whether sample return evokes public controversy. Legal issues with the potential to complicate future missions include: procedural review under the National Environmental Policy Act (NEPA); uncertainty about institutional control and authority; conflicting regulations and overlapping jurisdictions; questions about international treaty obligations and large-scale impacts; uncertainties about the nature of the organism; and constitutional and regulatory concerns about quarantine, public health and safety. In light of these important legal issues, it is critical that NASA consider the role and timing of public involvement in the decision-making process as a way of anticipating problem areas and preparing for legitimate public questions and challenges to sample return missions.

  15. Effect of nitrogen fertilisation on the overall quality of minimally processed globe artichoke heads.

    PubMed

    Lombardo, Sara; Restuccia, Cristina; Muratore, Giuseppe; Barbagallo, Riccardo N; Licciardello, Fabio; Pandino, Gaetano; Scifò, Giovanna O; Mazzaglia, Agata; Ragonese, Francesca; Mauromicale, Giovanni

    2017-01-01

    Although nitrogen (N) fertilisation is essential for promoting crop yield, it may also affect produce quality. Here, the influence of three N fertiliser rates (0, 200 and 400 kg ha(-1), referred to as N0, N200 and N400, respectively, with N0 as the control) on the overall quality of minimally processed globe artichoke heads was investigated during refrigerated storage for 12 days. Throughout the storage time, N-fertilised samples had higher inulin contents than unfertilised ones. In addition, the respiratory quotient of the N200 and N400 samples was 2-fold and 2.5-fold lower, respectively, than that of the N0 samples, whose values were close to the normal range for vegetables. All samples met good microbiological standards, although N200 and N400 showed lower mesophilic and psychrotrophic counts than N0 throughout the storage time. After 8 and 12 days of refrigerated storage, the N200 samples showed the highest scores for positive sensory descriptors. A fertiliser level of 200 kg N ha(-1) is suitable for obtaining minimally processed globe artichoke heads with good nutritional, sensory and microbiological quality, characterised by low endogenous oxidase activities. Proper packaging systems and procedures are, however, crucial for extending the product shelf-life and thus promoting its exportation on a wider scale. © 2016 Society of Chemical Industry.

  16. Phosphorus as sintering activator in powder metallurgical steels: characterization of the distribution and its technological impact.

    PubMed

    Krecar, Dragan; Vassileva, Vassilka; Danninger, Herbert; Hutter, Herbert

    2004-06-01

    Powder metallurgy is a highly developed method for manufacturing reliable ferrous parts. The main processing steps in a powder metallurgical line are pressing and sintering. Sintering can be strongly enhanced by the formation of a liquid phase during the sintering process when phosphorus is used as a sintering activator. In this work, the distribution and effect of phosphorus were investigated by means of secondary ion mass spectrometry (SIMS), supported by Auger electron spectroscopy (AES) and electron probe microanalysis (EPMA). To verify the influence of the process conditions (phosphorus content, sintering atmosphere, time) on the mechanical properties, additional measurements of the microstructure (pore shape) and of impact energy were performed, and fracture surfaces were analysed by scanning electron microscopy (SEM). The phosphorus concentration in the samples ranges from 0 to 1% (w/w). Samples with higher phosphorus concentrations (1% (w/w) and above) are also measurable by EPMA, whereas the distribution of P at technically relevant concentrations, and the distribution of possible impurities, are only detectable by means of SIMS. The influence of the sintering time on the phosphorus distribution is demonstrated. In addition, the grain boundary segregation of P was measured by AES on the surface of in situ broken samples. It is shown that the distribution of phosphorus also depends on the carbon concentration in the samples.

  17. A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming

    PubMed Central

    Chen, Chun-Chi; Lin, Shih-Hao

    2013-01-01

    This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm2 in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ± 0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403
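
    The set-point comparison with built-in hysteresis is straightforward to model at the behavioral level. The sketch below is a software analogue, not the chip's circuit: temperature is assumed to be encoded as an oscillator period count, and the alarm toggles only outside a hysteresis window around the programmed set-point (all numbers are illustrative).

        def thermostat_alarm(period_counts, setpoint_count, hysteresis=3):
            """Behavioral sketch of a time-domain thermostat: a temperature-
            sensitive oscillator's period (in reference-clock counts) is
            compared with a digitally programmed set-point count; hysteresis
            prevents chattering near the trip point."""
            alarm = False
            for c in period_counts:
                if not alarm and c >= setpoint_count + hysteresis:
                    alarm = True      # temperature rose past the set-point
                elif alarm and c <= setpoint_count - hysteresis:
                    alarm = False     # cooled back below the window
                yield alarm

        print(list(thermostat_alarm([100, 104, 108, 106, 99, 96],
                                    setpoint_count=105)))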

  18. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il

    A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that rely neither on polynomial interpolations of the light curves nor on their stochastic modeling, and that do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
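
    The optimized von Neumann scheme admits a compact implementation. Below is a minimal, generic sketch (the function name, flux normalization and trial-lag grid are our own choices, not the authors' code): for each trial lag the two light curves are merged into a single time-ordered series, and the lag that minimizes the mean-square successive difference of that series is returned.

        import numpy as np

        def von_neumann_lag(t1, f1, t2, f2, lags):
            """Time-delay estimate minimizing von Neumann's mean-square
            successive-difference statistic of the merged light curve."""
            f1 = (f1 - f1.mean()) / f1.std()      # put both curves on a
            f2 = (f2 - f2.mean()) / f2.std()      # comparable flux scale
            V = []
            for tau in lags:
                t = np.concatenate([t1, t2 - tau])     # undo the trial delay
                order = np.argsort(t)
                f = np.concatenate([f1, f2])[order]    # time-ordered series
                V.append(np.mean(np.diff(f) ** 2))     # von Neumann estimator
            return lags[int(np.argmin(V))]

        # usage with hypothetical light curves tA, fA and tB, fB:
        # lag = von_neumann_lag(tA, fA, tB, fB, np.linspace(-50.0, 50.0, 201))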

  19. Provenance information as a tool for addressing engineered nanoparticle reproducibility challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Donald R.; Munusamy, Prabhakaran; Thrall, Brian D.

    Nanoparticles of various types are of increasing research and technological importance in biological and other applications. Difficulties in the production and delivery of nanoparticles with consistent and well-defined properties appear in many forms and have a variety of causes. Among several issues are those associated with incomplete information about the history of particles involved in research studies, including the synthesis method, the sample history after synthesis (the time and nature of storage), and the detailed nature of any sample processing or modification. In addition, the tendency of particles to change with time or environmental condition suggests that the time between analysis and application is important and that some type of consistency check or verification process can be important. The essential history of a set of particles can be captured as provenance information, which tells the origin or source of a batch of nano-objects along with information related to handling and any changes that may have taken place since it was originated. A record of sample provenance information for a set of particles can play a useful role in identifying some of the sources of particle variability, decreasing its extent, and reducing the lack of reproducibility observed by many researchers.

  20. Novel method for the high-throughput processing of slides for the comet assay

    PubMed Central

    Karbaschi, Mahsa; Cooke, Marcus S.

    2014-01-01

    Single cell gel electrophoresis (the comet assay) continues to gain popularity as a means of assessing DNA damage. However, the assay's low sample throughput and laborious sample workup procedure are limiting factors to its application. “Scoring”, or individually determining DNA damage levels in 50 cells per treatment, is time-consuming, but with the advent of high-throughput scoring, the limitation is now the ability to process significant numbers of comet slides. We have developed a novel method by which multiple slides may be manipulated, and undergo electrophoresis, in batches of 25 rather than individually; importantly, the method retains the use of standard microscope comet slides, which are the assay convention. This decreases assay time by 60%, and benefits from an electrophoresis tank with a substantially smaller footprint and more uniform orientation of gels during electrophoresis. Our high-throughput variant of the comet assay greatly increases the number of samples analysed while decreasing assay time, the number of individual slide manipulations, reagent requirements and the risk of damage to slides. The compact nature of the electrophoresis tank is of particular benefit to laboratories where bench space is at a premium. This novel approach is a significant advance on the current comet assay procedure. PMID:25425241

  1. Novel method for the high-throughput processing of slides for the comet assay.

    PubMed

    Karbaschi, Mahsa; Cooke, Marcus S

    2014-11-26

    Single cell gel electrophoresis (the comet assay) continues to gain popularity as a means of assessing DNA damage. However, the assay's low sample throughput and laborious sample workup procedure are limiting factors to its application. "Scoring", or individually determining DNA damage levels in 50 cells per treatment, is time-consuming, but with the advent of high-throughput scoring, the limitation is now the ability to process significant numbers of comet slides. We have developed a novel method by which multiple slides may be manipulated, and undergo electrophoresis, in batches of 25 rather than individually; importantly, the method retains the use of standard microscope comet slides, which are the assay convention. This decreases assay time by 60%, and benefits from an electrophoresis tank with a substantially smaller footprint and more uniform orientation of gels during electrophoresis. Our high-throughput variant of the comet assay greatly increases the number of samples analysed while decreasing assay time, the number of individual slide manipulations, reagent requirements and the risk of damage to slides. The compact nature of the electrophoresis tank is of particular benefit to laboratories where bench space is at a premium. This novel approach is a significant advance on the current comet assay procedure.

  2. Trends and advances in food analysis by real-time polymerase chain reaction.

    PubMed

    Salihah, Nur Thaqifah; Hossain, Mohammad Mosharraf; Lubis, Hamadah; Ahmed, Minhaz Uddin

    2016-05-01

    Analyses to ensure food safety and quality are more relevant now because of rapid changes in the quantity, diversity and mobility of food. Food contamination must be determined to maintain health and uphold laws, as well as for ethical and cultural concerns. Real-time polymerase chain reaction (RT-PCR), a rapid and inexpensive quantitative method to detect the presence of targeted DNA segments in samples, helps in determining both accidental and intentional adulteration of foods by biological contaminants. This review presents recent developments in the theory, techniques, and applications of RT-PCR in food analyses. RT-PCR addresses the limitations of traditional food analyses in terms of sensitivity, range of analytes, multiplexing ability, cost, time, and point-of-care applications. A range of targets, including species of plants or animals used as food ingredients, food-borne bacteria or viruses, genetically modified organisms, and allergens, can be identified by RT-PCR, even at very low concentrations and in highly processed foods. Microfluidic RT-PCR eliminates the separate sample-processing step, creating opportunities for point-of-care analyses. We also cover the challenges related to using RT-PCR for food analyses, such as the need to further improve sample handling.
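
    As a concrete illustration of the quantitative side of real-time PCR (a generic standard-curve procedure, not a method taken from this review), the quantification cycle Cq falls linearly with log10 of the starting copy number, so a dilution series of standards calibrates both the unknown and the amplification efficiency. All numbers below are fabricated for illustration.

        import numpy as np

        def quantify(cq_sample, cq_standards, log10_conc_standards):
            """Invert a qPCR standard curve: Cq = slope*log10(conc) + intercept."""
            slope, intercept = np.polyfit(log10_conc_standards, cq_standards, 1)
            efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 means 100% efficient
            log10_conc = (cq_sample - intercept) / slope
            return 10 ** log10_conc, efficiency

        # fabricated dilution series, 10^5..10^1 copies, ~3.3 cycles per decade
        conc, eff = quantify(24.5,
                             np.array([17.1, 20.4, 23.8, 27.1, 30.4]),
                             np.array([5.0, 4.0, 3.0, 2.0, 1.0]))
        print(f"{conc:.0f} copies, efficiency {eff:.2f}")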

  3. The dynamics of multimodal integration: The averaging diffusion model.

    PubMed

    Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; L McClelland, James

    2017-12-01

    We combine extant theories of evidence accumulation and multimodal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a basis for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
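
    The core distinction, a decision variable equal to the running mean rather than the running sum of evidence samples, is easy to simulate. The toy sketch below is our own illustration with made-up parameters, not the authors' fitted model; trials that never reach the criterion return a choice of 0 (no decision).

        import numpy as np

        rng = np.random.default_rng(0)

        def trial(drift=0.1, noise=1.0, threshold=0.2, max_steps=1000):
            """One trial in which the decision variable is the running MEAN
            of evidence samples (averaging model), not their sum (classic DDM)."""
            total = 0.0
            for n in range(1, max_steps + 1):
                total += drift + noise * rng.standard_normal()
                mean = total / n                        # averaging decision variable
                if n > 10 and abs(mean) >= threshold:   # short burn-in for stability
                    return np.sign(mean), n
            return 0.0, max_steps                       # no decision reached

        choices, rts = zip(*(trial() for _ in range(500)))
        print("P(choice=+) =", np.mean(np.array(choices) == 1.0),
              " mean RT =", np.mean(rts))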

  4. Gender Differences in the Motivational Processing of Facial Beauty

    ERIC Educational Resources Information Center

    Levy, Boaz; Ariely, Dan; Mazar, Nina; Chi, Won; Lukas, Scott; Elman, Igor

    2008-01-01

    Gender may be involved in the motivational processing of facial beauty. This study applied a behavioral probe, known to activate brain motivational regions, to healthy heterosexual subjects. Matched samples of men and women were administered two tasks: (a) key pressing to change the viewing time of average or beautiful female or male facial…

  5. Young Learners' Response Processes When Taking Computerized Tasks for Speaking Assessment

    ERIC Educational Resources Information Center

    Lee, Shinhye; Winke, Paula

    2018-01-01

    We investigated how young language learners process their responses on and perceive a computer-mediated, timed speaking test. Twenty 8-, 9-, and 10-year-old non-native English-speaking children (NNSs) and eight same-aged, native English-speaking children (NSs) completed seven computerized sample TOEFL® Primary™ speaking test tasks. We investigated…

  6. Characterization of bacterial community dynamics in a full-scale drinking water treatment plant.

    PubMed

    Li, Cuiping; Ling, Fangqiong; Zhang, Minglu; Liu, Wen-Tso; Li, Yuxian; Liu, Wenjun

    2017-01-01

    Understanding the spatial and temporal dynamics of microbial communities in drinking water systems is vital to securing the microbial safety of drinking water. The objective of this study was to comprehensively characterize the dynamics of microbial biomass and bacterial communities at each step of a full-scale drinking water treatment plant in Beijing, China. Both bulk water and biofilm samples on granular activated carbon (GAC) were collected over 9 months. The proportion of cultivable cells decreased during the treatment processes, and this proportion was higher in the warm season than in the cool season, suggesting that treatment processes and water temperature probably had considerable impact on the R2A cultivability of total bacteria. 16S rRNA gene-based 454 pyrosequencing analysis of the bacterial community revealed that Proteobacteria predominated in all samples. The GAC biofilm harbored a distinct population with a much higher relative abundance of Acidobacteria than the water samples. Principal coordinate analysis and one-way analysis of similarity indicated that the dynamics of the microbial communities in bulk water and biofilm samples were better explained by the treatment processes than by sampling time, and distinctive changes in the microbial communities in water occurred after GAC filtration. Furthermore, 20 distinct OTUs contributing most to the dissimilarity among samples from different sampling locations and 6 persistent OTUs present in the entire treatment process flow were identified. Overall, our findings demonstrate the significant effects that treatment processes have on microbial biomass and community fluctuation and provide implications for further targeted investigation of particular bacterial populations. Copyright © 2016. Published by Elsevier B.V.
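
    Principal coordinate analysis reduces a matrix of between-sample dissimilarities to a few ordination axes. The sketch below is a generic numpy implementation of Bray-Curtis dissimilarity followed by classical PCoA (our own minimal version; the study's actual pipeline is not specified beyond the method names).

        import numpy as np

        def bray_curtis(X):
            """Pairwise Bray-Curtis dissimilarities between community profiles
            (rows = samples, columns = taxon abundances)."""
            n = X.shape[0]
            D = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    d = np.abs(X[i] - X[j]).sum() / (X[i] + X[j]).sum()
                    D[i, j] = D[j, i] = d
            return D

        def pcoa(D, k=2):
            """Classical principal coordinate analysis of a dissimilarity matrix."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
            B = -0.5 * J @ (D ** 2) @ J             # Gower's double centering
            vals, vecs = np.linalg.eigh(B)
            idx = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
            return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

        # usage with a hypothetical abundance table (samples x taxa):
        # coords = pcoa(bray_curtis(abundance_table), k=2)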

  7. Does participation in art classes influence performance on two different cognitive tasks?

    PubMed

    Schindler, Manuel; Maihöfner, Christian; Bolwerk, Anne; Lang, Frieder R

    2017-04-01

    Effects of two mentally stimulating art interventions on processing speed and visuo-spatial cognition were compared in three samples. In a randomized 10-week art intervention study with a pre-post follow-up design, 113 adults (27 healthy older adults with subjective memory complaints, 50 healthy older adults and 36 healthy younger adults) were randomly assigned to one of two groups: visual art production or cognitive art evaluation, in which the participants either produced or evaluated art. ANOVAs with repeated measures were computed to observe effects on the Symbol-Digit Test and the Stick Test. Significant Time effects were found with regard to processing speed and visuo-spatial cognition. Additionally, a significant Time × Sample interaction was found for processing speed. The effects proved robust after controlling for education and adding sex as an additional factor. Mental stimulation through participation in art classes leads to an improvement in processing speed and visuo-spatial cognition. Further investigation is required to improve understanding of the potential impact of art interventions on cognitive abilities across adulthood.

  8. Process Parameter Optimization of Extrusion-Based 3D Metal Printing Utilizing PW-LDPE-SA Binder System.

    PubMed

    Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan

    2017-03-16

    Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts in electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system comprising paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence of the factors on the ultimate tensile strength of the green samples ranks as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has advantages over traditional approaches in cost, efficiency, and simplicity.
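
    Optimization with an orthogonal design method typically means running a small orthogonal array of experiments and ranking the factors by the range of their level means. The sketch below performs that range analysis on a hypothetical L9(3^3) array with fabricated strength values; the paper's actual measurements are not reproduced here.

        import numpy as np

        # Hypothetical L9(3^3) design: columns = infill degree, raster angle,
        # layer thickness (levels 0-2); y = green-sample tensile strength (MPa).
        design = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                           [1, 0, 1], [1, 1, 2], [1, 2, 0],
                           [2, 0, 2], [2, 1, 0], [2, 2, 1]])
        y = np.array([4.1, 4.6, 4.3, 5.0, 5.4, 4.8, 5.9, 6.1, 5.6])  # made up

        for f, name in enumerate(["infill degree", "raster angle",
                                  "layer thickness"]):
            means = [y[design[:, f] == lvl].mean() for lvl in range(3)]
            print(f"{name}: level means {np.round(means, 2)}, "
                  f"range {max(means) - min(means):.2f}")
        # The factor with the largest range dominates the response; this is how
        # an ordering like infill degree > raster angle > layer thickness arises.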

  9. Process Parameter Optimization of Extrusion-Based 3D Metal Printing Utilizing PW–LDPE–SA Binder System

    PubMed Central

    Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan

    2017-01-01

    Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts in electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system comprising paraffin wax, low density polyethylene, and stearic acid (PW–LDPE–SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence of the factors on the ultimate tensile strength of the green samples ranks as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has advantages over traditional approaches in cost, efficiency, and simplicity. PMID:28772665

  10. GPU-Meta-Storms: computing the structure similarities among massive amount of microbial community samples using GPU.

    PubMed

    Su, Xiaoquan; Wang, Xuetao; Jing, Gongchao; Ning, Kang

    2014-04-01

    The number of microbial community samples is increasing at exponential speed. Data mining among microbial community samples could facilitate the discovery of valuable biological information that is still hidden in the massive data. However, current methods for comparison among microbial communities are limited in their ability to process large numbers of samples, each with a complex community structure. We have developed an optimized GPU-based software, GPU-Meta-Storms, to efficiently measure the quantitative phylogenetic similarity among massive numbers of microbial community samples. Our results have shown that GPU-Meta-Storms can compute the pair-wise similarity scores for 10 240 samples within 20 min, a speed-up of >17 000 times compared with a single-core CPU and >2600 times compared with a 16-core CPU. Therefore, the high performance of GPU-Meta-Storms could facilitate in-depth data mining among massive microbial community samples, and make real-time analysis and monitoring of temporal or conditional changes in microbial communities possible. GPU-Meta-Storms is implemented in CUDA (Compute Unified Device Architecture) and C++. Source code is available at http://www.computationalbioenergy.org/meta-storms.html.

  11. Real-time quantum cascade laser-based infrared microspectroscopy in-vivo

    NASA Astrophysics Data System (ADS)

    Kröger-Lui, N.; Haase, K.; Pucci, A.; Schönhals, A.; Petrich, W.

    2016-03-01

    Infrared microscopy can be performed to observe dynamic processes on a microscopic scale. Fourier-transform infrared spectroscopy-based microscopes are subject to limitations in time resolution, which hamper their potential for imaging fast-moving systems. In this manuscript we present a quantum cascade laser-based infrared microscope which overcomes these limitations and readily achieves standard video frame rates. The capabilities of our setup are demonstrated by observing dynamical processes at their specific time scales: fermentation, the slow-moving Amoeba proteus and the fast-moving Caenorhabditis elegans. Mid-infrared sampling times between 30 min and 20 ms are demonstrated.

  12. The Development of the improved equipment for the measurement radionuclides of xenon in atmospheric air

    NASA Astrophysics Data System (ADS)

    Pakhomov, S. A.; Dubasov, Y. V.

    2009-04-01

    The Khlopin Radium Institute has developed mobile (vehicle-based) equipment intended for monitoring radioactive xenon isotopes in atmospheric air in territories neighbouring nuclear power plants (NPPs). The equipment comprises an improved sampling installation with a sample-processing unit and a specialized β-γ-coincidence spectrometer. The principal feature of the sampling installation is a gas-cooling machine that reaches cryogenic temperatures without helium, using the processed air itself for cooling. The sampling capacity reaches 20 cubic meters per hour with a xenon extraction factor of 75%. The duration of the sampling cycle is 3-7 hours, depending on the required xenon volume. The sample-processing unit is designed on a preparative gas-chromatograph scheme, and the sample-processing procedure does not exceed one and a half hours. The prepared sample has a volume of about half a liter and contains 3-7 cubic centimeters of xenon, depending on the sampling cycle time. To measure the xenon radioisotopes in the obtained sample, a β-γ-coincidence spectrometer was developed on the basis of an ORTEC HPGe detector equipped with a scintillation β-detector designed as a Marinelli chamber of 700 cm3 volume. This spectrometer reduces the ambient background by more than a factor of 20, with the γ-channel efficiency reduced by no more than a factor of 1.5. The minimum detectable activity (MDA) of 133Xe, evaluated by the Currie formula for a probability of 95%, is 0.05 Bq for a 20-hour exposure. The spectrometer is also intended to determine stable krypton and xenon concentrations in the β-chamber by the X-ray fluorescence method; for this purpose, a collimating pinhole is made in the spectrometer shield and a 241Am source is installed. To improve the sensitivity of the analysis, a beryllium window is made in the β-chamber wall adjoining the HPGe detector. X-ray fluorescence analysis reliably determines a Xe volumetric concentration of 0.05% in the β-cell, equivalent to less than 0.5 cm3 of Xe. The first trial of the described equipment was carried out in St. Petersburg in autumn 2007 and showed that the spectrometer can measure 133Xe concentrations at the level of 2 mBq/m3, in good agreement with the results of other measurements. The equipment was also field-tested during expeditionary work in 2008 in Sosnovyi Bor, Udomlya and Polyarnie Zori, cities of north-west Russia located in close proximity to operating NPPs.
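
    The quoted MDA follows from Currie's 95% detection limit, L_D = 2.71 + 4.65*sqrt(B) counts for an expected background of B counts. The helper below shows the arithmetic with placeholder values for background, efficiency and counting time, since the actual detector parameters are not given in the abstract.

        import math

        def currie_mda(background_counts, efficiency, live_time_s, branching=1.0):
            """Minimum detectable activity (Bq) from Currie's 95% limit:
            L_D = 2.71 + 4.65*sqrt(B) counts."""
            L_D = 2.71 + 4.65 * math.sqrt(background_counts)
            return L_D / (efficiency * branching * live_time_s)

        # placeholder values: 20 h counting, 30% overall detection efficiency
        print(currie_mda(background_counts=1500, efficiency=0.30,
                         live_time_s=20 * 3600))   # activity in Bq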

  13. Salmonellae Associated with Further-processed Turkey Products

    PubMed Central

    Bryan, Frank L.; Ayres, John C.; Kraft, Allen A.

    1968-01-01

    “Further-processed” turkey products, prepared from chilled, eviscerated, and thawed carcasses at two commercial turkey-processing plants, were evaluated for the presence of salmonellae. These organisms were isolated from swab samples from 12% of chilled, eviscerated turkey carcasses, 27% of finished products, and 24% of processing equipment. The same serotypes as those found throughout a plant on any one visit were recovered from 31% of rinse-samples taken from hands and gloves of processing personnel. Salmonellae were found in samples taken on 37 of 48 visits; a greater number of recoveries were made on days when freshly killed turkeys were processed (87%) than when frozen-defrosted carcasses were processed (59%). The predominant serotype isolated from meat and environment usually changed from visit to visit. Salmonella sandiego and Salmonella anatum were the most frequent among 23 serotypes recovered. Most of the isolated serotypes are commonly associated with turkeys and have been incriminated as causative agents of human salmonellosis. The implication is that further-processed turkey products, if inadequately cooked by the consumer and if improperly refrigerated between the time of manufacture and consumption, could directly transmit salmonellae. These same products might also contaminate other foods by introducing salmonellae into food-preparation areas. PMID:5688832

  14. ABSORPTION ANALYZER

    DOEpatents

    Brooksbank, W.A. Jr.; Leddicotte, G.W.; Strain, J.E.; Hendon, H.H. Jr.

    1961-11-14

    A means was developed for continuously computing and indicating the isotopic assay of a process solution and for automatically controlling the process output of isotope separation equipment to provide a continuous output of the desired isotopic ratio. A counter tube is surrounded with a sample to be analyzed so that the tube is exactly in the center of the sample. A source of fast neutrons is provided and is spaced from the sample. The neutrons from the source are thermalized by causing them to pass through a neutron moderator, and the neutrons are allowed to diffuse radially through the sample to actuate the counter. A reference counter in a known sample of pure solvent is also actuated by the thermal neutrons from the neutron source. The number of neutrons which actuate the detectors is a function of the concentration of the elements in solution and their neutron absorption cross sections. The pulses produced by the detectors responsive to each neutron passing therethrough are amplified and counted. The respective times required to accumulate a selected number of counts are measured by associated timing devices. The concentration of a particular element in solution may be determined by utilizing the following relation: T2/T1 = BCR, where B is a constant proportional to the absorption cross sections, T2 is the time of count collection for the unknown solution, T1 is the time of count collection for the pure solvent, R is the isotopic ratio, and C is the molar concentration of the element to be determined. Knowing the slope constant B for any element, when the chemical concentration is known the isotopic concentration may be readily determined, and conversely, when the isotopic ratio is known, the chemical concentration may be determined. (AEC)
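
    The relation inverts directly once the slope constant B has been calibrated for the element of interest. A small helper, with all numeric values invented for illustration:

        def concentration(t_sample, t_solvent, B, isotopic_ratio):
            """Molar concentration C from the relation T2/T1 = B*C*R."""
            return (t_sample / t_solvent) / (B * isotopic_ratio)

        def isotopic_ratio(t_sample, t_solvent, B, conc):
            """Conversely, isotopic ratio R when the concentration is known."""
            return (t_sample / t_solvent) / (B * conc)

        # counts took 1.8x longer to accumulate in the sample than in solvent
        print(concentration(t_sample=90.0, t_solvent=50.0,
                            B=12.0, isotopic_ratio=0.3))   # -> 0.5 molar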

  15. A multi-threshold sampling method for TOF-PET signal processing

    NASA Astrophysics Data System (ADS)

    Kim, H.; Kao, C. M.; Xie, Q.; Chen, C. T.; Zhou, L.; Tang, F.; Frisch, H.; Moses, W. W.; Choong, W. S.

    2009-04-01

    As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to eight threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25×6.25×25 mm3 LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ~18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ~9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ~300 ps coincidence timing resolution, ~14% energy resolution at 511 keV, and ~5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
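
    Two quantities the paper extracts, the event time and the decay-time constant, can be recovered from threshold-crossing times alone. The sketch below shows one generic way to do so (linear extrapolation of the leading edge, exponential fit to the tail); the function names and all numbers are illustrative, not the prototype's actual processing.

        import numpy as np

        def leading_edge_time(thresholds, t_cross):
            """Event start time: fit a line through the (time, amplitude) pairs
            where the leading edge crossed each threshold, then extrapolate
            back to zero amplitude."""
            slope, intercept = np.polyfit(t_cross, thresholds, 1)
            return -intercept / slope      # time at which the fit hits baseline

        def decay_constant(thresholds, t_fall):
            """Scintillator decay time from falling-edge crossings, assuming an
            exponential tail V(t) = A*exp(-t/tau), i.e. log(V) linear in t."""
            slope, _ = np.polyfit(t_fall, np.log(thresholds), 1)
            return -1.0 / slope

        # four user-defined thresholds (arbitrary units), fabricated crossings (ns)
        thr = np.array([50.0, 100.0, 150.0, 200.0])
        print(leading_edge_time(thr, np.array([1.0, 1.6, 2.2, 2.8])))
        print(decay_constant(thr[::-1], np.array([10.0, 40.0, 75.0, 120.0])))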

  16. Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree-structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
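
    For readers unfamiliar with the underlying procedure, the sketch below is a minimal Gibbs sampler for binary variables coupled by pairwise agreement factors, a toy stand-in for a general factor graph (the paper's contribution is the hierarchy-width condition under which such chains mix in polynomial time, not this sampler itself).

        import numpy as np

        rng = np.random.default_rng(1)

        def gibbs(n_vars, factors, weights, n_sweeps=1000):
            """Gibbs sampling for binary x_i in {0,1}: each factor (a, b) with
            weight w adds w to the log-probability when x_a == x_b."""
            x = rng.integers(0, 2, n_vars)
            for _ in range(n_sweeps):
                for i in range(n_vars):
                    logp = np.zeros(2)     # unnormalized log-prob of x_i = 0, 1
                    for (a, b), w in zip(factors, weights):
                        if i in (a, b):
                            other = x[b] if i == a else x[a]
                            for v in (0, 1):
                                logp[v] += w * (v == other)
                    p1 = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))
                    x[i] = rng.random() < p1    # resample from the conditional
            return x

        # a small chain graph: factors prefer neighbouring variables to agree
        print(gibbs(5, factors=[(0, 1), (1, 2), (2, 3), (3, 4)],
                    weights=[1.0] * 4))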

  17. The Mediation of Mothers’ Self-Fulfilling Effects on Their Children’s Alcohol Use: Self-Verification, Informational Conformity and Modeling Processes

    PubMed Central

    Madon, Stephanie; Guyll, Max; Buller, Ashley A.; Scherr, Kyle C.; Willard, Jennifer; Spoth, Richard

    2010-01-01

    This research examined whether self-fulfilling prophecy effects are mediated by self-verification, informational conformity, and modeling processes. The authors examined these mediational processes across multiple time frames with longitudinal data obtained from two samples of mother-child dyads (N1 = 487; N2 = 287). Children’s alcohol use was the outcome variable. The results provided consistent support for the mediational process of self-verification. In both samples and across several years of adolescence, there was a significant indirect effect of mothers’ beliefs on children’s alcohol use through children’s self-assessed likelihood of drinking alcohol in the future. Comparatively less support was found for informational conformity and modeling processes as mediators of mothers’ self-fulfilling effects. The potential for self-fulfilling prophecies to produce long-lasting changes in targets’ behavior via self-verification processes is discussed. PMID:18665708

  18. The mediation of mothers' self-fulfilling effects on their children's alcohol use: self-verification, informational conformity, and modeling processes.

    PubMed

    Madon, Stephanie; Guyll, Max; Buller, Ashley A; Scherr, Kyle C; Willard, Jennifer; Spoth, Richard

    2008-08-01

    This research examined whether self-fulfilling prophecy effects are mediated by self-verification, informational conformity, and modeling processes. The authors examined these mediational processes across multiple time frames with longitudinal data obtained from two samples of mother-child dyads (N1 = 486; N2 = 287), with children's alcohol use as the outcome variable. The results provided consistent support for the mediational process of self-verification. In both samples and across several years of adolescence, there was a significant indirect effect of mothers' beliefs on children's alcohol use through children's self-assessed likelihood of drinking alcohol in the future. Comparatively less support was found for informational conformity and modeling processes as mediators of mothers' self-fulfilling effects. The potential for self-fulfilling prophecies to produce long-lasting changes in targets' behavior via self-verification processes is discussed. (c) 2008 APA, all rights reserved

  19. Fabrication of self-assembled photonic-crystal structures by centrifugation and spin coating

    NASA Astrophysics Data System (ADS)

    Xu, Yan; Schneider, Garrett J.; Wetzel, Eric D.; Prather, Dennis W.

    2003-11-01

    We have developed a simple, low-cost process for the fabrication of high-quality three-dimensional artificial-opal and inverse-opal photonic crystals. The process is based on the self-assembly of a template from a uniform suspension of polystyrene microspheres, which is sintered for added strength and subsequently back-filled with high-index material. The template formation is assisted by a combination of centrifugation and spin-annealing, which requires relatively short process times and inexpensive laboratory equipment. The process has been used to fabricate polycrystalline photonic crystals with photonic stop gaps in the mid-IR portion of the spectrum. Details of the fabrication process and fabricated samples will be presented. In addition, Fourier-transform IR reflection spectroscopy has been used to characterize the samples; the results are shown to be in excellent agreement with band structure diffraction calculations.

  20. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed earlier for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.
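
    For intuition, the simpler Dirichlet diffusion that this work generalizes can be integrated with Euler-Maruyama, each step remaining in the unit simplex up to numerical guards. The sketch below is our own illustration of that simpler process with invented parameter values, not the generalized SDE system derived in the paper: dY_i = (b_i/2)(S_i Y_N - Y_i) dt + sqrt(kappa_i Y_i Y_N) dW_i, with Y_N = 1 - sum(Y).

        import numpy as np

        rng = np.random.default_rng(2)

        def dirichlet_diffusion(b, S, kappa, n_steps=20000, dt=1e-4):
            """Euler-Maruyama integration of the (plain) Dirichlet diffusion;
            the drift pulls toward the mean and the multiplicative noise
            vanishes at the simplex boundary, preserving boundedness."""
            K = len(b)
            Y = np.full(K, 1.0 / (K + 1))          # start inside the simplex
            for _ in range(n_steps):
                YN = 1.0 - Y.sum()
                drift = 0.5 * b * (S * YN - Y)
                diff = np.sqrt(np.maximum(kappa * Y * YN, 0.0))
                Y += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(K)
                Y = np.clip(Y, 1e-12, None)        # numerical boundary guard
                Y /= max(Y.sum(), 1.0)             # guard the unit-sum constraint
            return Y

        print(dirichlet_diffusion(b=np.array([1.5, 1.0]),
                                  S=np.array([0.4, 0.3]),
                                  kappa=np.array([0.5, 0.5])))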
